Trustworthy knowledge


Following on from my last blog about the usefulness of big R research literacy to systems thinking in practice (and vice versa), this post is about the trustworthiness of ‘knowledge’ produced by research, whether that is the findings or particular recommendations (conclusions) arising from them.

I’ve written before about critiques of knowledge being viewed as a product or ‘thing’ and alternatives offered by different authors.  However, for the sake of this blog, I am not sure how far I will get without treating knowledge as if it is the more formal propositional, explicit knowledge that is presented in written form or in presentations.  So bear with me!

How my systems literacy helps me think about trustworthiness

an explanation does not exist in and of itself – it is a part of a social dynamic between an explainer, an explanation (the form of an explanation) and a listener or reader.

Ison (2017, p.9)

This quote from Ison (2017) is accompanied by a cartoon in which one person says “This happens because….” and a second person thinks “I accept this because….”  I think that this is a great way of thinking about the trustworthiness of knowledge – any explanation a practitioner (including a practitioner of research) makes is evaluated by the listener or reader, who considers whether it is trustworthy or not.

Elsewhere in the field of Systems, critical systems heuristics addresses the issue of trustworthy knowledge through the inclusion of the guarantor as an issue that arises in relation to sources of knowledge in a reference system (Ulrich and Reynolds, 2020).  Here, those who are making decisions will have a certain perspective on what does/should make good knowledge, so that they can be relatively confident that when they apply that knowledge they will make progress towards achieving their purpose.  This is what the specific CSH question 9 – “where do/should we look for some guarantee of success?” – and its prompts relating to true guarantors, false guarantors and doubtful/potential guarantors are trying to surface.  In practice, different sources of knowledge can provide different guarantees and there can be conflict between different types (Reynolds and Wilding, 2017).

Reynolds (2015) has elaborated on the notion of guarantor.  He introduces three particular co-guarantors – reliability, resonance and responsibility – and discusses these in relation to evaluation practice specifically.  These three Rs seem to be helpful criteria for considering the quality of research and its outputs more generally, and I like the way they could guide me as a Systems thinking research practitioner/Research thinking systems practitioner.

How my research literacy helps me think about trustworthiness

How can an inquirer persuade his or her audiences (including self) that the findings of an inquiry are worth paying attention to, worth taking account of? What arguments can be mounted, what criteria invoked, what questions asked, that could be persuasive on this issue?

Lincoln and Guba (1985, p.290)

Shifting away from specific Systems writing to research more generally, there has been a long history of thinking about the trustworthiness of findings.

Even today, Lincoln and Guba (1985) are often cited as key authors on this topic.  This is because, in their introduction to the naturalistic paradigm of research, they sought to contrast it with “conventional” [positivist] research and to highlight how the two paradigms differed in the way they could address the issue of trustworthiness.  So this is perhaps a key point we can learn from research – that the criteria we use for trustworthiness need to be coherent with the paradigm used to produce the knowledge.  In short, we can’t judge knowledge generated through a naturalistic approach about, say, experiences of long COVID using the same criteria that we’d use to judge knowledge about drugs to alleviate long COVID.

For a long time, I’ve read other authors’ summaries of Lincoln and Guba (1985), whether in research textbooks or as a result of quick Google searches.  But recently, I was intrigued enough to source a second-hand copy of their original work (thank you, eBay).  It’s helpful to read the contextual argument around what is often summarised in four or five words or a simple table.

They introduce four questions that inquirers find useful when thinking about trustworthiness, and this is how I am going to organise my own summary.  However, there isn’t space here for a lot of detail on the concepts introduced or the concerns arising – my focus is therefore on introducing the ideas for orientation and onward reading (on the naturalist side, I find Loh, 2013 helpful, and Forero et al, 2018 a good reflection of the ideas in practice).

(1) “Truth value”: How can one establish confidence in the “truth” of the findings of a particular inquiry for the subjects (respondents) with which and the context in which the inquiry was carried out?

(Lincoln and Guba, 1985, p.290)

In a positivist paradigm, the response to this question is “internal validity“.  As Patino and Ferreira (2018) outline, this is achieved by making sure the results are not due to methodological errors – for example in measurement.

In a naturalist paradigm, the response to this question is “credibility“.  It is therefore important to carry out research in a way that is likely to enhance credibility, and potentially to have the findings verified by those who were studied.  Five ‘techniques’ or strategies can increase the likelihood of credibility – prolonged engagement, persistent observation, triangulation, peer debriefing and negative case analysis.

(2) Applicability: How can one determine the extent to which the findings of a particular inquiry have applicability in other contexts or with other subjects (respondents)?

(Lincoln and Guba, 1985, p.290)

In a positivist paradigm, the response to this question is “external validity“, which is concerned with the issue of generalisation.  The primary way that this paradigm works to assure external validity is through random sampling and the use of confidence intervals in statistics to consider how likely it is that the results apply to the population from which the sample was drawn.

In the naturalist paradigm, the concern is for transferability.  The researcher can do little to determine this up front because it depends on the level of similarity between the specifics of the research context and the context within which the knowledge may be used.  The primary focus then is ensuring there is a sufficient ‘thick description’ of the research context to help the audience make that judgement.

(3) Consistency: How can one determine whether the findings of an inquiry would be repeated if the inquiry were replicated with the same (or similar) subjects (respondents) in the same (or similar) context?

(Lincoln and Guba, 1985, p.290)

The answer in the positivist paradigm is “reliability“.  The main way of testing it is replication.

Within a naturalist paradigm, the concern is more for dependability.  This can arise from triangulation of methods, but also from ‘auditing’ the research process – an audit can only take place if the study has been written up in a way that is highly transparent, which is an important consideration.

(4) Neutrality: How can one establish the degree to which findings of an inquiry are determined by the subjects (respondents) and conditions of the inquiry and not by the biases, motivations, interests, or perspectives of the inquirer?

(Lincoln and Guba, 1985, p.290)

The clear answer for the positivist is ‘objectivity‘ and guarding against bias and subjectivity.  The naturalist paradigm has a problem with this, though, because it accepts that the inquirer is a research instrument, so removing their subjectivity completely is most likely impossible.  The key concern then is confirmability – which also depends on a transparent and authentic account of the process.

Lincoln and Guba (1985) do advocate an additional activity to help with trustworthiness in general, and neutrality in particular: the reflexive journal.  A good, contemporaneous set of notes and ideas about self and method is vital for producing a retrospective, transparent account with appropriate thick description.


Before moving on, it is worth mentioning that issues of research quality aren’t only addressed on a broad paradigm basis.  There are also discussions of quality in the discourse related to specific research approaches.  For example, there is a list of choice points for quality in action research (Bradbury et al, 2019). Elsewhere, Coghlan and Brannick (2010, p.22) refer to four principles that guide action research – attentiveness, intelligence, reasonableness and responsibility.



So both my systems literacy and my research literacy help me think about the trustworthiness of the claims I make.  They both reinforce the idea that trustworthiness arises in a relational dynamic.  As a ‘user’ of knowledge, I need to play an active role in considering whether or not it is ‘trustworthy’.  And as a ‘provider’ of knowledge, I need to play an active role in enabling others to judge the trustworthiness of my explanations.

Given that systems thinking emphasises the influence of a practitioner’s traditions of understanding, but also the importance of engaging with, and appreciating, others’ perspectives, it is more likely that the trustworthiness criteria of the naturalist paradigm will be helpful.  However, my concern with just looking at the criteria and the list of strategies for establishing them is that they are predominantly ‘methods’ based.  They risk a tickbox, unquestioning approach (e.g. I have done triangulation, so therefore this must be credible, and so on).  My systems thinking reminds me of the active role I play in using ideas and methods, and of my responsibility to be aware when I do so.  If I am not, there is a risk that methods like triangulation become false guarantors – something to hide behind.  This could be even more extreme if I try to back up my claims with the criteria used in positivist research.  Coghlan and Brannick’s (2010) principles associated with action research particularly resonate with me, as they emphasise the importance of my attributes as a practitioner rather than just the methods.

The Systems ideas provide a strong reminder of the multiple possible audiences for research and the potential for different stakeholders to judge trustworthiness in different ways.  Those within the setting/context where the research took place would be less interested in transferability than third parties reading the research.  Those who want to act on the knowledge immediately may emphasise resonance and provenance differently from those who are critiquing an entire field of research on a topic.  As a policy practitioner who moved into research practice, one of the most challenging elements for me was navigating the tension between meeting the needs of different audiences.

Moving forward, I do think my research literacy on debates about trustworthiness gives me a much better ‘grounding’ for being a systems thinking practitioner.  I have a more nuanced language to consider guarantors especially.

It’s definitely a ‘both/and’ not an ‘either/or’ that I take with me into my future practice.



Bradbury, H., Glenzer, K., Ku, B., Columbia, D., Kjellström, S., Aragón, A.O., Warwick, R., Traeger, J., Apgar, M., Friedman, V., Chuan Hsia, H., Lifvergren, S. and Gray, P. (2019) ‘What is good action research? Quality choice points with a refreshed urgency’, Action Research, 17(1), pp. 14–18.

Coghlan, D. and Brannick, T. (2010) Doing action research in your own organisation. Third Edition. London, UK: Sage Publications.

Forero, R., Nahidi, S., De Costa, J., Mohsin, M., Fitzgerald, G., Gibson, N., McCarthy, S. and Aboagye-Sarfo, P. (2018) ‘Application of four-dimension criteria to assess rigour of qualitative research in emergency medicine’, BMC Health Services Research, 18(1), 120. doi: 10.1186/s12913-018-2915-2.

Ison, R. (2017) Systems practice: How to act. Second Edition. London/Milton Keynes, UK: Springer Publications/The Open University.

Lincoln, Y. and Guba, E. (1985) Naturalistic inquiry. London, UK: SAGE Publications Ltd.

Loh, J. (2013) ‘Inquiry into Issues of Trustworthiness and Quality in Narrative Studies: A Perspective’, The Qualitative Report, 18, pp. 1–15. doi: 10.46743/2160-3715/2013.1477.

Patino, C.M. and Ferreira, J.C. (2018) ‘Internal and external validity: can you apply research study results to your patients?’, Jornal Brasileiro de Pneumologia, 44(3), 183. doi: 10.1590/S1806-37562018000000164.

Reynolds, M. (2015) ‘Rigour (-mortis) in evaluation’, Evaluation Connections: The European Evaluation Society Newsletter, June 2015, Special Edition, pp. 2–4.

Reynolds, M. and Wilding, H. (2017) ‘Boundary critique: An approach for framing methodological design’, in D. de Savigny, K. Blanchet, and T. Adam (eds) Applied systems thinking for health systems research: A methodological handbook. Maidenhead, UK: Open University Press, pp. 38–56.

Ulrich, W. and Reynolds, M. (2020) ‘Critical systems heuristics: The idea and practice of boundary critique’, in M. Reynolds and S. Holwell (eds) Systems approaches to managing change: A practical guide. Second Edition. London/Milton Keynes, UK: Springer Publications/The Open University, pp. 255–306.

