So what is the evidence base for the ‘call’ to be evidence based?


Just recently, the concept of ‘evidence-based public health’ or ‘evidence-based policy’ (and therefore, evidence-based public health policy) has started to worry me.  It’s so much a part of our discourse that you don’t often stop to think: what does it really mean? and is it ‘really’ happening? and is it really possible?  But when you do, you kind of realise that even the notion of ‘evidence’ is contested – what does it really mean to the people who advocate for ‘evidence-based xxxx’?

After Christmas, the module on my PhD is called ‘Knowledge, evidence and theory’, so I suspect/hope I’ll have the opportunity to think about this more then.  In the meantime I’m pondering what it means to say that a public health initiative (policy, programme, project, service) is or isn’t underpinned by a sound evidence base.  I’ve jumped around a few books and internet searches to gain some initial impressions, which I hope will form a basis for further inquiry into this area.

Where did the notion of ‘evidence-based’ come from?

From what I can gather (e.g. Øvretveit, 2007), the notions of evidence-based public health, evidence-based policy making, evidence-based management and so on stemmed from ‘creatively swiping’ the concept of evidence-based medicine.  If we expect clinicians to make decisions about medical treatment on the best and most up-to-date evidence, then why shouldn’t we expect the same of others in the public sector – public health; policy makers; politicians; managers and so on?

But this was a ‘normative’ concept – what should be done, what it ought to be like.  It was never an empirically derived description of reality.

The principle seems fine – if you are going to do something, then check out first whether it will work, whether there will be harms as well as benefits and for whom, whether there will be sufficient ‘benefit’ for the cost.  This is the ‘research’ or ‘scientific’ underpinning to applied sciences like medicine and public health.

(as an aside, it is actually in the interests of the academic, research community that the normative ideal of ‘evidence-based’ survives – after all, there wouldn’t be much income for research institutions if there was no purpose to research other than mere intellectual curiosity and publications in research journals).

So what is meant by evidence?

There are multiple, complex dimensions to the concept of evidence, which makes it hard to settle on a simple definition (Banta, 2003).  In fact a simple definition like ‘evidence is knowledge’ only leads you into the spiral of asking – well, what do you mean by knowledge? (and anyone who reads this blog regularly will realise how much I mull that question over).

Banta (2003) highlights three perspectives on evidence – each of which raises potential questions of definition, quality, ‘validity’, feasibility and so on.

  • evidence is any useful information that serves as a basis for making decisions
  • evidence is scientific evidence
  • evidence is research based evidence

The World Health Organisation European Evidence Network defines evidence as “findings from research and any other knowledge that could be useful in decision-making on public health” (cited in Øvretveit, 2007, p137 – I can’t find the reference provided).

I’m just going to pursue the ‘evidence is research based evidence’ thread – not because I discount the other perspectives, like experiential knowledge and values, but because I want to explore the research angle.


Many different types of research leading to many different kinds of evidence

In evidence-based medicine, the gold standard of evidence for a treatment (usually but not always pharmacological) is the randomised controlled trial (RCT).  Because Big Pharma wants to make more money, lots of RCTs get done (fewer get published, but that is another matter), and because of a concern for public harm the way in which they get done is highly regulated.

I was once a psychology assistant on a clinical trial for a potential new drug for Alzheimer’s disease – a drug that seemed physiologically plausible but needed to be tested.  My role (as well as coordination plus patient/family liaison) was to carry out memory testing.  Like most drug trials it included a ‘double blind’ phase – neither investigators nor patient/family knew whether it was an active dose or placebo.  RCTs depend on this purity.  But the more I think about it, the more I see how a human element can creep in – testing something like cholesterol changes in an RCT is one thing, but even the memory testing involved a human element – how patient I felt, how long I’d wait, how distressed the patient was.  The more the human element is added in (e.g. behaviour change interventions; cognitive therapies), the more that things like relationships, attitude, rapport etc can affect the results – and that’s all on a one-to-one basis.

But when you are intervening in a social world, RCTs and experimental designs generally aren’t possible.  You can’t randomly allocate half of a population to buy alcohol at minimum unit price and the other half at market rates.  You can’t always study long enough to see IF a health impact arises twenty or thirty years later – apart from anything else, too many other factors will ‘interfere’ along the way.

Then there is the evidence generated through qualitative research traditions such as phenomenology/interpretivism – these don’t claim to produce ‘valid’ and ‘generalisable’ results in the traditional sense.

This leads to talk about a ‘hierarchy of evidence’.  Fitzpatrick (2007) states: “Hierarchies of evidence refer to the system of ranking developed by researchers and practitioners to identify the ‘best’ evidence they can apply to practice”.  But these hierarchies are not absolute – there are values underpinning judgements about what makes ‘the best’ evidence for a particular profession.


Evidence for more of the same or evidence for innovation

IF an intervention (or something similar to it) has been done before AND that intervention has been the subject of an evaluation (a form of research) AND that evaluation has been written up and published (preferably peer-reviewed) AND that evaluation is retrievable by someone designing an intervention elsewhere THEN the ‘new’ intervention can be said to be evidence-based.

But there is one big assumption in this – that what worked elsewhere can be generalised to different people, places, time and sometimes culture.  It assumes that the context of the evaluated intervention and the context of the planned intervention are similar enough for the lessons to be transferable.

There are also a lot of IFs – things get done in the ‘real world’ all the time, little policy experiments, that go nowhere near any sort of evaluation, that don’t get written up and if they do are rarely peer-reviewed or subject to research-rigour.  People carry on with them because they feel right.

But then comes the thing that really troubles me – what if you are innovating? making a leap that no-one else has yet made? how can that be evidence based when there is no precedent?  what do you have to call on?   A phrase in Killoran and Kelly (2010, xxii) caught my eye – they talk of “conceptual plausibility”.  If you have an understanding, from empirical or experimental research, of the different ‘causal pathways’ that lead to good health or to a particular disease, and your new intervention is based on the theory/conceptual models generated – then that is the evidence on which you are basing your new intervention – it seems ‘conceptually plausible’.  In a way, although your intervention is a real world activity, it is like you are testing a hypothesis – e.g. IF it is the case that people who have a job in general have better mental health than those who don’t have a job, THEN we create jobs for people IN ORDER TO improve population mental health.  Quite a lot of the time, people don’t make these underpinning theories/assumptions explicit as they set off on doing an intervention; that’s why one form of evaluation (theory driven evaluation) sets out by surfacing and making explicit these theories/assumptions (Pawson and Sridharan, 2010).


Naivety about how policy making gets done

The idea of evidence-based policy making seems to have embedded within it a particular perspective on policy making – it assumes it is value free, technically oriented and rational.  However, it only takes a little exploration of the world of policy making and political theory to realise that the world doesn’t work like that.

For example, the ‘argumentative approach’ to policy inquiry “rejects the idea that policy analysis can be a straightforward application of scientific principles” (Fischer and Gottweiss, 2012, 2).  This has led to a whole body of research (emerging in the early 1990s) looking into the communicative practices that go on in policy making and the relationship between the empirical and the normative.  If you take this view, all research can do is provide evidence that different actors (including researchers themselves) draw on to support their arguments in the policy process.  That research can be credible to some but not to others – take, for example, the recent ‘evidence’ provided by Big Tobacco against standardised packaging (Hatchard et al, 2013).

In fact, those concerned with healthy public policy argue that to date public health has been so focussed on generating intelligence for policy that the community has neglected to use/build the evidence base of the policy process itself (Rütten, 2012; Kickbusch, 2013; Clavier and de Leeuw, 2013). (I am just starting to explore Clavier and de Leeuw’s edited book Health promotion and the policy process, published this year, so I’ll get to grips with it more over time.)

But on top of that, there is also a new ‘normative’ call coming forward.  The complex nature of urban health, the rising importance of participatory democracy and the need for a new form of governance for health (Rydin et al, 2012; Kickbusch and Gleicher, 2012) lead to a need for more of a focus on social and societal learning.  This reminds me of a quote from Schön (1973): “government as a learning system carries with it the idea of public learning, a special way of acquiring new capacity for behaviour in which government learns for the society as a whole” (p6, for more on these ideas see this previous blog).  But it also highlights that multiple perspectives and multiple forms of knowledge, not just that of ‘experts’ with a formalised research base, are very important in the policy process.  From this perspective, policy making is “conceptualised as collective puzzlement on society’s behalf, entailing not only knowing and deciding but also learning” (Fischer and Mendell, 2012, 344 in Fischer and Gottweiss, 2012).


Time to move on from ‘evidence based’?

Not everyone uses the term ‘evidence-based’ – for example, the World Health Organisation talks of evidence-informed policy making.  To me, the phrase ‘evidence-informed’ relaxes the boundaries a little; it seems to allow for other dimensions such as values to inform policy making.  It gets away from the naivety of assuming that you can say “the research says x, policy should do y, and if they don’t then it is a problem simply of knowledge transfer”.  A recent editorial, Bambra (2013), discusses the rise and fall of ‘evidence-based public health policy’ and concludes

There is undoubtedly a need for principles, values, ideologies and struggle within any democratic process. Evidence should also be part of this, but politics will, and should, always be ascendant. Whilst this can be extremely frustrating on occasions for researchers—and this summer has been one of considerable discontent for public health—we can, and should, only ever aim for evidence-informed policy (page 2)

Øvretveit (2007) argues that “evidence based public health is neither desirable nor feasible” (p.129).  He highlights that “Political and value choices will always be a key factor in public health decisions, and evidence of effectiveness will rarely be as certain as for some clinical treatments.  Public health never will be, or should be, evidence based, but it needs to become more research informed” (p.137).  He introduces the idea of ‘research-informed’, defined as “making decisions about how to improve public health using research carried out elsewhere as well as local research” (p.138).

Just this week I was in a meeting that included a professor from one of the local universities; he used the phrase “empirically rich and theoretically driven” – it really resonated with me in terms of aspirations – alongside Marmot’s ‘ideologically driven’, that is!



Bambra, C. (2013), Editorial: The primacy of politics: the rise and fall of evidence-based public health policy? Journal of Public Health, Advance access, pp.1–2.

Banta, D.H. (2003), Considerations in defining evidence for public health: The European Advisory Committee on Health Research, World Health Organisation Regional Office for Europe. International Journal of Technology Assessment in Health Care, 19(3), pp.559–572.

Clavier, C. and de Leeuw, E. (2013) (Editors) Health Promotion and the Policy Process.  Oxford: Oxford University Press

Fischer, F. and Gottweiss, H. (Editors) (2012), The argumentative turn revisited: public policy as communicative practice, Duke University Press: London

Fitzpatrick, J. (2007), Finding the research for evidence-based practice. Nursing Times, 103(17), pp.32-33.

Hatchard, J.L., Evans-Reeves, K.A., Ulucanlar, S., Fooks, G.J., Gilmore, A. B. (2013) How do corporations use evidence in public health policy making? The case of standardised tobacco packaging, The Lancet, Vol. 382, Page S42, DOI: 10.1016/S0140-6736(13)62467-8

Kickbusch, I. and Gleicher, D. (2012), Governance for health in the 21st century. Geneva: World Health Organisation Regional Office for Europe.

Kickbusch, I. (2013), Foreword: We need to build a health political science, in Clavier, C. and de Leeuw, E. (Editors) Health Promotion and the Policy Process.  Oxford: Oxford University Press, pp. iii-iv

Killoran, A. and Kelly, M.P. (2010), Introduction: effectiveness and efficiency in public health. In Killoran, A. and Kelly, M.P. (Editors) Evidence-based public health: effectiveness and efficiency. Oxford: Oxford University Press, pp. xix–xxvii.

Øvretveit, J. (2007), Research-informed public health. In Hunter, David J. (Editor), Managing for Health. Abingdon, Oxon: Routledge.

Pawson, R. and Sridharan, S.. (2010), Theory-driven evaluation of public health programmes. In Killoran, A. and Kelly, M.P. (Editors) Evidence-based public health: effectiveness and efficiency. Oxford: Oxford University Press, pp. 43-62.

Rütten, A. (2012), Evidence-based policy revisited: orientation towards the policy process and a public health policy science. International Journal of Public Health, 57, pp.455–457.

Rydin, Y., Bleahu, A., Davies, M., Davila, J.D., Friel, S., De Grandis, G., Groce, N., Hallal, P.C., Hamilton, I., Howden-Chapman, P., Lai, K.-M., Lim, C., Martins, J., Osrin, D., Ridley, I., Scott, I., Taylor, M., Wilkinson, P. and Wilson, J. (2012), Shaping cities for health: complexity and the planning of urban environments in the 21st century. The Lancet, 379(9831), pp.2079–2108.

Schön, D.A. (1973) Beyond the stable state pp.30, 116 – 179. The Norton Library, W.W. Norton and Company INC, New York reprinted as Chapter 1 in Blackmore, C. (Ed) (2010) Social learning systems and communities of practice, The Open University/Springer, Milton Keynes/London, pp. 5-16.

6 thoughts on “So what is the evidence base for the ‘call’ to be evidence based?”

  1. I love these reflections, Helen. Would you like to add the ever increasing criticisms of peer review to your list?

    Of course, ‘evidence’ is a Good Thing. You can’t just make it up. But there is policy-based evidence as well as evidence-based policy — you only get evidence on what you look for and what you look for will depend on your ‘appreciative system’.

    Too much hubris, not enough humility. People love the idea of QED. Proven. Move on to the next puzzle to solve. But of course we know that proven and fixed from one angle can always be creating a hundred more new and different problems…

  2. Hi Arwen

    Thanks for the reminder of Vickers idea of appreciative system in connection with this. It’s made me think…

    There is a very dominant way of looking at health in epidemiology – seeking the risk factors that lead to a disease and also the risk factors that lead to other risk factors! They call it a pathogenic epistemology – answering the question “what creates disease?”

    Very few researchers turn this on its head – change their appreciative setting – to ask the question “what makes health” or “what makes wellbeing” (referred to as salutogenic approach).

    This means in a policy process we are very good at making an argument with research from a prevention of disease perspective; but not from a health promotion perspective.


  3. In the last week, since I wrote this, the Guardian included two very relevant articles which further elucidate the ‘mismatch’ in evidence-based policy making.

    1) 20 things politicians need to know about science

    2) 20 things scientists need to know about policy making

    helpful stuff

  4. Pingback: Just Practicing − Policy Safari

  5. Time has moved on and I am now studying the ‘Philosophy of research’ module that I mentioned at the beginning of this blog.

    We’ve been invited to read an article on evidence-based medicine (Goldenberg, 2006)
    The first line is “To have evidence is to have some conceptual warrant for belief or action (Goodman 2003, p. 2)”.
    I like it.

    Goldenberg, M. (2006), On evidence and evidence-based medicine: lessons from the philosophy of science. Social Science and Medicine, 62, pp.2621-2632.

  6. Again a note to self – I mentioned that evidence based policy built on the evidence based medicine movement. Recent article in BMJ questions whether evidence based medicine is in crisis.
    Greenhalgh et al (2014) Evidence Based Medicine: a movement in crisis?, BMJ, 348:g3725

    It’s a great article – and these few words seemed particularly representative of the unintended consequences of both EBM and performance management combined:
    “In such scenarios, the focus of clinical care shifts insidiously from the patient (this 74 year old woman) to the population subgroup (women aged 70 to 75) and from ends (what is the goal of investigation or treatment in this patient?) to means (how can we ensure that everyone in a defined denominator population is taking statins?)”
