The use and abuse of measurement

There seems to be a bit of a thing going on this week about terms like measurement, targets, payment by results and outcomes.  It has been going on for a while in conversations I have had (both face to face and online), and a number of systems bloggers are writing about it, but it all seems to be getting busier this weekend… it seems to be coming to a head.

So to start with I’ll mention all the activity that has prompted me to turn to the keyboard and add to the conversation – or, if not adding to it, then at least summarising where my own thinking has got to.

So here we go:

IncognitoInRome – blogged a post, On Targets, a little while ago.  She shared it in my OU Systems Thinking in Practice alumni LinkedIn group.  This led to a great thread of conversation – and two further posts by IncognitoInRome called Back on Target and Target Alternatives.

InspGuilfoyle – blogged on Spot the difference a couple of days ago.  This provides a good reminder of the difference between Targets (bad) and Measurement (good).

Toby Lowe – a friend of mine, referred to as T in a post I did in December 2010 on A systems perspective on health and wellbeing – has been following through on his exploration of the problem with outcomes and payment by results.  I’ve met up with him a few times during this process, and when I was doing my MSc research we swapped a few papers.  Toby has now written up his investigations and is expecting them to be published in a journal soon – I’ve been fortunate enough to read a couple of drafts.  In the meantime, however, on Friday he managed to get an article into the Guardian on the idiocy of payment by results.

I then shared that article with other OU STiP alumni, which then led to a post on Wouldn’t it be nice if called If not targets, then what.

So as I was out on my run this morning (target distance 10km, but I got home to find only 9.31km achieved, so failed!) I was thinking about this issue of measurements, targets and outcomes.  I remembered that I had written on this at least twice since the post mentioned above that was inspired by my conversation with Toby – a post on Outcomes in October 2011 and a post on The social technology of (performance) measurement in June 2011.

But here I am again, with my mind whirring about it all.  Maybe it’s because the more I think about it, the more disturbed I get that this is not going away – that it is so much part of ‘how we do things’ that we can’t imagine a world of ‘proper’ management without measurement and targets.  And yet it skews so much of what we do and what we understand about what we achieve.  These are my latest thought streams.

Why do we feel we have to measure everything?

The process of measuring attaches a value (number) to something we think is valuable or important.  The number is not the thing we value – it represents it, it models it – it is the map, not the territory.  All the numbers do is give us an insight into the thing we think is important.

I value wellbeing – it is important to me personally and professionally, and I think that it is a fundamental human right to be able to live your life with positive wellbeing and good health.  So I value it – but should I measure it?  There was a debate a while ago on the measurement of wellbeing in the Guardian:

Can happiness be measured?

Can we measure wellbeing scientifically? Economist Richard Layard, supporter of the new national happiness index, believes we can; philosopher Julian Baggini is having none of it

Baggini’s last point in the exchange is great – we value the love we have for our spouse, but we don’t attempt to measure it.  Measuring love takes away the essence of what love is.  Baggini thinks the same is true of concepts like wellbeing – our attempt to measure it, actually changes what we understand it to be.  To some extent that has happened with the concept of wellbeing – it is actually a multi-dimensional concept – but the ONS work on the national wellbeing index has led people to equate it with happiness or emotional wellbeing.

Nevertheless, in my work I am involved in the ‘measurement’ of wellbeing in all its dimensions (emotional, social, physical, financial) and in the ‘measurement’ of the factors that affect people’s wellbeing and health.  I am involved in work to pull together ways of representing all the numbers so as to facilitate and enable policy making and service planning.  It is intended to give people an insight into the territory in which they are working, but not everyone reads it that way – they want to focus on what is important or what they think they can do something about.  We’ve used lots of source data to pull the picture together, but the picture is long, so people want to rationalise it – it’s too much for people to cope with.  The problem is that the more you take away, the more richness you lose; the more you reduce the complexity, the more skewed the picture becomes.  The more it gets reduced to a high-level number (like life expectancy, disability-free life expectancy or the answer to a happiness question), the less people can see how their work – what they do – can contribute to making a difference.

How do we make sense of a measurement?

A single number is nothing in isolation.  If I run my 9.31km, is that good?  It may look fab to someone who doesn’t run, but a marathon runner may not be so impressed.  It could be good for me if I’ve never run that far before, or it could be a failure.  So context is everything, as is the perception of the person making an evaluative judgement.

Context tends to be given to numbers about wellbeing in one of two ways – a comparison to other places, or a comparison to what it was before (trend data).  Neither of these tells us whether wellbeing is present or not – just whether it is better or worse than another place, or better or worse over time.  Some trends can only be seen over years – we all know that life expectancy has gone up over the years, but it isn’t always the case that it has gone up every year; the graph wobbles and blips.  Data about service activity needs context too – it is no good saying 200 people have used a service for carers without also acknowledging the size of the target population.  If there are only 200 carers, then fine; if there are 2,000, it makes you question the performance.
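
To make that carers example concrete, here is a minimal sketch in Python (using the made-up figures from the paragraph above – the counts are illustrative, not real data) showing how the same activity number reads very differently once it sits next to the size of the target population:

# Hypothetical figures: the same activity count set against two
# possible target populations gives very different pictures.
people_served = 200

for target_population in (200, 2000):
    coverage = people_served / target_population
    print(f"served {people_served} of {target_population} carers "
          f"= {coverage:.0%} coverage")

# served 200 of 200 carers = 100% coverage
# served 200 of 2000 carers = 10% coverage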

There is another thing too – a lot of measures taken at population level (like life expectancy or satisfaction with housing) hide variation, because they are averages.  Increasingly we are finding that even when an average goes up, hidden behind it is a growing ‘gap’ between the ‘best’ and the ‘worst’.  The King’s Fund, for example, looked at lifestyle data and found that on average the population of England is having a healthier lifestyle, but behind the headline good news is a growing split, with unhealthy behaviours clustering in some sub-sets of the population.  We have the same issue with economic growth – GDP can go up, but if the benefit is only felt by those who already have money, then the economic wellbeing of the vast bulk of the population hasn’t necessarily changed – and comparative economic wellbeing has got worse.
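
A toy illustration of that point about averages – a sketch with invented scores and equal-sized groups, not real data – shows how the overall average can rise while the gap between the ‘best’ and ‘worst’ groups widens:

# Invented wellbeing scores for two sub-groups at two points in time.
# With equal-sized groups, the population average is the mean of the two.
before = {"best_off": 70, "worst_off": 50}
after = {"best_off": 85, "worst_off": 55}

def summarise(label, groups):
    average = sum(groups.values()) / len(groups)
    gap = groups["best_off"] - groups["worst_off"]
    print(f"{label}: average = {average:.0f}, gap = {gap}")

summarise("before", before)  # before: average = 60, gap = 20
summarise("after", after)    # after: average = 70, gap = 30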

The evaluative dimension – is this performance good?

So far I have talked about measurement in terms of how it informs – helps people gain insight into what is going on so that they can look forward and think about what improvements to make.

The thing is, the same numbers are often used to look back – to answer the questions: how have we done?  Is our performance good enough?

Here we get into the practice of valuing and evaluation, which I touched on before in this blog in December 2010.  Evaluation can be formative – used to support the learning of those involved in the activity (this is the frame in which Vanguard tends to talk about measurement) – or it can be summative – used to account for performance to someone else (e.g. a funder, elected members, the tax-payer, your management).  It is through the summative lens that performance management rears its head, with tools such as outcomes based accountability.  We stop seeing the multiple dimensions of wellbeing and start talking of ‘outcomes’ instead, which distorts the map of the territory even more and makes us forget we are talking about people and their lives.

Value arises in a social dynamic – it is the eye of the beholder that decides whether something is good enough; there is nothing ‘objective’ about it at all.  If someone feels unsafe when out at night, they won’t see any value in being told that crime levels are down – they may even claim they are being lied to.  If the numbers jar with your own experience, you are likely to doubt them, regardless of how much the statisticians talk of statistical validity.  (Also, the reason they feel unsafe may have nothing to do with crime levels – crime is just one of our proxy measures for how unsafe an area is; people may feel unsafe because they can’t see the pavement well enough and worry about falling.)

I think that Toby’s point in the Guardian article (link above) is important.  At a service level, we’ve been so caught up in collecting data for summative evaluation that it actually gets in the way of doing the job properly.  We spend so much time filling in forms and templates to help count or report what we do that it has become a whole industry in itself.  Not only does this skew practice at the front line and potentially make us less effective – it COSTS LOADS OF MONEY too, in collecting, processing and reporting measurements of what has been going on.  Jobs and departments and careers are built on this industry of collecting, processing and reporting, to the point where it becomes self-perpetuating.  And the more data there is, the more questions people ask – it doesn’t satisfy people, it just ups the curiosity.

The claim game and the blame game

In some ways I am now going back to the original conversation I had with Toby which inspired the post I linked to above.  Wellbeing, health, community safety, social justice – and all those other policy issues – are complex facets of our society.  Each policy issue can in itself be thought of as a different ‘lens’ on society and how well it is doing.

Regardless of which ‘lens’ we use, we can see the situation as characterised by interdependencies – an intervention in one place sets off ripples that we can’t predict or model in advance.  This also means it is very, very difficult to attribute a positive or negative change to a particular intervention (or to the lack of one).  As discussed in my post on Outcomes, value and benefit arise as the result of interactions of many activities happening over time; they can’t be ‘delivered’.

Nevertheless, we still ‘claim’ or ‘blame’….

“People feel unsafe, crime must have gone up, tell the police to work harder”.

“People feel safer, must be because we put new street lights in”.

“John receives services from 10 agencies, he has got better, it was us that did it”

“John isn’t getting better, must be because the doctors aren’t trying hard enough, what we’re doing is fine”

If we accept that a situation is characterised by interdependencies, then we have to re-think attribution.  Attribution – that doing x has led to y (or that not doing x has led to y) – is based on deterministic linear cause-effect thinking (if I give this medication, blood pressure will go down) or, potentially, probabilistic cause-effect thinking (if he gives up smoking, the probabilities say he is less likely to get lung cancer).  Not all causalities can be ‘scientifically determined’ in this way; to make a claim like this you have to ‘prove’ that there have been no interfering factors.  That’s why medicine likes double-blind controlled experiments – but you don’t have that luxury in doing policy.  Every step is an experiment – a real live one that affects people’s lives, positively or negatively.  That’s why we need to watch closely, look for interdependencies, and inquire and learn continuously.  The blame game and the claim game make us blinkered; they stop us inquiring, stop us learning.  And only using measures (numbers) as the insights into what is happening gives us too poor a picture to learn from.

Using measures to motivate people to do well and work hard

This is where we think target setting helps.  If we set a target, then people have a goal; they know what success looks like.  It motivates them.  If we don’t pay them unless they reach that goal, they will be even more incentivised.  As Toby and InspGuilfoyle describe so well, this is a bad mentality.  It skews work, it skews learning, it skews what gets focussed on.  We can hit a target, but miss the point.

None of this seems to fit with the rise of asset based practice either – if you ‘miss’ a target, you end up concentrating on what went wrong (e.g. if only I’d turned left rather than right at that junction, I would have done the extra 700m to achieve my 10km), rather than what went well (e.g. I did 9.31km – that’s pretty good given I haven’t been able to run much whilst there was snow and ice on the ground, and I survived it well).  Missing targets makes you think of problems and what went wrong (why did we help 10 people, not 20?) – it blinkers the learning opportunity (what can we learn about the way that we helped these 10 people, so that we are better placed to help more people in the future?).  Deficit based management practices can be as demoralising and constraining to the growth of organisations as deficit based community work can be to the growth of communities.

We don’t need targets or numerical measurements to act purposefully and be motivated to achieve things.  In my experience, acting purposefully and being motivated from within leads to better quality.  A sense of purpose comes from really understanding what you are there for, and if we spent as much time working with staff on that shared understanding, shared purpose and high quality knowledge as we do on collecting and presenting the numbers, then we would get somewhere.  As I quoted in my last post –

“Knowledge is information combined with understanding and capability: it lives in the minds of people . . . Knowledge guides action, whereas information and data can merely inform or confuse” (Groff and Jones 2003)

Numbers can help as part of a broader understanding of a system – as Wouldn’t it be nice if… mentions (link above), both the Viable System Model and Critical Systems Heuristics offer us different ways of using measurement as part of our systems thinking.

So where has the rambling taken me…

I do think measurement plays a part in what we do.  But we need to really understand what we are doing when we measure.  Is it for learning?  Is it for summative evaluation?  Is it for looking ahead?

We need to remember that the measures are only a model, only insights into what is happening.  Better knowledge about something doesn’t necessarily come from having more data.  It can come from depth of understanding and quality of conversations.

We need to remember there is nothing objective about measurement.  Choices are made about what is the best measurement or the best proxy, and the evaluation of whether performance is good enough is a judgement made by a person.  Those choices are formed through our ideologies and what we value.  In a radio programme, Marmot said something like “arguments about evidence are often arguments about ideologies” – so too, arguments about measurements are arguments about what we choose to value, how we choose to measure it, and how we choose to use the measures as ‘tools’ in helping us do what we do.  They can be instruments of learning or instruments of control.

So the next time someone touts the next greatest performance measurement/management tool – caveat emptor, buyer beware – I’ll go in, learn about the assumptions that underpin it and learn from its good bits, but I won’t treat it as a panacea that distracts me from the hard work of learning and understanding.

5 thoughts on “The use and abuse of measurement”

  1. Good post – You highlight some really important points, such as the importance of context. As Donald Wheeler points out:

    “Data have no meaning apart from their context”

    Also, the difficulties in deciding what and how to measure, along with the efficacy of measures at different levels, are often a major obstacle to understanding data from a systems perspective. Another good quote is this one from Einstein:

    “Not everything that counts can be counted, and not everything that can be counted counts”.

    Keep up the interest and remember that targets are always bad. 😉

  2. Pingback: Management based on outcomes makes good people do the wrong thing!? | In Defence of Youth Work

  3. Really enjoyed this post, which speaks to dilemmas we have been facing in youth work across the last decade and more. In our case the imposition of prescribed outcomes on the unruly world of informal education with young people, which we describe as ‘volatile and voluntary, creative and collective – an association and conversation without guarantees’. I’ve done a short post on our site drawing our supporters’ attention to this piece and Toby Lowe’s article in the Guardian. I’m in the midst of writing a critique of the Young Foundation’s ‘Framework of Outcomes for Young People’, a model shaped profoundly by neo-liberal ideology.

    Thanks again for the stimulus.

    • Hi Tony
      So glad you found it helpful – it is amazing how many people seem to be exploring this dilemma when you look into it. I can’t remember who said ‘change happens one conversation at a time’ – maybe the fact that more and more people are having conversations about it (albeit virtual ones) means that change will emerge. Wonder how we’ll measure that outcome!

      Helen

  4. Just a note to self really.

    ‘Virtual’ conversations on this issue are continuing. The LinkedIn Systems thinking in practice alumni group of which I am part has had a thread going for a while now.

    It’s interesting that we’ve shifted from the world and ‘speak’ of ‘performance management’ and ‘payment by results’ etc. to seek lessons and insights from the world of ‘evaluation’. It seems that the world of ‘evaluation’ has developed in a direction that has more synergies with systems thinking, with learning, with reflexivity and with praxis. This is a more comfortable place than performance management, with its top-down, command-and-control connotations.

    So – some ideas and additions I don’t want to lose…
    Developmental evaluation – http://www.mcconnellfoundation.ca/en/resources/publication/a-developmental-evaluation-primer – which is billed as a ‘third’ way of thinking about evaluation (in addition to summative and formative above) and is good for situations of complexity or early social innovations.

    Integrative monitoring – http://www.inprogressweb.com/downloads/inProgress_Monitoring_Manual_v1.pdf – a way of thinking about monitoring that seems to balance the upward accountability (to funders or managers), the ‘downward’ accountability to beneficiaries and the need for those directly involved to be learning and reflecting.

    And finally (for now) a whole website – http://betterevaluation.org/ – seeking to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches.
