The social technology of (performance) measurement


The other day I was in a relatively informal meeting with four others when I found myself in a rant about ‘measuring’ – outcome indicators, performance measures, targets – whatever you want to call them, in essence they are all attempts to describe something in numerical terms.

The trigger for my rant was someone repeating the recent corporate line – that we needed to agree on a small number of measures to be used as Outcome Indicators (means of quantifying the achievement of an outcome) for our ‘priority’ outcomes – in fact we are to agree on precisely three measures for each priority outcome.

My rant started with the words “But why?” and my argument fell into two broad areas.

Firstly, the notion that we should only measure our ‘priorities’.  How do we know they are our priorities if we aren’t keeping track of other issues too – we’d have no way of comparing or checking what is going on elsewhere.  I used the example of the oil light on the car dashboard: most of the time it is useless, because topping up the oil is not a priority, but when the oil light comes on there is a problem and it becomes a priority.  The oil light is there as a way of alerting us to a change in priorities.  In a similar way, we need a whole ‘dashboard’ of ways of measuring the wellbeing and health of people in Newcastle – and we need to keep track of trends etc. in things that are not presently our priorities so that we can re-assess our prioritisation as and when needed.  Even traditional performance management tools such as the Balanced Scorecard advocate this sort of approach.

Secondly, the idea that we should artificially restrict the measures we use.  I went on about the fact that each measure reveals some things and obscures others, so we actually need to learn to deal with a wide range of measures and use each of the ‘keyholes’ they give us on the world.  If we limit the number of keyholes then we limit what we can understand.

The rant ended up with me saying: the world is messy, let’s enjoy its messiness rather than trying to neaten it up with numbers.

The others nodded a lot.  One said ‘okay, you have convinced me’ and then the conversation moved on to something else.

I think I surprised myself more than anything.

I have been part of some work to ‘organise’ our whole tranche of different measures so they are slightly more manageable.  It’s a bit of a tedious task and quite detailed, so I flit in and out of it.  Most of the time I wish we could rationalise them – reduce all that diversity of numbers down to a simple list and stop having to juggle so many different measures and so many different perspectives on which measures are ‘the best to use’.

However, the main value I have got from the work has been talking to those who know the relevant measures and learning about their strengths and weaknesses – what they reveal and what they obscure.  Most of the time people use measures without any understanding of how they are derived or of their strengths and weaknesses.  We go around citing a lot of numbers in a way that is detached from what they were developed to describe.  Ultimately, even ‘traditional’ management decision-making models would tell us this leads to bad decisions.  In a systems world, I’d say it could lead to systemic failure and unintended consequences.

In the world of systems thinking, there are warnings about the technology of measurement.  The John Seddon quote etched into my brain is ‘people do what is counted not what counts, targets make things worse’.  There are stories about the unintended consequences of measurement – like the game playing that occurs to manage hospital waiting lists or A&E wait times.  And even today I was looking at my latest book purchase, ‘Growing wings on the way‘, and reading Rosalind Armson’s explanation of performance measurement being ‘reductionist’ (pages 38-39).  In spite of all this, however, no one ever seems to say we should do away with measurement altogether.

So, I now have this urge to draw up “10 ‘notes to self’ on using measures as a systems thinker”.  I have no idea if there will be 10 or not, but it is a useful target!

1) Use the Viable System Model to help understand what I am measuring.  Both performance measures (System 3 relating to System 1) and ‘intelligence’ (System 4 relating to the environment) have to be in line with the level of recursion (system/sub-system/sub-sub-system etc.).

2) Don’t use a single value in isolation – understand it in its own context, such as a trend line over time or a comparison between different areas.

3) Learn how to work with ‘suites’ of measures on a particular topic – rather than selecting a single value.  This is particularly true of public health data, which gives ‘insight’ into the health of a population.  Treat each measure as a different perspective worth engaging with – rather than trying to rank which one is ‘best’.

4) Only use a measure when I have some understanding of how it was derived – so I can be aware of the assumptions built into it – and the possibilities and constraints it offers.

5) Avoid setting targets at all costs – if I have to, then try to get away with general trajectories for trends rather than set-point targets.

6) Be prepared to look beyond a ‘headline’ figure – averages and totals hide variation, and inequalities.  It’s very reductionist, but ‘drill down’ (there’s a rough sketch of what I mean after this list).

7) Never rely entirely on measures to understand the world – use systems devices as part of my inquiry too.

Only 7, oh well.
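Since notes 2 and 6 are the ones I keep coming back to, here is a minimal sketch of what I mean – all the ward names and figures below are invented purely for illustration, not real Newcastle data.  Two imaginary cities end up with the same headline figure; only the direction of travel over time and the breakdown by ward show that one is improving across the board while the other is widening its inequalities.

```python
# A rough illustration of notes 2 and 6: the same 'headline' average can hide
# very different trends and very different variation between areas.
# All names and figures here are invented for the sake of the example.

from statistics import mean

# Hypothetical wards and a measure (say, % of residents reporting good health)
# over five years. Both 'cities' end up with the same headline average.
city_a = {
    "Ward 1": [68, 69, 70, 71, 72],
    "Ward 2": [66, 67, 68, 69, 70],
    "Ward 3": [64, 65, 66, 67, 68],
}
city_b = {
    "Ward 1": [85, 86, 87, 88, 89],
    "Ward 2": [70, 70, 70, 70, 70],
    "Ward 3": [55, 53, 53, 52, 51],
}

def describe(name, wards):
    latest = [series[-1] for series in wards.values()]
    earliest = [series[0] for series in wards.values()]
    headline = mean(latest)
    change = headline - mean(earliest)   # note 2: look at the trend, not one value
    spread = max(latest) - min(latest)   # note 6: how much the average hides
    print(f"{name}: headline {headline:.1f}, "
          f"5-year change {change:+.1f}, "
          f"gap between best and worst ward {spread}")
    for ward, series in wards.items():
        print(f"  {ward}: {series[0]} -> {series[-1]}")

describe("City A", city_a)
describe("City B", city_b)
```

Running it prints identical headlines (70.0 in both cases) but very different stories underneath – City A has improved everywhere, City B has a 38-point gap between its best and worst wards.  Which is exactly why a single number, on its own, tells me almost nothing.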


