Back on target

My post On Targets sparked off a hugely rich discussion in my Systems Thinking in Practice LinkedIn group. It is difficult to summarize such rich conversations, so I hope I do them justice in eight points.

1. Publication targets are suitable for systems thinking. One thing I was wondering in my original post was whether I was overcomplicating things. One of the first exercises we learner systems thinkers ever did was to consider whether situations were suitable for systems thinking (alleviating poverty – yes; buying a chocolate bar – no). I am concerned that, having a systems thinking “hammer”, I see every situation as a “nail”, if you see what I mean. The reflections that emerged convinced me that publication targets are part of a complex, not a simple, situation, and so systems thinking is a good approach.

2. Does publishing peer-reviewed journal articles address the right problem? The problems we are addressing at work are threefold: alleviating poverty, improving food security and protecting the environment. Published research in these fields is greatly needed but is pretty distant along any kind of “impact pathway”. What is the nature of the journal you publish in? Donald Schön, the reflective systems thinker, wrote about the “high ground” of theoretical research as contrasted with the “swamp” of real human situations. Indexes which measure the impact of journals are not, to my knowledge, based on the impact journals actually have on the ground, but on their impact within a peer group of scientists. Which journal are you publishing in? Who reads it?

3. Skewing the system? A work friend read my blog and said “What about fundraising? Maybe we should have targets for fundraising?” Indeed. Some of the scientists are brilliant fundraisers and maybe publish less. Some have a talent for leading and managing vibrant, productive teams (and as a result might put themselves as second author to let some young scientist get her first paper out). In brief, what constitutes good performance is complex. It also depends where you are in your project cycle (how much can you publish if all your projects are new?). The thrust in my organization at the moment seems to be on the importance of relationships – internal, external, with people who can use our research, with people who will invest in us to do it. How do publication targets help with that?

4. The Golden Thread. My correspondents from the UK made me aware of this concept gaining good currency over there. It seems to be the idea of achieving alignment between our personal goals and our organization’s goals. From this perspective, they said, “Well, what is wrong with that? If the scientists are happy and it supports the aims of the organization, it is a win-win situation.” Indeed, the scientists on my train (n=2) were perfectly happy with the idea of this publication target and quite surprised by my reaction. But I happen to know that publishing academic research is their “happy place”. Is it enough?

5. The difference between targets and indicators. Peer-reviewed papers could be a way of finding out “How do we know if we are doing well?”. I have to confess that I don’t know what other targets are being proposed; so far I have only heard of this one. Perhaps if there are other ‘targets’ which focus on other indications that we are doing well, and we relabel them as indicators rather than targets and try to lift the vision of the system up a level, then we might be onto something? As Stephen Cavender said: sound, innovative research will probably lead to more publications; the reverse is probably not true.

6. Parliamentary select committee on peer review. I discovered (h/t John Holland) that there has been a parliamentary select committee on peer review in the UK. Well, fancy that! It concluded:

“We therefore have concerns about the use of journal Impact Factor as a proxy measure for the quality of individual articles. While we have been assured by research funders that they do not use this as a proxy measure for the quality of research or of individual articles, representatives of research institutions have suggested that publication in a high-impact journal is still an important consideration when assessing individuals for career progression.”

I found this via a rather controversial blog post about publishing targets for academia.

7. Arbitrary and pointless. Russell Plummer noted the arbitrary nature of targets. Two publications. Two, not three or one. What is that based on? If you publish two, can you have the rest of the year off? There is a systems thinking police officer – can you believe it? – who has been collecting examples of bad performance measurement. I wonder what he would say about academic publication targets. Whatever. He is very, very funny. http://inspguilfoyle.wordpress.com/2012/10/18/bad-performance-measurement-on-tour-1/

8. Alternatives? So what could we do instead? I was pointed towards a paper on the difference between a “metrics-driven approach” and a “theory-driven approach” (h/t Helen Wilding):

“The importance of the theory-driven approach for the analysis of collaborative public policy is that it proceeds from the causalities that connect different purposes to intended outcomes, so that the assessment or measurement of performance involves operationalizing a particular theoretical position (Skelcher 2008). This is the converse of the metric-driven approach to performance measurement widely used in the public sector. Here, the focus is on identifying what can be measured, typically employing quantitative indicators, and then working backwards into the question of causal attribution. As a result, the measures cannot be assumed to reflect the causal theory behind policy designers, and neither are the underlying assumptions or limitations of the set of measures explicit.”

I’m not 100% sure what this is talking about. But it feels like a keeper.

8. Alternatives 2. A second alternative is to follow what Vanguard do. (Vanguard is “an organisation, which helps service organisations change from a ‘command-and-control’ design to a ‘systems’ design”). They invite people in the situation to engage, learn, appreciate systemic complexity and agree purpose. On the back of this, measures of performance can be agreed – but these are very different to targets, even though at another systemic level the measures may contribute to the achievement of a higher-level purpose (h/t Ray Ison).

One comment

  1. Hi,

    Glad you think I’m funny. 😉

    Anyway, being as you were pondering what I’d say on the subject, here are my thoughts…

    I’m going to use the Purpose-Measures-Method model to frame my points.

    When attempting to assess the impact of a particular journal article (or anything else for that matter), it is necessary to first understand its purpose, i.e. what is it there to achieve? This could be to solve an operational or societal problem (e.g. alleviating poverty), to raise the profile of an issue, to spark debate, to add to existing knowledge, to influence policy, to initiate further research and so on. Therefore, it could be argued that a research paper intended to influence policy in a developing country has not achieved its purpose if it is only read by a handful of scholars in the UK. Conversely, a paper intended to provoke discussion amongst academics in a specialist field would achieve its purpose if it did exactly that.

    Purpose can be a broad aim or goal. Deming said ‘A system must have an aim’. It’s therefore important to disentangle the term ‘target’ (as in ‘numerical target’) from the notion of ‘purpose’. Often the distinction is nothing more than people’s use of language, but it makes things clearer for me. Purpose is essential; targets are bad.

    Then we come to measures. I find it helpful to make the distinction between measures and targets. Measures are necessary for understanding how a system performs. As long as the measures are derived from purpose, it is absolutely proper to measure performance, so as to identify opportunities for improvements. Targets, on the other hand, tell you nothing about how a system is performing, plus they invite gaming and other horrible side-effects. In short:

    1. All numerical targets are arbitrary.
    2. No numerical target is immune from causing dysfunctional behaviour.

    Therefore, in my opinion, numerical targets for the number of journal articles published per year are simply arbitrary and ridiculous.

    Finally, once you know what the purpose of producing an article is, and have devised useful measures that tell you whether it achieves that purpose, you can go ahead and initiate a method for attaining your aim.

    Numerical targets not only fail to achieve this, but guarantee distortions and constrain the successful attainment of purpose.

    Take that from my systems thinking hammer!
