TL;DR: It is impossible for organisations to “demonstrate their impact” if they work in complex environments. Asking them to do so requires them to create a fantasy version of the story of their work, and this corruption of data makes genuine change work harder, because it is difficult to learn and adapt from corrupted data.
Everyone who works to make public service operate more effectively wants to impact the world. Having an impact is a great thing. It’s an important North Star for a person, team or organisation. What we do should make a difference in the world.
However, too often, the way that the concept of “impact” is used by organisations gets in the way of the real-world change that people are trying to create. In previous work, we have assembled evidence showing that using outcomes as performance management tools doesn’t work, and this is similar territory. The idea that we can measure our impact to understand how well we are performing is seductive, but fundamentally flawed.
Facing an uncomfortable truth — in complex environments, it is impossible to “demonstrate your impact”
Using “impact” for any form of performance management makes it much more difficult for organisations to do impactful work. This is the uncomfortable truth that leaders must face if genuine impact is to be created in the world.
This truth is uncomfortable because it would be really, really handy for managers and leaders if we could use impact as a performance management tool. It would make the job of managing public service (or any form of social action) much, much easier. This is why it is such a seductive idea.
However, part of the responsibility of leadership is facing uncomfortable, difficult truths. So, let’s examine the evidence around “demonstrating impact”.
Demonstrate “your” impact
Many teams or organisations seek to demonstrate “their” impact — the difference their work makes in the world. They are often asked to do this by those who fund the work. Sometimes they do it because they want to help their staff see the difference that they make.
There’s only one problem with this. Almost all useful social change is achieved as part of a complex system. In other words, your work is a small part of a much larger web of entangled and interdependent activity and social forces.
The systems map of obesity illustrates this perfectly: it shows all the factors contributing to people being obese (or not), and all the relationships between those factors.
This is the reality of trying to make impact in the world: your actions are part of a web of relationships, most of which are beyond your control, many of which are beyond your influence, and quite a few of which will be completely invisible to you.
All of these things combine with your actions to create impact in the world. Let’s work this example through using the obesity systems map. Say that you’re one of the people operating in the bottom right corner of this system — you’re providing “healthcare and treatment options” to address obesity. Let’s say you’re delivering weight loss programmes in neighbourhoods. How would you distinguish the impact of your weight loss programme from the influence of all the other factors in this system?
Short answer — you can’t. Someone on your programme sees a film that changes their perspective on the meals they cook. Someone on your programme changes jobs, to a place with a canteen where they only serve healthy options. Someone is made redundant, so they can’t afford to buy organic food. What was the impact of your programme in these situations?
This reveals a fundamental truth about the nature of complex systems: in a complex system, it is impossible to distinguish the effect of particular actors on the overall pattern. This is because complex systems produce emergent, nonlinear behaviour, in which the tiniest change in input variables can create huge changes in results. Consequently, you can’t produce a reliable counterfactual in a complex system (you can’t say what would have happened if X wasn’t present). And if you can’t produce a reliable counterfactual, then you cannot reliably identify the impact of your activity.
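To make the nonlinearity claim concrete, here is a minimal, purely illustrative sketch in Python. The logistic map below is a hypothetical stand-in for a complex system (it is not a model of any real programme or outcome): it simply shows how two runs that differ by one part in a million at the start soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in a simple nonlinear system
# (the logistic map). A hypothetical stand-in for a complex social system,
# not a model of any real programme.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x), starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two "worlds" that differ by one part in a million at the start:
# the system with, and without, a tiny intervention.
world_a = logistic_trajectory(0.400000)
world_b = logistic_trajectory(0.400001)

for step in (0, 10, 20, 30, 40, 50):
    gap = abs(world_a[step] - world_b[step])
    print(f"step {step:>2}: A={world_a[step]:.6f}  B={world_b[step]:.6f}  gap={gap:.6f}")
```

Run this and, by around step 30, the gap is typically as large as the values themselves: the two worlds have completely decorrelated. This is the counterfactual problem in miniature. If “the world without your intervention” diverges this quickly from any tiny difference, it cannot be reconstructed, and so the difference your intervention made cannot be isolated.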
Contribution analysis — doesn’t really solve the problem
It is increasingly recognised that it is impossible to reliably attribute impact to a particular intervention in a complex system. This is why you will increasingly hear people say, “we’re interested in contribution, and not attribution”.
At this point, most people nod and think — “yeah, they get it”.
But what does “contribution not attribution” mean in practice? (Next time you hear someone say it, why not ask them what they mean?) For the switch to be meaningful, the question has to become: does X contribute to change as part of a complex system?
But that’s not the question that (for example) Contribution Analysis asks. It asks, “how does X contribute to the desired outcome?”
“The CA approach we examine here adds to existing (although not always well implemented) Theory of Change practice by ‘zooming in’ on critical causal links in the impact pathways in order to assess how an intervention contributes (or fails to contribute) to change” (ODI, 2020, p. 4)
This has a number of problems from a complexity perspective:
- First, any work that relies on predetermined “critical causal links” isn’t really taking complexity seriously. It takes causal links identified in one context and applies them to another, without examining how changes in the context affect the appropriateness of those links. Consequently, it is also blind to emergence, which is the foundation of causation in complex systems.
- Second, using “critical causal links in the impact pathways in order to assess how an intervention contributes (or fails to contribute) to change” sounds an awful lot like seeking attributable change. How is that sentence different from “we are seeking to attribute the extent to which X made Y happen”? How, exactly, is this different from attribution?
This is not to say that Contribution Analysis has no place. It has been demonstrably useful for enabling adaptation in programme delivery. But, as discussed above, it seems to be useful when its purpose is starting dialogue rather than drawing conclusions. The ODI paper quoted above makes this very clear when it says that Contribution Analysis (and Theory of Change) must be separated from contract management to be effective. In short, you can’t use Contribution Analysis for performance management.
Impact isn’t “delivered”
If we want to achieve impact in the world, the crucial uncomfortable truth that must be faced (from the perspective of traditional management thinking) is that impact isn’t something that can be “delivered”. In fact, the whole “delivery” mindset is damaging to creating impact in the real world.
We have been encouraged to believe a fantasy: that we can “deliver” impact through a linear planning process (like the programme logic model above). This appeals to us for many reasons: it makes us feel more in control of the world than we actually are, and it creates a comforting illusion of certainty. It is fine to seek comfort. It is not fine to pretend that things work in one way when they absolutely do not. And looking at the actual evidence on how outcomes are made (like the systems map of obesity), we can see that this kind of programme logic model does not accurately or robustly portray how outcomes are really made (unless the logic model goes to a very high level of abstraction).
When we think of impact as something we can “deliver”, we are pretending, in order to make the task of managing social change easier. And the purpose of good management is not to make the task of management easier; it is to confront the uncomfortable messiness of how the world actually works. If we care about making impact in the real world, we need to stop pretending.
Why does this matter?
This kind of pretending matters because it makes the work of achieving real change in the world harder to do.
At its most benign, it wastes everybody’s time attempting the impossible: time spent “demonstrating your impact” or creating linear programme logic models is essentially time spent inventing a fantasy. (But that’s OK, because everyone working in public service/social change has got loads of time on their hands, right?)
But the truly pernicious aspect of “demonstrating your impact” or linear programme planning is when it is used for accountability or governance purposes. When people, teams or organisations are rewarded for demonstrating impact — when funding is given to those who can ‘prove’ their impact, or contracts are awarded on this basis, or promotions and pay rises are secured in this way — it corrupts the information we need to improve how things are working.
We know this is the case because it is what the evidence tells us happens (overwhelmingly, unarguably) when we seek to use “impact” (or outcomes) for accountability, governance or performance management purposes. If you’re interested in reading more, here are just a few of the key pieces of research in this area (here, here, here and here). The key point is summed up in Campbell’s Law:
“The more any quantitative (and some qualitative) social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” (Campbell, 1976)
When people/organisations are asked to provide data which demonstrates their impact, they create a fantasy version. This is obviously terrible — because if we’re not telling each other the truth about the work we do, how does any of this work get any better?
And to be 100% clear: the people and organisations who create these fantasies are not doing so because they’re bad people. It’s because they’re being asked daft questions, to which the only sensible response is to craft good-looking fantasies. This systematic fantasy-making is the responsibility of those asking the questions, not those providing the answers.
What can we do about it?
The great thing about this problem is that it has a really straightforward solution, one that will make a huge difference to how effective social change and public service work is. And it is entirely within our capacity to do.
If you’re responsible for resources for public service or social action (that is, if you make decisions on how resources are allocated), stop asking people and organisations to demonstrate their impact.
It makes things worse. Please stop now. Like right now: if you’ve got a policy or strategy like this — tear it up. If you’re writing this into how you work — step away from the keyboard.
Philanthropists and other charitable funders have a significant role to play in making this shift, as funders in this position can often move more quickly to change how they fund activity.
That is simple. But a very reasonable question comes rapidly into view: if we’re not asking organisations to demonstrate their impact, how can we create accountability for spending resources well?
This is a great question. Fortunately, it also has a couple of straightforward answers.
- Remember: asking people and organisations to demonstrate their impact doesn’t create accountability; it creates a bunch of fantasy data. So we don’t have accountability right now, we have the performance of accountability. Accountability is great, so let’s create some.
- Ask people and organisations to be accountable for experimenting and learning together, collaboratively. Create accountability for enabling healthy systems, which are how positive outcomes are actually achieved in the real world. If you don’t know how to do this, there’s lots of good advice here and here, and lots of examples of organisations doing this in practice here.
The key shift is to move from funding for “demonstrable” impact (because this, paradoxically, makes real impact harder to achieve) to funding for collaborative learning and adaptation. The evidence shows that this is how real impact is made.
And if all this sounds like the right way to go, but you’d like some help with thinking through how you do it, please do get in touch.