Co-Missioning: moving on from the idea that outcomes can be “delivered”
Sam Magne’s well-researched and well-argued paper on the pursuit of outcomes should finally help us move on from conversations around Payment by Results (PbR) and Social Impact Bonds (SIBs). Her piece highlights the “internal logics” of the cases for PbRs and SIBs, and the “commissioning constraints” which prevent those logics from being enacted in the “real world”.
In this piece, we make explicit a conclusion that Sam approaches implicitly: PbR/SIBs fail in the real world because their “internal logics” (the premises that underpin them) are flawed.
There are two critical flaws in the logics of PbR/SIBs which we wish to highlight. In order for PbR/SIBs to make sense, both of these things need to be true:
- Outcomes can be “delivered” by teams, organisations, projects or programmes
- Outcome-Based Performance Management — using outcome targets to manage the performance of people, organisations, projects or programmes — supports the creation of positive outcomes in the lives of people
As we will explore, the evidence is clear on both points. Neither of them is true. Sam’s piece helps us see that PbR and SIBs are stuck in their flawed theory. They are hamstrung by their inability to respond to the complex reality of how outcomes are created. As such, they create convoluted solutions to problems that are artefacts of their worldview. Once we escape from the prison of their worldview, we can liberate public service to support the creation of real outcomes in the world.
Finally, to help us move on from these conversations, Sam’s piece also helpfully points to a future of how funders and commissioners can genuinely support the creation of real-world outcomes — the idea of “Co-Missioning”. We will briefly explore what this looks like in practice.
The flawed logics of Payment by Results and Social Impact Bonds
The whole concept of Payment by Results rests on a provably incorrect premise: that outcomes are “delivered” by organisations or programmes of work (and so can be purchased by funders or commissioners). Once this premise is shown to be false, the whole edifice of PbR/SIBs crumbles.
The crucial research, which demonstrates beyond reasonable doubt that outcomes aren’t “delivered”, was undertaken by the UK Government Office for Science in 2007 in its work to create a systems map of obesity.
The systems map identifies 108 different factors that lead to the outcome of obesity (or its absence). Researchers later grouped these factors into areas such as:
- Early life experiences
- Food production and supply
- Macro-economic drivers
- Education
- Media
- Technology
- The nature of work
- Built environments, recreation and transport
- Healthcare and treatment options
Only one of these areas, healthcare and treatment options (representing four of the 108 factors), is something that governments (local or national) would commission to address the outcome of obesity.
This demonstrates clearly that a reduction in obesity cannot be “delivered” by any of the “healthcare and treatment options”. If, for example, you are delivering healthy-eating programmes in deprived neighbourhoods, you have no control or influence over how the media portrays ‘healthy eating’, or over how macro-economic drivers affect people’s time and income to cook and eat healthy food or to exercise appropriately.
Framing a reduction in obesity as something which can be “delivered” by commissioned organisations is simply a bad theory. It is an invalid assumption based on provably incorrect logic.
The Government Office for Science’s research clearly demonstrates that the outcome of obesity (or its absence) is an emergent property of a complex system. This is true of all significant real-world outcomes. The idea that organisations or programmes deliver outcomes is a reductionist fantasy. It is wishful thinking designed to make the process of management easier.
When we apply this understanding to Sam’s paper on the “logics” of SIBs, we can see that the premises underpinning a PbR approach to commissioning are similarly incorrect. Commissioners cannot “purchase” outcomes because that is not how outcomes are made. It is worth noting that PbR and SIBs have further conceptual and practical flaws (what is measurable as an “outcome” is not the same as the outcomes people experience, and performance data is routinely gamed and corrupted), which you can explore further if interested.
Sam identifies the following four “logics” that make PbR “attractive” to a commissioner:
- Cashable savings: “PbR allows commissioners to contract a preventative service that can’t otherwise be afforded in the present”.
- Attributed results: “PbR allows commissioners to derive value for money from contracting a service to tackle complex social issues where outcomes are highly uncertain”.
- Quality Management: “PbR allows commissioners to derive value from contracting providers to replicate an evidence-based social impact model… when the delivery context is stable enough for effective replication of a service… to be feasible and therefore for fidelity to be incentivised and tested by blueprint KPIs”.
- Flex & Adapt: “PbR allows commissioners to support the commissioning of personalised, flexible, and adaptive services when contracts state that delivery of specified outcomes will trigger payment”.
We can now explore each of these in turn.
Cashable savings: “PbR allows commissioners to contract a preventative service that can’t otherwise be afforded in the present.”
There is much flawed thinking to unpick in this premise. Firstly, let’s deal with “can’t otherwise be afforded”. This premise arises from a set of political choices. The absence of money for prevention/early action is a political choice, not an objective state: there is money for early intervention if people want there to be money. It is also worth remembering that public sector austerity was a response to the huge bailouts demanded by the banking industry in the aftermath of the 2008 financial crisis. This is a problem created by the financial sector.
Secondly, even if there is a current shortage of capital, it is far cheaper for governments to borrow this money than for public services to access it via SIB mechanisms. This problem is an artefact of a failed set of political choices around austerity, for which SIBs are a needlessly expensive workaround. Not only is the capital available via SIBs more expensive than what governments can borrow elsewhere; the high set-up and running costs of SIBs add further to this expense.
Thirdly, as Sam’s piece highlights, the SIB promise of “cashable savings” largely didn’t materialise.
Finally, this premise rests on the faulty assumption that prevention activity which “doesn’t work” to prevent future demand (under the terms of PbR) is free to commissioners (and so they only have to spend money on prevention activity which “works”). This is incorrect because:
- The high probability of gaming means that even activity which doesn’t “work” will need to be paid for.
- The cost of activity, which genuinely doesn’t help, will show up in more expensive demand later.
- Providers will start to factor the cost of “failed” PbR work into their overall costs: if work has happened, it has to be paid for somewhere in the delivery ecosystem.
Attributed results: “PbR allows commissioners to derive value for money from contracting a service to tackle complex social issues where outcomes are highly uncertain.”
This premise is simply wrong: it is based on a failure to understand the dynamics of complex systems. As the research above shows, outcomes are created by complex systems. In complex systems, it is impossible to reliably prove that a particular intervention has led to a specific impact, because it is impossible to create reliable counterfactuals, and hence one cannot reliably know what change would have happened without an intervention.
If we follow the logic of requiring “proof” that particular interventions solve complex social issues, we would never address complex issues, continuing to bear their costs forever. This problem is an artefact of a failed linear worldview. The problem with this logic was explicitly outlined in the National Audit Office (2015) paper on PbR, which stated that PbR should only be used where there is a provable link between an intervention and a desired outcome. In complex systems, this means never.
Quality Management: “PbR allows commissioners to derive value from contracting providers to replicate an evidence-based social impact model… when the delivery context is stable enough for effective replication of a service… to be feasible and therefore for fidelity to be incentivised and tested by blueprint KPIs.”
This premise is also mistaken once we understand the dynamics of complex systems. Replicating an approach across contexts doesn’t produce reliable results, because the results in any system are created by the particular interactions of actors and factors in that place and time. Replicating the approach elsewhere carries no guarantee of success, so commissioners are simply mistaken to take reassurance from the idea of replicability.
Furthermore, a systematic review of the evidence on using KPIs as performance management mechanisms indicates that they create gamed or falsified data rather than genuine results. In other words, the evidence says that KPIs undermine commissioners’ ability to understand public service performance. Once more, it is simply an error for commissioners to believe that providers can be “incentivised and tested by blueprint KPIs that remain viable as proxies for assurance of knock-on outcomes”. The evidence directly contradicts this belief. If commissioners believe it is important to be evidence-informed in their practice, then they must challenge that belief.
Flex & Adapt: “PbR allows commissioners to support the commissioning of personalised, flexible, and adaptive services when contracts state that delivery of specified outcomes will trigger payment.”
Once more, this premise is simply false. Evidence (for example, this paper, this paper and this paper) shows that specifying outcomes in contracts undermines the practice of creating personalised, bespoke services. Furthermore, many case studies demonstrate that the way to enable the kind of personalised, bespoke provision that creates real outcomes is for commissioners to commission for systemic experimentation and learning rather than for “results”. For instance, here and here.
Rather than supporting the creation of bespoke, personalised services, PbR does the opposite. If you have a complex problem requiring specialised bespoke support, PbR providers actively discriminate against you, preferring to offer help to those who will enable them to hit their outcome targets more easily. The evaluation of the UK Work Programme (delivered on a PbR basis) makes this clear:
“…the available evidence to date suggests that providers are engaging in creaming and parking, despite the differential payment regime. Providers routinely classify participants according to their assessed distance from work, and provide more intensive support (at least as measured by the frequency of contact with advisers, for example) to those who are the most ‘job-ready’. Those assessed as hardest-to-help are in many cases left with infrequent routine contact with advisers, and often with little or no likelihood of referral to specialist (and possibly costly) support, which might help address their specific barriers to work.” (Newton 2012)
The evidence we presented regarding these last two logics highlights the second key flawed assumption of PbR/SIBs: Outcome-Based Performance Management (OBPM) does not routinely improve outcomes as they are experienced in people’s lives. The evidence is very clear: OBPM creates gaming — it turns everyone’s job into the production of good-looking performance data.
From this analysis, one simple conclusion can be drawn — there is no valid logic for commissioners to use PbR. The problems that PbR/SIBs seek to solve are either problems arising from a set of failed political choices (which have other, cheaper solutions) or created by the inability of PbR’s worldview to respond adequately to the challenges of complexity.
As Sam’s paper makes clear, commissioners have not been blind to this reality. SIBs have only been enthusiastically explored by commissioners when accompanied by “top-up” grants.
SIBs — solutions in search of problems
That PbR and SIBs are based on bad theory is echoed in the history of their birth and iterations, which Sam’s paper also helpfully charts. From the outset, SIBs were a financial solution seeking a social purpose problem. Part of the purpose of their creation was to help rehabilitate the finance industry in the wake of the 2008 banking crash. SIBs weren’t created to help solve a problem that public service had. They were created to solve a problem that people involved in the world of finance were experiencing — “what can we do with our money and skills?”
The changing “logics” of SIBs, which Sam’s piece charts, are further evidence of this. When the original rationales/logics of SIBs proved incorrect, the SIB experiment wasn’t halted; instead, people looked for alternative rationales. SIBs have always been a solution in search of a problem.
How to commission for outcomes
Sam’s concept of “Co-Missioning” is a constructive way of summarising a range of evidence we have about the role of commissioners in supporting the creation of real-world outcomes (rather than commissioning which generates good-looking outcome data). Bringing those with resources together with those who undertake public-facing activities to explore their “co-mission” and to allocate resources to achieve that shared mission is a necessary starting point.
We know that commissioners can support the creation of real-world outcomes by:
- Commissioning organisations that provide bespoke, relational support to the people they serve.
- Commissioning organisations to experiment and learn together rather than to achieve “results” — i.e. funding explicitly for shared learning and experimentation.
- Building a learning culture and infrastructure — where people share their data and experiences and make sense of them together.
- Governing for learning — shifting the focus of governance and accountability to “are organisations adapting and learning rigorously and authentically?”
Undertaking action research to explore what this form of commissioning looks like in practice is exactly what the Human Learning Systems approach to public management has been doing. We have learnt that this requires a shift in thinking and practice from commissioners: they are no longer purchasers of services. Instead, they are “systems stewards” — people who use their financial resources and skills to convene actors for continuous, collaborative experimentation. Links to examples of what this form of commissioning looks like in practice are embedded in the bullet-point list above, including sample tenders and governance arrangements.
Explicit in the idea of “Co-Missioning”, and a common feature of the Human Learning Systems approach, is the creation of shared principles as mechanisms to coordinate actors around a purpose, help them identify what “good” looks like in their work, and provide a reference point for governance and accountability questions. For example, a home-care service for older people might be based on principles such as “taking a whole-person approach”, “focusing on wellbeing and human flourishing”, and “strengthening relationships”. Such principles can then be used for evaluation and governance through mechanisms such as Principles-Focused Evaluation — mechanisms that help actors in a system reflect on whether they are genuinely living these principles.
We now know that if we really care about outcomes in the world, an alternative “Co-Missioning” approach is needed. One in which commissioners purchase the capacity for continual collaborative experimentation and learning around shared principles and steward the infrastructure and cultures required to make this happen. We have enough examples to understand what that looks like in practice. Let’s focus our energy and attention on making that possible for all.
Originally published at https://www.centreforpublicimpact.org.