…Still frustrated by ‘payment by results’ funding. Even more so when someone from Barclays bank decides to explain to charities how to make it work. Because it won’t, and we need to make that clear. Its costs will be significant, if we let it become the standard for public funding.

Diego Rivera w/ a monkey: better than payment by results
I’m going to offer David McHattie the benefit of the doubt and assume his recent piece on how charities should prepare for payment by results (PBR) funding was based on a naive pragmatism, rather than a more cynical attempt to make public services run more like the disgraced bank he works for.
There are so many fundamental and damaging problems with the Payment by Results model that no one article could give them all the space they need. From crowding out smaller organisations who can’t afford the financial risk, to encouraging exactly the types of ‘gaming’ approaches that target-driven funding has long fostered, to ignoring the unpredictable complexity of social problems (as most funding regimes do), PBR is a powder keg for the voluntary sector, and anything shy of an outright denouncement can only lend it a legitimacy it doesn’t deserve.
What McHattie has done is offer some seemingly innocuous steps for voluntary organisations to begin adopting the same toxic metric culture that has recently brought his own employer into disrepute for fixing interest rates.
…Let me explain.
To start, for all of its claims of being ‘outcome funding,’ PBR is still target funding. But with bonuses attached.
Here’s why:
- An organisation receives funding based on achieving its outcomes
- Those outcomes are measured by outputs – ‘x’ number of ‘y’ achieved = outcome
- The number of outputs deemed to represent the completion of an outcome is set in advance
- Outputs set in advance, and required to receive funding, are targets.
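The bullet points above can be sketched in a few lines of code. This is a purely hypothetical illustration (the function name, fee and bonus figures are all invented), but it shows that, structurally, a PBR contract is a pre-set target check with a bonus bolted on:

```python
# Hypothetical sketch: structurally, a PBR contract is a pre-set output
# target with a bonus attached - hit the number and get paid, miss it
# and get nothing.
def pbr_payment(outputs_delivered, output_target, base_fee, bonus):
    """Pay the base fee plus a bonus only if the pre-set target is met."""
    if outputs_delivered >= output_target:
        return base_fee + bonus
    return 0  # miss the target by even one and receive nothing

# e.g. a target of 50 people through work-readiness training:
print(pbr_payment(50, 50, 100_000, 20_000))  # 120000
print(pbr_payment(49, 50, 100_000, 20_000))  # 0
```

The cliff edge between those last two lines is exactly what invites gaming: when one missed unit means losing the entire payment, organisations will find ways to make the number appear.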
With this in mind, all the arguments against target funding continue to apply to this supposedly new system. PBR is no improvement on what has come before. The addition of bonuses – much like at Barclays and the other big banks – will only worsen the effects of older target-based approaches.
The core of what’s wrong with both the old and the new target-driven funding regimes is what former Bank of England director Charles Goodhart called ‘Goodhart’s Law’: when numbers are used to control people (whether as bonuses, targets, or standards), they will never deliver the improvements or accountability they are meant to. David Boyle of the New Economics Foundation has gone a step further, arguing that such systems create worse results than having no system at all, because those being judged on their ability to produce particular numbers will inevitably devise a range of dishonest means to make sure those numbers get produced!
If your job is on the line over the number of people who have received work-readiness training, you will find a way to make the numbers add up, to keep yourself in a job. The training might get shortened, one full-day course might become two half-day courses, people might be counted multiple times for what is essentially the same effort, and those who are harder to reach will be ignored in favour of the easiest recipients. Whatever the definitions set, you will find ways around them. And so will your organisation.
When this happens, learning opportunities are lost, accountability is destroyed, and those who are meant to be helped become numbers to be gamed.
These problems are also reinforced by a reality many of our organisations struggle to admit: that we live in a world far too complex to be able to say in advance that ‘a’ will lead to ‘b’. Even in broad-brush terms this kind of organisational fortune telling is hit-and-miss, but when it gets taken a step further (‘this many ‘a’ will lead to this many ‘b’’), we are truly taking the piss. We are giving ourselves (and those who fund us) illusions of control over situations that are the emergent results of countless interdependent factors beyond our organisational reach, whether individuals’ family lives, the economy, or the communities they are a part of, to name but a few.
And if we acknowledge that we can only play a partial role in preventing even one former inmate from reoffending (to draw on McHattie’s example), then the rest of the PBR/targets house-of-cards comes crumbling down. The only ways to keep it standing are through luck or dishonesty.
And dishonesty has been a hallmark of similar systems at Barclays and other banks. The impacts that ‘bonus culture’ has had on the financial sector were made clear by the 2008 economic collapse; from the most local level, to the most global, bonuses incentivised not ‘better performance’ but a range of quasi-legal and outright fraudulent activity designed to benefit particular individuals, rather than whole systems.
This is an inevitable result of what Dan Pink describes as ‘if/then’ motivators (‘if you do this, then you get that’). Whether as bonuses for individual bankers reaching sales targets, or bonuses for charities hitting targets supporting former inmates to stay out of prison, the results will be the same: more dishonesty, less accountability. The paperwork might tell us that ‘more is being achieved for less,’ but the on-the-ground reality will tell us otherwise.
Taking charitable advice from a bank is like taking health advice from a fast food chain, and our sector deserves better than to quietly apply the models that have brought so many problems to the rest of the world, to the practicalities of our own critical work.
Going along with PBR might feel like a necessary evil in the interests of those we serve, but we have far too much evidence to the contrary to honestly think that might be the case. This is a system that needs to be scrapped, not ‘navigated.’ The people we exist to serve deserve nothing less.
This is the 3rd in our unexpected series on the issues of Payment by Results funding:
Give Trust, Get Accountability
Bonzo funding: Payment by Results
In the last post, Paul Barasi took the recent government move towards ‘Payment by Results’ funding to task. Today we introduce a radical alternative means of achieving funding accountability: Trust!

Kittens: Better than ‘Payment by Results’
‘Payment by Results’ is the UK government’s latest attempt to achieve greater accountability with how public money is spent. Or so they say.
In practice, they’ve decided to apply it only to certain questionable public services, but as Paul pointed out the other week, the Olympics, Trident, and recent wars will not be held to the same standards of ‘we pay you when you have delivered what you said you would.’
David Boyle’s powerful paper against the approach demonstrates that it poses the very same problems as ‘target-based funding,’ encouraging ‘gaming’ of the system to the detriment of all involved. You lose honesty, you lose learning, and you lose the accountability the whole system was created for.
We’ve long used numbers as a replacement for trust – ‘you said you would do ‘x’ but how can I know you did it?’ By measuring it!
Which is kinda OK for some tasks, but for many others – let’s say you’re trying to measure ‘improved wellbeing’ – there is an infinite number of ways you can fiddle the definitions to make sure your counting ‘succeeds.’
But beyond this, even if the definitions were fixed (presumably by government, but by anyone, really), they would immediately fall foul of the first rule of complexity: things change. As Paul wrote in relation to women feeling safer walking around their council estate at night, fixed aims – whether as ‘targets’ or ‘results’ – will fail to take into account many of the most important impacts of a project, because they weren’t specifically what the funding was meant to achieve. And thus their value is lost on the funding systems that helped enable them.
Learning to trust each other again
So what’s the alternative? You can never sew up all the loopholes and opportunities to ‘game’ a system to someone’s advantage, so let’s go back to the drawing board and give ‘trust’ the opportunity to reclaim the space ‘numbers’ stole from it, way back when…
We don’t usually associate trust and money, but a lot of people have begun experimenting with the combination lately, as the shortcomings of compliance-based accountability are gradually becoming clear. Here are a few anecdotes:
In 2010 I met Paul Story in Edinburgh – an author who had maxed-out his credit card printing 10,000 copies of his novel, Dreamwords: The Honesty Edition. His business model? Give the books to people, in the streets, in bookstores, at events, with a request to a) pay for the book online if they liked it, or b) pass it along to someone they think might enjoy it, asking them to pay for it if they liked it. Two years on he’s just published part two of his series…
Also in 2010, Toronto Star journalist Jim Rankin gave five prepaid credit cards worth $50–$75 to five different homeless people, encouraging them to get what they needed. Two were returned to him, partly used; one was never used or returned; one was stolen; and one was partially used, but never returned. For the people with maybe the most reason to exploit Rankin’s generosity, these seem like pretty good results. Imagine the costs that could be saved by adapting certain elements of organisational homelessness service provision along similar lines.
The other day I discovered Magnetic Music, who pride themselves on a business model for independent musicians that means not having to “sue your fans to make money from your music!” Their approach? Let people download your music and pay for it if they want to. If they like it, they might pay you. If they don’t pay, you still have a new fan who will likely support and promote you in other ways. If they don’t like it, they won’t pay, and wouldn’t have otherwise. Let them make the choice – it’s a lot less hassle for you, as a musician!
Now these are small and far from perfect examples, but our current systems can only pretend to be working by digging their heads deeper into the sands of compliance measures that simply allow abuses to be more thoroughly hidden in endless numbers.
Trust-based funding?
A while back, Paul Barasi, Veena Vasista and I started exploring ‘Trust-based funding.’ While it never got past an initial conversation with a funder and another with a law firm working on public sector commissioning processes, we began to imagine what it might look like at the different stages of the grant process:
‘What we want to support’ (Guidance)
- Providing very loose definitions, perhaps starting from a ‘these are the things we definitely DO NOT want to fund’ perspective to weed out those who are absolutely unqualified, without boxing those who might be qualified into terms they don’t fit
‘What you want to achieve’ (Application)
- Jointly-developing means of demonstrating impact, to give funded groups a real sense of ownership over the process and a sense of responsibility to themselves, as well as those funding them
- Not creating any direct relationship between what is stated initially and what is expected later, leaving room for changes and on-the-ground learning, as the most effective projects tend to do, but often have to hide from those funding them
‘What you are delivering’ (Delivery)
- Recognising that funders and recipients are working towards the same goals and must hold each other to account throughout the process, helping create a relationship in which ‘funding’ and ‘delivery’ are seen as two equal parts of a joint-process, where both parties can constructively challenge each other, without retribution
- Trusting people who have been through an appropriate application process to do what they say they will with the money they are given, offering support and connections, rather than oversight and one-way accountability
‘What we have achieved together’ (Evaluation)
- Emphasising the qualitative impact of services, shifting the inclination from ‘box-ticking’ and ‘target-chasing’ by both parties
- Assuming that recipients will spend their money appropriately, and asking them to provide the story of their work in whatever ways they feel best conveys its full breadth
- Weighting valuable but unexpected/unplanned outcomes on a par with predetermined ones
We also added a further stage:
‘How those we support are better prepared for the future’ (Potential)
- Ensuring that at the end of the funding period, recipients are better placed to continue doing good work, viewing the process as developmental, rather than simply about the fixed funding period
Putting it to work
What’s above is barely a skeleton of an approach, but no matter how much work we put into it, someone putting it into practice is going to have to stick their neck out if it is going to get a fair hearing.
Paul, Veena and I have put forward an idea, with a tiny bit of meat on the bones, but now we’d like to turn it over to you.
In the spirit of the guidance stage above, what we DON’T want is for you to simply highlight the things that could go wrong. These are largely no-brainers. The trust approach accepts that things inevitably go wrong in any system, and with that in mind it is not worth perpetually trying to mitigate them by dragging down all the honest people to a ‘lowest common denominator’ compliance model.
So here’s the question for you:
What would make the idea we’ve outlined above BETTER than it currently is?
Looking forward to seeing where you might take this…
Paul Barasi spent eleven years developing the Compact – an agreement between government and the voluntary sector to help both sides work better together. But recent government plans to bring back ‘payment-by-results’ funding for services are about as far from a ‘more like people’ approach as you can get. Paul takes their hypocrisy to task in his first Concrete Solutions blog.
Raiders of the Lost Compact

Paul Barasi
The Compact was first conceived in a chat on a train between local activists and MPs and led to the 1998 agreement for ‘Getting It Right Together’ between the Voluntary Sector and Government. It eventually graduated to a more holistic ‘Compact Way of Working,’ yet it could now be buried by government officials singing ‘Never Mind the Cash Flow, we’ve got Payment By Results.’
Around five years ago, many local partnership relationships peaked with the emergence of ‘a Compact way of working.’ This approach transcended a Ten Commandments-style written declaration. It was about far more than just following the rules. It meant living the shared values like treating partners fairly; working together from the start on issues affecting the voluntary sector; and above all, trust.
Fast forward to the Coalition Compact and we can still hear such hits as “Social action over state control and top-down Government-set targets,” “Shifting power away from the centre,” “Equal treatment across sectors,” “Proportionate Risks” and that chart-topper: “Payment in Advance”. But recently the tune has changed; instead we are hearing “Retrospective payment” which will reward Efficiency through professional top down control and take us back to a More Like Paper approach.
But will the voluntary sector be able to match government professionals in delivering pre-set results on time and within budget?
And why should the voluntary sector have to play by one set of rules, when the lion’s share of government spending seems to have none of the same stipulations attached?
Games with results
The London Olympics taxpayers’ subsidy rocketed tenfold from £1bn – with results measured by what: 29 UK gold medals for £10bn? Number of unethical sponsors or school playing fields sold? Who decides success? Imagine if the voluntary sector tried to play by these rules!
Wars with inhuman results
The Afghan and Iraqi wars were a snip for the UK at just £20bn. Who knew there’d be no weapons of mass destruction – as if the 2m demonstrators, dismissed as misguided by Blair, had been any advance indication. Who bothered to define what success would look like: maybe keeping the human cost of liberation below 300,000 civilian deaths? Who pays for failure?
Subs and planes
Or the hopelessly misnamed “Astute” nuclear submarine: just £1bn over budget and delivered 4 years late. That makes the £100m cost of the May 2012 U-turn on picking Navy fighter jets hardly worth mentioning.
(OK, our subs won’t know where they are without US navigation satellites nor could these launch the leased Trident ‘independent’ nukes without the Yanks, but hopefully the jets will be able to do u-turns and somersaults in mid-air before more of our cash disappears into thin air.)
Rewarding Government efficiency?
The Home Office could get paid on the basis of how many Brits are extradited to the US or how many decades this takes or how much it spends on legal costs to do it, or not to do it?
It’s not just officials getting bonuses instead of the sack, but would anyone trust either of these government departments to do their weekly shopping?
Thatcherite Retrials
The crude payment by results regime that government wants to impose seems a throwback pre-dating even the 1990s. Back then the Department of Health was experimenting with Outcomes Funding for alcohol counselling, which valued not just the number of people who achieved total salvation but the progress people made along the way. After all those battles over sustainability, funding on the cheap (rebranded as ‘more for less’), full cost recovery, unfair claw-back, and down-priced contracts, is government returning to rip-offs like a supermarket displaying one price and charging another?
What counts in the community?
I remember one housing estate project which achieved the wonderful result of women no longer being afraid to go out after dark. It didn’t count, as government hadn’t included this as a pre-set target. I recall a street theatre group destroyed by funders making it not just perform but have performance targets, and board meetings, too. Or take a project for young volunteers who cleaned up the environment: they made lots of new friends, were more likely to volunteer again, and acquired skills and confidence to do new things – what a result!
Saying goodbye by shaking the crap off our feet
The dehumanising organisational culture of the Civil Service can’t even compare with the traditional voluntary sector, let alone new grassroots social movements, in terms of its understanding of what kinds of systems will help people to realise their potential and make change happen. Trust-based funding is the right way forward (more on this model to come). This way, funders accept an element of risk, knowing some projects will fail, and trusting those doing the work to do it with the right intentions and to define their impacts in the ways they feel are most appropriate. Payment-by-results is a backward step, and if government funding can’t pass the More Like People test, the voluntary sector should walk out, walk on.
Today an excellent article about a campaign I have been very active in was printed in a newspaper not-at-all known for its progressive tilt.
It will introduce the campaign to a massive and influential audience who would likely have never heard of the issues before today. This could potentially shift (or spark!) UK public debate on something which has thus far been a fringe interest for a relatively niche part of the activist world.
In other words, I’m pretty damned excited about it!
While not mentioned, I can say with confidence that I played a significant role in the story appearing as it did. As it happens, I was lucky enough to play that role in a paid capacity for one of the organisations involved in the broader campaign.
The organisation I was working with – like all of the other organisations involved, actually – was not mentioned in the article. However, part of the framing of the story, and some of the people whose stories were told, came from dialogue I had had with the author, and introductions I had made over the last few months.
I’m not saying this to promote myself, but to note that when I was working with that organisation, there was a regular emphasis (albeit much subtler than in most NGOs) on getting the organisation’s name into the press. This was for the obvious reasons most of us will have experienced – building recognition, reporting to funders, etc. All sound reasons, in their own right.
Organisationally, this news story didn’t tick any boxes, won’t appear in any reports, and won’t secure future support for their work. Yet the organisation spent resources paying me to help make it what it was.
From a movement perspective though, this story is big news. As I write, it has over a thousand Facebook ‘likes’, demonstrating a reach well-beyond that of most of the organisations involved.
When this many new people become aware of an issue, the work of the activists involved becomes much easier: they no longer have to explain it from scratch in every conversation and interview, because the knowledge base of the people they are talking to has expanded overnight.
Further, when this many new people become aware of an issue via an incredibly sympathetic introduction (like this article), it goes that much further to building public pressure for the kinds of changes you hope to see…
But our organisations don’t have an investment in this kind of change. In many workplaces, as soon as it became clear that there would be no direct benefit to the organisation of me putting several hours into emails, research and introductions, I would have been expected to re-direct my attention to a more pressing organisational priority.
Luckily in this organisation, this wasn’t the case.
But the fact that it so often is, highlights the very uncomfortable reality that so often occurs when we create social change organisations: from the moment they exist as separate entities, their interests are not always those of the causes they were set up to fight for; in fact, sometimes they are at odds with those causes.
Work that focuses on maintaining and growing the organisation itself – recruitment, publicity, fundraising, marketing, human resources, IT, among others – is, at best, tertiary to supporting the cause: if we do ‘x’, then we can do ‘y’, and hopefully ‘z’ will happen.
But as in the case of this article, ‘z’ might not be aligned with doing ‘x’, meaning it is not where organisational effort will necessarily be prioritised. Which might be inconsequential, but might also be a massive missed opportunity for the movement.
I mention this only as a reminder, as we sit in our organisations, to spend less time emphasising organisational outcomes, and more time emphasising those relevant to the broader causes we exist to serve.
If we cannot prioritise the cause over the organisation, then we have lost our reason for being and are draining effort and resources from places that might use them better.
More positively, how can we make sure that ‘organisational priorities’ don’t trump ‘movement priorities’?
What might help us to remind ourselves about the real reasons we are doing what we do in our organisations, to avoid becoming self-perpetuating machines, detached from the causes that initially sparked our passion?
This is a slightly adapted post I made to the (ever-awesome!) eCampaigning Forum email list today, in reply to an email about tools for measuring social media metrics… and why I think it’s about as useful as counting the number of kisses you share with your partner in a given week.
Hey there –

…now evaluate them!
(warning: bit of a rant to follow…)
This may not be exactly the kind of suggestion you’re looking for, but the greatest benefits of social media are rarely the ones you plan for (and thus which can be objectively evaluated against your plans). They may be:
- A crucial new volunteer emerging from the Twitter woodwork, to make a significant difference for the organisation,
- Developing a new relationship with someone who might later be able to support the organisation,
- Receiving pro bono support from an expensive professional who replies to a call for help on your Facebook page,
- Opening someone in your network up to an aspect of your organisation’s issues they were previously unaware of,
- An interaction between two people in your network who have never had a chance to engage in dialogue before, around something you shared…
The list could be endless, which is exactly the point – measuring social media primarily by generic metrics will only tell you a minuscule fraction of the value it has provided, in all kinds of unexpected ways.
The judgment on whether it is providing ‘value for money’ needs to be made subjectively: do we think this range of anecdotes – often seemingly of minimal significance when seen on their own, but cumulatively massive, and often with a stand-out story or two along the way – is important enough to keep doing it?
I know that some senior managers and funders who don’t understand social media will focus on the numbers, but we do them a disservice by not challenging the logic that underpins these demands.
One of the strongest arguments I’ve used with organisations on this front is to ask a senior manager to provide metrics justifying their face-to-face networking activities:
- How many networking/schmoozing events have you attended this quarter?
- How many people have you met at these events?
- How many people that you have met at these various functions have become ongoing organisational contacts?
- How many have led to future additional contacts/meetings?
- How much has the time you spent at these events cost the organisation?
There is an acceptance of the value of networking, even though it is often random, serendipitous and not about specific preconceived outcomes. Social networking needs to be seen in a similar light, if an organisation is going to use it to its potential.
Imagine if a small fraction of everyone’s time in the organisation (not just senior managers’) was regularly engaged in the kind of activity that produces the benefits senior managers know come from attending a Parliamentary reception, or the launch of a new report?
Some will only worry about what this means for both job titles/descriptions and/or the value of senior management, but others will be excited by the infinite possibilities it offers…
It’s just a thought. I get quite tired of being asked to provide numbers for questions that numbers can’t really answer. Another approach might be to ask whoever wants the data, to evaluate their intimate relationship based on the number of kisses they receive each week… though by the time the figure tells them anything useful, it’ll probably be too late to do anything about it…
Ta from sunny Oaxaca!
Liam
The scientific method – the process of establishing ‘proof’ by attaining the same results in multiple controlled experiments, which came to prominence during the Scientific Revolution – has brought us many things. Countless critical gains have been made, but in assuming that a rational process of deduction is always the best way of ‘knowing’ something, we may have undermined some of our most critical human instincts and understandings. But what is the alternative?
_______________________________________________________________________________________________
If last week’s Twitter response to this notion was any indication, I might ruffle a few feathers with this blog. Contextually, I’m coming from a few days working on a project on a First Nations’ reservation in Northern Alberta.

Beaver Lake, Alberta
This is a community with relatively little in the way of formal education, but a vast amount of a different kind of knowledge, passed down through generations, emerging from a close connection to the land they have lived on and with for so long. Sometimes described as wisdom, it’s something we’ve often lost – and actively discredited – in the modern Western world, particularly within our formal institutions.
The Twitter debate began with my observation that much of what the scientific community has been recommending with regard to climate change in recent years (or perhaps decades) was deeply embedded in the cultural practices of many First Nations communities hundreds, even thousands, of years ago. The basic principle of ‘respect Mother Earth’ – and more specifically ‘make decisions with the impact they will have on the next seven generations in mind’ – has underpinned many of these communities’ practices since long before colonialism. They didn’t know about carbon footprints, embedded emissions or even climate change itself, but they knew that it wasn’t a good idea to pillage nature and natural resources. When Europeans arrived, they were warned by their hosts about overhunting buffalo, damming rivers, and clear-cutting trees; all without the scientific knowledge we have today that tells us these things are problematic.
Science eventually came to the same conclusions that the Cree, Haida, Ojibway and others had millennia previously. Unfortunately, during the time it took science to figure out what Indigenous peoples already knew, we basically destroyed the planet.
I’m not saying science doesn’t come up with the right answers, only that there is always a considerable lag between when people start to study a phenomenon (whether climate change or organisational change) and when they figure out what many have already known long beforehand.
Art and health
What about the impact of art and creativity on people’s health and wellbeing? Artists have for ages seen and promoted a positive relationship between the two, yet only in recent years has the evidence base reached a place where schools, government funds, or health strategies have begun to recognise it, party politics aside. Even still, art is mostly marginalised as a ‘luxury’ or a ‘frill,’ in comparison to the ‘important’ subjects or disciplines of maths, science and business. Arts practitioners will know all too well the impacts this has had across societies, but without the evidence of that impact, it can feel like a lost cause.
Learning about learning
…Or notions of learning? A Chinese proverb dating back a fair way tells us ‘Tell me and I’ll forget, show me and I may remember, involve me and I’ll understand.’ Yet schools and universities in the Western world have been absolutely wedded to the notion of the lecture as central to all formal education. Again, as the evidence base has gradually developed, and shown that lecturing is generally one of the poorest methods of facilitating learning, our institutions are very slowly beginning to play catch-up – still thousands of years after the Chinese philosophers had this one figured out. (This article fails to get into differences of learning styles, but highlights the shortcomings of the sacred institution of lecturing quite well.)
In the office
At a much more mundane level, I think of an old job. When I started, I quickly realised, from only a few quick skims of a database, that we provided almost no support to organisations that weren’t large, London-based, national organisations – a tiny percentage of those we could have been helping. I raised this, and was told I needed to ‘demonstrate the need’; I said ‘scan the database for two minutes.’ This wouldn’t suffice.
I spent the following month categorising every organisation in a 1,600 entry database, by their size, their location and their reach. Eventually this told me, depending on classification, at least 85% of those we supported came from a pool of less than 2% of potential beneficiaries… which is what I’d said a month earlier. By the time this was written into an acceptable report, we’d lost 2 months of my work, in pursuit of an ‘evidence-base’ which added little or nothing to my initial observation.
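For what it’s worth, the two-minute version of that check really is trivial. Here is a minimal sketch (field names and records invented for illustration; the real database held 1,600 entries) of the same question – what share of the organisations we supported came from one narrow pool:

```python
# Hypothetical sketch of the database check described above: what share of
# supported organisations came from one narrow pool (large, London-based,
# national)? These records are dummy data standing in for the real entries.
supported = [
    # (size, location, reach) for each organisation we supported
    ("large", "London", "national"),
    ("large", "London", "national"),
    ("large", "London", "national"),
    ("small", "Leeds", "local"),
]

# Count the organisations falling inside the narrow pool
in_pool = sum(
    1 for size, location, reach in supported
    if size == "large" and location == "London" and reach == "national"
)
share = in_pool / len(supported)
print(f"{share:.0%} of supported organisations came from the narrow pool")
```

That a two-minute scan gives the same headline figure as a month of formal categorisation is, of course, the point.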
There was nothing especially remarkable about my observation, except my belief that it was trustworthy in its own right (a position I’m sure most of us have found ourselves in at one time or another). When you consider my salary and overhead costs, this meant several thousand pounds was spent to ‘know’ what I already knew.
‘Proof’
So whether held by artists, First Nations communities or ancient Chinese philosophers, this knowledge was widely available to the scientific methodologists long before the near-universal Western adoption of the scientific method during the Scientific Revolution. Yet, in each case, scientific rationalism dismissed or actively discredited it as ‘superstitious’, ‘unsubstantiated’, or ‘without methodological rigour’… until scientists eventually drew the same conclusions themselves!
The problem was, in the respective meantimes, we set in motion potentially irreversible climate change, collectively sacrificed health and wellbeing, and turned learning into a rote drill, instilling a hatred of education in countless millions over several centuries.
What are we missing right now?
I’ve mentioned a few examples where science has (eventually) caught up with earlier forms of knowledge. What current questions do previous kinds of ‘knowing’ provide answers to that science still completely ignores or discredits? Quantum physics has begun to identify a level of connectivity between all forms of life (with wide-ranging implications) that has previously only been captured by notions of ‘oneness’ found in many religions and spiritualities (without getting into that can of worms!). Much of what the world of post-Enlightenment rationalism has previously determined to be true or false has gradually been seen to be otherwise. Yet, while science is clearly adaptive (its fundamental strength), we cling to its current state of progress at any given time, as if it represents an absolute, rather than a step towards greater understanding. The same experiment, carried out a hundred years apart, will invariably reveal different things, as technology – but more importantly perception – changes during that time.
Acknowledging different kinds of knowledge
If you were asked how you knew the world was round, and how you knew your mother loved you, you would probably approach each question very differently. I’m sure you’d agree, the lack of scientific rigour in your second answer would in no way diminish your knowledge of your mother’s love; you would probably still know it more surely than you know the world is round (which remains an abstraction in most of our minds, very few of us having seen the Earth in its entirety, firsthand!).
These are extreme examples, but they have to be, as there are so few places where our culture still accepts the merits of knowledge grounded in experience, feeling and intuition.
How about if I asked how you know how safe or unsafe you are in your neighbourhood? Would you produce a list of ward rankings on violent or petty crime? If so, would it be in relation to your city? Your country? The rest of the world? Other places you’d lived? Or would you explain how you feel when you walk down the street at night?
This isn’t a binary choice…
As I said at the start, this is not to discredit the innumerable gains that the principles of the scientific method have offered the world – these are well-known and documented – but instead to highlight the things this method has missed (or ignored) – even when the answers have been right under its nose. The costs of doing so have also been vast.
While we obviously don’t want to throw away the scientific rationalism that has created so many critical breakthroughs in so many fields, we also don’t want to continue to doom ourselves to repeat its omissions, late acknowledgments and incomplete narratives on the world we live in.
When do we trust non-scientific knowledge?
I don’t know where exactly we draw a line, but I do know that it has currently ended up much too far in one direction, undermining some of our most significant knowledge in the process.
So maybe we start with our own attitudes; we acknowledge that there is fundamental knowledge that we all hold, that may, at times, be greater than the scientific knowledge we have available to us at the moment. Once we have made this acknowledgement, hopefully it will open the door to discussions around more specifics as they arise. At this point, our kneejerk response is to collectively discredit anything that has not undergone a very particular process of examination. By acknowledging that some of our most important knowledge has undergone no such process, maybe we can begin to relearn the potential of intuition, instinct, experience and feeling to help us make better decisions, address issues in a more timely way, and appreciate the ideas of people and cultures less-wed to the scientific method?
*Question: does this piece fit the ‘helping organisations to be more like people’ theme of our work, or should this have gone elsewhere? I’m aware that the direct implications for organisations are pretty abstract, but thought it was worth discussing here, nonetheless… any ideas on what these ideas mean for voluntary, community and non-profit organisations?
We may not celebrate our successes particularly well in the voluntary and community sectors, but maybe that’s because we’ve stopped believing them? Perhaps if we spent more time actively admitting what’s gone wrong, as the ground-breaking new website, AdmittingFailure.com encourages, we would feel more inclined to celebrate when things really do go well?
________________________________________________________________
There have been many recurring themes in my time in the voluntary and community sectors. One of these has been the repeated mantra of ‘we are terrible at celebrating our successes!’
On some level, I’ve always agreed with this – for those of us slugging away in often thankless jobs ‘doing good’ in the world, a party, a pat on the back, or some other affirmation of our value is important and shouldn’t be easily dismissed.
However, I’d also like to unpick this one a bit; maybe we fail to celebrate our successes because we declare that everything we do in this sector is ‘successful’? And maybe, when we do so, we stop believing it? And when we stop believing it, maybe we don’t want to make a big deal of each and every supposed success, because doing so would highlight the reality that we’ve been distorting our own narrative, supposedly for funders and donors for so long?
‘Doomed to succeed’
My colleague Titus Alexander once described our sector as ‘doomed to succeed’ – that as soon as our organisations are given money to do something, we are expected to not only achieve, but pass with flying colours, one hundred percent of the time.
And as our income usually hinges on doing so, invariably, we find ways of showing that we do; sometimes this means ‘double-counting’, sometimes cherry-picking ‘easy-win’ beneficiaries, sometimes highlighting one or two of those we’ve supported as being more representative than they really are… whatever it is, we’ve got our ways of making sure whatever we do ‘succeeds’ – at least on paper.
The dangers here are ones I’ve discussed in several blogs before, but primary among them is the impact this has on our ability to learn from our mistakes – namely because we often pretend they aren’t there, or we gloss over them with a selectively told story of what we did working – and working entirely.
The problem is, if we were to read a random selection of our organisations’ annual reports, evaluations or publicity documents, we would get the impression that everything we’ve ever done has gone perfectly to plan.
Which is basically impossible. But some combination of real and perceived funder/donor pressure tends to keep us from acknowledging this impossibility, allowing us to continue living a whole series of stretched, distorted or otherwise manipulated truths in our working lives.
The research on the importance of mistakes, trial-and-error and learning from things that don’t work is extensive and the conclusions are fairly clear: if you’re afraid of either making or acknowledging your mistakes, you will never do anything new or groundbreaking.
Admitting Failure?
With all of this in mind, my jaw dropped when I read Monday’s Guardian story on Canadian NGO Engineers Without Borders’ decision to publish a ‘Failure Report’, and launch a website for the international development/aid sector more broadly, called AdmittingFailure.com. It reads:
“By hiding our failures, we are condemning ourselves to repeat them and we are stifling innovation. In doing so, we are condemning ourselves to continue under-performance in the development sector.
Conversely, by admitting our failures – publicly sharing them not as shameful acts, but as important lessons – we contribute to a culture in development where failure is recognized as essential to success.” – AdmittingFailure.com
The site also invites other development/aid orgs around the world to submit their own failures, the idea being that an easily searchable and sharable ‘failure bank’ will emerge, providing a user-generated resource for those looking to, say, implement a change management project in Burkina Faso.
Admitting failure everywhere else in life
At this point I add the critical disclaimer that I’m not just picking on non-profit organisations; the inclination to deny our mistakes and failures is much more widespread than that. We teach it to our kids in schools, our governments do it almost pathologically, and the private sector’s pressure to push profit margins creates a similar distorting effect.
Some recent online conversations have got me involved in creating WeScrewedUp.com – a site based on the same principles as AdmittingFailure.com, but applying to our personal lives (work, relationships, families, etc).
We’re also thinking about a similar forum and blog for non-profit/voluntary causes more widely, allowing an honest discussion of things that haven’t worked, to help all of us get closer to those that might.
Do let us know if you’re interested in contributing, are doing something similar, or know of something along these lines that already exists…
This blog is part one of two (not that anyone’s counting), picking apart the issues with the ways we (over-)use stats and figures in the voluntary sector and beyond. It’s for anyone who ‘doesn’t believe it until they see the numbers’. Part one focuses on what I see as the false correlation between ‘numbers’ and ‘evidence’ and how this conflation undermines trust and creates less-than-honest results. The second will look at the dehumanising effects of using numbers as descriptors, rankings or value measures of people, relationships and social change.
________________________________________________________________________
The problem with numbers is not numbers, per se; it’s where they fall in our order-of-operations.
Too often we see them as an end point – the holy grail of research, evaluation, analysis, planning – rather than a step along the journey of better understanding. When numbers become the end game, the pressure to manipulate their journey, fiddling, adjusting and otherwise reconfiguring them is immense. And as much as we might like to pretend they represent an infallible scientific rigour, those of us who’ve ever filled a funder’s monitoring form know that even a figure calculated to the Nth decimal place still has significant room for interpretive flexibility, when you need it to.
Number as replacement for trust
No method of compliance can effectively replace the kind of accountability that mutual trust provides in relationships. The work created in attempts to do so is immense. Numbers have traditionally been seen as an alternative when trust doesn’t exist, providing a way of measuring whether someone has done what they said they would. Or so we tell ourselves.
Unfortunately, as this has become the norm for contracts, evaluations, grant monitoring and organisational audits, we have taken the assumption of dishonesty that underpins the push for numbers, and trumped it… with more dishonesty!
And this dishonesty appears wherever we have imposed what David Boyle calls ‘The Tyranny of Numbers’. When voluntary organisations need to hit targets to maintain their funding, they double-count beneficiaries and shift budget lines; when government needs to justify ideology-driven service cuts to the public, they pick and choose the statistics that will help them to do so, ignoring those that don’t; when FTSE CEOs want to receive bigger bonuses, they hide liabilities and inflate profits to produce short-term gains in stock prices… They create numbers that succeed only in hiding the truth and most of the time we have no practical way of telling the difference!
In doing so, each of these examples creates long-term problems in its wake; organisations and funders fail to adequately learn from both success and failure; governments are not held to account for ideologically-driven decisions; companies suffer when the bubbles that so many questionable bonuses have been built upon invariably burst…
Across the sectors
So these practices occur, with more and less altruistic intent, across all types of institutions. And it is impossible to effectively gauge their true prevalence: fiddled numbers look (at least superficially) pretty much the same as honest ones, so there is no simple and reliable way of checking whether people are gaming the system without digging considerably deeper, by which point it may be too late to effect change.
Headline figures are underpinned by statistics, which have consolidated totals beneath them, and tallies and raw data from sample surveys still lower down in the process. Most of us don’t see, or are unable to understand these numbers on top of numbers, making it impossible (within most of our means) to effectively refute them. Yet they justify most of the decisions affecting our lives and the lives of those we support.
In the sea of numbers we may cross paths with on any given day, distinguishing between the ‘authentic’, the ‘questionable’ and the ‘wrong’ is an unfeasible task. One of my favourite recent finds, via Henry Mintzberg, looks at the creation of statistics which justified British World War II aircraft expenditure:
“As Eli Devons (1950:Ch. 7) described in his fascinating account of planning for British aircraft production during World War II, ‘despite the arbitrary assumptions made’ in the collection of some data, ‘once a figure was put forward… it soon become accepted as the “agreed figure”, since no one was able by rational argument to demonstrate that it was wrong… And once the figures were called “statistics”, they acquired the authority and sanctity of Holy Writ’ (155).” [Mintzberg, The Rise and Fall of Strategic Planning, The Free Press, 1994]
For another example of the futility of finding meaningful numbers, think of the London demonstration against the War in Iraq in February 2003. It seems fair to assume biases coming from both sides, as police declared the march at 750,000, while Stop The War organisers claimed 2,000,000. Even in counting a single tally, the most important variable, evidently, is who is doing the counting. And while one could of course argue that an objectively ‘correct’ number exists, who is in a position to ‘prove’ that theirs is it? So in practice, the numbers from both sides mean very little beyond ‘a considerable number of people don’t want this war’; a conclusion any casual observer of the event could likely have made, avoiding the unnecessary ambiguity numbers added to the situation.
Newspapers on top of newspapers
One response to these pitfalls is to produce new numbers which serve to either validate or disprove the old ones. In doing so, we are placing new newspapers over the old newspapers that we used to cover up the spot where the dog peed in the corner. The pee is still there, but we don’t have to acknowledge it anymore.
…And the new layer seems effective for a period, but then the damp begins to soak through and the stench begins to sneak around the edges, as we find yet more resourceful ways to manipulate the new system and achieve the numbers we wanted in the first place. The examples of this approach are endless: crimes get regrouped, ‘impact’ redefined, local boundaries redrawn, titles reclassified, and we’re back to square one, with little idea of what we have done, whether or not it has actually worked and how it compares with what we did before.
Trusting relationships don’t produce this kind of effect, but requiring numbers to achieve accountability comes from a mistrusting place, and thus the behaviour that follows is likely to reinforce this insinuation. What if Stop The War Coalition had shown the images from February 15th and let people judge for themselves the importance of the day, rather than try to quantify the historic mobilisation?
Building trust
My inclination (perhaps unsurprisingly to regular readers) is to place our focus on building trusting relationships, rather than trust in numbers. This is of course a mammoth task, to frame it conservatively, yet one which I feel is at the core of better and more meaningful learning, accountability and understanding. Raising trust invariably raises questions of power, but without venturing into such depths, our results will invariably be shallow ones.
How can trust change the dynamics between those with more and those with less power in the world of social change?
In community groups I’ve worked with, when you ask the question ‘how do you know you’ve made a difference?’ it is common to hear from those most in tune with local issues: ‘We just know – we can tell’.
The professional voluntary sector tends to scoff at this response for the whole range of obvious reasons you might expect; namely that it’s ‘not evidence-based’ (see: ‘Show me the numbers’).
But within this seemingly simple response can often lie a series of profound truths, whose detail and subtlety is not easily translated into the world of reporting. It’s often a series of small changes, anecdotes, stories; the things you notice when you know the ins-and-outs of a community, its strengths and its problems, like the back of your hand. These anecdotes create a broader ‘feeling’ which may well serve as a more effective gauge of the shifts taking place in an area than any metrics ever can.
The challenge
So funders, lead partner organisations, councils, universities: why don’t we ask the people involved in local efforts how they know what kind of impact they’ve made and how they would choose to show us? Why don’t we also ask them what they’ve learned during the process?
And the bold part? We accept what they tell us.
When we ask for numbers, we undermine the judgments of those who do the work. If we give them the chance, without the pressure to produce figures (not stopping them if they feel numbers do help to tell their story), we may find that we have encouraged a more honest understanding of the issues.
This approach shifts the power dynamics by offering trust; giving people the chance to provide a narrative that makes sense within their experience, rather than the frameworks we have created for our own convenience or preference. Those who are trusted are more likely to be trustworthy. When the people you fund, research, support or evaluate are trustworthy, you’re more likely to hear the important stuff from them, rather than a finely-tuned propaganda piece, invariably filled with the kinds of selective numbers which succeed only in giving us the false impression of knowing what’s going on.
The follow-up will focus on the more value-driven argument against a number-centric approach; how numbers can dehumanise those involved or affected by our work, undermining our core missions and principles in the process.
Charities that support cuddly animals invariably receive more than their fair share of the public donations pie, given their contributions to society (compared to say, a refugee support group or a rape crisis centre). But is a ‘charity ranking system’ a good way to shift this imbalance? If our giving choices are indeed ‘visceral’ and ‘irrational’, is a measured, rational system likely to change them?
______________________________________________________________________
On Wednesday, Martin Brookes, CEO of New Philanthropy Capital, spoke at the RSA on ‘The Morality of Charity’, arguing for a charity ranking system to help the public decide which organisations are more worthy of their donations than others. At the core of his speech, he said, were moral judgments on:
- the value of particular causes over others;
- the ability of some organisations to deliver more effectively on those causes than others.
His hope was a system that could divert sparse resources to the most deserving, rather than the most popular causes.
On one level, I can appreciate the sentiment here; those who know me know I often bemoan the vast reserves sitting in the bank accounts of a small number of ultra-large national organisations. However, there seem too many trade-offs associated with the proposal, trade-offs which may deeply undermine public trust in charities, as well as the sector’s broader independence and individual donors’ right to choose.
I’ve purposely avoided the question of practical difficulties, as I feel Sophie Hudson has already summarised the argument, but also because I’m keen to avoid the rhetoric of ‘let’s not do it because it seems impossible’. My approach looks at the risks I see as inherent in making such judgments about the value of the truly vast range of charitable efforts, and the complexity of their contributions to society.
All causes were not created equal…
Martin cites the example of charities that have traditionally delivered services which have retrospectively been deemed damaging (cigarettes for soldiers, bloodletting, etc) as a justification for a ranking system, to discourage money from reaching such groups. However, he didn’t mention the charities which were ‘ahead of their time’, whose services may not have been formally recognised as critical when they were established, but have since come to be seen as integral in their field. A ranking system, without the benefits of hindsight, could only pass judgment on current ‘fact’ – that which is already ‘proven’ – versus that which is essentially being trialled by a charity that strongly believes in a new approach. This creates an imperative for organisations to stick to established methods, shunning risk and innovation for fear of lowering their ranking, and thus their income. It seems like a formula for the calcification of a sector.
What about politics?
While I would agree that there is an unfair allocation of resources towards ‘sexy’ – and widely agreeable – causes, those who are most in need (if I can indeed make such a judgment) are often those least likely to receive public donations. Underpinning this reality, as uncomfortably as it sits with much of the charity world, is politics. People won’t agree on the most deserving causes because their underlying political beliefs will answer this question differently. Refugees and asylum seekers are often among the most harshly treated groups in the country, yet many will argue against their right to be here at all, let alone to have money to support them.
As long as political divides exist, we will view different charities as differently ‘worthy’, regardless of what information we are given about their value. If we don’t talk about politics, we are unlikely to get very far in this discussion.
Conversely, if we do acknowledge political differences in such a system, it seems we will end up either with rankings that reinforce the political status quo (a dangerous choice, as discriminatory as it is), or with a system so watered-down that only donkeys, cancer and football will qualify for support, as the only causes not (arguably) steeped in political baggage.
Campaigning?
Speaking of politics, what if an organisation is working to influence broader social or governmental forces? Its impacts may be much harder to see than those of organisations exclusively delivering services. In many cases the broader influencing work will ultimately be more important, holding the key to changing the systemic injustice creating the need for services in the first place, but how could this be ranked alongside groups whose efforts are based entirely on addressing immediate, visible need?
It’s a complex, complex world…

Martin Brookes, CEO of New Philanthropy Capital
We live in a complex world in which an arts charity may be vastly improving the life prospects of cancer patients and a youth football project may be significantly reducing local violent crime. This means that many of the best organisations cannot be categorised according to (as Martin suggests as an option), Maslow’s Hierarchy of Needs (with food and water as the base, and arts and leisure activities at the peak).
Maslow’s hierarchy doesn’t address the complex inter-relationships between work affecting different parts of the pyramid. Parallels to the arts or football examples above likely exist in every voluntary or community organisation that doesn’t supply food and water to sub-Saharan African villages, making classification a broadly meaningless activity. It would likely just encourage groups to distort their categorisations to rank more highly than they otherwise might, in the interests of maintaining the impression of public value. Much like currently imposed systems of monitoring and evaluation, groups will find ways to fill in the forms to give themselves preferential results. And this would be a completely understandable thing to do, if you knew your future income depended not on your work, per se, but on the perception of your work you were able to create amongst donors or funders.
Valuing ‘effectiveness’
I feel I mostly addressed this one in May, when NPC’s work in this area first came to my attention. Any system which attempts to make a blanket evaluation of the overall effectiveness of different organisations will inevitably lose the nuance that makes a cattery different from a rape crisis centre or a youth music programme. If the currently established systems of organisational evaluation are anything to go by, it will not begin to capture the full value offered by most charities.
Even on an issue as seemingly straightforward as how money is spent, the lines can be incredibly blurred, depending on how distinctions are drawn between frontline staff and management, or whether fundraising budgets can be justified by their cash return, even when they might look disproportionate to the objective outsider.
Better allocation of too few resources?
As for this bigger question, I wonder why we are asking it the way we are. Would we try to regulate who people become friends with, because some people don’t have enough friends in their lives, and some have many? Those with the most friends may be popular and funny, but ultimately less reliable as friends than some of their less-popular alternatives; but will this stop people from gravitating towards them?
It’s not ideal, but systems are notoriously bad at addressing these things on any scale. Charity is a deeply personal issue for many people and outside information is unlikely to sway someone’s visceral response to an issue they have come to care about.
Further, if we try to do so, we run the (I feel) inevitable risk of:
- alienating or confusing current and future donors who feel judged for the issues they support
- encouraging dishonesty from organisations looking to find ways to boost their ranking
- devaluing the critical work that is done by charities to influence broader systemic change
- reinforcing the status of large charities with specialised staff to address grading requirements
- wasting vast sums of money to cram complex issues into insufficiently complex categorisations
For all of Martin’s reminders that people are not rational in their giving habits (he is a self-confessed donkey sanctuary donor), he seems convinced that a rational system of ranking is what is needed to convince us to give differently. If it is feeling and instinct that drive our current donations, why not look at how feeling and instinct could help to shape new ones, rather than creating a system which tries to undermine these things? Not a challenge any easier than NPC’s, but maybe one with a greater precedent for success?
The sooner we can dispel the institutional myth that you can count, measure and rank complex social efforts, as you would a football league table, or a budget deficit, the sooner we can get on to really understanding the value they do or don’t provide.