Thursday, January 20th, 2011
We may not celebrate our successes particularly well in the voluntary and community sectors, but maybe that’s because we’ve stopped believing them? Perhaps if we spent more time actively admitting what’s gone wrong, as the ground-breaking new website AdmittingFailure.com encourages, we would feel more inclined to celebrate when things really do go well?
________________________________________________________________
There have been many recurring themes in my time in the voluntary and community sectors. One of these has been the repeated mantra of ‘we are terrible at celebrating our successes!’
On some level, I’ve always agreed with this – for those of us slogging away in often thankless jobs ‘doing good’ in the world, a party, a pat on the back, or some other affirmation of our value is important and shouldn’t be easily dismissed.
However, I’d also like to unpick this one a bit; maybe we fail to celebrate our successes because we declare that everything we do in this sector is ‘successful’? And maybe, when we do so, we stop believing it? And when we stop believing it, maybe we don’t want to make a big deal of each and every supposed success, because doing so would highlight the reality that we’ve been distorting our own narrative for funders and donors for so long?
‘Doomed to succeed’
My colleague Titus Alexander once described our sector as ‘doomed to succeed’ – that as soon as our organisations are given money to do something, we are expected to not only achieve, but pass with flying colours, one hundred percent of the time.
And as our income usually hinges on doing so, invariably, we find ways of showing that we do; sometimes this means ‘double-counting’, sometimes cherry-picking ‘easy-win’ beneficiaries, sometimes highlighting one or two of those we’ve supported as being more representative than they really are… whatever it is, we’ve got our ways of making sure whatever we do ‘succeeds’ – at least on paper.
The dangers here are ones I’ve discussed in several blogs before, but primary among them is the impact this has on our ability to learn from our mistakes – namely because we often pretend they aren’t there, or we gloss over them with a selectively told story in which what we did worked, and worked entirely.
The problem is, if we were to read a random selection of most of our organisations’ annual reports, evaluations or publicity documents, we would get the impression that everything we’ve ever done has gone perfectly to plan.
Which is basically impossible. But some combination of real and perceived funder/donor pressure tends to keep us from acknowledging this impossibility, allowing us to continue living a whole series of stretched, distorted or otherwise manipulated truths in our working lives.
The research on the importance of mistakes, trial-and-error and learning from things that don’t work is extensive and the conclusions are fairly clear: if you’re afraid of either making or acknowledging your mistakes, you will never do anything new or groundbreaking.
Admitting Failure?
With all of this in mind, my jaw dropped when I read Monday’s Guardian story on the Canadian NGO Engineers Without Borders’ decision to publish a ‘Failure Report’ and launch a website for the international development/aid sector more broadly, called AdmittingFailure.com. It reads:
“By hiding our failures, we are condemning ourselves to repeat them and we are stifling innovation. In doing so, we are condemning ourselves to continue under-performance in the development sector.
Conversely, by admitting our failures – publicly sharing them not as shameful acts, but as important lessons – we contribute to a culture in development where failure is recognized as essential to success.” – AdmittingFailure.com
The site also invites other development/aid orgs around the world to submit their own failures, the idea being that an easily searchable and sharable ‘failure bank’ will emerge, providing a user-generated resource for those looking to, say, implement a change management project in Burkina Faso.
Admitting failure everywhere else in life
At this point I add the critical disclaimer that I’m not just picking on non-profit organisations; the inclination to deny our mistakes and failures is much more widespread than that. We teach it to our kids in schools, our governments do it almost pathologically, and the pressure to push profit margins creates a similar distorting effect in the private sector.
Some recent online conversations have got me involved in creating WeScrewedUp.com – a site based on the same principles as AdmittingFailure.com, but applied to our personal lives (work, relationships, families, etc.).
We’re also thinking about a similar forum and blog for non-profit/voluntary causes more widely, allowing an honest discussion of things that haven’t worked, to help all of us get closer to those that might.
Do let us know if you’re interested in contributing, are doing something similar, or know of something along these lines that already exists…
Tuesday, October 5th, 2010
Charities that support cuddly animals invariably receive more than their fair share of the public donations pie, given their contributions to society (compared to, say, a refugee support group or a rape crisis centre). But is a ‘charity ranking system’ a good way to shift this imbalance? If our giving choices are indeed ‘visceral’ and ‘irrational’, is a measured, rational system likely to change them?
______________________________________________________________________
On Wednesday, Martin Brookes, CEO of New Philanthropy Capital, spoke at the RSA on ‘The Morality of Charity’, arguing for a charity ranking system to help the public decide which organisations are more worthy of their donations than others. At the core of his speech, he said, were moral judgments on:
- the value of particular causes over others;
- the ability of some organisations to deliver more effectively on those causes than others.
His hope was for a system that could divert scarce resources to the most deserving, rather than the most popular, causes.
On one level, I can appreciate the sentiment here; those who know me know I often bemoan the vast reserves sitting in the bank accounts of a small number of ultra-large national organisations. However, there seem to be too many trade-offs associated with the proposal – trade-offs which may deeply undermine public trust in charities, as well as the sector’s broader independence and individual donors’ right to choose.
I’ve purposely avoided the question of practical difficulties, as I feel Sophie Hudson has already summarised that argument, but also because I’m keen to avoid the rhetoric of ‘let’s not do it because it seems impossible’. My approach looks at the risks I see as inherent in making such judgments about the value of the truly vast range of charitable efforts, and the complexity of their contributions to society.
All causes were not created equal…
Martin cites the example of charities that have historically delivered services which, retrospectively, have been deemed damaging (cigarettes for soldiers, blood-letting, etc.) as a justification for a ranking system that would discourage money from reaching such groups. However, he didn’t mention the charities which were ‘ahead of their time’ – whose services may not have been formally recognised as critical when they were established, but have since come to be seen as integral in their field. A ranking system, without the benefit of hindsight, could only pass judgment on current ‘fact’ – on that which is already ‘proven’, versus that which is essentially being trialled by a charity that strongly believes in a new approach. This creates an imperative for organisations to stick to established methods, shunning risk and innovation for fear that a yet unproven means of delivery will lower their ranking, and thus their income – a formula for the calcification of the sector.
What about politics?
While I would agree that there is an unfair allocation of resources towards ‘sexy’ and broadly agreeable causes, those who are most in need (if I can indeed make such a judgment) are often those least likely to receive public donations. Underlying this reality, as uncomfortably as it sits with much of the charity world, is politics. People won’t agree on the most deserving causes because their underpinning political beliefs will answer this question differently. Refugees and asylum seekers are often among the most harshly treated groups in the country, yet many will argue against their right to be here at all, let alone to have money spent supporting them.
As long as political divides exist, we will view different charities as differently ‘worthy’, regardless of what information we are given about their value. If we don’t talk about politics, we are unlikely to get very far in this discussion.
Conversely, if we do acknowledge political differences in such a system, it seems we will end up either with rankings that reinforce the political status quo (a dangerous choice, as discriminatory as it is), or with a system so watered-down that only donkeys, cancer and football will qualify for support, as the only causes not (arguably) steeped in political baggage.
Campaigning?
Speaking of politics, what if an organisation is working to influence broader social or governmental forces? Its impacts may be much harder to see than those of organisations exclusively delivering services. In many cases, the broader influencing work will ultimately be more important, holding the key to changing the systemic injustice that creates the need for services in the first place – but how could this be ranked alongside groups whose efforts are based entirely on addressing immediate, visible need?
It’s a complex, complex world…

We live in a complex world in which an arts charity may be vastly improving the life prospects of cancer patients and a youth football project may be significantly reducing local violent crime. This means that many of the best organisations cannot be categorised according to Maslow’s Hierarchy of Needs (which Martin suggests as an option), with food and water at the base and arts and leisure activities at the peak.
Maslow’s hierarchy doesn’t address the complex inter-relationships between work affecting different parts of the pyramid. Parallels to the arts or football examples above likely exist in every voluntary or community organisation that doesn’t supply food and water to sub-Saharan African villages, making classification a broadly meaningless activity – one which would likely just encourage groups to distort their categorisations to rank more highly than they otherwise might, in the interests of maintaining the impression of public value. Much like currently imposed systems of monitoring and evaluation, groups will find ways to fill in the forms to give themselves preferential results. And this would be a completely understandable thing to do, if you knew your future income was dependent not on your work, per se, but on the perception of your work you were able to create amongst donors or funders.
Valuing ‘effectiveness’
I feel I mostly addressed this one in May, when NPC’s work in this area first came to my attention. Any system which attempts to make a blanket evaluation of the overall effectiveness of different organisations will inevitably lose the nuance that makes a cattery different from a rape crisis centre or a youth music programme. If the currently established systems of organisational evaluation are anything to go by, such a system will not begin to capture the full value offered by most charities.
Even on an issue as seemingly straightforward as how money is spent and what counts as overhead, the lines can be incredibly blurred, depending on how distinctions are drawn between frontline staff and management, or on whether fundraising budgets can be justified by their cash returns, even though they might look disproportionate to an objective outsider.
Better allocation of too few resources?
As for this bigger question, I wonder why we are asking it the way we are. Would we try to regulate who people become friends with, because some people don’t have enough friends in their lives and some have many? Those with the most friends may be popular and funny, but ultimately less reliable as friends than some of their less-popular alternatives; but will this stop people from gravitating towards them?
It’s not ideal, but systems are notoriously bad at addressing these things on any scale. Charity is a deeply personal issue for many people and outside information is unlikely to sway someone’s visceral response to an issue they have come to care about.
Further, if we try to do so, we run the (I feel) inevitable risk of:
- alienating or confusing current and future donors who feel judged for the issues they support
- encouraging dishonesty from organisations looking to find ways to boost their ranking
- devaluing the critical work that is done by charities to influence broader systemic change
- reinforcing the status of large charities with specialised staff to address grading requirements
- wasting vast sums of money to cram complex issues into insufficiently complex categorisations
For all of Martin’s reminders that people are not rational in their giving habits (he is a self-confessed donkey sanctuary donor), he seems convinced that a rational ranking system is what is needed to persuade us to give differently. If it is feeling and instinct that drive our current donations, why not look at how feeling and instinct could help to shape new ones, rather than creating a system which tries to undermine them? Not a challenge any easier than NPC’s, but maybe one with a greater precedent for success?
The sooner we can dispel the institutional myth that you can count, measure and rank complex social efforts, as you would a football league table, or a budget deficit, the sooner we can get on to really understanding the value they do or don’t provide.
Friday, May 28th, 2010
Significant numbers of voluntary organisations and think-tanks have been singing from the government’s hymn sheet in recent years, demanding that civil society organisations prove their value for the funding they receive by submitting to ever more rigorous methods of monitoring and evaluation. The chorus has become so loud that it is increasingly difficult for those who challenge its dominant orthodoxy to be heard, even when it is apparent that these methods often amount to trying to measure the water in the ocean with a ruler.
Before going further, I want to stress that I think accountability in civil society is good; I think that most of the methods we use for supposedly achieving it, however, are not.
My latest thinking on this comes from a brief piece in Third Sector and an editorial in the Guardian about a YouGov/New Philanthropy Capital survey that suggests significant support for a ‘grading system’ for charities, to demonstrate how effective a particular organisation has been in achieving its aims.
Let’s put independence from the state and overall drops in sector-income aside for a moment and look at some of the ways accountability is currently seen by most funding bodies – governmental and non-governmental alike – in predicting what said grading system might look like…
The problem with grant funding accountability
First, let’s look at funding applications as the first institutional process of evaluating an organisation. We need to acknowledge that the text of a funding application is a ‘theory of change’ – we (as applicants) believe that if we deliver a, b and c (measurables), we will achieve x, y and z (social outcomes). There is no way to know (beyond a well-educated guess) that delivering a series of events, interventions, support sessions or other outputs will definitively lead to the change we say it will. There is nothing wrong with acknowledging this uncertainty, but standard funding formulas expect us (as organisations) to claim to know our actions will create pre-determined impacts, even when this is only a possibility. This is the fundamentally flawed premise on which most funding accountability is based: it treats complex human and social problems as complicated ones, with both fixed variables and predictable answers.
“According to …Glouberman and …Zimmerman, systems can be understood as being simple, complicated, complex. Simple problems… may encompass some basic issues of technique and terminology, but once these are mastered, following the “recipe” carries with it a very high assurance of success. Complicated problems, like sending a rocket to the moon, are different. Their complicated nature is often related not only to the scale of a problem…, but also to issues of coordination or specialised expertise. However, rockets are similar to each other and because of this following one success there can be a relatively high degree of certainty of outcome repetition. In contrast complex systems are based on relationships, and their properties of self-organisation, interconnectedness and evolution. Research into complex systems demonstrates that they cannot be understood solely by simple or complicated approaches to evidence, policy, planning and management. The metaphor that Glouberman and Zimmerman use for complex systems is like raising a child. Formulae have limited application. Raising one child provides experience but no assurance of success with the next. Expertise can contribute but is neither necessary nor sufficient to assure success. Every child is unique and must be understood as an individual. A number of interventions can be expected to fail as a matter of course. Uncertainty of the outcome remains. You cannot separate the parts from the whole.”
…But if we ignore for a minute that it is impossible to apply recipe-modelled approaches to complex social change, let’s look at the next stage of the process: the tenuous correlation between outputs and impact. Most funders will suggest that once the deliverables of our funding have been determined (x events, x hours of support, x people served, etc.), achieving those deliverables equates to success (and not achieving them equates to failure). This may not reflect the bigger picture. Some funders will be more flexible in determining the actual relationship between output and outcome, but a funded organisation will still often worry that it will lose the money it needs to keep delivering if the numbers don’t add up. And as everyone who has worked under such pressures knows, if your work is dependent on making numbers add up, you’ll make sure those numbers add up! This may mean ‘double-counting’ events which are funded by other funders; it may mean counting different parts of a service accessed by the same person as different outputs… we have plenty of creative ways of doing these things, and really, who can blame us? If the money we need to keep doing something with positive social value is contingent on forms that don’t reflect that value, which would be the greater crime: some loosely-adjusted paperwork, or a vital service shut down by an overly-rigid system?
How much flexibility is needed?
Let’s look at a concrete example: if you receive a grant to deliver counselling services to 150 children who are victims of bullying in one year, you will have a fixed number of hours you can dedicate to each child. However, you may find that a particular group of children have higher needs, meaning that the scope and ongoing nature of their problems will require considerably more effort than your team have allotted them. Now, some funders will recognise the need for flexibility in such a situation, but this is usually a question of how many degrees they are willing to shift. The qualitative impact of supporting 50 kids through a particularly difficult time may be far more socially valuable than providing basic support for the full pre-determined total of 150. And for a child in a difficult scenario, 3 hours of support for every hour provided to a less-vulnerable child would not be an unexpected ratio. Yet at that ratio, the same fixed pool of hours serves only 50 of the 150 children – from many funders’ perspectives, a 67% margin of error, and likely inexcusable from a traditional ‘accountability’ perspective.
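To make the arithmetic concrete, here is a minimal sketch of that trade-off. The 3-hours-per-child baseline is my own assumption, chosen for round numbers; everything else follows the example above.

```python
# A toy model of the fixed-hours trade-off described above.
# All figures are illustrative; the 3-hours-per-child baseline is an
# assumption made for round numbers, not a figure from any real grant.

TARGET_CHILDREN = 150        # children the grant commits you to serve
BASE_HOURS_PER_CHILD = 3     # assumed uniform allocation per child
TOTAL_HOURS = TARGET_CHILDREN * BASE_HOURS_PER_CHILD  # fixed pool: 450 hours

HIGH_NEED_RATIO = 3          # high-need children require ~3x the contact time

# If the whole pool goes to high-need children at the 3:1 ratio:
high_need_served = TOTAL_HOURS // (BASE_HOURS_PER_CHILD * HIGH_NEED_RATIO)

shortfall = 1 - high_need_served / TARGET_CHILDREN    # 2/3 of the target missed
print(f"Children served: {high_need_served} of {TARGET_CHILDREN}")
print(f"Shortfall against the funded target: {shortfall:.0%}")  # ~67%
```

However you set the baseline, the structure of the trade-off stays the same: a fixed pool of hours divided by a higher per-child ratio shrinks the headline number, no matter how much more valuable each of those hours may be.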
The cynics of my cynicism might say: “well, this organisation just needs to quantify the additional impact of the 3:1 support ratio provided to the most vulnerable children served, and that will justify its reduced output.” But think about the complexity of measuring such an impact – the differences that inevitably exist between different children’s developmental progress, the amount of at-home support experienced by some but not others, any other circumstances that might make some children’s experiences of bullying more traumatic… Not to mention the additional time staff would need to spend collecting the data to rationalise this reallocation of funds, and how that would likely pull them away from delivering the services they were funded to deliver in the first place.
…And if grant funding is not difficult enough, target-based government contracting methods just accentuate all of the worst aspects of this process, expecting, almost without exception, numbers to be achieved, on the dubious premise that ‘numbers = impact’.
The bigger risk…
The bigger risk, from my perspective, lies in the almost inherent bias of asking organisations to provide, as evidence of their impact, certain types of information that are clearly more accessible to larger charities – with dedicated research staff and knowledge of public data retrieval – than they are to the vast majority of voluntary and community groups who will be held to the same measures.
As has always been the case, there will be organisations whose greatest skills are in manipulating data, writing in an appropriate style and knowing how to speak to the right power-brokers. These groups will invariably rank well in a system that plays to all of these strengths, regardless of their actual quality of work. If a charity grading system is implemented in the ways it seems likely it would be, it will not be grading effectiveness, but, like so many other practiced measures of social value, will actually be measuring an organisation’s ability to effectively handle paperwork and bureaucracy.
My radical concept for improved accountability? Trust.
Too much of what goes on in the name of accountability is an ongoing assumption of organisational guilt, until innocence can be determined by those making the judgment in the first place. A concept that has been central to historical ideas of accountability – trust – has been all but forgotten in the current culture of market-dominated methods of measurement in the social sphere.
What if funders were to make a more personal effort to ‘get to know’ their recipients? What if a significant part of the time spent on paperwork was spent working with grant recipients to find out how they felt they knew they had achieved impact in their work? What if this meant funders being sent a video of a recent street dance performance, featuring interviews with the participants? What if it meant an invitation to attend a presentation and board meeting of the organisation in question? What if it meant direct contact between grant administrators and those who had received services as a result of their funding?
Personal interactions (as opposed to institutional ones, such as form letters and other generic communications) can fundamentally change the dynamics between a funder and those who receive funding. Just as most of us are more likely to pay back money we have borrowed from a friend on time than money borrowed from a credit card company (penalties aside), the likelihood of improved accountability when a relationship develops between funder and recipient works along similar lines. Trust is a powerful motivating force; more so, even, than the ongoing sense of worry that tends to characterise most funding relationships from the recipient’s perspective.
A mandatory grading system would, by its very nature, violate any notion of trust-based accountability. If organisations were forced to be graded, just like with our creatively-defined output measures described earlier, many would find (questionable) ways to check the necessary performance indicator boxes, diverting valuable attention away from actually improving their performance.
A voluntary grading system that organisations could opt-in to (with different types of performance indicators reflecting different conceptions of performance), on the other hand, could become a point of pride within the sector – something to aspire to. Those who undertook it would have chosen to do so, increasing the odds of more honest and holistic measurement, with a more genuine desire to improve (than if they had been simply told by an authority figure that they ‘had to’).
I personally don’t believe there is a need for such a system at this point in time, but hopefully some of the discussions it sparks will begin to shift thinking across the sector about the ways in which we measure performance and gauge accountability more broadly. If the threat of a heavy-handed imposition of grading is what it takes to realise how far we have allowed and encouraged trust to erode in our sector, then perhaps there is a silver lining to the NPC proposals. If we are told that ‘trust is not going to cut it with the new government’, perhaps it is the relationship we aim to have with that government that we need to be questioning, before reverting to taking shots at each other’s integrity as organisations.