Measuring water with a ruler
Significant numbers of voluntary organisations and think-tanks have been singing from the government’s hymn sheet in recent years, demanding that civil society organisations prove their value for the funding they receive by submitting to ever more rigorous methods of monitoring and evaluation. The chorus has become so loud that it is increasingly difficult for those who challenge its dominant orthodoxy to be heard, even when it is apparent that these methods often amount to trying to measure the water in the ocean with a ruler.
Before going further, I want to stress that I think accountability in civil society is a good thing; most of the methods we use to supposedly achieve it, however, are not.
My latest thinking on this comes from a brief piece in Third Sector and an editorial in the Guardian about a YouGov/New Philanthropy Capital survey that suggests significant support for a ‘grading system’ for charities, to demonstrate how effective a particular organisation has been in achieving its aims.
Let’s put independence from the state and overall drops in sector income aside for a moment and look at some of the ways accountability is currently seen by most funding bodies – governmental and non-governmental alike – to predict what such a grading system might look like…
Let’s start with funding applications, the first institutional process for evaluating an organisation. We need to acknowledge that the text of a funding application is a ‘theory of change’ – we (as applicants) believe that if we deliver a, b and c (measurables), we will achieve x, y and z (social outcomes). There is no way to know (beyond a well-educated guess) that delivering a series of events, interventions, support sessions or other outputs will definitively lead to the change we say it will. There is nothing wrong with acknowledging this uncertainty, but standard funding formulas require us (as organisations) to claim to know that our actions will create pre-determined impacts, even when this is only a possibility. This is the fundamentally flawed premise on which most funding accountability is based: it treats complex human and social problems as complicated ones, with both fixed variables and predictable answers.
“According to …Glouberman and …Zimmerman, systems can be understood as being simple, complicated, complex. Simple problems… may encompass some basic issues of technique and terminology, but once these are mastered, following the “recipe” carries with it a very high assurance of success. Complicated problems, like sending a rocket to the moon, are different. Their complicated nature is often related not only to the scale of a problem…, but also to issues of coordination or specialised expertise. However, rockets are similar to each other and because of this following one success there can be a relatively high degree of certainty of outcome repetition. In contrast complex systems are based on relationships, and their properties of self-organisation, interconnectedness and evolution. Research into complex systems demonstrates that they cannot be understood solely by simple or complicated approaches to evidence, policy, planning and management. The metaphor that Glouberman and Zimmerman use for complex systems is like raising a child. Formulae have limited application. Raising one child provides experience but no assurance of success with the next. Expertise can contribute but is neither necessary nor sufficient to assure success. Every child is unique and must be understood as an individual. A number of interventions can be expected to fail as a matter of course. Uncertainty of the outcome remains. You cannot separate the parts from the whole.”
…But if we ignore for a minute that it is impossible to apply recipe-modelled approaches to complex social change, let’s look at the next stage of the process: the tenuous correlation between outputs and impact. Most funders will suggest that once the deliverables of our funding have been determined (x events, x hours of support, x people served, etc.), achieving those deliverables equates to success (and not achieving them equates to failure). This may not reflect the bigger picture. Some funders will be more flexible in determining the actual relationship between output and outcome, but a funded organisation will still often worry that it will lose the money it needs to keep delivering if the numbers don’t add up. And as everyone who has worked under such pressures knows, if your work is dependent on making numbers add up, you’ll make sure those numbers add up! This may mean ‘double-counting’ events which are funded by other funders; it may mean counting different parts of a service accessed by the same person as different outputs… we have plenty of creative ways of doing these things, and really, who can blame us? If the money we need to keep doing something with positive social value is contingent on forms that don’t reflect that value, which would be the greater crime: some loosely adjusted paperwork, or a vital service shut down by an overly rigid system?
Let’s look at a concrete example: if you receive a grant to deliver counselling services to 150 children who are victims of bullying in one year, you will have a fixed number of hours you can dedicate to each child. However, you may find that a particular group of children have higher needs, meaning that the scope and ongoing nature of their problems will require considerably more effort than your team has allotted them. Some funders will recognise the need for flexibility in such a situation, but this is usually a question of how many degrees they are willing to shift. The qualitative impact of supporting 50 kids through a particularly difficult time may be far more socially valuable than providing basic support to the full pre-determined total of 150. And for a child in a difficult situation, 3 hours of support for every hour provided to a less vulnerable child would not be an unexpected ratio. Yet, from many funders’ perspectives, this is still a 67% margin of error, and likely inexcusable from a traditional ‘accountability’ perspective.
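To make the arithmetic behind that figure explicit (a rough, illustrative calculation – the hour totals are my own assumptions, not drawn from any real grant): suppose the grant budgets 300 hours of counselling, or 2 hours per child across the planned 150. If 50 high-need children each require three times that allocation – 6 hours apiece – those 50 children alone consume the full 300 hours. The organisation has spent its entire budget on what is arguably its most valuable work, yet on paper it has ‘missed’ 100 of its 150 targeted children: a 67% shortfall against the agreed output.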
The cynics of my cynicism might say: “well, this organisation just needs to quantify the additional impact of the 3:1 support ratio provided to the most vulnerable children served, and that will justify the reduced output.” But think about the complexity of measuring such an impact – the differences that inevitably exist between different children’s developmental progress, the amount of at-home support experienced by some but not others, any other circumstances that might make some children’s experiences of bullying more traumatic… Not to mention the additional time staff would need to spend collecting the data to rationalise this reallocation of funds, and how that would likely pull them away from delivering the services they were funded to deliver in the first place.
…And as if grant funding were not difficult enough, target-based government contracting methods accentuate all of the worst aspects of this process, expecting, almost without exception, that the numbers be achieved, on the dubious premise that ‘numbers = impact’.
The bigger risk, from my perspective, lies in the near-inherent bias of asking organisations to provide certain types of information as evidence of their impact – information that is clearly more accessible to larger charities, with dedicated research staff and knowledge of public data retrieval, than to the vast majority of voluntary and community groups who will be held to the same measures.
As has always been the case, there will be organisations whose greatest skills are in manipulating data, writing in an appropriate style and knowing how to speak to the right power-brokers. These groups will invariably rank well in a system that plays to all of these strengths, regardless of the actual quality of their work. If a charity grading system is implemented in the way it seems likely to be, it will not be grading effectiveness but, like so many other practised measures of social value, will actually be measuring an organisation’s ability to effectively handle paperwork and bureaucracy.
Too much of what goes on in the name of accountability rests on an assumption of organisational guilt until innocence can be determined by those making the judgment in the first place. A concept that has been central to historical ideas of accountability – trust – has been all but forgotten in the current culture of market-dominated methods of measurement in the social sphere.
What if funders were to make a more personal effort to ‘get to know’ their recipients? What if a significant part of the time spent on paperwork were spent working with grant recipients to find out how they themselves knew they had achieved impact in their work? What if this meant funders being sent a video of a recent street dance performance, featuring interviews with the participants? What if it meant an invitation to attend a presentation and board meeting of the organisation in question? What if it meant direct contact between grant administrators and those who had received services as a result of their funding?
Personal interactions (as opposed to institutional ones, such as form letters and other generic communications) can fundamentally change the dynamics between a funder and those who receive funding. Just as most of us are more likely to pay back money borrowed from a friend on time than money borrowed from a credit card company (penalties aside), accountability tends to improve when a relationship is developed between funder and recipient. Trust is a powerful motivating force – more powerful, even, than the ongoing sense of worry that characterises most funding relationships from the recipient’s perspective.
A mandatory grading system would, by its very nature, violate any notion of trust-based accountability. If organisations were forced to be graded, then, just as with the creatively defined output measures described earlier, many would find (questionable) ways to tick the necessary performance-indicator boxes, diverting valuable attention away from actually improving their performance.
A voluntary grading system that organisations could opt in to (with different types of performance indicators reflecting different conceptions of performance), on the other hand, could become a point of pride within the sector – something to aspire to. Those who undertook it would have chosen to do so, increasing the odds of more honest and holistic measurement and a more genuine desire to improve (than if an authority figure had simply told them they ‘had to’).
Tags: accountability, charity, voluntary, evaluation, social impact, trust