more like people

helping organisations to be more like people

Measuring water with a ruler

Significant numbers of voluntary organisations and think-tanks have been singing from government’s hymn sheet in recent years, demanding that civil society organisations prove their value for the funding they receive by submitting to ever more rigorous methods of monitoring and evaluation.  The chorus has become so loud that it is increasingly difficult for those who challenge its dominant orthodoxy to be heard, even when it is apparent that these methods are often equivalent to trying to measure the water in the ocean with a ruler.

Before going further, I want to stress that I think accountability in civil society is good; I think that most of the methods we use for supposedly achieving it, however, are not.

My latest thinking on this comes from a brief piece in Third Sector and an editorial in the Guardian about a YouGov/New Philanthropy Capital survey that suggests significant support for a ‘grading system’ for charities, to demonstrate how effective a particular organisation has been in achieving its aims.

Let’s put independence from the state and overall drops in sector-income aside for a moment and look at some of the ways accountability is currently seen by most funding bodies – governmental and non-governmental alike – in predicting what said grading system might look like…

The problem with grant funding accountability

First, let’s look at funding applications as the first institutional process of evaluating an organisation.  We need to acknowledge that the text of a funding application is a ‘theory of change’ – we (as applicants) believe that if we deliver a, b and c (measurables), we will achieve x, y and z (social outcomes).  There is no way to know (beyond a well-educated guess) that delivering a series of events, interventions, support sessions, or other outputs will definitively lead to the change we say it will.  There is nothing wrong with acknowledging this uncertainty, but standard funding formulas require us (as organisations) to claim to know our actions will create pre-determined impacts, even when this is only a possibility.  This is the fundamentally flawed premise on which most funding accountability is based: it treats complex human and social problems as complicated ones, with both fixed variables and predictable answers.

“According to …Glouberman and …Zimmerman, systems can be understood as being simple, complicated, complex. Simple problems… may encompass some basic issues of technique and terminology, but once these are mastered, following the “recipe” carries with it a very high assurance of success. Complicated problems, like sending a rocket to the moon, are different. Their complicated nature is often related not only to the scale of a problem…, but also to issues of coordination or specialised expertise. However, rockets are similar to each other and because of this following one success there can be a relatively high degree of certainty of outcome repetition. In contrast complex systems are based on relationships, and their properties of self-organisation, interconnectedness and evolution. Research into complex systems demonstrates that they cannot be understood solely by simple or complicated approaches to evidence, policy, planning and management. The metaphor that Glouberman and Zimmerman use for complex systems is like raising a child. Formulae have limited application. Raising one child provides experience but no assurance of success with the next. Expertise can contribute but is neither necessary nor sufficient to assure success. Every child is unique and must be understood as an individual. A number of interventions can be expected to fail as a matter of course. Uncertainty of the outcome remains. You cannot separate the parts from the whole.”

…But if we ignore for a minute that it is impossible to apply recipe-modelled approaches to complex social change, let’s look at the next stage of the process: the tenuous correlation between outputs and impact.  Most funders will suggest that once the deliverables of our funding have been determined (x events, x hours of support, x people served, etc.), achieving those deliverables equates to success (and not achieving them equates to failure).  This may not reflect the bigger picture.  Some funders will be more flexible in determining the actual relationship between output and outcome, but there is still often worry on the part of a funded organisation that it will lose the money it needs to keep delivering if the numbers don’t add up.  And as everyone who has worked under such pressures knows, if your work is dependent on making numbers add up, you’ll make sure those numbers add up!  This may mean ‘double-counting’ events which are funded by other funders; it may mean counting different parts of a service accessed by the same person as different outputs… we have plenty of creative ways of doing these things, and really, who can blame us?  If the money we need to keep doing something with positive social value is contingent on forms that don’t reflect that value, what would be the greater crime: some loosely-adjusted paperwork, or a vital service shut down due to an overly-rigid system?

How much flexibility is needed?

Let’s look at a concrete example: if you receive a grant to deliver counselling services to 150 children who are victims of bullying in one year, you will have a fixed number of hours you can dedicate to each child.  However, you may find that a particular group of children have higher needs, meaning that the scope and ongoing nature of their problems will require considerably more effort than your team has allotted them.  Now some funders will recognise the need for flexibility in such a situation, but this is usually a question of how many degrees they are willing to shift.  The qualitative impact of supporting 50 kids through a particularly difficult time may be far more socially valuable than providing basic support for the full pre-determined total of 150.  And for a child in a difficult scenario, 3 hours of support for every hour provided to a less-vulnerable child would not be an unexpected ratio.  Yet, from many funders’ perspectives, this is still a 67% margin of error, and likely inexcusable from a traditional ‘accountability’ perspective.
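To make the arithmetic behind that 67% figure concrete, here is a minimal sketch, using hypothetical numbers (the post doesn’t specify the funded hours, so an assumed budget of 2 hours per child is used purely for illustration):

```python
# Hypothetical grant: 150 children at 2 budgeted hours each.
TARGET_CHILDREN = 150
HOURS_PER_CHILD = 2
TOTAL_HOURS = TARGET_CHILDREN * HOURS_PER_CHILD  # 300 funded hours

# 50 high-need children each receive 3x the budgeted hours (the 3:1 ratio).
high_need = 50
hours_used = high_need * (3 * HOURS_PER_CHILD)   # 300 hours -- the whole budget

# Whatever hours remain go to children at the standard rate.
remaining_children = (TOTAL_HOURS - hours_used) // HOURS_PER_CHILD
children_served = high_need + remaining_children

# Fraction of the 150-child target that goes unserved.
shortfall = (TARGET_CHILDREN - children_served) / TARGET_CHILDREN

print(children_served)          # 50
print(round(shortfall * 100))   # 67
```

The same budget, spent at a 3:1 ratio on the 50 most vulnerable children, serves a third of the headline target: a 67% ‘failure’ on paper, regardless of the qualitative value delivered.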

The cynics of my cynicism might say: “well, this organisation just needs to quantify the additional impact of the 3:1 support ratio provided to the most vulnerable children served, and it will justify their reduced output.”  But think about the complexity of measuring such an impact – the differences that inevitably exist between different children’s developmental progress, the amount of at-home support experienced by some but not others, any other additional circumstances that might make some children’s experiences of bullying more traumatic… Not to mention the additional time that would need to be spent by staff attempting to collect the data that would rationalise this reallocation of funds, and how that would likely be pulling them away from delivering services they were funded to deliver in the first place.

…And if grant funding is not difficult enough, target-based government contracting methods just accentuate all of the worst aspects of this process, expecting, almost without exception, numbers to be achieved, on the dubious premise that ‘numbers = impact’.

The bigger risk…

The bigger risk, from my perspective, lies in the almost inherent bias of asking organisations to provide certain types of information as evidence of their impact: information that is clearly more accessible to larger charities, with dedicated research staff and knowledge of public data retrieval, than to the vast majority of voluntary and community groups who will be held to the same measures.

As has always been the case, there will be organisations whose greatest skills are in manipulating data, writing in an appropriate style and knowing how to speak to the right power-brokers.  These groups will invariably rank well in a system that plays to all of these strengths, regardless of their actual quality of work.  If a charity grading system is implemented in the ways it seems likely it would be, it will not be grading effectiveness, but, like so many other practiced measures of social value, will actually be measuring an organisation’s ability to effectively handle paperwork and bureaucracy.

My radical concept for improved accountability? Trust.

Too much of what goes on in the name of accountability is an ongoing assumption of organisational guilt, until innocence can be determined by those making the judgment in the first place.  A concept that has been central to historical ideas of accountability – trust – has been all-but-forgotten in the current culture of market-dominated methods of measurement in the social sphere.

What if funders were to take a more personally motivated attempt to ‘get to know’ their recipients?  What if a significant part of the time spent on paperwork, was spent working with grant recipients to find out how they felt they knew they had achieved impact in their work?  What if this meant funders being sent a video of a recent street dance performance, featuring interviews with the participants?  What if it meant an invitation to attend a presentation and board meeting of the organisation in question?  What if it meant direct contact between grant administrators and those who had received services as a result of their funding?

Personal interactions (as opposed to institutional ones, such as form letters and other generic communications) can fundamentally change the dynamics between a funder and those who receive funding.  Just as most of us are more likely to pay back money we have borrowed from a friend on time, than we are if we borrowed it from a credit card company (penalties aside), the likelihood of improved accountability if a relationship is developed between funder and recipient works along similar lines.  Trust is a powerful motivating force; more so even than the ongoing sense of worry that tends to characterise most funding relationships, from the recipient’s perspective.

A mandatory grading system would, by its very nature, violate any notion of trust-based accountability.  If organisations were forced to be graded, just like with our creatively-defined output measures described earlier, many would find (questionable) ways to check the necessary performance indicator boxes, diverting valuable attention away from actually improving their performance.

A voluntary grading system that organisations could opt-in to (with different types of performance indicators reflecting different conceptions of performance), on the other hand, could become a point of pride within the sector – something to aspire to.  Those who undertook it would have chosen to do so, increasing the odds of more honest and holistic measurement, with a more genuine desire to improve (than if they had been simply told by an authority figure that they ‘had to’).

I personally don’t believe there is a need for such a system at this point in time, but hopefully some of the discussions it sparks will begin to shift thinking across the sector about the ways in which we measure performance and gauge accountability more broadly.  If the threat of a heavy-handed imposition of grading is what it takes to realise how far we have allowed and encouraged trust to erode in our sector, then perhaps there is a silver lining to the NPC proposals.  If we are told that ‘trust is not going to cut it with the new government’, perhaps it is the relationship we aim to have with that government which we need to be questioning, before reverting to taking shots at each other’s integrity as organisations.



15 comments


  1. Liam Barrington-Bush May 29th 2010

    Hi there –

I think the idea of having a funder/donor on the board, or (as funding relationships change) invited to participate in board meetings, could be a really good way to build trust… it would likely, as you say, have obstacles on both sides, but could start to break down the current ‘us-them’ mentality that is often associated w/ funding relationships…

The disclaimer I didn’t initially put in was that we are at a point where trust and accountability have come to be seen as such disconnected ideas, that the work to re-engage them is considerable… this could definitely be a future post in itself, but I wanted to start with the concept and the advantages it might offer, before getting into the depth of methodology… perhaps we should chat some time?

    Thanks for the comment!

    Liam

  2. Monitoring and evaluation is a good phrase – for funders who don’t know the difference, do little of the latter, and may lack staff with the training, time or inclination to get stuck into it.

    Accountability needs to be value-driven, not least that –

    ● Groups should be more accountable to their members and to the community than to their funders.

    The regime applied by funders impacts on outcomes and so –

    ● There ought to be reciprocal evaluation between funders and groups (that would seem both fairer and more productive)

    All accountability mechanisms applied by funders are presumably motivated by concerns to manage risk and ensure value for money but widen the power imbalance and can easily threaten independence by being over-prescriptive, imposing control and compromising the group’s mission and creativity, with ubiquity as the price of securing money –

    ● Accountability frameworks should not be imposed by funders but properly negotiated and agreed with funded groups.

    Trust certainly is a vital aspect of the funding relationship: how far does the funder trust the group to know both the how and the what of delivery –

    ● It is hard to balance qualitative aspects when evaluating, the views of service users and the community may not be well reflected, and playing the numbers game is a silly way of having to do it.

    Having said all this, has the voluntary sector really shown enough initiative in showing how it wants accountability to work or are too many groups complicit? As they say, unless we challenge, nothing changes.

  3. Liam Barrington-Bush May 29th 2010

    I couldn’t have put it better myself Paul – your last 2 sentences really nail it!

    Thanks for the thoughts… nothing to add 🙂

  4. Oh dear, I feel moved to add more points on evidence of impact:

    (1) If the voluntary sector is expected to fall into line then why not government departments and public bodies? Imagine applying this to NHS computer projects, Trident, wars … – especially as this presumably requires both evidence before and after funding.

    (2) Low risk can equal low outcome in many circumstances. If we only do what we know works, won’t this stifle innovation, leaving us as a stuck-in-the-mud sector?

    (3) I guess (no evidence!) that there is a serious skills gap in the sector – after all, voluntary organisations haven’t exactly leapt into Full Value to demonstrate the worth of what they do.

    (4) The damage at the grassroots could be a serious blow to community action, why, even the Big Society – which really depends on us *all* getting out of the way of small groups being able to do all the things that they can do. Indeed, Billy Taylor’s song “I wish I knew how it would feel to be free,” written back in 1954 should be our guide, with the lyrics including:

    I wish I could give all I’m longing to give
    I wish I could live like I’m longing to live
    I wish that I could do all the things that I can do

    (5) I remember two contrasting tales:

    One was about a street theatre group that could get council funding only if they adopted a constitution and appointed a chair, secretary and treasurer – all right and proper except by turning into something else they lost their soul.

The other one was how Barclays got groups to evidence their funded community projects. They asked them and then they went with the answers. I think these included: photos; how many new friends people made; and whether they felt more likely to volunteer for another community project.

So what do you think? Evidence for all or none? Time for funders to show they trust small groups and to take a more positive view of risk to generate what used to be called social capital?

  5. Interesting post Liam. I’ve cited it in a blog post I’ve done on this question of the efficiency of volunteering:

    http://jocote.org/2010/05/the-efficiency-of-volunteering/

    I think trust is a really key issue in this debate. The new thinking around the ‘Big Society’ is a good opportunity to raise these issues and challenge some of the assumptions implicit in the ‘league table’ approach to the voluntary sector.

  6. Liam Barrington-Bush Jun 2nd 2010

Paul – I think your 2nd point on low-risk ventures stifling innovation is a really good one, as the tension between genuine learning and accountability is so rarely addressed. In a book I just read, ‘Getting to Maybe: How the World is Changed’, the authors argue that:

    “Many philanthropic funders say they value learning and want to know what works and doesn’t work, then, in the next sentence, they reaffirm their bottom-line thinking about accountability: “You (and we) will ultimately be judged by whether you attain your goals and achieve results.” This tension between learning and accountability is seldom recognized, much less openly discussed. Accountability messages trump learning messages every time. As sure as night follows day, this attitude leads those who receive funds to exaggerate results and hide failures–the antithesis of genuine reality testing and shared learning.”

The Barclays example is a really good one – I think it is something a range of funders could adopt, which could help place trust back at the centre of accountability.

    Thanks for the comments!

  7. Liam Barrington-Bush Jun 2nd 2010

    Hi Patrick –

Excellent post as well – thanks for citing this one… I particularly enjoyed this:

    “The problem with introducing hard outcomes at the social level, is that they crowd out the factors that motivate individuals to achieve those outcomes on the personal level.”

    http://jocote.org/2010/05/the-efficiency-of-volunteering/

    Spot on!

    I will be referencing you in the next post I’m writing now 😉

    Thanks for the comments!

    Liam

  8. We’re a small Social Enterprise with only a bit of experience of both the softer & harder outcomes when applying for funding. The softer outcomes allow us to move on projects a lot quicker, get things moving and find out whether something may need tweaking etc. Technology allows us to easily keep a record of what we’re doing, publicly wherever possible, and this also allows comment, suggestions etc from whoever (funders?)
    On the question of trust – We’re much more engaging with the funders who have a more relaxed approach. Happy to support, promote and get involved with other projects / funding and general sharing of knowledge – In turn promotes collaboration, partnerships, new ideas etc

    The ‘less trusting’ funding options really stifle from the outset – The amount of times we find ourselves questioning why on earth some information is required and really having to keep focussed on why we’re applying for a particular pot of money in the first place.

Not blind to accountability and very aware of misuse of funding, but for small set-ups such as ours, frustration with the regulation, tick-box, 200-words-exactly, hourly-monitoring-form approach means we don’t apply for funding that would be right up our street, because of all the hoops we’d need to jump through.

  9. Liam Barrington-Bush Jun 2nd 2010

    Much appreciated Stuart… this reminds me of a crucial (though obvious) point that is sometimes forgotten in the non-trust-based, semi-adversarial dynamics that can exist between many funders and those they fund: *we are working towards the same goals here*
If groups like yours don’t feel like they can apply for funds for these kinds of reasons, both your organisation (for obviously not getting the funding) and that funder (for not having your work to demonstrate as part of the ‘picture of change’ it aims to paint) lose out… this status quo is to no one’s benefit and needs to be addressed if funders and funded groups are to be able to work together to challenge the complex problems we face…

  10. How do you know you can or can’t get the funding if you don’t even ask!? If you don’t ask you don’t get, it’s that simple.

  11. Liam Barrington-Bush Jun 4th 2010

    Hmmm… I believe myself and most of the others who have commented have been through the process of asking countless times, but thanks for your suggestion.

  12. TURNING EVALUATION INTO A TIME MACHINE

    Outcomes-driven funding models can hide valuable lessons from failure, reducing success by dealing only with travel from the past (application stage) to the present (project completion stage) rather than onwards into the future.

    3 funding models:

    (1) We tell you what outcomes to achieve + probably how to do it

    [We trust ourselves that delivery *to* the community will prove effective. Risk is reduced to competence]

    (2) You tell us what outcomes you will achieve + how you will do it.

    [We share trust with the funded group to work effectively *for* the community, pre-setting project design and locking this into delivery. Risk is still low, being systems-controlled]

    (3) You ask the community what people want + gain their active support in coming up with ideas on how to do it

    [We trust the group to work effectively *with* the community. Risk is high and lets rip all that innovative, imaginative, learning-through-doing stuff.]

    Maybe, there is some need to spread the idea among funders that to fund is to invest trust in a funded group *and* in the community.

    Perhaps too, that success is a journey of discovery: with value not being just end outcomes but is also in the doing.

    Only “what works” matters. Biological evolution contains both success and failure – biology learns from both. The attitude of funders to evaluation devalues and hides failure. What seems beyond doubt is that (a) there is a huge cultural down on admitting – and far less – celebrating what failure contributes; (b) paradoxically, success eludes us on the big issues – like poverty, unemployment …

    Even, that actually there are no genuine end outcomes but simply people having expanded vision to see more stepping stones to more opportunities for social change, and people building their confidence, commitment and skills to take bigger jumps together.

    If so, measuring the past and even the present is inadequate and restrictive. Funders also need to look at what groups can tell them about changing the future. Evaluation could therefore benefit from shifting the core question from “What have you done (with our money – the past) and where have you got to (present achievement)” to “Where are you now able to go?” (future potential).

    The goal should be to change the future. Evaluation should be an aid to time travel in which altering that future for the better is made possible by communities seeing it and going for it. Clearly, there are sustainable funding issues in this approach, as one-off funding may not resource communities to pull this off. By stretching horizons and timelines, communities can take control of their destinies and shape their own evolution – a biological and therefore human approach.




More Like People is an association of freelance consultants, facilitators and trainers, working primarily in the voluntary, community and campaigning sectors in the UK and elsewhere.