Numbers don’t lie… but they may ignore our most important truths (Part 1)
Friday, December 3rd, 2010
This post is part one of two (not that anyone’s counting), picking apart the issues with the ways we (over-)use stats and figures in the voluntary sector and beyond. It’s for anyone who ‘doesn’t believe it until they see the numbers’. Part one focuses on what I see as the false equation of ‘numbers’ with ‘evidence’, and how this conflation undermines trust and creates less-than-honest results. Part two will look at the dehumanising effects of using numbers as descriptors, rankings or value measures of people, relationships and social change.
________________________________________________________________________
The problem with numbers is not numbers, per se; it’s where they fall in our order-of-operations.
Too often we see them as an end point – the holy grail of research, evaluation, analysis and planning – rather than a step along the journey of better understanding. When numbers become the end game, the pressure to manipulate their journey – fiddling, adjusting and otherwise reconfiguring them – is immense. And as much as we might like to pretend they represent an infallible scientific rigour, those of us who have ever filled in a funder’s monitoring form know that even a figure calculated to the Nth decimal place still leaves significant room for interpretive flexibility, when you need it to.
Numbers as a replacement for trust
No method of compliance can effectively replace the kind of accountability that mutual trust provides in relationships. The work created in attempts to do so is immense. Numbers have traditionally been seen as an alternative when trust doesn’t exist, providing a way of measuring whether someone has done what they said they would. Or so we tell ourselves.
Unfortunately, as this has become the norm for contracts, evaluations, grant monitoring and organisational audits, we have taken the assumption of dishonesty that underpins the push for numbers, and trumped it… with more dishonesty!
And this dishonesty appears wherever we have imposed what David Boyle calls ‘The Tyranny of Numbers’. When voluntary organisations need to hit targets to maintain their funding, they double-count beneficiaries and shift budget lines; when governments need to justify ideology-driven service cuts to the public, they pick and choose the statistics that help them do so, ignoring those that don’t; when FTSE CEOs want bigger bonuses, they hide liabilities and inflate profits to produce short-term gains in stock prices… They create numbers that succeed only in hiding the truth, and most of the time we have no practical way of telling the difference!
In doing so, each of these examples creates long-term problems in its wake: organisations and funders fail to adequately learn from both success and failure; governments are not held to account for ideologically-driven decisions; companies suffer when the bubbles that so many questionable bonuses have been built upon invariably burst…
Across the sectors
So these practices occur, with more or less altruistic intent, across all types of institutions. And it is impossible to effectively gauge their true prevalence: fiddled numbers look (at least superficially) much the same as honest ones, so there is no simple and reliable way of checking whether people are gaming the system without digging considerably deeper, by which point it may be too late to effect change.
Headline figures are underpinned by statistics, which have consolidated totals beneath them, and tallies and raw data from sample surveys still lower down in the process. Most of us don’t see, or are unable to understand, these numbers on top of numbers, making it impossible (within most of our means) to effectively refute them. Yet they justify most of the decisions affecting our lives and the lives of those we support.
In the sea of numbers we may cross paths with on any given day, distinguishing between the ‘authentic’, the ‘questionable’ and the ‘wrong’ is an unfeasible task. One of my favourite recent finds, via Henry Mintzberg, looks at the creation of the statistics that justified British World War II aircraft expenditure:
“As Eli Devons (1950: Ch. 7) described in his fascinating account of planning for British aircraft production during World War II, ‘despite the arbitrary assumptions made’ in the collection of some data, ‘once a figure was put forward… it soon became accepted as the “agreed figure”, since no one was able by rational argument to demonstrate that it was wrong… And once the figures were called “statistics”, they acquired the authority and sanctity of Holy Writ’ (155).” [Mintzberg, The Rise and Fall of Strategic Planning, The Free Press, 1994]
For another example of the futility of finding meaningful numbers, think of the London demonstration against the War in Iraq in February 2003. It seems fair to assume biases on both sides: the police put the march at 750,000, while Stop The War organisers claimed 2,000,000. Even in counting a single tally, the most important variable, evidently, is who is doing the counting. And while one could of course argue that an objectively ‘correct’ number exists, who is in a position to ‘prove’ that theirs is it? So in practice, the numbers from both sides mean very little beyond ‘a considerable number of people don’t want this war’ – a conclusion any casual observer of the event could have drawn, avoiding the unnecessary ambiguity the numbers added to the situation.
Newspapers on top of newspapers
One response to these pitfalls is to produce new numbers which serve to either validate or disprove the old ones. In doing so, we are placing new newspapers over the old newspapers that we used to cover up the spot where the dog peed in the corner. The pee is still there, but we don’t have to acknowledge it anymore.
…And the new layer seems effective for a period, but then the damp begins to soak through and the stench begins to sneak around the edges, as we find yet more resourceful ways to manipulate the new system and achieve the numbers we wanted in the first place. The examples of this approach are endless: crimes get regrouped, ‘impact’ redefined, local boundaries redrawn, titles reclassified, and we’re back to square one, with little idea of what we have done, whether it has actually worked, or how it compares with what we did before.
Trusting relationships don’t produce this kind of effect; requiring numbers to achieve accountability comes from a mistrusting place, and the behaviour that follows is likely to confirm that mistrust. What if the Stop The War Coalition had simply shown the images from February 15th and let people judge for themselves the importance of the day, rather than trying to quantify the historic mobilisation?
Building trust
My inclination (perhaps unsurprisingly to regular readers) is to place our focus on building trusting relationships, rather than trust in numbers. This is, to frame it conservatively, a mammoth task, yet one which I feel is at the core of better and more meaningful learning, accountability and understanding. Raising trust invariably raises questions of power, but without venturing into such depths, our results will only ever be shallow ones.
How can trust change the dynamics between those with more and those with less power in the world of social change?
In community groups I’ve worked with, when you ask the question ‘how do you know you’ve made a difference?’, it is common to hear from those most in tune with local issues: ‘We just know – we can tell’.
The professional voluntary sector tends to scoff at this response for the whole range of obvious reasons you might expect; namely that it’s ‘not evidence-based’ (see: ‘Show me the numbers’).
But within this seemingly simple response can often lie a series of profound truths, whose detail and subtlety are not easily translated into the world of reporting. It’s often a series of small changes, anecdotes, stories – the things you notice when you know the ins-and-outs of a community, its strengths and its problems, like the back of your hand. Together, these anecdotes create a broader ‘feeling’ which may well serve as a more effective gauge of the shifts taking place in an area than any metric ever can.
The challenge
So funders, lead partner organisations, councils, universities: why don’t we ask the people involved in local efforts how they know what kind of impact they’ve made and how they would choose to show us? Why don’t we also ask them what they’ve learned during the process?
And the bold part? We accept what they tell us.
When we ask for numbers, we undermine the judgement of those who do the work. If we give them the chance, without the pressure to produce figures (while not stopping them if they feel numbers do help to tell their story), we may find that we have encouraged a more honest understanding of the issues.
This approach shifts the power dynamics by offering trust: giving people the chance to provide a narrative that makes sense within their experience, rather than within the frameworks we have created for our own convenience or preference. Those who are trusted are more likely to be trustworthy. And when the people you fund, research, support or evaluate are trustworthy, you’re more likely to hear the important stuff from them, rather than a finely-tuned propaganda piece, invariably filled with the kinds of selective numbers that succeed only in giving us the false impression of knowing what’s going on.
The follow-up will focus on the more values-driven argument against a number-centric approach: how numbers can dehumanise those involved in or affected by our work, undermining our core missions and principles in the process.