by Tris Lumley
The rise of the impact movement over the past decade has triggered a major shift in the fabric of the nonprofit world.
It would be difficult to find a social sector leader today who said that measuring impact wasn’t an important aspect of their work. Nonprofits must be able to hold themselves accountable to those they exist to serve, and how can they possibly do that without the data that tells them whether they’re doing the job effectively?
Yet there has been a growing sense of unease over recent years about the results of the rise of impact measurement and evaluation. On the whole, it’s not clear that impact measurement is being used to drive changes in strategies, programs and service design. It’s not clear that it’s being used by funders and investors to make decisions on where to invest. And it’s clearly not being used by organizations to account to those they aim to serve.
Two notable figures in the impact movement have recently gone so far as to publish an article claiming that the impact ‘revolution’ has gone wrong. Writing for Alliance Magazine, Ken Berger and Caroline Fiennes claim that “the ‘whole impact thing’ went wrong because we asked the nonprofits themselves to assess their own impact.” They argue that organizations shouldn’t assess their own impact because the incentives created by the need to fundraise for their work mean they produce “skewed and unreliable research.” In their view, nonprofits don’t have the resources or skills to do such research properly anyway. Instead, Berger and Fiennes say, nonprofits should use the existing research to inform their program design and find other tools to gather data that can help them learn about and improve their work.
An anti-social sector?
It is hard to argue with much of this backlash against the current state of impact measurement practice in the nonprofit sector. If organizations are primarily in the business of raising money to sustain their work, it’s unsurprising that they end up using all the tools they can, including impact measurement, to be effective in that endeavor. Charities have told us that while they are increasingly measuring their impact, they are far more likely to do so to meet the demands of funders than to improve services.
As I’ve argued before, the nonprofit sector is at risk of prioritizing accountability to funders over accountability to those they aim to serve. In the extreme, these tensions between mission and sustainability leave us sleepwalking into becoming an anti-social sector, where the real beneficiaries are no longer the people we exist to help, but instead ourselves and our organizations.
But the view that nonprofits shouldn’t be in the business of trying to measure their impact is much too simplistic for my taste. If you are in the social impact business, you’d be negligent not to collect data to manage what you do. And relying exclusively on the existing research literature simply won’t cut it. What we need instead is to bring everything that’s being learned from research into complex systems to bear on the nonprofit sector’s approach to impact measurement.
Towards an impact ecosystem
We can think of the nonprofit sector as an ecosystem made up of funders and investors, nonprofits and those they serve. In that ecosystem, there’s a flow of money and resources that enables us to work to improve people’s lives. To guide those flows of money, we also want to have flows of information and knowledge to guide how resources are allocated. That has been the motivation behind the impact movement from the beginning.
What we tend not to appreciate is that different parts of the ecosystem might need to be responsible for different pieces of the impact jigsaw. Academic research can play a pivotal role in working out what types of programs seem to work for different groups of people in different contexts. But this won’t always be an exact fit. The experimental approach that works for medical research, for example, is not well tailored to the complex, everyday systems and contexts in which people and communities exist. So rather than expecting nonprofits to use “proven interventions” in the same way that doctors prescribe drugs, academics might instead produce guidance on recommended practices: based on what we have observed in these contexts with these groups of people, we recommend this approach.
Funders and investors, having the luxury of being able to take a broad view across the sector, might do well to use that research to form general insights into what sorts of programs they want to fund, given their particular goals. They might specify their preferred practices in their application guidance to help nonprofits assess whether they’re a good fit for a particular funder’s work.
Nonprofits themselves could use the existing research, guidance on recommended practices, and funders’ guidelines to help inform their work. But perhaps they should take a design approach, too—understanding in detail the lives of the people they aim to serve and developing approaches around user pathways that reflect their beneficiaries’ reality.
While organizations are delivering their services and products, they should be gathering data and feedback that helps them to manage performance, work out where services aren’t following the user pathways they’re expected to, and make improvements and corrections as quickly as possible. But if they want to learn and improve as much as possible, they’ll also want to share their emerging insights with other practitioners. This learning can be on a spectrum from quantitative to qualitative and shared on a corresponding scale from benchmarking to action learning sets and communities of practice.
In cases where organizations are doing something that is genuinely novel, where we don’t know what an organization’s impact might be, then organizations do need to do research. In these situations, we have to be aware of the challenges that Berger and Fiennes raise around incentives and skills. But we shouldn’t give up yet on charities doing research. Instead, to start addressing these challenges, we should recommend that organizations use external evaluators where appropriate, be transparent in their methodologies, and encourage external audit of their work. And all of these improvements in impact measurement need to feed back into the knowledge ecosystem, so that at the field level we can combine academic research with practitioner-level learning, and benchmarking with constituent feedback.