Nowadays you can hardly throw a rock at the nonprofit sector without hitting the words "data driven," "outcomes based," "evidence based," or some other term for the desire to understand the efficacy of a given intervention. It makes perfect sense: given the roughly $30 billion donated to social service organizations each year, it's critical that we know whether those funds are making a difference. However, in the rush to quantify impact, something fundamental has been forgotten: it is shockingly expensive to collect, analyze and report this type of data, and the vast majority of nonprofits lack the funding to do so.
The simple fact of the matter is that when we talk about impact, we are usually referring to outputs: 'X' number of people receiving loans or 'Y' number of budgets built. We may go a step further and say that the average increase in our clients' FICO scores is 75 points, which is true (and we are proud of it), but it's also misleading because, as far as we know, that's only true of the clients we are able to reach for follow-up surveys. In other words, selection bias--the people most motivated to improve their credit are also the most likely to stay in touch long enough to complete a survey--skews the numbers in our favor.
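To make the selection-bias problem concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration--none of this is drawn from our client data--but it shows the mechanism: when the clients most likely to improve are also the most likely to answer a survey, the reported average overstates the true one.

```python
import random

random.seed(0)

# Hypothetical population of 1,000 clients. Motivated clients tend both to
# improve more AND to answer follow-up surveys more often -- exactly the
# selection bias described above. All numbers are invented.
clients = []
for _ in range(1000):
    motivated = random.random() < 0.4
    score_change = random.gauss(90 if motivated else 30, 25)
    responds = random.random() < (0.7 if motivated else 0.2)
    clients.append((score_change, responds))

true_avg = sum(change for change, _ in clients) / len(clients)
surveyed = [change for change, responds in clients if responds]
surveyed_avg = sum(surveyed) / len(surveyed)

print(f"Average change, all clients:       {true_avg:.0f} points")
print(f"Average change, survey responders: {surveyed_avg:.0f} points (inflated)")
```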
Fundamentally speaking, most nonprofits don't have the data systems, logic models, survey methodologies or personnel needed to do robust social impact reporting—ourselves included. In fact, the only reason we know this so well is that we've been running a randomized controlled trial for the past year and have firsthand experience of the challenges.
But don't take my word for it. Let's talk through how hard it is. You first have to collect data at intake. Sounds easy enough, right? Just give a survey to each client starting an intervention and then enter that data into some sort of database. On the ground, however, things get messy. Not everyone doing intake surveys is properly trained, and with everything else going on, staff may simply forget to administer them. Clients may have terrible handwriting, misunderstand questions, skip sections or feel uncomfortable filling them out. And entering the data is where things really start to fall apart, because so few nonprofits have good databases; they are hard and expensive to build, and require in-house expertise to maintain and use. We've spent a year and tens of thousands of dollars switching to Salesforce.com for all our data needs, and there are still bugs, training issues and ongoing costs.
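A small sketch of what even basic data hygiene involves. The field names and thresholds below are hypothetical, not our actual Salesforce schema; the point is that every paper survey needs this kind of checking for skipped sections and implausible values before its data is worth anything in a database.

```python
# Hypothetical intake fields -- placeholders for illustration only.
REQUIRED_FIELDS = ["name", "intake_date", "household_size", "monthly_income", "credit_score"]

def validate_intake(record):
    """Return a list of problems found in one intake record."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if record.get(field) in (None, "")]
    score = record.get("credit_score")
    if isinstance(score, (int, float)) and not 300 <= score <= 850:
        problems.append("credit_score outside the FICO range (300-850)")
    return problems

# Example: a form with a skipped income section and an illegible credit score.
print(validate_intake({"name": "J. Doe", "intake_date": "2014-03-01",
                       "household_size": 3, "credit_score": 85}))
# -> ['missing monthly_income', 'credit_score outside the FICO range (300-850)']
```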
But where things really get difficult is when it comes time to do follow-up surveys. For our Coaching program, we collect data at intake and again six, twelve and 24 months later. Given the hundreds of people we coach, doing all these surveys requires dedicated staff who will email, call, meet with and write letters to clients. Many clients have moved, had their phones shut off, or don't return our calls, or we simply end up playing phone tag with them. As a result, the cost per completed survey is VERY high--all the time spent chasing down a single survey has to be accounted for--and we have never received funding specifically for a person to make these calls.
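For a sense of scale, here is a back-of-the-envelope calculation. The hours, hourly rate and completion count are purely illustrative assumptions, not figures from our budget, but they show how quickly the staff time behind each completed survey adds up.

```python
# Back-of-the-envelope math with assumed, illustrative numbers (not our budget).
hours_per_completed_survey = 0.75   # assumed: 45 minutes of outreach per completion
hourly_staff_cost = 25.00           # assumed fully loaded hourly cost
completions_per_year = 200          # assumed

cost_per_survey = hours_per_completed_survey * hourly_staff_cost
print(f"Staff cost per completed survey: ${cost_per_survey:.2f}")
print(f"Staff cost for {completions_per_year} completions: "
      f"${cost_per_survey * completions_per_year:,.2f}")
```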
By the time we are asked to provide outcome data to a funder, we have several things going against us: the quality of intake data is less than perfect; we are severely understaffed for follow-up surveys; only a fraction of our clients complete those surveys; and given how overworked and underpaid everyone is, even generating the reports can become onerous. What this means is that the highest-quality data we have is demographic: it's not hard to report on the average income of our clients, or their household size, banking status, credit score, and the like. But the stuff that really matters--increases in savings, reductions in debt and stress, overall well-being...in short, changes over time that result from our products and services--gets lost in the mix.
The last part—"result from"—is absolutely critical. Unless you conduct a randomized controlled trial (RCT), you can't actually say whether your actions produced an outcome. For example, your client's credit score may have gone up even without you, or another client may have gotten a job not because of your training but because the economy improved. By randomly assigning people either to receive your service or simply to be tracked, you can test for a causal relationship between the program and the outcome. That said, an RCT costs a lot of money: not only are you paying the costs of serving clients, you must also pay to track the control group (those who don't receive the service).
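Here is a rough sketch of the mechanics, again with invented numbers rather than data from our trial. Because assignment is random, subtracting the control group's average change strips out the improvement that would have happened anyway (the "economy got better" effect), leaving an estimate of what the program itself did.

```python
import random
random.seed(1)

# Invented numbers, not data from our trial. Everyone's credit drifts up a
# little on its own (say, an improving economy); coaching adds a further boost.
applicants = list(range(500))
random.shuffle(applicants)
treatment, control = applicants[:250], applicants[250:]

def credit_change(coached):
    background_drift = random.gauss(20, 15)                # happens with or without us
    program_effect = random.gauss(40, 20) if coached else 0
    return background_drift + program_effect

treated_changes = [credit_change(True) for _ in treatment]
control_changes = [credit_change(False) for _ in control]

def avg(xs):
    return sum(xs) / len(xs)

print(f"Treatment group average change: {avg(treated_changes):.0f} points")
print(f"Control group average change:   {avg(control_changes):.0f} points")
print(f"Estimated program effect:       {avg(treated_changes) - avg(control_changes):.0f} points")
```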
None of these problems are insurmountable. Unfortunately, the solutions come down to resources, by which I really mean money. It's one thing to say "We granted you $5,000 and in six months expect a report on how many people you served, their average increase in savings, and so on," but it's another thing entirely for us to be able to provide credible and interesting data. As a nonprofit we are expected to do twice the work for half the pay, while operating in a space that is stressful and often rife with despair. It is utterly unrealistic to expect that when 90 cents of every dollar goes straight into programs and the rest barely covers rent, utilities, executive staff, insurance and the like, we will somehow squeeze a social impact measurement infrastructure out of what remains.
Here's the good news, though. There is a strong overlap between the need to track data and the need to maintain relationships with clients over time. In other words, the more readily we communicate with our clients, the more impact we have and the easier it is to uncover that impact. We know how to do this! Provide us with funds to hire low-income community members--people who truly understand our clientele--to recruit participants, survey them in their native language and follow up with them over time. Give us grants to build top-notch systems that let us make sense of all those surveys, and be sure to include a line item for the people in charge of maintaining and updating those systems. Consider funding an incentive structure to encourage survey completion—perhaps every completed survey generates a raffle ticket, and once a month we raffle off $250 to go into a client's savings account. Be willing to give larger grants to fewer nonprofits, with fewer strings attached and with more time to experiment and grow. And finally, understand the amount of time it takes to report back. For many grants, we spend 10 cents of every dollar writing the grant and another 10 to 20 cents tracking and reporting on it. The smaller the grant, the less justifiable that becomes!
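One way to see why smaller grants are harder to justify, sketched with hypothetical figures (the hours and rate below are assumptions, not our actual costs): the time spent writing, tracking and reporting on a grant is roughly fixed, so the smaller the award, the larger the share that administration consumes.

```python
# Hypothetical figures, not our actual costs: assume writing, tracking and
# reporting on a grant take roughly the same staff time regardless of its size.
fixed_admin_hours = 60       # assumed hours of writing + tracking + reporting
hourly_staff_cost = 25.00    # assumed fully loaded hourly cost
admin_cost = fixed_admin_hours * hourly_staff_cost   # $1,500 per grant

for grant_size in (5_000, 25_000, 100_000):
    share = admin_cost / grant_size
    print(f"${grant_size:>7,} grant -> {share:.1%} consumed by writing and reporting")
```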
My goal here is not to be a downer. Rather, I want to be open and honest about what I've seen, at our organization and others, when it comes to impact data. Most importantly, I think it's time to have an authentic dialogue with funders and the public. I couldn't agree more that it's critical we understand what works and what doesn't; I would even view it as a victory if our RCT showed that what we do doesn't work, because we would then be able to adjust or shut down the program. Yet I feel it is imperative that we stop pretending that what we report in splashy annual reports and pamphlets (and by 'we,' I mean Capital Good Fund too) is very meaningful. These reports suffer from all sorts of issues, ranging from selection bias to small sample sizes, unrepresentative samples, confusion of correlation with causation and other basic statistical errors.
In short, we can solve this. But only if we are willing to put our brains and our wallets where our mouths and best intentions are.