In drafting a final report to a grant funder, the program manager for a household financial education program that teaches low-income households how to become more financially stable boasted that the program was successful, and provided the following list as examples of “outcomes” that were achieved:
- 1,500 promotional flyers were distributed in the community.
- Ten financial literacy workshops were conducted throughout the community.
- More than 200 people participated in the financial literacy workshops.
At first glance, the outcomes listed above might lead a reader to agree that the program was a success. But could the funder determine from those outcomes what impact the program had on the participants? If the goal of this program is to teach low-income households how to become more financially stable, does reporting to the funder how many people were in the audience say anything about whether progress was made toward that goal? The answer to both questions is “No,” because the goal of the program was not to get people to attend events, but to teach people a skill. In this scenario, the examples provided by the program manager were “outputs,” not “outcomes.” An “outcome” is the desired change that results from your work. “Outputs” are what help you achieve that change and may include activities or things you produce, such as an educational DVD, workbook, or webinar.
Generally speaking, what motivates a grant maker to fund a program is the anticipated effect of the program, not the things done to achieve that effect. Sure, grant makers will likely want to know how results are achieved, whether their funds were spent as planned, and perhaps how the success of a program might be duplicated. But the details of the processes do not communicate a program’s impact.
While distinguishing outputs from outcomes might seem simple, mistaking one for the other is easy to do (I even do this occasionally!), and examples of organizations reporting outputs as measures of success are not uncommon. So, how can you be sure that the outcomes you identify for your program are not actually outputs? I recommend using a logic model. Even something as simple as the model below can serve as a blueprint for program reporting and as a useful tool for distinguishing outputs from outcomes.
Using this model for the same program, the program manager could instead report the following achieved outcomes to the grant maker:
- 90% of the people who attended and responded to a follow-up survey had developed a household budget within 30 days of attending the workshop.
- 75% of the people who attended and responded to a follow-up survey had a savings account with a balance of at least $500 six months after attending the workshop.
Of the two lists of achievements provided, which do you think communicates to a funder the effectiveness of this program and offers more compelling evidence to fund the program?
Now ask yourself:
Are you effectively distinguishing outputs from outcomes when you are seeking grants or reporting to funders?
You can explore this topic further and learn how to develop meaningful outcome objectives by checking out my webinar:
Scott Herr, PhD
Technical Assistance Project Manager
In writing cases for support, I try to think of it in terms of features versus benefits. This is similar to outputs and outcomes. It is important to think through these issues to make sure we are communicating the lasting impact funders are making in the cause we are serving. Thanks for the flowchart. I will be using this to remind me when we write grants and other communications.
Great clarification of an oft-misunderstood topic. I also like the flowchart and will add it to my list of good resources. Thanks.
Thanks for your input!
Thanks for reviewing the differences between these. Are you finding that funders still find logic models reliable? I’ve heard feedback on both sides of the argument. Always helpful to explain this information!
Thanks for your comment. Yes, we do find that funders still consider logic models reliable.