An Award-Winning Recipe for Logic Models
During the holidays, I go into baking mode. I love finding the perfect recipe that leaves friends and relatives craving more. This year, it was all about my pumpkin pie. We have a family recipe, but I wanted to kick it up a notch. I reimagined the model of what I thought the recipe for pumpkin pie was and came up with something even better. (Spoiler alert — I made a gingersnap crust instead of pastry and used fresh ginger instead of powdered ginger. Boy, was it amazing!). This time of year, just like with holiday recipes, we encourage our clients to reimagine their programs and services. They each have their own recipes, but sometimes you need to rethink everything to create an award-winner. One of the best ways to do this is through a logic model.
Logic models have been in use since the 1970s, but gained popularity in the 1990s when United Way started using them with its agency partners. Logic models graphically express the step-by-step process of organizing appropriate resources and activities that then produce the intended outputs and outcomes of a program or organization — just like a recipe using ingredients to create a dish. Internally, they can be used to monitor and evaluate work. Externally, the best logic models can summarize the purpose of a program in a way that a written statement cannot. As you reimagine your logic models, here are the top questions we receive, as well as a template we developed, to get you started on your new recipe:
How often should a logic model be created or updated?
Logic models should be created or updated for new programs or existing programs that are undergoing change. Programs with long track records of success, those with a strong evidence base or those that follow quality improvement programs, like early childhood centers following NAEYC standards, are less likely to require updating on a frequent basis. However, logic models for programs that are experimental or for which the evidence base is still developing should be reviewed regularly to ensure fidelity to the model or to make necessary course corrections. If you’re trying to determine whether to update your logic model, here are some questions to ask:
- What was our original hypothesis about our program?
- Based on our experience now, what have we learned?
- What inputs were really used to produce our desired outcome?
- Did we experience any positive or negative unintended outcomes? How do we mitigate or accelerate them?
- Does our model take community goals into account?
- Have we created or experienced any systemic shifts?
- Does any new research exist that addresses what works and should be added?
- Given the above, what should we modify for next year?
It is important to keep logic models current because they inform the evaluation plan staff use to determine what data to collect, and at what intervals, to demonstrate impact.
Who should the creation process include?
Creating or updating logic models is a team sport. It cannot be delegated to a single staff member because multiple people are involved with the organization’s programs: program staff who deliver the services, management who set expectations and fundraising staff who report on program outcomes to donors and supporters. Creating or changing a logic model is akin to creating or changing a program. Staff from all areas of the organization need to be involved to ensure the program is feasible, the projected outcomes and budget are realistic and the organization has a plan to capture the data it needs to tell compelling stories.
Are logic models only for programs?
No — logic models are like recipes; you can create logic models for every area of your organization, including your board.
We serve multiple populations. How do you make logic models easier to read?
While the original logic model from the 1990s is classic, we have found that as the emphasis on community-building and two-generation programs has grown in the 21st century, the classic logic model needed a facelift. For our clients who are serving multiple populations (e.g., women and children) and/or have community-driven outcomes, we created a “layered logic model.” It creates clarity about who you are impacting and what outcomes you expect for each population. If your program serves multiple populations, consider upgrading your logic model with this approach — you can find a template HERE.
What is the difference between all of the logic model components?
Logic model components — inputs, activities, outputs, outcomes and impact — can be confusing. We are often asked to clarify the meaning of each term. Here is a quick rundown on the difference between them.
What is the difference between inputs, activities and outputs?
To understand the difference between inputs, activities and outputs, we tell nonprofits to think of inputs as the raw ingredients of a recipe, the activities as the actions you take to create a meal and the outputs as the number of meals produced. In nonprofit language, inputs can be things like staffing, funding or curricula; activities include things like administering assessments, delivering classroom instruction or hosting counseling sessions; and outputs include the number of children served and the number of counseling sessions hosted. Remember: more activities do not make a better logic model; list only the activities that create the impact.
What is the difference between an output and an outcome?
This is the most common question we get, but the easiest one to answer. An output is a unit of measurement that counts numbers served or activities conducted. It answers the question, “What happened?” An outcome is a unit of measurement that determines what has been accomplished. Any time you multiply or divide (e.g., the percentage change in the number of meals produced), you are working with an outcome. It answers the question, “What resulted?”
What is the difference between outcomes and impact?
This is the hardest one to measure, but is the most important to differentiate. Impact is a unit of measurement that illustrates whether the service made a difference. It can be calculated by starting with the participant group outcomes (what resulted?) and subtracting control group outcomes (what would have resulted anyway?). It answers the question, “What difference was made?”
For example, an afterschool program has 15 seniors (output) in its program with the goal of increasing the graduation rate of its participants. It has a 95 percent graduation rate (outcome). If the target high school has a graduation rate of 80 percent, the program increased the graduation rate by 15 percentage points (assuming that the demographics of the student body and the program participants are constant).
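The arithmetic in the afterschool example can be sketched in a few lines. This is a minimal illustration, not a full evaluation method; the variable names are ours, the figures come from the example above, and the target high school’s rate stands in for a control group, which is only valid if the two groups are demographically comparable.

```python
# Output, outcome and impact from the afterschool example:
# outputs count, outcomes measure results, and impact subtracts
# the comparison group's outcome (what would have resulted anyway).

participants_served = 15    # output: seniors served by the program
program_grad_rate = 95.0    # outcome: % of participants who graduated
school_grad_rate = 80.0     # comparison outcome: the school's overall rate

# Impact = participant outcome - comparison outcome,
# assuming comparable demographics between the two groups.
impact = program_grad_rate - school_grad_rate
print(f"Impact: {impact:.0f} percentage points")  # prints "Impact: 15 percentage points"
```

Note that the result is expressed in percentage points, not percent: subtracting two rates gives a point difference, which is the honest way to report it.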
What differentiates a great logic model from an average one?
We have worked with many organizations on creating or improving their logic models, and the difference between a great and average one is that the reader can tell what the program does and which methods you need to achieve the stated outcomes. You should be able to look at the logic model — without any other information — and know what the program does and the organization’s secret recipe. A great logic model is an award-winning recipe that can help other organizations replicate the results of your program at another site. It narrows the activities down to only the essential ingredients and helps streamline the data agencies need to collect to demonstrate impact. It also includes a theory of change as a headline for the logic model. To pressure test your logic model, ask someone unfamiliar with your program to look at your logic model and ask what they observe. If they cannot accurately articulate what your program does, it may be time for a refresh. To go one step further, have them rate it critically based on: clarity, comprehensiveness, coherence and common sense.
As you reflect on the end of the year, we hope you will be inspired to look at your organization’s logic model and assess whether it is time for a new or reimagined recipe. Use this as a time to determine whether your programs are as impactful as you’d like and whether you have the right data to draw and communicate that conclusion. And as always, we’d love to hear what you cook up and whether you have any additional questions.