The Logic Model Guidebook: Better Strategies for Great Results - Lisa Wyatt Knowlton
“Better thinking and planning through logic models can contribute to stronger
results. The Guidebook supports rigor and quality. It’s a great tool for the
important work of creating sustainable social change.”
“A holistic roadmap for design, plans, and evaluation. This text offers sage
advice on metacognition and easy, clear steps to improve effectiveness.”
“The material in this book has enduring value. It is a ‘keeper’ for students
and me.”
“Regardless of sector, logic models are valuable tools to design systems and
improve strategy.”
“The Guidebook fills a niche in the skills and knowledge needed by nonprofit
managers to be successful in their work. It leads the field in providing both
the theory and practice of using logic models as a critical management tool.”
“It is the only text I am aware of that focuses specifically on logic modeling.
The links from theory to practice are important. It contains many practical
illustrations of innovative and diverse logic models. The Guidebook also
offers support to more experienced professionals by providing a range of
approaches and raising important considerations in model development.”
“The Guidebook is easy to read and understand. I like how logic models
make assumptions visible. This makes it more likely to choose effective
strategies and secure desired results.”
“I especially liked the learning aids, the clear writing style, the many figures
and examples and the listings of important points within each chapter. This is
all good teaching methodology.… Logic models are an important tool in
planning and evaluation. Both planners and evaluators should know how to
use them.”
The
Logic Model Guidebook
For Taylor, my earth angel.
I know you will soar.
For Tim, with profound gratitude, admiration, and respect.
The
Logic Model Guidebook
Better Strategies for Great Results
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying, recording, or by any information storage and
retrieval system, without permission in writing from the publisher.
The logic model guidebook : better strategies for great results / Lisa Wyatt Knowlton, Cynthia C.
Phillips. — 2nd ed.
p. cm.
Includes bibliographical references and index.
HG177.K56 2013
658.15′224—dc23 2012016268
12 13 14 15 16 10 9 8 7 6 5 4 3 2 1
Brief Contents
Preface
Acknowledgments
About the Authors
PART I: CONSTRUCTION
1. Introducing Logic Models
2. Building and Improving Theory of Change Logic Models
3. Creating Program Logic Models
4. Modeling: Improving Program Logic Models
PART II: APPLICATIONS
5. Logic Models for Evaluation
6. Display and Meaning
7. Exploring Archetypes
8. Action Profiles
Name Index
Subject Index
Contents
Preface
Acknowledgments
About the Authors
PART I: CONSTRUCTION
1. Introducing Logic Models
Basic Concepts
Models and Modeling
Logic Model Benefits
Logic Models Defined
Logic Model Uses
Two Types: One Logic
Historical Background
Examples
Theory of Change Model Example
Program Logic Model Example
Program Logic Model and Evaluation Design
Limitations of Logic Models and Modeling
Models Begin With Results
Logic Models and Effectiveness
In Summary
Learning Resources
2. Building and Improving Theory of Change Logic Models
Building a Theory of Change Model
Getting Started
Preferences and Styles
Evidence Based and Plausible
The Big Picture
Multiple Strategies and Results
Realistic Models
Knowledge and Assumptions
Action Steps: Creating a Theory of Change Logic Model
Improving Theory of Change Models
Multiple Perspectives
“Unpack” and Share Assumptions
Toggling
Promising Practices and Benchmarking
Group Process
Nonlinear Theory of Change Models
Doing the “Right Work”
Tough Questions
In Summary
Learning Resources
3. Creating Program Logic Models
From Theory of Change to Program Models
Assumptions Matter
Key Elements of Program Logic Models
Nonlinear Program Logic Models
Hidden Assumptions and Dose
Building a Program Logic Model
Program Logic Model Example
From Strategy to Activities
Action Steps for a Program Logic Model
Creating Your Program Logic Model
Guiding Group Process
In Summary
Learning Resources
4. Modeling: Improving Program Logic Models
Modeling and Effectiveness
Context Challenges
Common Pitfalls: Blind Spots and Myths
Logic, Scale, and Specificity
Politics, Persuasion, and Perception
A Learning Culture and External Review
Quality Techniques
Modeling
Testing Model Quality: SMART and FIT
A “Mark Up”
Quality Questions
A Quality Model
“Better” Decisions
In Summary
Learning Resources
PART II: APPLICATIONS
5. Logic Models for Evaluation
Getting More Out of Evaluation
Connecting Management With Measurement
Evaluation for Effectiveness
Evaluation Design Basics
Where Consumers Add Value
Where Logic Models Add Value
A Design Example
Two Kinds of Learning
Key Evaluation Questions
Indicators
Indicators and Alignment
Results Require Choices
Performance Standards
Quality Evaluation Designs
A Quality Framework
In Summary
Learning Resources
6. Display and Meaning
Variation and Learning
Graphic Display
Complexity and Meaning
Content, Uses, and Creation
Model Benefits
Alternative Approaches
Selected Examples
Example 1. Eco Hub
Example 2. Wayne Food Initiative
Example 3. Promoting Preschool Change
Example 4. Collaborative Learning, Inquiry, and Practice
Example 5. New York Healthy Weight Model
Example 6. Evaluation System Development
In Summary
Learning Resources
7. Exploring Archetypes
The Blank Page Challenge
Archetypes and Learning
Recipes for Change
Value of Archetypes
More Critical Thinking
Selected Archetype Examples
Example 1. Federal Block Grants
Example 2. Education Readiness and Success
Example 3. Communications
Example 4. School Improvement
Example 5. Public Health Research
In Summary
Learning Resources
8. Action Profiles
Strategy, Evaluation, and Learning
Profile 1. Building Civic Engagement
Profile 2. Better Corporate Giving
Profile 3. Kyrgyzstan Decent Work Country Programme
Profile 4. Alabama Tackles Asthma
Profile 5. Resilient Communities
Profile 6. Sheltering Families
Profile 7. Environmental Leadership
In Summary
Learning Resources
Name Index
Subject Index
Preface
Responding to and creating change is demanding. Every day, people
in nongovernmental organizations, the private sector, universities,
and community-based organizations are responding to or creating
change. Models can help us see what is and what we want to create.
They can be powerful tools that support learning and performance. They can
help us with metacognition: thinking about our thinking.
Logic models are used across a huge range of topics and functions
worldwide. They can easily explicate the influence of actions on results. If
our aim is coping with change and generating it, a critical review of “do” and
“get” is a vital action. As we face complex challenges like climate change,
education quality, poverty, homelessness, water distribution, healthcare
inequities, aging, and hunger, we need potent ways to communicate the
current situation and the desired one. As we consider ways to innovate,
transfer, and market knowledge—we need powerful approaches to new
contexts. As we deliberate a sustainable planet—we need to be able to co-
create options with shared meaning. Logic models are tools that help these
examples of important work.
We wrote the Guidebook because we care about results. We know people
need better skills, knowledge, and tools to have influence. While logic
models are never perfect, they do offer a partial remedy, supporting better
decisions, plans, and adaptation. They can contribute to effectiveness and
are consistent with Palchinsky’s Principles.
This second edition of the Guidebook provides the reader with a basic
understanding of how to create and use logic models. This is important for
people who work in the nonprofit, government, and private sectors with
responsibilities to lead and manage. Evidence-based models can be
particularly helpful to create programs, plan, communicate, and evaluate.
Logic models can provide important help that guides better thinking and
focused inquiry. Logic modeling is a process that contributes to clarity about
the sequence and the content of interactive relationships. Logic models
display relationships of many kinds: between resources, activities, outcomes,
and impact. They can also articulate the interaction of environmental barriers
and facilitators. The visual display that models provide offers a chance to
critically review the relational logic among the “pieces” and context. And
they can be a platform to prompt important questions about assumptions and
choices. Logic models can significantly aid strategy development if we use
them to consider what’s plausible, feasible, and optimal vis-à-vis intended
results.
All logic models should be considered drafts. Every model example in the
Guidebook has flaws. Because models represent perception and reflect
choices, they have consequent limitations. Any individual has “blind spots,”
and the models that people and groups author reflect them. Regardless, models
and modeling offer a potent alternative to lengthy narrative because visual
display is such a powerful, common way to create shared understanding and
test quality.
There are no perfect models, but the quality of models certainly can range
from simply “cockamamie” to highly strategic. Quality is a vital matter in
creating models. The best standard we can offer for ensuring a model’s
intended outcomes is prior evidence. However, when generating innovation,
it’s important to simply acknowledge rationale and “see” the prototype on
paper. This can ensure fidelity of implementation and focus evaluation or at
least document the initial approach in contrast to what actually is executed.
Modeling can be an exciting process. It includes a cycle of display,
review, analysis, critique, and revision to develop a model. These action
steps, best done with colleagues or stakeholders, can contribute significantly
to more informed models that are more likely to contribute to results. Using
logic models in a systemic and disciplined approach to design, planning,
communication, and evaluation can contribute to individual and
organizational learning.
The Guidebook is a practical text for students and field practitioners. It is
organized with the assumption that the reader has no prior knowledge or
experience. We hope it supports your changes in awareness, knowledge, and
skill relative to models and modeling.
Acknowledgments
Our work is valuable because of amazing people, our clients, who
care about change and results. Our first and warm thanks go to
them.
This edition of the Guidebook benefited from many new
contributors and more than a dozen new models. We appreciate the time and
effort these colleagues made to enrich the text. Some of the models that
appeared in the first edition have been retained. In all, contributors include
the following:
Chapter 6
Example 1: Eco Hub—Adrian Jones, Integration and Application
Network, University of Maryland Center for Environmental Science
Chapter 8
Profile 1: Civic Engagement—Seattle Works—Tara Smith and Dawn
Smart, MA Clegg Associates
Construction
1
This chapter introduces logic models. There are two types: theory of
change and program. This chapter describes model benefits and uses
and explains the role of modeling in both program and organizational
effectiveness. The process of modeling begins with results.
Regardless of type, quality models are evidence based.
LEARNER OBJECTIVES
Explain the difference between models and modeling
Recognize the benefits and uses of logic models
Demonstrate how to “read” a logic model
Recognize types of models and their characteristics
Describe the ways that models can support effectiveness
Basic Concepts
When logic models and modeling are used as a standard technique, they
can influence an organization’s effectiveness. Logic models offer the
strategic means to critically review and improve thinking. And better thinking
always yields better results. Modeling can happen well before resources are
committed or final decisions get made. This offers a way to pretest quality
and limit risk.
Effectiveness is not limited to—but certainly depends on—a clear vision,
capable implementation, and the means to monitor both processes and results.
Logic models can be tremendous supports for creating and communicating a
common understanding of challenges, resources, and intended success.
Moreover, models can also be used to calibrate alignment between the “big
picture” and component parts. They can illustrate parts of a system or whole
systems. The perspective chosen influences the level of detail; when modeling,
it specifies boundaries as well as the breadth or depth of display. For example,
a logic model can show the learning objectives for an elementary Spanish
curriculum, what a school district will do to secure student achievement, or
what the federal government will provide in educational resources for
second-language learning.
Historical Background
Use of theory of change and program logic models began in the 1970s. Carol
Weiss (1995), Michael Fullan (2001), and Huey Chen (2005) are among
the pioneers and champions for the use of program theory in program design
and evaluation. U.S. Agency for International Development’s logical
framework approach (Practical Concepts, Inc., 1971) and Claude Bennett’s
(1976) hierarchy of program effectiveness were among the earliest uses of the
types of visual displays that have evolved into the program logic models we
know today.
Examples
In the examples that follow, we briefly explain the general concepts and
terms related to a theory of change and to a program logic model. Chapters 2
and 3 provide more depth. Although we show one of each type of model, it is
important to keep in mind that these are but two examples from a much
broader continuum of possibilities. There are many ways to express or
display the ideas and level of detail.
This program model suggests desired results include more and better
leaders and community development. It implies the leadership development
agenda is about resolution of community challenges and that, if resolved, it
contributes to community development.
To “read” this model, first note the intended impact (ultimate aim) of the
program: community development. Then, move to the far left-hand side,
where resources or inputs essential to the program are listed. Logic models
employ an “if–then” sequence among their elements. When applied to the
elements in each column, it reads, “If we have these resources, then we can
provide these activities. If we pursue these activities, then we can produce
these outputs. If we have these outputs, then we will secure these outcomes,”
and so on.
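The "if–then" reading described above can be made mechanical. As a purely illustrative sketch (the column names follow the sequence used here, but the entries are hypothetical placeholders, not drawn from Figure 1.2), a program logic model's columns can be held in an ordered list and the reading generated from each adjacent pair:

```python
# Illustrative sketch only: a program logic model as ordered columns
# (resources -> activities -> outputs -> outcomes -> impact).
# The entries listed under each column are hypothetical examples.

def read_model(columns):
    """Generate the 'if-then' reading by chaining each column to the next."""
    return [
        f"If we secure the {current}, then the {following} can follow."
        for (current, _), (following, _) in zip(columns, columns[1:])
    ]

logic_model = [
    ("resources", ["funding", "trainers", "meeting space"]),
    ("activities", ["leadership workshops", "mentoring"]),
    ("outputs", ["40 participants complete training"]),
    ("outcomes", ["more and better community leaders"]),
    ("impact", ["community development"]),
]

for statement in read_model(logic_model):
    print(statement)
```

Reading the list from left to right reproduces the chain in the paragraph above; reading the pairs in reverse order is one way to start from the intended impact, as the "read" instructions suggest.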
This model is just one very simple representation of how a program might
be designed and planned for implementation. Many variations on this
example could represent program design and planning for community
leadership development that meets standards of logic and plausibility. We
know that Figure 1.2, in fact, represents a program with some definite flaws.
More discussion of how the program could be improved through a “mark
up” (or critical review) that tests the program design appears in Chapter 4.
These evaluation questions can be very helpful in the initial design and
development of the program, as they help to aim the program intervention.
The next step is establishing indicators. Models also help in guiding the
conversation and exploration needed to determine indicators or the measures
of progress for an effort. These issues are addressed in greater detail in
Chapter 5.
IN SUMMARY
Logic models are simply a visual display of the pathways from actions to
results. They are a great way to review and improve thinking, find common
understandings, document plans, and communicate and explicate what works
under what conditions. We think theory of change models are distinct from
program logic models in several important ways. Theory of change models
present a very high-level and simple explanation of “do and get.” Program
logic models offer a detailed map that can be implemented when
supplemented with work plans. In this chapter, we also distinguished between
models as tools and modeling as a process. A quality feature of logic models
is that they are evidence based. Logic models can be used for learning,
improving, and greater effectiveness.
LEARNING RESOURCES
Reflection
1. In what circumstances can you use logic models in your work or field of
study?
2. What benefits does each type of model provide? And to whom?
3. What do logic models display? And what is missing?
4. How are theory of change models and program models alike? Different?
5. What kinds of logic models have you seen before? Which are most
commonly used?
6. What current models/processes are commonly used for program design
in your organization? What work cultures are best suited for logic
models?
Application
Select and draw one of the following: promotion of a new brand of ketchup, a
driver’s training program, or a domestic violence awareness campaign. Have
others independently draw the same project you select. What do all the
drawings have in common? What areas are different? Why? When and how
do these differences become reconciled? How did the levels of detail differ
among the drawings? What can these drawings tell us about mental maps?
Journal Articles
Bennett, C. (1976). Analyzing impacts of extension programs, ESC-575.
Washington, DC: U.S. Department of Agriculture, Extension Service.
Dwyer, J. (1996). Applying program logic model in program planning and
evaluation. Public Health and Epidemiology Report Ontario, 7(2), 38–46.
Fitzpatrick, J. (2002). A conversation with Leonard Bickman. American
Journal of Evaluation, 23(3), 69–80.
Funnell, S. (1997). Program logic: An adaptable tool for designing and
evaluating programs. Evaluation News and Comment, 6(1), 5–17.
Julian, D. (1997). The utilization of the logic model as a system level
planning and evaluation device. Evaluation and Program Planning,
20(3), 251–257.
Julian, D. A., Jones, A., et al. (1995). Open systems evaluation and the logic
model: Program planning and evaluation tools. Evaluation and Program
Planning, 18, 333–341.
Sartorius, R. (1991). The logical framework approach to project design and
management. Evaluation Practice, 12(2), 139–147.
Internet Resources
Davies, R. (2008). The logical framework: A list of useful documents.
Retrieved December 1, 2011, from http://mande.co.uk/2008/lists/
the-logical-framework-a-list-of-useful-documents/
Duignan, P. (n.d.). Brief introduction to program logic models (outcome
models). Retrieved December 1, 2011, from
http://outcomescentral.org/programlogicmodels2.html
Evaluation logic model bibliography. (n.d.). Madison, WI: University of
Wisconsin Extension Service. Retrieved December 1, 2011, from
http://www.uwex.edu/ces/pdande/evaluation/evallogicbiblio.html
Israel, G. D. (2001). Using logic models for program development. Retrieved
December 1, 2011, from http://edis.ifas.ufl.edu/wc041
List, D. (2006). Program logic: An introduction. Retrieved December 1,
2011, from http://www.audiencedialogue.net/proglog.html
McCawley, P. F. (1997). The logic model for planning and evaluation.
Retrieved December 1, 2011, from
http://www.uiweb.uidaho.edu/extension/LogicModel.pdf
2
This chapter identifies the basic elements of a theory of change logic
model. They are evidence based and plausible. This chapter describes
the steps to create and improve a theory of change model. It also
names criteria for a “good” model.
LEARNER OBJECTIVES
Identify basic elements of a theory of change model
Identify the contributions a theory of change model lends to a change effort
Create a simple theory of change model
Apply critical review for theory of change model plausibility
Getting Started
While logic models can be used for many purposes, there are two basic types:
theory of change and program models. Understanding these types is
important to their development and use. The choice of which to use reflects
whether the model needs to describe broad and general concepts about
change or more detailed operational elements essential to design, plans, and
management. It is possible to begin with either a program logic model or
theory of change model.
We believe it is important that a program model always accompany a
theory of change because the assumptions held in the theory of change have
fundamental value for program operations and success. These assumptions
should be consistent and anchor choices made in the development and
selection of strategies to fulfill intended results. When assumptions are
evidence based, then a single coherent logic and alignment can occur that
enables success. Relying on knowledge, whether theory, research, practice,
and/or literature, is essential to a good model.
Realistic Models
Theory of change models should demonstrate plausibility. This means they
“could work.” Given the realities of limited time, as well as human and social
resources, logic alone is inadequate. In fact, the logic displayed in a model
can be uninformed or misinformed. For example, world peace is a tangible
and clear desired result, but a theory of change that relies solely on
communication (e.g., newsletters and websites) is not plausible in securing
world peace. Or consider the desired result of hiring more mid-level scientists
at your research institute. Are outreach strategies with local math and science
teachers and students logical action steps? Yes, but meetings with those
targets can be helpful only in a pipeline that can tolerate a decade of delay. It
is not a best strategy given urgent human resource needs this week and next
month.
Knowledge and Assumptions
So far, we have described a basic theory of change model for improved
health that is specifically composed of doing (strategies) and getting (results).
Each of us brings along some other contributions to our theory of change that
are more closely held. While not often named, we commonly bring what we
believe (our assumptions) to theories of change, too. The most viable
assumptions used to select strategies are rooted in knowledge, and that
knowledge generally includes research, practice, and theory. Figure 2.4
illustrates the knowledge base for beliefs that precedes assumptions and
strategies in a theory of change.
It is critical to recognize the role of beliefs. They are important
determinants in choices about strategies for both creating and improving a
theory of change model. Figure 2.4 illustrates how knowledge and beliefs
contribute to a program’s underlying or driving assumptions. Assumptions
are often informed by knowledge, which can include research, practice, and
theory. We find that making assumptions explicit can improve our chances
for program success. Sometimes assumptions are informed by experiences,
habits, or values that do not also reflect knowledge. Mediating or moderating
factors such as program context are useful to consider as barriers or
facilitators to program success at this stage. Dogma, misinformation,
ignorance, and wishful thinking are hazards here. Often, assumptions can
differ significantly among and between both stakeholders who create and
those who execute. They can also dramatically affect how problems are
identified and framed. For model utility, it’s important to cite what
problem(s) we’re trying to solve and find a way to frame a problem so that it
is meaningful to others.
Modeling can help surface vital differences among stakeholders and offer
a disciplined process for resolution based first on plausibility, then on
feasibility during subsequent versions. This is why, in part, modeling offers
considerable value beyond the construction of models alone. It’s important to
note that dialogue is critical to exploration of knowledge and assumptions
that are embedded in models. Engaging multiple stakeholders is critical to
quality as well as meaning.
However, modeling can be an uncomfortable process because it nearly
always raises differences among participants’ perceptions, experience,
knowledge, training, and other factors. Identifying and negotiating these can
be challenging. This navigation is most easily done with external assistance.
If that is not possible, it can be useful to explicate the criteria for decisions
about model content and display. Simply who participates in modeling can be
loaded with politics, since it will very likely influence the model content.
Multiple Perspectives
People hold and operationalize theories of change in both their work and
personal lives. Most experienced parents, for example, have a recipe that
contains the primary strategies they believe are vital to parenting a “good
kid.” Parents can vary considerably, however, in what they mean by a good
kid. Likewise, even if we agree on what a good kid might know and be able
to do, it is highly likely that from one parent to the next, there will be many
variations on parenting strategies to ensure the “good kid” result. This
example suggests the considerable importance of ensuring that all
stakeholders in your program or change effort are specifying results and the
strategies needed to get there with the same meaning and level of specificity.
Developing and improving the theory of change for your program is one way
to begin the conversations needed to reach shared understanding.
In the health example we started this chapter with (see Figure 2.3), we
identify improved health as the result sought. It is important to ensure that
everyone has a highly consistent understanding of what “improved health”
means. To one participant, it may be weight loss. Another could interpret it as
normal blood pressure. Others may feel improved health is a combination of
several positive outcomes. If you ask a half-dozen people what improved
health means to them, it is quite likely there will be variation in their
individual answers.
Specifying what the results mean, such as improved health in this
example, becomes critical for your program design as well as essential for
measuring progress toward and determination of results. If the meaning and
measures of results are shared and understood similarly, then it is more likely
strategy choices will align with your intended impact. It is more likely
indicators of progress will be appropriate, too.
Group Process
Consider involving others in co-creating a theory of change model. Let’s
build on the improved health example from earlier in this chapter and aim at
obesity prevention. How could you guide a group in exploring a countywide
program design intended to maintain healthy weight and prevent obesity? In
tackling this question, it’s important to anticipate the need for data prior to
the convening. Gathering and sharing information about research, practice,
and theory makes for a much smarter dialogue. It’s also possible to include
experts who bring data and field experience literally to the table. In general, a
guided group process could follow these action steps in a daylong work
session or over a series of meetings.
Remind participants, again, of the intention of the work to establish a
theory of change that articulates a single relationship between results and
strategies. The assignment is to identify strategies most likely to get the
planned results given the context, target audiences, and other factors. So it’s
important, first, to secure a shared understanding of the results intended. Ask
all the participants, on their own, to identify the result they want the program
to achieve in the next 3 years. It’s vital to specify a period to bound the
program effort. Have participants post directly (or transfer) their intended
results for public sharing. This first posting will likely display a range of
expectations and assumptions about what results are desired. Reconcile those
that are similar and do discovery on what’s “underneath” the postings.
Through dialogue, find the result that the group believes is most feasible
given the context. Features of context might include historical and current
rates of obesity and overweight, definitions of those terms, an inventory of
physical fitness options and their physical proximity, socioeconomic data for
the county population, and access to healthcare and weight loss resources,
along with aspects of prevailing culture. Create a list of resources, including
specific funds that could be designated for the program. Your participants can
probably name many other features of context. These are the influences as
well as data that help to inform the current reality. It may help to post facts
and features of context so they are present to dialogue. This portion of the
process should rely on facts as well as perception.
Then, consider your target audience(s). Will your program effort be
designed to influence males, females, teens, young adults, all residents
between 10 and 50 years of age? Or some combination of these
characteristics? Employ learning from the context discussion to inform your
choices. Be aware the selection you make may require you to adjust the
group’s intended result. Naming and understanding the results is well
worth the effort because it frames subsequent action steps.
Last, ask participants to name strategies that the program should include.
Post them. Often, people will name tactics or specific activities. Getting to
the same level of detail just requires some modification. This is another great
opportunity to insert more information. For example, identify independent
research, practice, and theory shown to influence weight management. Share
some benchmarking information from effective programs that have already
tackled this same challenge and those that failed to make progress. Be sure to
include their costs and related organizational resources.
Ultimately, the group should determine a clear list of strategies and
specified results that are not simply feasible but optimal—that is, highly
likely to secure the impact. This may require some “toggling.” Use the
Guiding Questions (below) to critically review the work of the group. Look
ahead to Chapter 6 and review the New York State Healthy Weight
Partnership. It offers some great ideas about strategies and results (defined by
their mission and vision). The NY model cites target sectors/settings to
segment their program plans since the work is focused on all state residents.
As you construct, then review a theory of change, the following questions
may be useful:
Guiding Questions for Reviewing a Theory of Change
Model
The first question was about the right work. This is about attending to
making the strongest, most direct and plausible connection between your
strategies and results. It is about the focus of time, energy, talents, and
resources in relation to your specified success. Eventually, right work is also
about detailing those specific activities that are subsumed by each strategy
that is chosen for display in the program logic model. Giving conscious
attention to the criteria used in selecting strategies at this stage, and again
later, will clarify how implementation can make a big difference in the
likelihood that your program will secure results. The right work is clarified
and confirmed if there is a shared understanding of the problem you plan to
resolve and there is agreement on how it can be accomplished. Specificity
here, on the front end, contributes to the results you and your colleagues
intend to secure. Ambiguity can doom the best-intentioned efforts to failure.
Tough Questions
Of course, there are many ways to secure a named and intended result.
Discarding strategies/activities that are peripheral, modest contributors, or
less than optimal in potency can focus limited resources. Models and their
iterations can develop a disciplined way of thinking that contributes to new
understandings about what will generate progress toward results. Once results
are specified, the discovery and discussion that should be encouraged during
your modeling attends to these two big questions:
IN SUMMARY
Logic models display mental maps people hold about cause and effect.
A theory of change coupled with a program logic model is the most potent design prescription. Theory of change models specify and link
strategies with results. Most change efforts require multiple strategies.
Knowledge is a critical input for models and can include research, practice,
and theory. What people believe affects the content and format of models.
Improving theory of change models requires multiple perspectives,
unpacking assumptions, shared language, toggling, and the exploration of
promising practices.
LEARNING RESOURCES
Reflection
Application
1. Have a conversation:
A. Ask colleagues to share their beliefs about parenting (or their mothers’
or fathers’ beliefs) to ensure a happy, confident, successful young
adult. From this conversation, draw a theory of change. What are their
most important strategies? Can you identify their beliefs, values,
assumptions? Do they cite any evidence for their choices? Is research,
practice, or theory part of their explanation? How are their views
similar to or different from yours? Do they have a shared understanding
and agreement about parenting with their spouse (or among their
parents)? How does your response to these questions influence the
model?
B. Ask a friend or colleague to share a recipe for marketing a new car
model. What are the most important strategies for ensuring profit?
What evidence supports their choice of strategies? How do assumptions
inform their theory of profitability? How does your response to these
questions influence the model?
2. Ask several people to list the many ways that “improved health” might be
described. Why does this outcome/result have different meanings? Could
these differences influence modeling?
3. Find a news article that describes a change effort (in a government,
nonprofit, or private sector). Draw it. Can you detect the effort's underlying theory of change? How was it informed: by a claim or by a hypothesis?
4. Considering the drawings from Questions 1 and 3, how do choices of
strategies influence the likelihood of achieving your intended results?
What changes, if any, could be made to improve the plausibility of these
models?
Texts
Bickman, L. (1987). The functions of program theory. In L. Bickman (Ed.),
Using program theory in evaluation. New directions for program
evaluation, 33, 5–18. San Francisco: Jossey-Bass.
de Bono, E. (1999). Six thinking hats (2nd ed.). New York: Little, Brown &
Company.
Chen, H. T. (1994). Theory-driven evaluations. Thousand Oaks, CA: Sage.
Donaldson, S. I. (2007). Program theory-driven evaluation science:
Strategies and applications. Mahwah, NJ: Lawrence Erlbaum.
Porter, M. E. (1995). Competitive advantage: Creating and sustaining
superior performance. New York: Free Press.
Reisman, J., & Gienapp, A. (2004). Theory of change: A practical tool for
action, results and learning. Baltimore: Annie E. Casey Foundation.
Retrieved December 7, 2011, from
http://www.aecf.org/upload/PublicationFiles/CC2977K440.pdf
Scheirer, M. A. (1987). Program theory and implementation theory:
Implications for evaluators. In L. Bickman (Ed.), Using program theory
in evaluation. New directions for program evaluation, 33, 59–76. San
Francisco: Jossey-Bass.
Weiss, C. H. (1995). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice
Hall.
Weiss, C. H. (1995). Nothing as practical as a good theory. In J. P. Connell,
A. C. Kubisch, L. B. Schorr, & C. H. Weiss. (Eds.). New approaches to
evaluating community initiatives: Concepts, methods and contexts (pp.
65–92). Washington, DC: Aspen Institute.
Wholey, J. S. (1987). Evaluability assessment: Developing program theory.
In L. Bickman (Ed.), Using program theory in evaluation. New directions
for program evaluation, No. 33 (pp. 77–92). San Francisco: Jossey-Bass.
Journal Articles
Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in
practice: What do we learn? Evaluation Review, 24(8), 40–43.
Bolduc, K., Buteau, E., Laughlin, G., Ragin, R., & Ross, J. A. (n.d.). Beyond
the rhetoric: Foundation strategy. Cambridge, MA: Center for Effective
Philanthropy. Retrieved December 7, 2011, from
http://www.effectivephilanthropy.org/assets/pdfs/CEP_BeyondTheRhetoric.pdf
Chen, H. T., & Rossi, P. (1983). Evaluating with sense: The theory-driven
approach. Evaluation Review, 7(3), 283–302.
Connell, J. P., & Kubisch, A. (1998). Applying a theory of change approach
to the evaluation of comprehensive community initiatives: Progress,
prospects and problems. Washington, DC: Aspen Institute. Retrieved
December 7, 2011 from
http://www.dochas.ie/Shared/Files/4/TOC_fac_guide.pdf
Donaldson, S. I. (2001). Mediator and moderator analysis in program
development. In S. Sussman (Ed.), Handbook of program development
for health behavior research and practice (pp. 470–496). Thousand Oaks,
CA: Sage.
Fullan, M. (2006). Change theory: A force for school improvement (Seminar
Series Paper No. 157). Victoria, Australia: Centre for Strategic Education.
Retrieved December 7, 2011 from
http://www.michaelfullan.ca/Articles_06/06_change_theory.pdf
Kramer, M. R. (2001, May/June). Strategic confusion. Foundation News and
Commentary, 42(3), 40–46.
Monroe, M. C., Fleming, M. L., Bowman, R. A., Zimmer, J. F.,
Marcinkowski, T., Washburn, J., & Mitchell, N. J. (2005). Evaluators as
educators: Articulating program theory and building evaluation capacity.
New Directions for Evaluation, 108, 57–71.
Rogers, P. J., Petrosino, A., Huebner, T. A., & Hacsi, T. A. (2000). Program
theory evaluation: Practice, promise, and problems. New Directions for
Evaluation, 87, 5–13.
Sridharan, S., Campbell, B., & Zinzow, H. (2006). Developing a stakeholder-
driven anticipated timeline of impact for evaluation of social programs.
American Journal of Evaluation, 27(6), 148–162.
Internet Resources
Sharpe, M. (2009) Change theory. Muncie, IN: Ball State University.
Retrieved December 7, 2011 from http://www.instituteforpr.org/wp-
content/uploads/ChangeTheory-1.pdf
This chapter identifies the basic elements of a program logic model.
Generally, these models have enough detail to support design,
planning, management, or evaluation. This chapter describes a
program logic model example and the action steps to create a model
with a small group.
LEARNER OBJECTIVES
Describe the relationship between theory of change and program logic models
Identify basic elements for a program logic model
Create a simple model
Recognize limitations of display
1. Identify the results that one or more strategies will ultimately generate.
2. Describe the stepwise series of outcomes (or changes) that will show
progress toward impact.
3. Name all the activities needed to generate the outcomes (for each
strategy).
4. Define the resources/inputs that link directly to and will “supply” the
activities.
5. Identify the outputs that reflect the accomplishment of activities.
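For readers who think in code, the five action steps above can be sketched as a small data structure assembled results-first. This is a hypothetical illustration only: the element names follow the chapter, but the class, fields, and example content (drawn loosely from the Community Leadership Academy example elsewhere in the book) are invented.

```python
# A minimal sketch of a program logic model, assembled in the order the
# five steps suggest: impact first, then outcomes, activities, resources,
# and outputs. All example content is hypothetical.
from dataclasses import dataclass, field


@dataclass
class ProgramLogicModel:
    impact: str                                     # step 1: ultimate result
    outcomes: list = field(default_factory=list)    # step 2: stepwise changes
    activities: list = field(default_factory=list)  # step 3: per strategy
    resources: list = field(default_factory=list)   # step 4: inputs that "supply" activities
    outputs: list = field(default_factory=list)     # step 5: accomplishment of activities


# Build results-first, as the steps recommend.
model = ProgramLogicModel(impact="community development")
model.outcomes += ["short: new leadership skills",
                   "intermediate: skills applied in community projects",
                   "long: more and better leaders"]
model.activities += ["deliver leadership curriculum"]
model.resources += ["instructors", "meeting space", "funding"]
model.outputs += ["number of sessions held", "participant count"]
```

The ordering matters: starting from `impact` and working backward mirrors the chapter's advice to begin with intended results rather than with activities.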
IN SUMMARY
High-quality program logic models depend on the evidence base found in
their parallel but simpler theory of change models. Program logic models
display several important elements: resources; activities; outputs; short-,
intermediate-, and long-term outcomes; and impact. To create a program
logic model, start with the intended results: outcomes and impact. Then,
activities (which are consistent with strategies in the theory of change model)
are selected. Next, resources and outputs are cited. We believe creating
models with deep participation of stakeholders improves their quality and
encourages their use.
LEARNING RESOURCES
Reflection
1. What are the implications of a program logic model built without a
specific theory of change?
2. Think of a successful business and its product or service. What is the
underlying program logic that shows the explanations for profitability?
3. Feasibility depends on several factors. Can you name some?
4. What are strengths and limitations of a linear or a nonlinear display?
Would individuals from different fields (and their relevant cultures)
answer similarly or differently? Why?
5. Why is being specific about results important?
Application
Specify the result of a shared program, project, or idea. Draw a theory of
change model for the program, project, or idea. Then, attempt a program
logic model. Using sticky notes or pieces of paper, brainstorm the outcomes
that need to happen to secure the result. Organize them into short,
intermediate, and long term. Pick one short-term outcome. Brainstorm what
activities are critical to that outcome. Organize the activities relative to a
single or multiple strategies. For given strategies and their activities, name
the resources needed. From the activities, cite what outputs are possible.
Organize these elements as one model.
Texts
Frechtling, J. (2007). Logic modeling methods in program evaluation. San
Francisco: Jossey-Bass.
Green, E. L. (2005). Reinventing logic models: A stakeholder-driven group
approach. Unpublished doctoral dissertation, University of Cincinnati,
OH.
Mayeske, G. W. (1994). Life cycle program management and evaluation: An
heuristic approach. Washington, DC: United States Department of
Agriculture.
Stringer, E. T. (2007). Action research (3rd ed.). Thousand Oaks, CA: Sage.
United Way of America. (1996). Measuring program outcomes: A practical
approach. Alexandria, VA: Author.
Westley, F., Zimmerman, B., & Patton, M. Q. (2007). Getting to maybe: How
the world is changed. Toronto: Vintage Canada.
Wong-Rieger, D., & David, L. (1996). A hands-on guide to planning and
evaluation. Ottawa: Canadian Hemophilia Society.
Journal Articles
Cooksy, L. J., Gill, P., & Kelly, P. A. (2001). The program logic model as an
integrative framework for a multi-method evaluation. Evaluation and
Program Planning, 24(2), 119–128.
McLaughlin, J. A. (1999). Logic models: A tool for telling your program’s
performance story. Evaluation and Program Planning, 22(1), 65–72.
Millar, A., Simeone, R. S., & Carnevale, J. T. (2001). Logic models: A
systems tool for performance management. Evaluation and Program
Planning, 24(1), 73–81.
Porteous, N. C., Sheldrick, B. J., & Stewart, P. J. (2002). Introducing
program teams to logic models: Facilitating the learning process.
Canadian Journal of Evaluation, 17(3), 113–141.
Renger, R., & Titcomb, A. (2002). A three-step approach to teaching logic
modeling. American Journal of Evaluation, 23(4), 493–503.
Rush, B., & Ogborne, A. (1991). Program logic models: Expanding their role
and structures for program planning and evaluation. Canadian Journal of
Program Evaluation, 6, 95–106.
Internet Resources
For comprehensive bibliographies and links to additional resources, see
Logic model resources. (n.d.). Atlanta, GA: The Evaluation Working Group
of the Centers for Disease Control and Prevention. Retrieved December
8, 2011, from http://www.cdc.gov/eval/resources/index.htm#logicmodels
Jung, B. C. (1999–2012). Evaluation resources on the Internet. Retrieved
December 8, 2011, from http://www.bettycjung.net/Evaluation.htm
University of Wisconsin Extension. (n.d.). Logic model bibliography.
Retrieved December 8, 2011, from
http://www.uwex.edu/ces/pdande/evaluation/evallogicbiblio.html
Modeling
This chapter focuses on improving models through simple processes
that test feasibility. With careful and deliberate review, models for an
idea, program, or project can change and mature in their quality.
Logic models that are accurate and realistic representations of what
you do and will get can increase the likelihood of effectiveness.
LEARNER OBJECTIVES
Apply simple review and improvement steps to models
Identify common errors in program logic models
Recognize the value of multiple versions of models
Recognize contributors to model quality
Context Challenges
Quality Techniques
Modeling
Most ideas, projects, or programs can be characterized in their “life” to
include four simple stages: design, implementation, evaluation, and adaptation. We suggest that modeling is most useful during the design stage and during evaluation, but models can be used at any stage for different purposes. Getting things right at the start can be very important to ultimate results and is a key influence on subsequent stages. Modeling can be
thought of as a review process that occurs prior to implementation or
execution. It is done to improve thinking and the models that reflect thinking.
Time and effort spent in this work can have enormous return on investment
through the influence on the program itself. The steps in modeling are draw
and test. This construct is displayed in Figure 4.1.
As a program, project, or idea is created, we suggest it gets drawn as a
model. The “draw” step is satisfied when all elements of a program model
(see Chapter 3) are present. Completion of this step means resources,
activities, outputs, outcomes, and impact are named. This provides an
opportunity to graphically display the thinking behind how the ideas framed
in the theory of change will be implemented as a program. Many efforts with
logic models quit at this point. However, through modeling, you can move
quickly to dialogue to process the content and the “tangles.” Tangles
represent areas of confusion or where some in your group think a choice is
wrong, confusing, or poorly specified. Modeling is the process that guides
model improvement.
In this chapter, we begin to describe how to test (or explore)
model quality. We believe this testing can help improve models. The
subsequent versions of models that result from literal and figurative tests are
products of modeling. This process can yield benefits to the specific idea or
project as well as the individuals engaged as a work group. It is important to
be aware that many external issues influence modeling. We describe some of
those issues, but our list is not exhaustive.
Testing Model Quality: SMART and FIT
In a conscious testing effort, one way to explore the quality of a model is to
apply SMART principles to it. SMART is a mnemonic used since the early
1980s to set objectives. The companion FIT principles consider criteria such as frequency of occurrence.
A “Mark Up”
In Figure 4.3, we revisit the logic model introduced in Chapter 1 for the
Community Leadership Academy program. We suggest a technique that’s
often used in the legislative process as working drafts of language for a
regulation or authorization are generated. It is called a “mark up.” We adapt
the legislative mark up to raise important questions about model quality by
applying SMART and FIT principles. Other elements, including context and
technique questions, can also be used. This discovery is aimed at changing
the model in constructive ways that reflect evidence, strategic choices, and
better thinking. Using a disciplined approach to modeling captures an
important opportunity for models to mature in quality.
Figure 4.2 Modeling as Quality Review
“Better” Decisions
Earlier in the text, we asked three questions about effectiveness:
IN SUMMARY
LEARNING RESOURCES
Reflection
1. Given how subjective program logic models are, what are the
implications for the outside “reader” of a model? What does a model that will be read, and perhaps used, by people other than those who constructed it have to communicate?
2. What role might politics, persuasion, or perception play in how a model
might be created, tested, and improved? How do these issues influence
model quality and use?
3. What prevailing myths might influence choices in your workplace or
family? How do blind spots influence choices?
4. How might the improvement process for a simple, single-site project
model be different from that for a more complex multisite, multilevel
initiative? What concerns should the model development team be sure to
address, and what aspects of the model will be most important to
communicate?
5. Can a complex, comprehensive program be effectively modeled with a
single diagram? Why or why not? How would you approach a task like
this?
Exercises
Texts
The Joint Committee on Standards for Educational Evaluation. (1994). The
program evaluation standards: How to assess evaluations of educational
programs (2nd ed.). Thousand Oaks, CA: Sage.
Van Hecke, M. L. (2007). Blind spots: Why smart people do dumb things.
Amherst, NY: Prometheus.
Journal Articles
Alter, C., & Egan, M. (1997). Logic modeling: A tool for teaching critical
thinking in social work practice. Journal of Social Work Education,
33(1), 85–102.
Doran, G. T. (1981). There’s a S.M.A.R.T. way to write management’s goals
and objectives. Management Review, 70(11), 35–36.
Dwyer, J. (1996). Applying program logic model in program planning and
evaluation. Public Health and Epidemiology Report Ontario, 7(2), 38–46.
Israel, G. D. (2010). Using logic models for program development (AEC
360). Gainesville, FL: University of Florida, IFAS Extension.
Retrieved December 7, 2011, from http://edis.ifas.ufl.edu/wc041
Julian, D. (1997). The utilization of the logic model as a system level
planning and evaluation device. Evaluation and Program Planning,
20(3), 251–257.
Renger, R. (2006). Consequences to federal programs when the logic-
modeling process is not followed with fidelity. American Journal of
Evaluation, 27(4), 452–463.
Rush, B., & Ogborne, A. (1991). Program logic models: Expanding their
role and structure for program planning and evaluation. Canadian
Journal of Program Evaluation, 6, 95–106.
Internet Resources
Burke, M. (n.d.). Tips for developing logic models. Retrieved December 7,
2011, from http://www.rti.org/pubs/apha07_burke_poster.pdf
In addition to practicing the review steps on your own models, there are
many other examples of logic models available on the Internet to work from.
For several different approaches, see the following:
Duignan, P. (n.d.). Outcomes model listing. Retrieved December 8, 2011, from http://www.outcomesmodels.org/models.html
University of Arizona Cooperative Extension. (2009). Logic models. Tucson, AZ: Author.
Retrieved December 7, 2011, from
http://extension.arizona.edu/evaluation/content/logic-model-examples
Capable communities: Examples. (2011). East Lansing: Michigan State
University. Retrieved December 7, 2011, from
http://outreach.msu.edu/capablecommunities/examples.asp
SEDL. (2009). Research utilization support and help: Logic model examples.
Retrieved December 8, 2011, from
http://www.researchutilization.org/logicmodel/examples.html
PART II
Applications
5
This chapter focuses on using logic models as the architecture for
deeper engagement of stakeholders in discussion about evaluation
design. Logic models inform the development of several elements of
evaluation design. Logic models are a powerful device even if they
have not been used for program planning. This chapter covers selected
concepts useful to an evaluation consumer.
LEARNER OBJECTIVES
Describe the contributions logic models can make to evaluation design
Use a logic model to focus evaluation on high-value information needs
Use a logic model to provoke dialogue on both process and results indicators
Identify how logic models can be used to increase effectiveness
Because only limited resources are usually available for the evaluation, it
is important to identify who the evaluation users are and determine what they
need to know. Generally, there is lots of discussion about what they want to
know or could know. Evaluations are rarely allocated resources that provide
for a thorough examination of all program elements and their relationships as
expressed in a model. Logic models and modeling (which display versions or
aspects in greater detail) can help explore options and point to the most
strategic choices for evaluation investment. Sometimes the evolution of an
evaluation design is a long dance.
At the outset, clear determinations of users and their uses are important
considerations. Knowing your audiences and their information needs will
support good choices and focus your evaluation so that it has optimal utility.
In practice, the functional objective is to specify what information is essential
and secure an evaluation that discovers and delivers in response to that need.
The logic model and modeling process provide the architecture against which
evaluation experts and consumers can decide. The power of evaluation is
harnessed when the findings and analysis generated are applied to the work
examined. With logic models as the framework for design decisions,
evaluation can provide critical feedback loops about the progress of a
strategy, program, initiative, or organization toward its desired results.
Evaluation consumer participation in the logic model development
process (whether during program planning, evaluation, or both) helps to
ensure that the evaluation services they procure address their needs. The tools
and processes of logic modeling provide the opportunity to build common
language and understanding with their evaluation partners about what will be
included in the evaluation and how the information will be used.
Stakeholders, in the role of evaluation consumers, need to know enough
about the evaluation design process to have input on the questions to be
addressed and the evidence that will be used to determine success. Given that
the logic model is the graphic representation of the program’s key processes
and outcomes, consumers can then easily identify and advocate for those
aspects of the model most important from their perspective to manage and
measure.
While the reasons and expectations for evaluation can vary, we are
predisposed to utility. This requires a clear determination of who needs to
know what about the program and to what end.
portray a shared understanding of the evaluation, it may serve some or none
of your audiences. For evaluation to make its full contribution to performance
management and effectiveness, it is important to design the evaluation as a
resource that can support the learning of those for whom its use is intended.
A Design Example
In Figure 5.3, the program logic model is used to determine the other key
questions central to evaluation design. In this display, we indicate those key
questions that test the implementation logic. This information can be used to
determine areas for improvement and to increase the likelihood or magnitude
of effect. The key questions are placed near links of logic (areas of the
model) that specify where deeper discovery about implementation might
yield relevant information. It is important to note that the questions about
outcome and impact need to be addressed for both types of learning. Both
theory of change and program logic models show the same information, but at different levels of detail and for different purposes. Ultimately, the evaluation
design for the CLA addressed these five key questions:
1. Is the Academy doing the right things?
Question 1 is about the “recipe” for the program. It seeks information
about program content (strategies as well as the resources, activities, and
outputs). It attends to discovery about these, their interaction, and
contribution to results. This exact query is placed on the theory of change
model (see Figure 5.2). The question is hidden in the program logic model,
where the program view has considerably more detail.
2. Is the Academy doing things right?
Question 2 is about the implementation quality or execution of the
selected program content.
3. What difference has the Academy made among participants?
Question 3 focuses on how individuals may have changed because of
their Academy experience.
4. What difference has the Academy made across the community?
Question 4 examines the changes that could be attributed to the
community because of the program.
5. What are the ways that community needs can and should be
addressed by the CLA?
Question 5 seeks other information that can help inform a better or an
improved program. This might be by improving strategy and/or
implementation.
These are typical but highly general program evaluation questions. In some form, they may even have universal application because
they represent common areas of interest about any program, project, or
initiative. These questions can also be the basis for more precise inquiry or
subquestions in each area. Subsequently, data are collected to respond to
questions.
Theory of change and program models for this effort share the same
intended impact: “community development.” Before evaluation and during
planning, it could be useful to ensure shared understanding of what
“community development” means and what it would look like if the program
were successful. Does “community development” mean full employment, a
vibrant arts culture, effective schools, all of these, or something else?
Similarly, on the CLA theory of change model, note that the outcome of
“more and better leaders” precedes this desired impact. Assuming that “more
and better” means an increased number of designated leaders with skills, then
we could infer skill changes among Academy graduates. Arriving at shared
understanding of what the terms used in the models actually mean helps
determine how they can be measured. Questions like these help evaluators
and evaluation consumers address the “black box” issues facing many
programs. Logic models are ideal tools to use to dissect policies and
programs into their constituent parts. This way, the overall explanation of
what is expected to occur (and, to some extent, why) can be more coherent.
The next place where evaluation consumers can provide insight into
evaluation design is in the development of indicators. Program logic models,
in particular, can be used to develop and display quite specific definitions of
the evidence that evaluation experts and consumers agree is needed to
“indicate” progress from strategy to results during implementation. To inform
effectiveness, indicators of strategy and results are needed.
Indicators
We all are familiar with the indicator lights on the dashboard of our cars.
These lights call our attention to specific automotive functions or
performance issues, and typically they inform corrective steps. A logic
model, when used to improve strategy and results, is similar to the dashboard
in this example. An evaluation will typically focus primarily on
monitoring/measuring the output and outcome elements of a logic model;
thus, the output and outcome elements serve as the indicators of program
performance. We need indicators to help us understand whether we are
making progress. However, as most change does not occur instantly, it is
important to have gauges that show progression over time. Indicator
development is the step between the development of a logic model and the
specification of the metrics (data points) and methods that the evaluation will
use.
Indicators are the evidence that will verify progress (or lack of) for a
given output or outcome. They can be real measures of the concept or
surrogates, which are also referred to as proxy indicators. Proxy indicators
are indirect and represent the concept. The number of woman-owned
businesses is a real indicator of gender equity in a community. Proportion of
women in the Chamber of Commerce is a proxy indicator for the same
concept. Proxy indicators are used when a direct measure is unavailable.
Both kinds of indicators, those for outputs and those for outcomes,
provide confirming or disconfirming information about progress toward
impact. In this text, process indicator refers to those indicators selected to
gauge progress against the outputs. The process indicators are the evidence
you will collect to show what you “did.” We use the term outcome indicator
to distinguish those indicators of progress toward results (may include
outcomes and impact). The outcome indicators are the evidence that you will
collect to show what you “got.”
For example, in a model about mine safety, you would need indicators of
your efforts to achieve mine safety (“do,” the process) and indicators that
safety has been achieved (“get,” the outcome). You might use a live (or dead)
canary as an indicator of air quality (one of the many outputs needed to
achieve mine safety). Here, the canary in a cage would be a process indicator.
Alternatively, if we are focusing on mine safety as an outcome, accident
reduction could be among the many outcome indicators selected. Similarly, if
great hitters are important in winning baseball games, then batting averages
are an output. Here, things like batting averages and type of hits would be
process indicators. Games won would be an outcome indicator.
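The do/get vocabulary can be sketched in code. This is a hypothetical illustration using the mine-safety and baseball examples from the text; the list and function names are invented, not drawn from the book.

```python
# Hypothetical sketch of this section's vocabulary: process indicators
# show what you "did"; outcome indicators show what you "got". The
# examples come from the mine-safety and baseball illustrations above.
process_indicators = [
    "canary in a cage (air quality output)",
    "batting average",
    "type of hits",
]
outcome_indicators = [
    "accident reduction",
    "games won",
]


def classify(indicator: str) -> str:
    """Return 'do' for a process indicator, 'get' for an outcome indicator."""
    if indicator in process_indicators:
        return "do"
    if indicator in outcome_indicators:
        return "get"
    return "unknown"
```

Keeping the two lists separate makes the chapter's point concrete: evidence of effort and evidence of results are collected and interpreted differently.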
There is quite a bit of variability in the level of detail and complexity of
the concepts reflected in output and outcome statements. In practice, the
specification of output and outcome statements is often blurred with indicator
development. In the text that follows, we explain the concepts of process and
outcome indicators using the CLA example. We take the relatively broad
output and outcome statements shown on the CLA program logic model
(Figure 5.3) and split it into process (Figure 5.4) and outcome (Figure 5.5)
portions. In these two figures, we illustrate the first stage in developing
process and outcome indicators needed to inform evaluation design.
To move the logic model from illustrating program design to serving as
the framework for evaluation, the outputs need further specification to create
the indicators of whether the activities occurred as intended. For a program to
achieve its intended results, it is important to have information about both the
quantity and quality of the activities as well as the availability of resources to
support the work. This is important because, if you think of your program as a treatment or intervention, much like a vaccination, the concept of “dose” has a direct influence on effectiveness and on your ability to improve your programs. How much of your program is actually delivered, who
and how many participate, over what time, and how “good” each activity is
all play a role in whether a program makes progress toward its intended
outcomes and impact.
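As a rough illustration of the “dose” idea, delivered quantities can be compared against plan as simple ratios, giving a first-pass fidelity check. All names and numbers below are invented for illustration; they are not from the CLA example.

```python
# Hypothetical fidelity check: compare delivered "dose" to the plan.
# A ratio of 1.0 means an element was delivered exactly as planned.
planned = {"sessions": 12, "participants": 30, "weeks": 10}
delivered = {"sessions": 9, "participants": 24, "weeks": 10}

# Ratio of delivered to planned for each dose element.
fidelity = {key: delivered[key] / planned[key] for key in planned}

# Flag elements that fell well short of plan (threshold is arbitrary here).
shortfalls = [key for key, ratio in fidelity.items() if ratio < 0.8]
```

A real evaluation would go further, weighing quality as well as quantity, but even this crude ratio makes underdelivery visible before outcome data are interpreted.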
What outputs would you expect or need to see from the curriculum?
What outputs would you expect or need to see from experiences?
What outputs must occur to support subsequent outcomes?
Figure 5.4 shows the range of process indicators the CLA evaluation
identified as measures of the output or “dose” of the CLA curriculum and
experience. Notice that they specify the quality of curriculum and
experiences in addition to listing the typical participant counts and
satisfaction. Logic models used for evaluation typically display much more
detailed information than those used during program design. Based on your
thoughts about the questions above, what might be missing from this set of
process indicators? What questions about implementation dose or fidelity
might the CLA program not be able to address?
Notice that some of the process indicators are more specific than others.
If we were going to continue to develop a full set of metrics for this model,
the more complex indicators such as “instructional delivery quality” would
need to be parsed into smaller, more measurable pieces. Indicators like “number and type of curriculum units,” although more specific, would need instructions on exactly how they would be measured. Typically, for
measurement purposes, you want your indicators to reflect a single concept
and not be multidimensional. However, this is beyond the scope of this text.
Recall that outcomes reflect the majority of the “getting” side of the logic
model. Outcomes are also time sensitive. They typically occur in a fairly ordered sequence. This sequence, or outcome chain, illustrates the likely steps
between “do” and “get.” How tight or loose the order is will depend on the
type of program being modeled. Sometimes the model might or might not
show the specific connections from a given activity to each particular
outcome. Some programs lend themselves to the description of distinct
pathways from activities to outcomes, while others are more holistic and
show all activities leading to all outcomes. The degree to which
interdependencies are strictly defined and clear entry points are
predetermined can vary considerably. Most models represent a cluster of
outcomes that occur at a number of levels (individuals, organizations,
systems) from a combination of efforts. In any case, short-, intermediate-, and
long-term outcomes inform evaluation design because they indicate when and
where to look for evidence. This is particularly true when the program is very
complex. What is likely to happen first, and then what? Sometimes the
outcomes are sufficiently specified in the program logic model to guide
measurement, and other times the model needs to be adapted to serve
evaluation design.
Performance Standards
If expectations (or standards) for performance have been cited, then outputs
are an easy place to look for both fidelity (how close to plan) and level (dose)
of performance. Sometimes expectations are more detailed and qualified.
These are called performance standards. Securing better health may require a
particular quantity and quality of exercise. The number of hours and type of
exercise can be recorded for any given participant. In mature fields, like
education and health, we have considerable knowledge about what works
under what conditions. Sometimes our knowledge is precise enough that
performance standards have been established. As work is planned and
evaluated, standards can be helpful in the pursuit of desired results. The CLA
example did not set performance standards initially, but once the evaluation
design was complete and data were collected, the group would have the
information needed to set expectations for the next round of evaluation.
In the CLA example, new or improved skills among participants are
indicators of progress toward outcomes. They are one choice on which to
focus inquiry. This deliberate choice about focus can occur because the
program is displayed graphically. It is easier to see and choose among areas
that have explanatory potential when they are named and displayed in a
model (instead of narrative). Evaluation could determine whether or not
individuals gained new skills.
At any point during program implementation, inquiry could
yield many possibilities. Perhaps, in the case of the CLA evaluation, one
discovers no new skills were learned or the skills learned weren’t relevant to
community development. Maybe skill development for individuals happened
but the individuals were never engaged in any community projects. Each of
these findings would have implications for program improvement.
Alternatively, evaluation could look at curriculum content or even at the list
of inputs: participants, faculty, marketing, or other areas. To manage cost and
effort in evaluation, choices must be made about where to focus the inquiry.
A Quality Framework
Figure 5.6 shows a framework for program and evaluation quality. It
assembles the key points from the book’s first five chapters. Previously, we
described two important standards for model quality: plausibility (theory of
change and “could it work”) and feasibility (program logic and “will it work
under your specific conditions”). The quality characteristics for theory of
change models are noted (as in Chapter 2) where the focus is on the
relationship between strategies and results.
The quality characteristics for program logic models focus on the strength
of the relationship between activities and outcomes. They employ FIT
(frequency, intensity, and targets) and SMART (specific, measurable, action
oriented, realistic, and timed) principles (see Chapter 4). We suggest that
logic models are extremely valuable for evaluation design. This means the
process of modeling surfaces the most important information needs of
identified users. Logic models can support and assure that information
gathered is used in the pursuit of performance management and greater
effectiveness. We think a program, project, or organization is more likely to
achieve impact if the relevant theory of change models are plausible, program
logic models are feasible, and the evaluation models that test the underlying
assumptions of each are designed for practical use. Similarly, the ideas
presented in this chapter could easily be applied in a research design setting
—particularly in problem identification and in posing the research questions
or hypotheses. Evaluation and research are both forms of inquiry and problem
solving, approached in much the same way.
IN SUMMARY
In the first half of this book, we posited three questions about effectiveness:
All of these questions, including the third one, require some evaluation
literacy. This chapter describes the evaluative thinking and processes logic
models can support when effectiveness is given deliberate attention during
evaluation. We hope readers will use logic models to contribute to the design
of evaluations that will answer these vital questions. They are significantly
different from “Are we busy?” These questions focus attention on
effectiveness rather than on efficiency or the accomplishment of a laundry list
of activities.
Both formative (improve) and summative (prove) evaluations are useful
for many reasons. Both of these approaches can help build understanding
about what works under what conditions. Because evaluation is a key
function in managing for results, this chapter explains how logic models can
assist evaluation design directed toward that end. Models help with decisions
about the most relevant information and its use. Identifying and choosing
among information needs and users focuses evaluation resources where they
are most needed to influence effectiveness. These steps are crucial in creating
a useful evaluation. Program evaluation and planning are “bookends” that
reflect the same thinking and thus share a common theory of change and very
similar program logic model views. Specifically, outputs and outcomes can
be very helpful gauges for monitoring and improving the status of your work.
LEARNING RESOURCES
Reflection
1. What are the strengths and limitations for evaluation when the logic
modeling process has already occurred during program development?
What about when it occurs after the program is under way?
2. What are the various ways that a theory of change and/or logic model
can be used to inform the development of an evaluation design?
3. How might the information needs of funders, grantees, evaluators, and
participants be different?
4. What relationships exist among evaluation, logic models, performance
management, and effectiveness?
Exercises
1. Based on the program, project, or idea you mapped out in Chapter 4,
design the key questions and indicators for its evaluation.
2. Using the health improvement example in Figure 3.4, display your
version of key evaluation questions. Cite some process and outcome
indicators. Compare your approach to that of your colleagues.
3. If the evaluation for the CLA (see Figures 5.4 and 5.5) focuses on two
strategies and the impact, what items are completely overlooked and
could yield some important information?
Texts
Adair, J. (2010). Decision making and problem solving strategies: Learn key
problem solving strategies, sharpen your creative thinking skills, make
effective decisions. Philadelphia: Kogan Page.
Argyris, C. (1993). Knowledge for action: A guide to overcoming barriers to
organizational change. San Francisco: Jossey-Bass.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R., (2010). Program
evaluation: Alternative approaches and practical guidelines (4th ed.).
Saddle River, NJ: Pearson.
Frechtling, J. (2007). Logic modeling methods in program evaluation. San
Francisco: Jossey-Bass.
Grantmakers for Effective Organizations. (2007). Learning for results.
Washington, DC: Author.
Greenleaf, R. K. (1977). Servant leadership: A journey into the nature of
legitimate power and greatness. New York: Paulist Press.
Hatry, H. (2007). Performance measurement: Getting results. Washington,
DC: The Urban Institute.
Kapp, S. A., & Anderson, G. R. (2010). Agency-based program evaluation:
Lessons from practice. Thousand Oaks, CA: Sage.
McDavid, J., & Hawthorn, L. (2006). Program evaluation and performance
measurement: An introduction to practice. Thousand Oaks, CA: Sage.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand
Oaks, CA: Sage.
Ridge, J. B. (2010). Evaluation techniques for difficult to measure programs:
For education, nonprofit, grant funded, business and human service
programs. Bloomington, IN: Xlibris Corp.
Spitzer, D. R. (2007). Transforming performance measurement: Rethinking
the way we measure and drive organizational success. New York:
AMACOM.
Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models,
and applications. San Francisco: Jossey-Bass.
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of
practical program evaluation (3rd ed.). Thousand Oaks, CA: Sage.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011).
The program evaluation standards: A guide for evaluators and evaluation
users (3rd ed.). Thousand Oaks, CA: Sage.
Journal Articles
Adler, M. A. (2002). The utility of modeling in evaluation planning: The case
of the coordination of domestic violence services in Maryland.
Evaluation and Program Planning, 25(3), 203–213.
Bellini, S., Henry, D., & Pratt, C. (2011). From intuition to data: Using logic
models to measure professional development outcomes for educators
working with students on the autism spectrum. Teacher Education and
Special Education, 34(1), 37–51.
Carman, J. G. (2007). Evaluation practice among community-based
organizations: Research into the reality. American Journal of Evaluation,
28(1), 60–75.
Ebrahim, A. (2005). Accountability myopia: Losing sight of organizational
learning. Nonprofit and Voluntary Sector Quarterly, 31, 56–87.
Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an
organization to learn? Evaluation Review, 18(5), 574–591.
Hayes, H., Parchman, M. L., & Howard, R. (2011). A logic model framework
for evaluation and planning in a primary care practice-based research
network. Journal of the American Board of Family Medicine, 24(5), 576–
582.
Herranz, J. (2010). The logic model as a tool for developing a network
performance measurement system. Public Performance and Management
Review, 33(1), 56–80.
O’Keefe, C. M., & Head, R. J. (2011). Application of logic models in a large
scientific research program. Evaluation and Program Planning, 34(3),
174–184.
Zantal-Wiener, K., & Horwood, T. J. (2010). Logic modeling as a tool to
prepare to evaluate disaster and emergency preparedness, response, and
recovery in schools. New Directions for Evaluation, 126, 51–64.
Internet Resources
Administration for Children and Families. (n.d.) Evaluation toolkit and logic
model builder. Retrieved December 8, 2011, from
http://www.childwelfare.gov/preventing/evaluating/toolkit.cfm
Bureau of Justice Assistance. (n.d.). Evaluation and performance
measurement resources. Washington, DC: Author. Retrieved December
8, 2011, from https://www.bja.gov/evaluation/evaluation-resources.htm
Centers for Disease Control and Prevention. (2011). CDC’s evaluation
efforts. Retrieved December 8, 2011, from
http://www.cdc.gov/eval/index.htm
National Center for Mental Health Promotion and Youth Violence
Prevention. (2010). Safe Schools Healthy Students evaluation tool kit.
Retrieved December 8, 2011, from http://sshs.promoteprevent.org/sshs-
evaluation-toolkit/logic-model
The Pell Institute. (2011). The evaluation toolkit. Retrieved December 8,
2011, from http://tool kit.pellinstitute.org/
United Nations Population Fund. (2008). The programme manager’s
planning, monitoring and evaluation toolkit. Retrieved December 8,
2011, from http://www.unfpa.org/monitoring/toolkit.htm
6
This chapter describes selected examples of logic model display and
the implications of choices relative to meaning and use. In brief
examples, we present models used in private and public sector
organizations. The variation in format and content is intentional.
These models, presented with some context, are provided to enrich readers’
experience and experimentation with features of display.
LEARNER OBJECTIVES
Identify variations in model format and style
Recognize that models reflect culture and intended use
Explore what will and will not work in your organization
Explain why logic models are highly interpretive
Graphic Display
As logic models are tools that show and support critical thinking, the
selection of elements used in their display helps illustrate the subject content
in a dynamic way. Models avoid some of the interpretation that dense text
requires, but they are not immune to interpretation themselves. Because logic
models convey relationships among elements, it is important to be conscious
of the use of boxes, lines, curved lines, circles, single- and double-headed
arrows, and other shapes in terms of their meaning. Further, their creation
occurs in context and has meaning for their creators, and this can vary as they
are read by others.
Models in the cases range from pictorial images with copy (Example 4) to
circular displays (Example 1) and the most common flowchart style that
employs text plus symbols and shapes that are read from left to right.
Elements of the models differ, too. Some include inputs, barriers, and
facilitators; others do not. Some use arrows, some just lines. Others use
neither of these. There is a substantial difference in comprehensiveness.
Some are general change recipes, while others offer detail adequate to operate
a program. In some cases, the models require the case narrative to understand
their content. In others, the models are quickly and completely understood
without external copy to support them. Examples 3 and 6 show both a theory
of change and a program logic model.
While the use of graphics to convey meaning can quickly become very
sophisticated, most people have had some experience with a model or
diagram that contains words and arrows. And all cultures have symbols that
convey meaning. Many people, North Americans, for example, understand
that a lightbulb means an idea, crossed swords mean conflict, and linked
hands mean harmony. However, these symbols are cultural and may have no
meaning or different meaning in another context.
Model Benefits
In all of these cases, the models secure at least one important process
objective: a shared understanding of the work among stakeholders. They all
organize and display relationships among multiple features such as strategies,
activities, and results. And they all provide a common vocabulary and
framework for those involved in model creation. Some of the models support
operations and others are simply input to the creation of other models or a
framework that provides “tent stakes.” Regardless of scale (a project,
initiative, organization, or other), models can be an important anchor for
implementation, evaluation, dissemination, or other next steps because they
quickly convey the parameters and content of a bounded effort. Describing
the “it” is vital to prospective work. It serves as construct explication.
Some models describe an organization’s direct and indirect influence, and
several of the cases suggest the important implication of time as their models
parse outcomes in a sequence or the accompanying narrative references this
feature. Direct influence means that the organization can take actions that will
likely affect cited outcomes. Indirect influence is a reference to work that is
dependent on other organizations, individuals, or target markets to act in a
particular way before outcomes may occur. Time is a particularly important
feature to identify in a model and to look for when reading one. Time is not
often labeled in years but rather in generic qualifiers like “short” and
“intermediate.” These phrases can have very different meanings among
readers. Occasionally, definitive parameters for time are omitted
intentionally.
Alternative Approaches
Causal loop diagrams and logical frameworks (also known as logframes) are
two other approaches to modeling the connections between “do” and “get.”
Causal loop diagrams are used to display complex systems behaviors. They
highlight the influential forces acting on cause-and-effect relationships. They
also show patterns of how and why things change rather than a static
snapshot. They have much less text than traditional logic models and are
more schematic in appearance. They use interlocking circles, arrows, and
other symbols to display cycles. These types of models are most often used
by practitioners active in systems thinking and organizational learning.
Logical frameworks grew out of the Management by Objectives
movement in the 1970s. They are typically a four-by-three matrix. The rows
describe objectives/goals, purposes, outputs, and activities. The columns
address achievement indicators, verification means, and important
risks/assumptions. The construction process emphasizes testing the vertical
and horizontal logic. These frameworks are widely used internationally by
development agencies, nongovernmental agencies, and philanthropies.
In addition to using different elements, logical frameworks differ from
logic models in several important ways. Logic models are generative in that
they typically emphasize the desired outcomes or impact. In contrast, logical
frameworks begin with an analysis of the problem(s) and thus are a more
reactive approach. In logic models, the assumptions are propositions upon
which the strategies and clusters of activities are based. Alternatively, the
assumptions in logical frameworks are those conditions that must exist for the
program to be implemented. References for these alternative approaches are
provided at the end of this chapter.
Selected Examples
The following examples include both theory of change and program logic
models in different formats with different content and uses. We hope that
your exposure to these materials helps you to explore important choices as
you create models that are most useful to your work and stakeholders. These
interesting examples are shared to display relative diversity. Each and every
logic model is distinct—although there are some common features among
them. Most of the models and associated descriptions were contributed by
colleagues in academia, the government, and the private and nonprofit
sectors. This range provides multiple perspectives and contexts.
At the beginning of each, we suggest one way to read the model and offer
comment on selected features. In most examples, we share the model with
associated narrative (boxed copy) contributed by colleagues who were
involved in its creation and use. Last, we ask some thought-provoking
questions about the display, meaning, and use. Each example also includes
some additional resources.
All the models in the following cases are versions of an initial effort to
capture and communicate. When people read (or interpret) a model, they
should ask, “What is this telling me?” As you explore the examples, it is
valuable to consider how the context may have influenced the model. It may
also be useful to think how you and your colleagues would create models for
the purposes named. What revisions would you make and why? Small
changes, just moving a line or element to a different area in the display, can
be very significant. We encourage use of the Resources section at this
chapter’s end because it can help in using these examples for additional
learning.
Questions
References
For more information about Eco Hub and to view the logic model in full
color, see http://www.eco-hub.org/about/.
Eco Hub is a project associated with the Integration and Application Network
(IAN) at the University of Maryland Center for Environmental Science.
See http://ian.umces.edu/(retrieved January 10, 2012).
1. Does the tree format help or hurt the intended messages for the Wayne
Food Initiative? How would this novel approach to display be received
where you work?
2. Is it helpful to distinguish strategies from program and tactics as this
model does?
3. What advantages and disadvantages do you see in a model in this
format?
4. Could you easily use this model to inform evaluation design? What
helps and what hinders?
References
For more information about Wayne Food Initiative, see
www.waynefoods.org and/or
http://www.cefs.ncsu.edu/whatwedo/foodsystems/waynefoodinitiative.html
Both retrieved December 12, 2011. Their fascinating logic tree in full
color can be found at
http://www.waynefoods.files.wordpress.com/2008/12/wfi-plan-tree-logic-
model.jpg (retrieved December 12, 2011).
Questions
References
For more about the David and Lucile Packard Foundation, see
http://www.packard.org.
Coffman, J. (2007). Evaluations to watch: Evaluation based on theories of the
policy process. Evaluation Exchange, 13(1), 6–7. Retrieved December
12, 2011, from http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/advocacy-and-policy-change/evaluation-based-on-theories-of-the-policy-process
Kingdon, J. W. (1995). Agendas, alternatives and public policies (2nd ed.).
New York: Longman.
Questions
References
Online modules are available for people to learn how to implement CLIPs on
their campus. The modules are available free through the InSites website at
www.insites.org. The modules can be downloaded and adapted to other
contexts.
Brown, J., & Isaacs, D. (2005). The world café: Shaping our futures through
conversations that matter. San Francisco: Berrett-Koehler.
Hughes, P. M. (2004). Gracious space: A practical guide for working better
together. Seattle, WA: Center for Ethical Leadership.
Maki, P. (2004). Assessing for learning: Building a sustainable commitment
across the institution. Sterling, VA: Stylus.
Mohr, B., & Watkins, J. (2002). The essentials of appreciative inquiry: A
roadmap for creating positive futures. Waltham, MA: Pegasus
Communications.
Parsons, B. (2002). Evaluative inquiry: Using evaluation to promote student
success. Thousand Oaks, CA: Corwin.
Questions
References
For more information about the Overweight and Obesity Prevention Planning
Partnership in New York, see New York State Department of Health
(2010). New York state strategic plan for overweight and obesity
prevention. Albany, NY: Author. Retrieved December 12, 2011, from
http://www.health.ny.gov/prevention/
obesity/strategic_plan/docs/strate gic_plan.pdf
The IS program logic model (Figure 6.9) is read from left to right, and
content is grouped in three areas relative to outcomes: emerging issues
(Stronger Communities), operations (Stronger IS), and signature work
(Stronger Non-profits). The priority areas are included because they have
special significance to staff. They reflect internal action plans and
accountabilities. Outputs result from strategies in the priority areas (here, a
large number of activities are subsumed). They contribute to IS outcomes.
The strength of communities (society), nonprofit organizations (members and
the sector), and IS (the organization) are all linked to the outcomes named in
the theory of change. This model displays the work of the entire membership
organization; thus, there is a wide variety of targets for outcomes (e.g.,
members, staff, policymakers, sector influentials). This model is used for
monitoring (process side, outputs) and evaluation (outcomes). The intent is to
be explicit and to show reflection processes for staff that connect data from
each side to inform the work of the whole.
In this model, the ellipses are critical features that convey groups, flow,
and relative (internal) value. They show the strands of work from strategies
through to desired outcomes and impact. They intersect to show interaction
and integration among program elements. The Priority Areas column is a
custom element that is important for organizing information and meaning for
those creating and using the model. The model describes “progress toward
outcomes and impact” on the far right. It does not, intentionally, define time
in months or years. Progress toward outcomes is used to indicate that a
sequence of outcomes from awareness through to action is implied. The
broad outcome statements are unpacked in detailed indicator and data
collection tables not highlighted here. Arrows are not used. This is because of
the highly interwoven nature of the organization’s work across departments.
All strategies contribute to all outcomes. The outputs and outcomes shown in
this model draw on a variety of communication, policy advocacy, and
individual behavior change theories. Resources, at the far left, are
synonymous with inputs and are essential to the organization’s work.
Questions
1. Is the level of detail in the theory of change model adequate to explain
how change is expected to happen? Why or why not?
2. How would you draw a model representing the IS theory of change?
3. Are the relationships between the theory of change and program logic
model evident? Why or why not?
4. What are the advantages and disadvantages of not specifying time in the
program logic model outcomes?
5. Could you build action or project management plans from this program
model? Why or why not?
6. Is there enough information to generate evaluation questions from the
theory of change or program model?
7. Are the ellipses adequate in organizing the content, left to right, or is
more detail about relationships between activities and outcomes
necessary? Why or why not?
References
Creation of this model was led by Phillips Wyatt Knowlton, Inc. For more
information about Independent Sector, see
http://www.independentsector.org.
IN SUMMARY
Logic models describe and reflect thinking about programs. They are a
display of information and the relationships among elements that depends
largely on graphic presentation. In practice, logic models address a vast range
of content areas and formats. Some are simple and others are complex, even
dense. They are influenced by who creates them, their relative experience and
skills, culture, and intended use. Sometimes models are used as templates to
align and organize related work. The choices of elements used in a model are
significant in their interpretation. Often, models are read left to right. Circular
displays, top-to-bottom, and other orientations are increasingly common. This
chapter offers examples of real-use models with considerable variation.
LEARNING RESOURCES
Reflection
1. Is there consistent use of symbols and shapes in the case models? How
do you ensure models are “read” or interpreted with the same meaning
by everyone?
2. Does your field or workplace have technical or cultural standards for
communicating that might influence your models?
3. What do the examples suggest about how models can be used to transfer
and diffuse ideas? What challenges would an organization face using
logic models as a communications tool? What benefits seem evident?
4. What do the cases suggest about the use of logic models in the context
of measurement? How can models support measurement and
evaluation?
5. Which applications are most like and most different from your current
use of models? How? Why?
6. What level of detail is most useful in a given model? Why?
7. How does color or lack of it affect the models? What about font type,
arrows, shapes, columns, rows, texture, icons, and other features?
Exercises
1. Select a case and conduct a mark up (see Chapter 4). What changes
would you make? Why? Compare the model you create with versions
created by colleagues. Discuss your differences. Which model do you
think is the best and why?
2. Divide the cases in this chapter among your colleagues and contribute
your analysis to the matrix below:
Once this matrix is completed, discuss the variation among models.
Which feature choices might work best under what conditions?
3. Select a theory of change model from the cases and apply the
suggestions we offer in Chapter 2. How would the model change?
4. Select a program logic model from the cases and apply the modeling
suggestions we offer in Chapter 4. How would the model change?
5. With your colleagues, list the stakeholders in any case you choose.
Then, independently, cite with whom and what action steps you would
use to generate a program logic model. Compare and contrast your list
of stakeholders and sequence of steps with others. What rationales are
used to explain differences?
6. Choose any of the models shown here and draw a new version of it with
different format and features.
Texts
Buzan, T. (2002). How to mind map: The thinking tool that will change your
life. New York: Thorsons/HarperCollins.
Craig, M. (2003). Thinking visually: Business applications of 14 core
diagrams. New York: Continuum.
Duarte, N. (2008). Slide:ology: The art and science of creating great
presentations. Sebastopol, CA: O’Reilly Media.
Horn, R. (1999). Visual language: Global communication for the 21st
century. Bainbridge Island, WA: MacroVU, Inc.
Lohr, L. L. (2003). Creating graphics for learning and performance: Lessons
in visual literacy. Upper Saddle River, NJ: Pearson Education.
Margulies, N. (2005). Visual thinking: Tools for mapping your ideas.
Williston, VT: Crown House.
Nast, J. (2006). Idea mapping: How to access your hidden brain power. New
York: John Wiley.
Neumeier, M. (2006). Zag: The number one strategy of high-performance
brands. Berkeley, CA: Peachpit Press, New Riders.
Pink, D. H. (2006). A whole new mind: Why right-brainers will rule the
future. New York: Riverhead Trade/Penguin.
Racine, N. J. (2002). Visual communication: Understanding maps, charts,
diagrams and schematics. New York: Learning Express.
Reynolds, G. (2012). Presentation Zen: Simple ideas on presentation design
& delivery (2nd ed.). Berkeley, CA: Peachpit Press, New Riders.
Roam, D. (2008). The back of the napkin: Solving problems and selling ideas
with pictures. New York: Portfolio/Penguin.
Roam, D. (2011). Blah, blah, blah: What to do when words don’t work. New
York: Portfolio/Penguin.
Internet Resources
In addition to the other modeling resources cited in Chapters 1 through 5, see
the following:
Logical Frameworks (Logframes)
ACP-EU Technical Centre. (n.d.). Smart tool kit for evaluating information
projects, products and services. Wageningen, Netherlands: CTA.
Retrieved December 10, 2011, from
http://www.smarttoolkit.net/node/376
Asian Development Bank. (1998). Using the logical framework for sector
analysis and project design: A user’s guide. Mandaluyong City,
Philippines: Author. Retrieved December 11, 2011, from
http://www.adb.org/Documents/
Guidelines/Logical_Framework/default.asp
Food and Agriculture Organization of the United Nations. (1999). Manual on
logframes within the CGIAR system. Retrieved December 10, 2011, from
http://www.fao.org/Wairdocs/TAC/
X5747E/x5747e00.htm#Contents
International Labour Office. (2006). ILO technical cooperation manual:
Development cooperation. Version 1. Geneva: Author. Retrieved
November 28, 2011, from http://www.ifad.org/evaluation/
guide/annexb/b.htm#b_1
Note
1. The David and Lucile Packard Foundation is a tax-exempt charitable
organization qualified under section 501(c)(3) and classified as a private
foundation under section 509(a) of the Internal Revenue Code. Packard
Foundation funds may be used to support some, but not all, of the activities
of grantees and others described in this logic model. No Packard Foundation
funds are used to support or oppose any candidate for election to public
office. No Packard Foundation funds are “earmarked” or designated to be
used for lobbying or “attempts to influence legislation” (as defined in section
4945(d)(1) of the Internal Revenue Code).
7
Exploring Archetypes
This chapter suggests readers consider the potent value archetypes can
give to their own models. We understand an archetype as a tested, general
template for an intervention, program, or strategy. They are
generic versions that can advance your own models. Often, with
modification, they can inform your planning, evaluation, communication, or
other needs. Archetypes can also provoke new thinking and provide a quality
check that improves ideas.
LEARNER OBJECTIVES
Describe the rationale for evidence-based models
Define a logic model archetype
Specify contributions an archetype can make to modeling
Name the limitations of archetypes
Value of Archetypes
Archetypes, like recipes, are important because a model that is implemented
repeatedly can serve as a platform for learning how to improve
implementation and results. This means we can work toward precision so
that, when replication of results is sought, it is a real possibility.
Further, we can also “stand on the shoulders” of the good work done before
us and have it inform where we might improve a process or result. In effect,
this serves the development of knowledge. It advances our understanding of
what works under what conditions. Several mature fields, specifically health
and education, have archetypes that practitioners rely on because of their
proven, well-established content. In some situations, you are more likely to
get the results you seek by using an archetype that has already been
developed and tested than by starting from scratch.
In general, the archetype examples selected for this chapter serve either
individual/group change or communities and systems change. Archetypes can
contribute to both program planning and evaluation as they generate new
learning about intentional variations in their content or execution. When
program efforts require shared elements or evaluation needs to aggregate
impact, archetypes can provide an umbrella or framework for design. Often,
evaluation archetypes are linked to valid and reliable measures.
Figure 7.2 Ready for School and Succeeding at Third Grade Theory of
Change
Source: U.S. Pathways Mapping Initiative, 2007.
These priorities correspond to the PMI goal areas 3 and 5, Supported and
Supportive Families and Continuity in Early Childhood Experiences. To
improve key aspects of early care and education in Texas, the council will
spend nearly $11.5 million (over 3 years) in American Recovery and
Reinvestment Act (ARRA) funds. Like similar structures and efforts in other
states, the Texas Early Learning Council is responsive to the requirements of
the Improving Head Start for School Readiness Act of 2007.
The Texas Council's plans exceed the expectations of the 2007 Act.
Subcommittees were formed to address the priority areas, which are driven by
stated needs. On specific tasks and goals, subcommittees will partner with
key stakeholder groups, national experts, and consultants to ensure high-
quality and relevant products are created.
Through council, staff, and contractor efforts, the Texas Early Learning
Council will make key strategic improvements to the Texas early care and
education multisector system. The council will post more than 20 requests for
proposals (RFPs) to accomplish a significant portion of the goals identified in
the model. Note the reliance on information exchange, reporting, and needs
assessment, which helps keep the council's actions relevant.
References
In addition to school readiness, PMI offers pathway maps and other materials
for successful young adulthood, family economic success, and the prevention
of child abuse and neglect.
For additional information and more resources about the PMI, see:
http://www.cssp.org/publications/documents/pathways-to-outcomes
(Retrieved December 12, 2011) as well as:
Schorr, L. B., & Marchand, V. (2007). Pathway to children ready for school
and succeeding at third grade. Washington, DC: Pathways Mapping
Initiative, Project on Effective Intervention. Retrieved April 24, 2012,
from
http://www.familyresourcecenters.net/assets/library/109_3rdgradepathway81507.pdf
Texas Early Learning Council. (2011). Infant and toddler early learning
guidelines. Retrieved December 12, 2011, from
http://earlylearningtexas.org/umbraco/default.aspx
Example 3: Communications
Human Behavior Change
Frequently, programs, projects, and initiatives aimed at human behavior change
rely heavily on communications or a special discipline known as social
marketing. In effect, very few efforts can avoid having a communications and
marketing strategy if there’s an expectation that people will adapt in a
particular way or adopt a new practice. This model is a generic archetype we
created that builds on Prochaska’s transtheoretical model (TTM). TTM has
four stages: precontemplation, contemplation, preparation, and action (see
Figure 7.4).
References
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior
and Human Decision Processes, 50, 179–211.
Mackenzie-Mohr, D. (2011). Fostering sustainable behavior: An
introduction to community-based social marketing. Gabriola Island,
British Columbia, Canada: New Society Publishers.
Maio, G., & Haddock, G. (2010). The psychology of attitudes and attitude
change. Thousand Oaks, CA: Sage.
Perloff, R. M. (2010). The dynamics of persuasion: Communication and
attitudes in the twenty-first century. New York: Routledge.
Prochaska, J. O., Norcross, J. C., & DiClemente, C. C. (1994). Changing for
good: The revolutionary program that explains the six stages of change
and teaches you how to free yourself from bad habits. New York: W.
Morrow.
Weinreich, N. K. (2011). Hands-on social marketing: A step-by-step guide to
designing change for good. Thousand Oaks, CA: Sage.
References
Epstein, J. L. (2011). School, family, and community partnerships: Preparing
educators and improving schools (2nd ed.). Boulder, CO: Westview Press.
Epstein, J. L., et al. (2009). School, family and community partnerships: Your
handbook for action (3rd ed.). Thousand Oaks, CA: Corwin.
National Network of Partnership Schools. (2010). The Center on School,
Family and Community Partnerships. Baltimore, MD: Johns Hopkins
University. Retrieved December 11, 2011, from
http://www.partnershipschools.org
For additional information and more resources about the HFRP Family
Involvement Project (FIP) and its research, see
Coffman, J. (1999). Learning from logic models: An example of a
family/school partnership program (Reaching Results, January 1999).
Retrieved December 11, 2011, from http://www.hfrp.org/publications-resources/browse-our-publications/learning-from-logic-models-an-example-of-a-family-school-partnership-program
Harvard Family Research Project. (2011). Bibliography of family involvement
research published in 2010. Retrieved January 6, 2012, from
http://www.hfrp.org/publications-resources/publications-series/family-involvement-bibliographies/bibliography-of-family-involvement-research-published-in-2010
Harvard Family Research Project. (2011). Family involvement. Retrieved
December 11, 2011, from http://www.hfrp.org/family-involvement
Redding, S., Langdon, J., Meyer, J., & Sheley, P. (2005). The effects of
comprehensive parent engagement on student learning outcomes.
Cambridge, MA: Harvard Family Research Project. Retrieved January 6,
2012, from http://www.hfrp.org/publications-resources/browse-our-publications/the-effects-of-comprehensive-parent-engagement-on-student-learning-outcomes
Source: Centers for Disease Control and Prevention, Injury Control Research Center, 2009.
CDC; NCIPC: Office of the ADS. Findings from the injury control research centers portfolio
evaluation. Atlanta, GA: U.S. Department of Health and Human Services, 2009.
Arrows depict influence and interaction. The heavy, black arrows indicate
interactions that are known to exist and have measures. For example, it is
relatively easy to model the interaction between Research and Core Activities
inside the Activities domain. Smaller arrows indicate interactions that are
known to exist, but the authors are less certain of the pathways and measures.
Lighter-colored arrows between domains indicate interactions in which there
is still active learning about movement from one domain to the next.
Evaluation work was conducted over a 2-year period from 2007 to 2009.
As a review of a portfolio, the evaluation necessarily illustrates the actual
activities of the grantees. It expands on the first funding opportunity model
and articulates greater detail in the outputs and specifies short-term, long-
term, and ultimate goals. The three squiggly lines between Longer-Term
Outcomes and Ultimate Goals represent the black box of translation.
Since program benchmarks had not been identified, the evaluation model
focused on possible outputs and outcomes over the last 20 years of the
program. This approach allowed the evaluation team to “back into”
identifying contributions of the program to injury research and practice. One
recommendation the portfolio evaluation generated was for CDC to work
with the ICRCs to develop specific indicators for the program. Note that
assumptions that influenced and guided construction of the models and the
evaluation process are specified below each model. This contrasts with the
traditional method of showing context variables in a logic model. The
coauthors indicate this specification was more consistent with the mixed-
method approach used in the portfolio evaluation.
These models were built after review of progress reports from injury
control grantees as well as literature that indicates how research activities
move a field from research to practice. Then, key stakeholders checked the
embedded logic and assumptions.
References
For additional information, contact Sue Lin Yee and Howard Kress at the
Centers for Disease Control and Prevention–National Center for Injury
Prevention and Control.
Centers for Disease Control and Prevention. (2010). Findings from the Injury
Control Research Center portfolio evaluation. Atlanta, GA: U.S.
Department of Health and Human Services.
Dahlberg, L. L., & Krug, E. G. (2002). Violence—a global public health
problem. In E. Krug, L. L. Dahlberg, J. A. Mercy, A. B. Zwi, & R.
Lozano (Eds.), World report on violence and health (pp. 1–56). Geneva,
Switzerland: World Health Organization.
IN SUMMARY
As tested, general templates for action, archetypes have great potential for
informing your work. They can test the quality of your original efforts and
generate new thinking. Archetypes are evidence based, so they can reliably
jumpstart your modeling. Archetypes can be thought of as recipes. They can
contribute to planning, managing, and evaluation. You can improve them with
your own knowledge and experience of your unique context and conditions.
The breadth of content in an archetype varies. They look different
and are often not referred to specifically as theories of change or logic
models. What is important is that they contain the information distilled from
an evidence base needed to illustrate the basic concepts in theories of change
or program logic models. Some represent a single strategy, while others cover
complex projects. This chapter provided examples of archetypal theory of
change and program logic models.
LEARNING RESOURCES
Reflection
Exercises
Evidence-Based Models
Baruch, G., Fonagy, P., & Robins, D. (2007). Reaching the hard to reach:
Evidence-based funding priorities for intervention and research. New
York: John Wiley.
Center for the Study and Prevention of Violence. (2004). Blueprints for
violence prevention. Boulder, CO. Retrieved December 11, 2011, from
http://colorado.edu/cspv/blueprints/index.html
Coalition for Evidence-based Policy. (2003). Identifying and implementing
educational practices supported by rigorous evidence: A user friendly
guide. Washington, DC: Author. Retrieved December 11, 2011, from
http://www2.ed.gov/rschstat/research/pubs/rigorousevid/index.html
Department of Health and Human Services. (2009). Identifying and selecting
evidence-based interventions: Guidance document for the Strategic
Prevention Framework State Incentive Grant Program. Washington, DC:
Substance Abuse and Mental Health Services Administration. Retrieved
April 24, 2012, from http://store.samhsa.gov/shin/content//SMA09-4205/SMA09-4205.pdf
Duignan, P. (2004). Principles of outcome hierarchies: Contribution towards
a general analytical framework for outcomes systems (outcomes theory).
A working paper. Retrieved December 11, 2011, from
http://www.parkerduignan.com/se/documents/122f.html
Norcross, J. C., Beutler, L. E., & Levant, R. F. (2006). Evidence-based
practices in mental health: Debate and dialogue on the fundamental
questions. Washington, DC: American Psychological Association.
Promising Practices Network on Children, Families and Communities.
(2011). Programs that work. Retrieved December 11, 2011, from
http://www.promisingpractices.net/programs.asp
Proven Models. (2011). Proven models. Amsterdam, the Netherlands.
Retrieved December 11, 2011, from http://www.provenmodels.com
Rockwell, K., & Bennett, C. (2004). Targeting outcomes of programs: A
hierarchy for targeting outcomes and evaluating their achievement.
Retrieved December 11, 2011, from
http://digitalcommons.unl.edu/aglecfacpub/48/
Schorr, L. B., & Farrow, F. (2011). Expanding the evidence universe: Doing
better by knowing more. New York: Center for the Study of Social
Policy. Retrieved January 6, 2012, from http://lisbethschorr.org/doc/ExpandingtheEvidenceUniverseRichmanSymposiumPaper.pdf
8
Action Profiles
This chapter demonstrates the amazing utility and vast application of
logic models. It includes model examples with tremendous variation
in subject content and display. Generally, these models have enough
detail to support design, planning, and management as well as
evaluation. In several instances, they supported multiple functions. These
“practice profiles” include models about civic engagement, corporate giving,
international development, public health, sustainability, human services, and
environmental leadership. This chapter displays the versatile functionality of
logic models.
LEARNER OBJECTIVES
Describe the benefits and limitations of logic models in practice
Identify the rationale for model use in multiple contexts
Recognize and use concepts introduced in Chapters 1–7
Show how models display problems and support strategy, evaluation, and learning
References
See the Seattle Works website at www.seattleworks.org.
Creation of this model was led by Dawn Smart at Clegg & Associates.
Contact her via e-mail at [email protected].
Profile 2: Better Corporate Giving
Childhood hunger in America is a significant challenge. It is likely to
increase as our population grows, the climate changes, and food prices rise.
In households across every state in our nation, every day, children face
inconsistent access to nutritious and adequate food. They don’t know if or
from where they will get their next meal. Hunger has broad implications for
human development: increased susceptibility to illness, cognitive and
behavior limitations, and associated impairment of academic achievement.
ConAgra Foods, via its charitable giving through the ConAgra Foods
Foundation, has chosen this cause and used logic models both internally and
externally to align its important work. The focus is ending childhood hunger. ConAgra
Foods Foundation intentionally chose ending childhood hunger as its primary
cause in 2006. The giving program distributes funding nationwide, through a
dozen community intervention programs, and through far-reaching brand
promotions. In 2011, 2.5 million meals were distributed as a result of a 30-
minute news special combined with a company-led consumer campaign that
paired products purchased with donations (see
www.childhungerendshere.com). Over the past 20 years, ConAgra Foods has
led the charge against child hunger in America with donations of more than
$50 million and 275 million pounds of food. ConAgra’s community
involvement platform, Nourish Today, Flourish Tomorrow®, focuses on
ending hunger, teaching kids and families about nutrition, and improving
access to food.
While all these organizations have active and long roles in antihunger
work, their staff had never convened to see or understand the roles each
played among key strategies supported through ConAgra funding.
Our firm used highly participatory processes to ensure that multiple
perspectives were expressed and reflected in any products. A thorough
review of internal and external ConAgra documents along with several phone
conferences were essential to inform a preliminary draft of both a theory of
change (TOC) and a program logic model. The TOC, shown in Figure 8.2,
remained largely unchanged over the project. It simply documented the
knowledge-based strategies that would most likely influence childhood
hunger.
References
This content is adapted from a feature article, “Corporate Giving Gets
Smarter,” in The Foundation Review, Spring 2012.
Kotler, P., Hessekiel, D., & Lee, N. (2012). Good works: Marketing and
corporate initiatives that build a better world … and the bottom line. New
York: Wiley.
Creation of this model was led by Phillips Wyatt Knowlton, Inc.
References
For additional information, contact Alexey Kuzmin at
[email protected] and Craig Russon at [email protected].
The evaluation report for this work is found at Independent Evaluation of the
ILO’s Decent Work Country Program: Kyrgyzstan: 2006–2009.
Retrieved December 22, 2011, from
http://www.ilo.org/public/english/bureau/program/dwcp/download/eval-kyrgyzstan.pdf
References
Contact Debra Hodges (at [email protected]), Alabama
Department of Public Health. See also:
Williamson, D. E., Miller, T. M., & McVay, J. (2009). Alabama asthma
burden report. Montgomery, AL: Alabama Department of Public Health.
Retrieved December 22, 2011, from
http://adph.org/steps/assets/ALAsthmaBurden.pdf
Profile 5: Resilient Communities
A “world of resilient communities and re-localized economies that thrive
within ecological bounds” is an exciting vision. This is the work of the Post
Carbon Institute (PCI). Created in 2003, PCI is leading the transition to a
more resilient, equitable, and sustainable world.
Alarming changes reflecting fundamental crises face our planet. Experts
in economics, ecology, political systems, social justice, public health, and the
environment can each cite complex challenges in their respective content
areas. As these challenges converge and interact, they affect every living
thing. Identifying those intersections for both vulnerabilities and
opportunities is vital to building a more resilient society. The PCI suggests
the following assumptions are essential in future planning:
References
Additional detail regarding this model can be secured via contact with
Johanna Morariu at Innovation Network, [email protected].
For more on the Post Carbon Institute, see http://www.postcarbon.org/about/
References
For more information, see www.havenhouseel.org.
References
Matt Keene, Policy Office, U.S. EPA, and Chris Metzner, a graphic artist,
were deeply involved with the development of the PPSI models. They can
be reached via email at [email protected] and
[email protected], respectively.
Visit the live website: Retrieved December 22, 2011, from
http://www.PaintStewardshipProgram.com
Download required scripts for pop-up boxes: Retrieved December 22, 2011,
from http://flowplayer.org/tools/demos/overlay/index.html
Oregon Department of Environmental Quality, Paint Product Stewardship.
Retrieved December 22, 2011, from
http://www.deq.state.or.us/lq/sw/prodstewardship/paint.htm
IN SUMMARY
Logic models are a potent tool for many reasons and multiple functions. They
are robust communication platforms that can anchor a shared construction
that eventually serves strategy development, monitoring, evaluation, and
learning. These field profiles offer a broad range of subject matter content and
use. Each was created in a process that reflected particular circumstances.
They vary considerably in display, and they frame both implicit and explicit
problems. The preceding chapters suggest ways to both test and improve their
quality.
LEARNING RESOURCES
Reflection
1. What features of logic models are most common in the field profiles
shown in this chapter? Why?
2. Which model is most like the one you might create? Why does it
resonate with your communication style or purpose?
3. Which model is most difficult to interpret? Can you name the reasons?
Are there changes you would make to simplify or clarify it?
4. Which model represents work that’s most likely to garner the intended
results?
5. Can you articulate assumptions for each model? How would you cite the
problem(s) each solves?
6. Consider contextual barriers and facilitators for each model. Try to name
some for each.
Exercises
1. Revisit Chapter 4 and consider quality principles for each model. How
does this influence your perception of the model’s potential to describe
work and associated results? Are there changes you would make?
2. Explain the purpose of a given model and its content. Then ask two
small groups to draw a model. Compare it to the figure shown. What
differences are there? Why? Any improvements?
3. Prepare an evaluation design for the ConAgra Foods Foundation (Profile
2). How do the models help or hinder? What questions does the process
raise?
4. Try to locate an evidence base for each of the models. How does your
discovery inform corrections or edits to the models?
Texts
Krogerus, M., Tschäppeler, R., & Piening, J. (2008). The decision book: Fifty
models for strategic thinking. London: Kein & Aber.
Nissen, M. E. (2006). Harnessing knowledge dynamics: Principled
organizational knowing and learning. Hershey, PA: IRM Press.
Osterwalder, A. (2010). Business model generation: A handbook for
visionaries, game changers and challengers. Hoboken, NJ: Wiley.
Pugh, K. (2011). Sharing hidden know-how: How managers solve thorny
problems with the knowledge jam. San Francisco: Jossey-Bass.
Rumelt, R. (2011). Good strategy bad strategy: The difference and why it
matters. New York: Random House.
Sibbett, D. (2010). Visual meetings: How graphics, sticky notes and idea
mapping can transform group productivity. Hoboken, NJ: Wiley.
Journal Articles
Astbury, B., & Leeuw, F. L. (2010). Unpacking black boxes: Mechanisms
and theory building in evaluation. American Journal of Evaluation, 31(3),
363–381.
Brandon, P. R., Smith, N. L., & Hwalek, M. (2011). Aspects of successful
evaluation practice at an established private evaluation firm. American
Journal of Evaluation, 32(2), 295–307.
Garcia-Iriarte, E., Suarez-Balcazar, Y., Taylor-Ritzler, T., & Luna, M.
(2011). A catalyst-for-change approach to evaluation capacity building.
American Journal of Evaluation, 32(1), 168–182.
Kundin, D. M. (2010). A conceptual framework for how evaluators make
everyday practice decisions. American Journal of Evaluation, 31(3), 347–
362.
Piggott-Irvine, E. (2010). Confronting evaluation blindness: Evidence of
impact of action science-based feedback. American Journal of
Evaluation, 31(3), 314–335.
Saari, E., & Kallio, K. (2011). Developmental impact evaluation for
facilitating learning in innovation networks. American Journal of
Evaluation, 32(2), 227–245.
Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing
evaluator roles. American Journal of Evaluation, 30(3), 275–295.
Mayeaux, Angela, 155, xvii
Metzner, Chris, xvii
Michigan, 155–156
Morariu, Johanna, xvii
Idea maps, 16
If-then sequence, 7
Impact
defined for program logic model, 7
models begin with results, 10
in program logic models, 37, 43
Improvement of program logic models, 48–62
context challenges, 49–52
decisions to do the right work, 59–60
modeling and effectiveness, 48–49
quality model features, 59
quality questions, 58
quality techniques in modeling, 52–53
testing model quality with SMART and FIT, 53–54
using a mark up, 54–58
Improvement of theory of change logic models
benchmarking, 26
doing the right work, 29–30
group process, 26–27
knowledge and assumptions, 24
multiple perspectives, 23–24
nonlinear theory of change logic models, 28–29
promising practices, 26
questions in reviewing, 27, 30
toggling, 25
Improving Head Start for School Readiness Act, 126
Independent Sector evaluation system development example, 108–113
Indicator development, 74
Indicators, 68, 73–80
and alignment, 80–81
outcome indicators, 74, 78, 79
process indicators, 74–77
proxy indicators, 74
Indirect influence, 90, 108
Influence, 90, 108
Information needs, 68, 69
Initiatives, 40
Injury Control Research Center example, 132–134
Inputs, defined for program logic model, 6
Intermediate-term outcomes, 36
International Labour Organization, 146–150
Large-scale programs, 40
Learning
and archetypes, 118–121
Communities of Learning, Inquiry, and Practice (CLIPs), 102–105
in evaluation design, single loop and double loop, 70, 71
and variation, 88–91
Legislation
American Recovery and Reinvestment Act, 126
Government Performance and Results Act, 1993 (GPRA), 123
Improving Head Start for School Readiness Act, 126
Legislative mark up, 55
Logical frameworks (logframes), 90–91
Logic in models, 50
Logic map, 92
Logic Model Development Guide, The, 6
Logic models
about, xii–xiii, 2–15
adding value in evaluation design, 68–70, 80
beginning with results, 10–12
benefits of, 3–4, 90
compared to logical frameworks, 91
computer software for creation of, 45
definition of, 4
effectiveness and, 3–4, 12–13, 48–49
for evaluation. See Evaluation design
examples. See Examples
historical background of, 5–6
improvement. See Improvement of program logic models; Improvement
of theory of change logic models
limitations of, 10
program logic models. See Program logic models
relationship of program and theory of change models, 5, 34–37
synonyms for, 4
theory of change. See Theory of change logic models
uses of, 4, 5
Long-term outcomes, 36
Management
connecting with measurement in evaluation design, 64–66
in corporate giving, 144
Management by Objectives, 91
Marketing, social, 126, 128
Marketing tool, models as, 51
Mark up, in program logic modeling, 54–58
Mayeaux, Angie, 155
Meaning. See Display and meaning
Measurement
connecting management and, 64–66
in corporate giving, 144
Measuring Program Outcomes, 6
Mental maps/mapping, 16, 124
Michigan, sheltering families profile, 155–156
Mistakes, 52
Modeling, 48–62
basic concepts, 3
context challenges in, 49–52
draw and test steps, 52–53
and effectiveness, 48–49
limitations of, 10
quality techniques, 52–60
Multiyear change efforts, 40
Myths, 49, 82
Realistic models, 20
Recipes, archetypes as, 119–120
Resilient communities action profile, 153–154
Resources, defined for program logic model, 6, 36
Results
choices are required, 81–83
models begin with, 10–12
in program logic models, 34, 43
in theory of change logic models, 18–20, 21–22, 26
See also Outcomes
Right work, doing, 12, 29–30, 59–60, 72
Scale of effort, 50
School-improvement efforts, 26
School improvement example, family and parent engagement, 130–131
School, Ready and Succeeding at Third Grade map, 124, 125
Seattle Works, civic engagement profile, 138–140
Servant Leadership, 78
Share Our Strength, 141
Short-term outcomes, 36
Single loop learning, 70
SMART principles
in evaluation design, 78, 83
testing model quality, 53–54, 55, 58
Social interests, in corporate giving, 141–142
Social marketing, 126, 128
Software applications for model creation, 45
Specificity, 50–51
Stakeholders, defined, 65
Strands, 19, 40
Strategy
models begin with results, 12
in program logic models, 34, 38, 40–41, 42, 43
in theory of change logic models, 18–20, 22, 26
Styles, in theory of change logic models, 17
Subject matter content, 89–90
Summative evaluation, 66–67
Sustainable food system example, 94
Value added
by evaluation consumers, 67–68, 69, 78
by logic models, 68–70, 80
Value of archetypes, 120
Variation, 88–91