“Who is at the table?” This guiding question, I believe, lies at the heart of participatory evaluation – an evaluation principle that is becoming standard practice in many contexts.
This post introduces the concept of participatory evaluation, describes benefits and pitfalls, and offers suggestions on how to incorporate participatory evaluations successfully into practice at your organization.
What is participatory evaluation?
Participatory evaluation is an overall guiding principle for evaluation that emphasizes the inclusion of various program stakeholders – particularly program participants – in evaluation discussions and decisions.
Different organizations define participatory evaluation in slightly different ways – you can refer to Better Evaluation, UNICEF, and the Community Tool Box for some example definitions. A key difference among various definitions is in terms of which stakeholders are involved and to what extent. The Community Tool Box, for example, sets the bar quite high in stating that participatory evaluation “involves all the stakeholders” in helping to understand a project and apply that understanding, whereas the UNICEF and Better Evaluation definitions only specify that it involves stakeholders “at any stage” or “in specific aspects” of the evaluation process.
What does participatory evaluation look like in practice?
Given that there is no universal definition, “participatory evaluation” can become a buzzword thrown around casually or, on the other hand, an ideal that seems out of reach. I find it helpful to think of “participation” as a spectrum rather than a yes or no proposition. On one end of the spectrum is a fully detached evaluation that relies entirely on the technical expertise of an outside evaluator and does not engage internal program stakeholders in evaluation discussions or decisions. On the other end of the spectrum is a fully participatory evaluation where ALL program stakeholders play an active role in ALL stages of the evaluation cycle. In practice, all program evaluations fall somewhere in between these two ends of the spectrum.
In order to move closer to the “participatory” end of the spectrum, it is helpful to ask “who is at the table?” throughout the evaluation process. Asking this question can help organizations and evaluators be mindful and intentional about including different stakeholders in discussions and decisions related to evaluation. Here are some examples of guiding questions for participatory processes at each stage of CDC’s evaluation framework:
- Engaging stakeholders: who have we considered, and who have we not considered, as “stakeholders” for this program?
- Describing the program: who should define/articulate what the program does and what it is intending to achieve?
- Focusing the evaluation design: whose values are prioritized when determining the evaluation questions?
- Gathering credible evidence: who should be involved in determining evaluation methods and collecting data?
- Justifying conclusions: who needs to be involved in interpreting results?
- Ensuring use and sharing lessons learned: who should have a role in communicating results?
Why should we incorporate participatory practices into evaluation?
From my perspective (which parallels that presented by UNICEF), there are two main reasons to employ participatory practices:
- Ethics: “participatory evaluation is the right thing to do.” This perspective can include the principle that people have a right to be involved in decisions about programs that may affect them. I have also spoken with nonprofits that view participatory evaluation as a tool for community transformation and liberation from oppressive power structures.
- Quality: “participatory evaluation is the smart thing to do.” Working in close partnership with “on-the-ground” stakeholders like program staff and participants can help develop trust and therefore open doors to generating richer, more valid data. Similarly, on-the-ground stakeholders can provide contextual insight that allows an evaluation to better answer questions like: “Why did we see the outcomes we did?” or “Why did demand for our program grow so quickly?”
What are some pitfalls of participatory evaluation?
- It takes more time – and money. Participatory evaluation involves bringing many cooks into the kitchen. Different stakeholders may have different opinions, for example, about which outcomes are most important to measure and about what qualifies as “good evidence” of whether those outcomes have been achieved. Participatory evaluation requires a budget to compensate the people involved in dialogue and decision-making – program staff salaries, an evaluation consultant to facilitate the process, compensation for program participant representatives, etc.
- It may intensify perceived conflicting priorities between funders and on-the-ground stakeholders. Many funders emphasize standardization of outcomes and metrics so that they can evaluate investments in a consistent way. In contrast, participatory evaluation approaches are likely to generate context-specific methods and metrics that may need to be reconciled with funder expectations.
- It can actually perpetuate power imbalances. Even if many different stakeholders are represented “at the table,” power dynamics may persist. For example, a program participant from an oppressed group may feel silenced by or hesitant to share an opinion with a funder who holds the purse strings.
How can we make participatory evaluation successful?
Here are some tips:
- Involve a savvy facilitator. This facilitator may be an external evaluation consultant who has enough distance from the program to be able to act as a moderator. Soft skills like listening, conflict management, and cultural humility are critical here.
- Be mindful in inviting participant representatives. Program participants are not a homogenous group of people, but rather a diverse group with various interpersonal and power dynamics. The best participant representatives are those who are involved not to get a leg up on their peers, but rather because they are trusted by their peers to represent the interests of the group.
- Continue asking, “Who is at the table?” It is rarely feasible or effective to involve ALL possible stakeholders in ALL aspects of the evaluation. Nevertheless, checking in on who is and is not being represented can help programs and evaluators continuously improve ethical practice and evaluation quality.
- Better Evaluation. Participatory Evaluation. 2018 [cited 2018 Jun 6]. Available from: https://www.betterevaluation.org/en/plan/approach/participatory_evaluation
- Guijt, I. Participatory Approaches. In: Methodological Briefs: Impact Evaluation 5. Florence, Italy: UNICEF; 2014.
- Community Tool Box. Section 6. Participatory Evaluation. 2018 [cited 2018 Jun 6]. Available from: https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/participatory-evaluation/main
- Centers for Disease Control and Prevention. A Framework for Program Evaluation. 2017 [cited 2018 Jun 6]. Available from: https://www.cdc.gov/eval/framework/index.htm
Adam Lipus, MPH
Program Evaluation Consultant
Adam’s program management and evaluation experience spans government, academia, nonprofit, and social enterprise settings. He seeks to facilitate reflection, learning, and growth among organizations with social and environmental missions. Adam is particularly passionate about food, nutrition, and agriculture, and burritos are his favorite food.