Impact Evaluations: Choosing the Right Approach

How to Measure Success

Measuring success is one of the most difficult aspects of running a social sector organization. Unlike the private sector—which tends to measure success based on the relatively straightforward criteria of profits and losses—the social sector must grapple with much more abstract concepts. How, for example, do you know if you’ve improved quality of life or increased leadership capacity?

This complexity is magnified by the fact that each mission-driven organization is unique. Success—or, in social sector parlance, impact—as defined by one organization may be irrelevant to another organization, depending on its mission. Simply put, there is no uniform industry barometer for impact.

An impact-focused measurement approach should both derive from an organization’s strategy (Theory of Change) and improve its future performance by enabling continuous improvement.

Consequently, choosing the right impact evaluation is a critical decision for mission-driven organizations and should involve four important considerations:

  1. Measurement Approach – Different evaluation methods yield results of differing rigor and validity. To ensure results are accurate and useful, it’s important to match the evaluation method to the organization’s needs.
  2. Theory of Change – Choosing an impact evaluation is one of the best ways to vet an organization’s theory of change. If you cannot determine an effective evaluation strategy, then you need to consider if the organization’s programs and activities logically lead to the intended impact.
  3. Funding – Increasingly, donors require hard, data-driven evidence of impact before funding projects or programs. Simply pointing to the existence of activity (number of microloans repaid, students taught, or wells dug) is no longer sufficient. Donors want to know that their dollars are creating measurable, meaningful changes.
  4. Continuous Improvement – Without accurate measurement, organizations cannot optimize their functions. Access to insightful impact data makes a significant difference in an organization’s ability to continuously improve and, ultimately, achieve its mission.
Types of Impact Evaluations

So, how does an organization go about selecting the right impact evaluation method? As with most high-level decisions, we recommend a backwards-design approach: organizations should first decide what they intend to achieve through evaluation and then choose an appropriate method. This process begins with thinking through the four considerations listed above, which explain the why behind impact evaluations. Carefully weighing an organization’s needs across these four areas will clarify which evaluation method to use.

There are a number of evaluation strategies to choose from, each with its own strengths and limitations. Two of the most common strategies are quasi-experimental and pre-post impact evaluations, summarized below:

Quasi-Experimental Evaluation

  • Description: A treatment group receives or experiences the “intervention” (e.g., leadership training or microfinance loans), and results are compared to a control group that does not receive the treatment. This differs from an actual experiment in that participants are not randomly assigned to the treatment or control groups.
  • Strengths: Allows for direct comparison between treatment and control groups; incorporates before-and-after measurement.
  • Limitations: Lack of random assignment can preclude causal claims; often requires going outside an organization to find control group participants.

Pre-Post Evaluation

  • Description: Participants’ current state (e.g., leadership capacity or quality of life) is measured before and after the “intervention.” The main difference from the quasi-experimental approach is that no control group is required (though adding one is certainly possible).
  • Strengths: Logistically simpler than the quasi-experimental method (does not require data collection for a control group); often lower cost.
  • Limitations: Lack of a control group may mean important insights are not captured; some loss of internal validity if “post” participants are not also the “pre” participants.
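To make the distinction concrete, here is a minimal sketch of the arithmetic behind each design. All scores and group sizes are invented for illustration; a real evaluation would add careful sampling and significance testing.

```python
# Hypothetical survey scores on a 1-10 scale; all data invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Pre-post design: the same participants are measured before and after the
# intervention, and impact is estimated as the change in the group mean.
pre_scores  = [4.0, 5.5, 6.0, 4.5, 5.0]
post_scores = [6.0, 7.0, 7.5, 6.0, 6.5]
pre_post_effect = mean(post_scores) - mean(pre_scores)      # 6.6 - 5.0 = 1.6

# Quasi-experimental design: the treatment group's change is compared to a
# matched (but not randomized) control group's change over the same period,
# i.e. a simple difference-in-differences estimate.
treatment_pre, treatment_post = [4.0, 5.5, 6.0, 4.5, 5.0], [6.0, 7.0, 7.5, 6.0, 6.5]
control_pre,   control_post   = [4.5, 5.0, 5.5, 4.0, 5.5], [5.0, 5.5, 6.0, 4.5, 6.0]
quasi_effect = (mean(treatment_post) - mean(treatment_pre)) \
             - (mean(control_post) - mean(control_pre))     # 1.6 - 0.5 = 1.1

print(round(pre_post_effect, 2))  # raw before/after change
print(round(quasi_effect, 2))     # change net of the control group's trend
```

Note how the control group lowers the estimated effect: part of the raw before/after change (1.6) reflects a trend that occurred even without the intervention (0.5), which only the quasi-experimental design can subtract out.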
Impact Evaluation Case Studies

To put these two methods into context, consider the following case studies of impact evaluations, drawn from Cicero’s recent engagements with Young Life and Junior Achievement USA.


Young Life: Quasi-Experimental Impact Evaluation

Dedicated to serving underprivileged youth nationwide, Young Life sought to understand the effectiveness of its impact model. Its programs focus on developing relationships of trust and caring between adults and adolescents, and range from hosting summer camps to establishing a physical, supportive presence in local neighborhoods.

Because Young Life’s work is fundamentally about nurturing positive adult-youth relationships, it opted to leverage a quasi-experimental evaluation approach, which would allow the organization to understand youth’s experiences with and without these relationships (via treatment and control groups). In order to make accurate comparisons, Cicero designed a detailed sampling methodology in which the treatment and control groups mirrored one another in as many ways as possible (e.g., age, gender, ethnicity, geography, family makeup, etc.). Cicero and Young Life are currently working together to conduct the impact evaluation, with a focus on understanding the changes certain programs produce in youth—particularly in terms of educational attainment, drug usage, crime and violence rates, personal relationships, self-confidence, and connection to their communities.


Junior Achievement USA: Pre-Post Impact Evaluation

Junior Achievement (JA) also works with youth nationwide, helping them develop personal finance and entrepreneurial skills that will enable them to succeed in school and their future careers. JA sought to evaluate the impact of two of its 20+ programs—specifically, their effect on students’ financial knowledge and attitudes.

In order to quantify these programs’ impact, JA and Cicero utilized a pre-post evaluation method. Hundreds of students across eight geographic regions participated in the evaluation. The pre-post approach gave JA important flexibility in working with numerous classroom teachers according to their individual time constraints: not every student was able to take both the pre- and post-tests, but the large sample size still yielded ample pre- and post-assessment data with which to calculate each program’s impact. The evaluation produced valuable results confirming the effectiveness of JA’s impact model and identified high-leverage opportunities for continuous improvement.
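The flexibility described above, accepting pre- and post-assessments from overlapping but not identical sets of students, amounts to a group-level rather than matched-pairs comparison. The sketch below illustrates that calculation with invented scores; the figures are hypothetical, not JA’s actual data.

```python
# Hypothetical assessment scores (0-100); all data invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Not every student completed both assessments, so impact is estimated at the
# group level: mean of all post-tests minus mean of all pre-tests, rather
# than as matched before/after pairs per student.
pre_tests  = [52, 60, 48, 55, 50, 58, 47]   # students who took the pre-test
post_tests = [68, 72, 65, 70, 66]           # students who took the post-test

group_level_impact = mean(post_tests) - mean(pre_tests)
print(round(group_level_impact, 2))
```

As the limitations listed earlier note, this group-level approach trades some internal validity for logistical flexibility, a trade-off that a large sample size helps offset.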



While the quasi-experimental and pre-post approaches are different in important ways, choosing the “right” strategy should focus on matching an organization’s needs (recall the four considerations at the beginning of this article) to the specific strengths and limitations of a particular approach. This is how Young Life and Junior Achievement selected evaluation methods that would ensure the results were as actionable as possible, both in terms of qualifying for future funding and enabling continuous organizational improvement.

Tyler Hardy


Tyler Hardy is a Principal at Cicero Group. In this role, Tyler has provided strategic insight and direction to major public and private entities in the Technology, Entertainment, Retail, and Healthcare industries. He also works frequently with nonprofit and mission-driven organizations to define change models, drive social innovation, and measure impact. Tyler’s expertise spans strategic planning, new product development, process improvement, and service and experience design.
