Excellence in monitoring, evaluation and learning (MEL)


What to consider

How do you embed MEL into organization design? How can philanthropies evaluate the impact of their investments across geographies and thematic areas in a consistent manner? How best can insights from these exercises enable grantees to communicate the value of their work to the broader impact ecosystem?

“MEL” has become common parlance in modern, purpose-driven organizations in the social impact sector. Yet despite MEL tools being used far and wide, not all users have thought through their objectives and purpose in relation to organizational goals. Absent this understanding, organizations often implement MEL systems that create new bureaucracies, demanding significant time and expense without necessarily yielding tangible gains in performance outcomes.

In simple terms, MEL activities act as feedback loops between an organization and its stakeholders. They help institutions understand how their actions affect desired outcomes, and how tweaks in these actions can change or improve outcomes.

A great MEL system should help you both improve and prove impact. Improve through continuous feedback loops that influence decision-making. And prove by generating stories about your impact so success can be scaled up and past mistakes avoided.

If MEL systems drive excellence in organizational performance, what drives the excellence of MEL systems themselves? In our work, we have identified seven principles of effective monitoring and evaluation strategies:

  1. Use MEL as a mirror of your organization/program strategy. Learning from M&E systems helps you understand how well the objectives at the core of your strategy are being met. When initially setting up your MEL systems, see this time as an opportunity to refine your theory of change or program strategy. Determine which goals matter most, and which activities help you achieve them with the time and resources you have. Indicators against these become the skeleton of your MEL framework. As you process measurement data, constantly think about the strategic implications for the program, both for immediate improvements and for long-term opportunities to scale.
  2. Measuring social impact is not an exact science, and therefore iteration is essential. A sound M&E strategy requires a close understanding of the ground realities of programs and their causal relationship with beneficiaries, the availability of data, the availability of measurement tools, and the capacity of data collectors. Indicators that look thorough on paper need to be continually refined to balance the trade-off between rigor and practicality. Run trials of a data collection process on the ground. Keep what works, tweak what doesn’t. Repeat until the core metrics yield actionable insights.
  3. Process is as important as the outcome. A strong strategy and impact framework is an excellent starting point for developing a shared, consistent language around impact. How each user engages with this framework, from data collection, analysis, and synthesis through to storytelling, determines whether an M&E strategy is used over the long term. Ensure users have clarity on their roles and responsibilities in the M&E chain of command, and comfort with technical tools and analytic methodologies. This may necessitate an internal technical assistance budget to train and sensitize stakeholders to new processes.
  4. Impact metrics are rarely one-size-fits-all. Identify a subset of metrics that are non-negotiable and core to the objectives and values of your philanthropic endeavors. These can be applied across all grants. Have another set that is good to have but adaptable to regional and cultural contexts.
  5. Walk grantees through the end goal and the co-benefits of participating in M&E activities. If local agents are expected to increase their efforts in collecting information, they should understand how this helps their own work. A clear impact narrative brings greater visibility to the work of grantees, which can help them raise more funding and increase the scale of their programs. A mature learning organization looks at cycles of feedback constructively, not defensively.
  6. Good measurement should always lead to action. There is limited value in evaluation that simply results in a report, even if it identifies areas for improvement. Instead, all measurement activities should result in clearer plans and decisions, more informed activities, and better results. This requires strategic, action-oriented leadership to interpret insights in ways that key stakeholders will understand, and then translate those insights into activity—better tools, processes, materials, programs, etc. 
  7. Finally, ensure everyone is heard through this process. Vocal support from top leadership is key, but so is consent and buy-in from people eventually responsible for executing your M&E strategy. Keep your development process open, deliberative, and consultative so a strategy and its processes are not finalized in an organizational silo.
