(Author: Rudolf van der Helm, Trainer/Consultant, MDF Netherlands)

At MDF we like to say that to answer these kinds of questions, you first need to go back to a more fundamental question: why do you monitor in the first place? By answering the why question, you are much better positioned to answer the how questions you might have. So let us first examine the definition of monitoring, or to be more precise, the definition of results-based monitoring: “monitoring is the ongoing collection, analysis and use of information about project progress and the results being achieved. It supports effective and timely management decision making, learning by project stakeholders and accountability for results and the resources used.”

Why we monitor

This definition already shows why we monitor. Monitoring helps us to learn from experience (do our activities indeed contribute to certain outputs and outcomes?), to make decisions about projects and programmes (should we speed up implementation?), and to be accountable to our donors (by reporting on results using monitoring information), so that we become more successful at achieving our intended results.

Indicators should thus provide us with information that can be used for these purposes. This already helps us to think about which indicators should be monitored, and perhaps also to limit the number of indicators we design. Indicators that provide information that cannot be used for learning, decision-making and/or accountability purposes can be considered redundant. The definition of an indicator also hints at these three purposes of results-based monitoring: “an indicator is a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievements, to reflect the changes connected to an intervention, or to help assess the performance of a development actor.”

However, we still have not answered the question of how to define an indicator. Usually, we use an acronym like SMART or QQTTP to formulate indicators. These kinds of acronyms make sure that certain elements are included in the indicator (quality, quantity, target, time, and place in the case of QQTTP) or that the indicator is formulated in a certain way (SMART stands for specific, measurable, achievable, realistic, and time-bound). But do these tricks really help you to define indicators for your project?

Back to the what

There is a risk that defining indicators becomes a separate exercise, something you do in isolation. It can then become a frustrating exercise, as you are forced to include all these elements in each and every indicator. Perhaps another way to go about it is to look at indicators as part of your monitoring plan or measurement framework: a document that includes the intended results of your project, with corresponding indicators, targets, tools, and timelines for your monitoring ambitions. Would it not be liberating to only include the “what” as an indicator and leave the rest to be defined as part of the monitoring plan? For example, why don’t we use “school attendance” as an indicator for a result about child literacy, and leave the quantity (how much), target group (who), time (when) and place (where) for the monitoring plan to define? Thinking thoroughly about the “factor or variable that measures the achievement of results” is perhaps a more constructive exercise than trying to include all elements about “how to measure it” in your indicator!
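To make the separation concrete, the idea can be sketched as a simple data structure. This is purely illustrative: the field names and example values are hypothetical, not part of any MDF template, and simply show the indicator carrying only the "what" while the quantity, target group, time, place and tool live in the monitoring plan entry.

```python
from dataclasses import dataclass

@dataclass
class MonitoringPlanEntry:
    """One row of a hypothetical monitoring plan (illustrative sketch)."""
    result: str        # intended result the indicator relates to
    indicator: str     # the "what": a factor or variable to measure
    target: str        # how much / how well (quantity and quality)
    target_group: str  # who (the QQTTP "target")
    timeline: str      # when to measure (time)
    place: str         # where to measure (place)
    tool: str          # how the data will be collected

# Example values are invented for illustration only.
entry = MonitoringPlanEntry(
    result="Improved child literacy",
    indicator="School attendance",  # the indicator stays a plain "what"
    target="90% average attendance rate",
    target_group="Children aged 6-12",
    timeline="Measured each school term",
    place="Participating schools",
    tool="School attendance registers",
)

# The indicator itself remains short and readable; the plan carries the rest.
print(entry.indicator)  # -> School attendance
```

The design point is simply that each QQTTP element gets its own field, so no single indicator string has to carry all of them.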

Join the discussion!

If you are interested in these kinds of questions, we warmly invite you to join our upcoming course on Monitoring & Evaluation for Learning. During this course, we will teach you how to design a monitoring plan including all its elements, but we will also discuss with you “why” we do and define certain things in the field of monitoring, and whether there are any alternatives. If you are interested in moving beyond the “how” and thinking about the “why”, this course might be the right one for you!