Once upon a time there lived a King. He was very rich, and one night three clever thieves broke into the King’s palace and stole many precious goods. But the King’s guards were also clever. They caught the thieves red-handed as they tried to run away and brought them before the King.

“Did you commit this robbery?” the King demanded.

“We did, your Majesty.”

“Are you guilty or not guilty?” enquired the King.

“Not guilty, your Majesty.”

The King raised one eyebrow. “How is that?”

“It was easy to break into the palace because the mason who built your walls did a poor job. The cement was loose. Therefore, the mason is guilty.”

“Bring the mason,” ordered the King.

“Mason, are you guilty or not guilty of laying the cement loosely when you built these walls?”

“Not guilty, your Highness. The labourer I appointed to mix the mortar did not put enough lime in it.”

The labourer was summoned. “Are you guilty?” the King asked.

So the story goes on...

The lesson: systematic monitoring from the beginning makes people accountable.

“Not everything that can be counted counts. And not everything that counts can be counted.” - William Bruce Cameron

An organisation’s commitment to operational and programmatic excellence demands continuous improvement of its ability to collect, document, analyse and apply learning at the project, sector and agency levels, and to share these reflections with stakeholders, practitioners and policymakers.

This depends mainly on how well the organisation uses effective feedback mechanisms to ensure greater accountability to programme beneficiaries, systematically measuring results and incorporating and documenting experiential learning. The second aspect is how the organisation can optimise its use of technology to improve the accuracy, timeliness and accessibility of monitoring and evaluation data, so that decisions are guided by evidence and learning. Finally, sharing the learning with the broader development community and policymakers creates accountability for the organisation’s contribution to social impact.

A monitoring system can be described as a set of procedures through which planned information flows through an organisation to different management levels to support decision making. In other words, the monitoring framework is constructed to track progress and results, and to explain why they were or were not achieved.

However, the dilemma remains in capturing information about the subtle and progressive changes built into the intended results. There could be many reasons:
a) Specific planning indicators are not available.
b) The indicators are not realistic and practical enough.
c) No baseline information is available or has been collected.
d) The indicators are not revised as the context and trends change.
e) The planning indicators are not unpacked into milestones or targets that answer the monitoring questions.
f) Monitoring questions are not available.
g) Staff who can construct effective monitoring questions are not available.
h) It remains unclear who collects what information and for what purpose; in other words, how the information is transferred and used, and whether the staff involved in the process are capable.

An information system serves the management. Therefore, it is necessary to understand the responsibilities of the management level for which the system is intended. Who is to be served by the monitoring system, what are the management’s responsibilities, and to whom does management report? These are relevant questions. In practice, it is not easy to define management responsibilities if these are not formalised. However, it is often possible to define for each management level what kinds of decisions it has to take and what kinds of decisions it has to leave to lower or higher management levels.

A practical way of resolving this is, during project design, to appraise the planning indicators for the availability of baseline information and for realistic end-line targets, which can be compared and verified against the experience of other interventions. The changing context has to be continuously observed and fed into the monitoring system. The most challenging part is converting the indicators into specific and operational milestones, in other words unpacking them into monitoring indicators; for example, an indicator of improved household access to safe drinking water can be unpacked into annual or quarterly milestones, each with its own monitoring question. Simplifying the indicators in this way helps managers formulate specific monitoring questions, so that collecting information becomes straightforward and clear.

The crux of monitoring is not knowing the “what”; it is knowing the “why”. With a structured and systematic monitoring plan, it is not difficult to collect the right information for sound decision making and dissemination.

Interested in Monitoring & Evaluation training for your staff or a consultancy service for your organisation? Please contact us at mdfasia@mdf.nl.