Eiffel Tower

Evaluation is an essential step in any project cycle. Just imagine the number of evaluation reports that must have been written in the past decade…they probably make a pile as high as the Eiffel Tower.

Quite often, though, evaluation reports end up on dusty shelves or in dark drawers. Unused, unread, and sometimes even unfinished. That is my experience. There are many reasons for this: long and dreary reports, unwanted criticism, busy schedules, incompetent evaluators, defensive project managers, poor timing, imposing donors…

Dichotomy

Whatever the reason, it is always a challenge to do an evaluation that strikes a good balance between ‘accountability’ (assessing project effects, impact or efficiency) and ‘learning’ (an evaluation that enables all parties to learn and do better).

Participation

One way to tackle this dichotomy is to do a participatory evaluation. Involving stakeholders at one or more stages of the evaluation process is a way to ensure that 1) they learn from the process and 2) they own the evaluation results. This makes it more likely that conclusions are accepted and recommendations are followed.

But participation is a big word. What does it mean, and how do we do it? First, we need to ask ourselves two questions:

  1. Who should participate?
  2. How and when should they participate?

On the ‘who’ question, I can think of many different actor groups that can be involved: project staff, donors, partners, field staff, beneficiaries, (local) authorities, etc.

To answer the ‘how and when’ questions, we need to distinguish three phases in the evaluation process: design, data collection, and analysis and reporting.

In each phase, stakeholders can participate to varying degrees. To give a few examples:

  • Give beneficiaries input into the design of evaluation tools. Let them help define what ‘success’ of the project means.
  • Design evaluation methods and data collection tools through a workshop with project staff and partners, so that hands-on learning can take place.
  • During data collection, use peer-review techniques. Why not let the staff of partner project X assist with data collection for sister project Y, and vice versa? Not only do you gain data collectors and interviewers, it also offers a great opportunity for staff to learn from other projects, comparable to a study visit.

Then there is data analysis and reflection. That, I would say, is the greatest win-win situation for evaluators. Project staff or partners can be of great help in making sense of raw field data: they have local knowledge and contextual experience. Involving management in drawing conclusions helps ensure that these conclusions are owned by them; they can even write their own recommendations.

Bias

How about bias? Shouldn’t the evaluator be the independent third party with an unbiased outlook? Certainly. The evaluator should therefore remain the independent overseer of the process: to ensure an appropriate choice of evaluation methods and tools, and to supervise data collection. It is also the evaluator’s role to write the final report or presentation. In this final step of the evaluation process, participation is not beneficial.

And even if an evaluation suffers a bit on accountability but enables true learning, is that such a bad thing?

About the author 

Ms. Van Lieshout has 20 years of experience in the overseas development sector, working with many UN and other international organizations in Africa, Asia and the Caribbean (e.g. UNDP, UNICEF, ILO, USAID, UNESCO, EU, GTZ, MDF, CARE, PSI, etc.). She is a versatile trainer who has trained over 1,000 people from the government, non-profit and private sectors in more than 20 countries worldwide. She has developed all-round training expertise in soft skills (leadership, presentation, negotiation, etc.), facilitation, training of trainers (TOT), coaching, M&E, evaluation, and research.

Susanne is currently Branch Director and Senior Consultant/Trainer at the MDF Asia Myanmar office.