As the end of the Dialogue and Dissent Partnership era draws near, evaluations once again rear their heads. After three to four years of hard implementation work, monitored by increasingly advanced systems, a final external evaluation remains part of the deal. Of course, such evaluations have the benefit of providing ‘independent’ evidence of what was accomplished. Ideally, they shed new light on what worked, what did not and why, and even suggest ideas about how to do better in future. In practice, however, many evaluations largely confirm what is already known, as they are informed by stakeholders who have been closely involved for years.

So with another round of evaluations comes the question: do the uncertain benefits outweigh the time, money, energy and emotions spent on such an evaluation? If you were not contractually obliged, would you voluntarily expose yourself again to the scrutiny and judgement of an outsider? To be honest, I doubt it.

Nevertheless, evaluations remain inevitable. Not only because public funds need to be accounted for, but also because you owe it to yourself, your partners and your beneficiaries to make a serious, visible effort to reflect on and learn from the past in order to do even better in the future. The international development sector simply cannot afford to continue its work without openly reflecting on and learning from its experiences. No other sector has to demonstrate results whilst having so little control and being subject to so many external factors.

So how to turn this necessity into a virtue? What does it take to make you feel that the benefits of an evaluation outweigh, or at least break even with, its costs? At MDF this led to the concept of ‘virtuous evaluations’, which can help shape beneficial evaluation processes and teams. The concept comprises a number of distinct, experience-based process- and content-related features.

 

Process-related features:

1. speak the same language

In any organisation, newly recruited staff are subjected to a deliberate induction process. During this ‘socialisation’ process, the recruit is familiarised with the habits and terminology of the organisation or programme. As most evaluations involve only a short-term, temporary engagement, this socialisation process is often ignored or simply assumed to take place. Yet external evaluators are expected to provide an accurate and nuanced description of the performance of your programme. Understanding and using terminology that is fitting and recognisable proves to be of key importance. There have been (too) many occasions where a different choice of words to describe programme features and performance led to misunderstanding and resistance, while in essence there was agreement. A typical example is a programme that was complimented for its improvisation and adaptation, while programme management contested the limited appreciation of its innovation. Making sure your evaluator speaks your language goes a long way towards making evaluation results recognisable, and thereby acceptable and useful.

 

2. cherish process above product

Often the most interesting insights emerge during the evaluation process itself, through regular exchange and discussion of observations. The results of this process normally find their way into the deliverables of an evaluation, often in the shape of a PowerPoint presentation and/or a report towards the end of the process. These products are by no means as rich and nuanced as the dialogue during the process.

This dialogue, however, is often limited to the evaluation team and one or two M&E officers, which makes them the biggest beneficiaries of the evaluation process. The majority of programme management and staff usually only benefit from the evaluation products, if they have the time and energy to go through them. Insisting on joint intake and sense-making sessions, where you design the evaluation and reflect together on findings and ways forward, can make a big difference in realising an evaluation with a stronger learning effect. These moments can take place at the beginning and the end of the data collection process, combined with regular, short, intermediate moments of exchange.

 
3. more user-friendly reporting

Linked to this is acknowledging the limitations of traditional evaluation reports. These 50- or 60-page products may serve the accountability purpose of an evaluation, but do a lot less for its learning effect. At times they even undermine it, as discussions become about the choice of words rather than the essence of the findings. More regular moments of exchange, and complementing traditional reports with more attractive, reader-friendly products such as videos, podcasts, infographics or web-based interactive reports, help foster the learning orientation of the evaluation.


 

4. all different but the same

We are all unique, and so are our programmes and projects. This needs to be recognised and appreciated in each evaluation, but it should not prevent comparison and learning from others. There are ample challenges that many international lobby & advocacy programmes share: complicated dynamics among partners or between managers and implementers, capturing and convincingly demonstrating meaningful but often intangible results, making credible contribution claims, reconciling progressing insights and achievements with initial goals and targets, and so on. In other words, a lot can be learned from looking at other partnerships, but few of us have the opportunity to look into the kitchen of a wider range of partnerships. This is where an external evaluator can add value. Finding an evaluator who has seen many different partnerships, alliances or coalitions, and who is able to de-contextualise findings whilst respecting confidentiality, can help in learning from the experience of others.

 

5. include an appreciative approach

Imagine the implementation team that has worked for four years to overcome all the imperfections that make international lobby and advocacy work so difficult. Then the evaluator is called in to ‘judge’ programme performance and suggest improvements. Rationally speaking, we all understand this, but it remains sensitive. Every judgement deserves to include a deliberate appreciation of what was accomplished and what can be learned from success, before moving on to what can be done better. Emphasising the need for an appreciative evaluation approach is not hiding from what went wrong (the mirror image of success is, after all, failure) but simply prioritising learning effectiveness. Research and experience both show that people are more likely to accept and act on lessons when they feel appreciated rather than criticised. So why not insist that your evaluators include this as part of their methodological approach?

 

Next week we will introduce some of the content-related features of virtuous evaluations. We will discuss how to measure the contribution of lobby & advocacy programmes and how to recognise early signs of success. Would you like to know more about our evaluations? Get in touch with Mike Zuijderduijn, MDF Global director and senior evaluator, or read more about our methodologies on our Evaluation page.