Commentary

When do humanitarian action evaluations trigger change?

In the 21st century, demand for evaluations of humanitarian action has soared. This increased demand is driven by contextual and external factors, such as the severity of humanitarian crises or the sheer volume of emergency funding. These evaluations are launched mainly because ‘we have to’.

This contrasts with evaluations which are strategically planned and designed for well-defined learning and accountability purposes. These are conducted because ‘we want to’.

With the increased number and scale of conflicts and disasters across the world, humanitarian evaluations have often become a ritualistic compliance tool. Their added value has been questioned. When not used – and we must all have seen a few of those in our careers already – they are failures.

During the global COVID-19 pandemic, evaluation functions across the humanitarian system were questioned even more fundamentally. They had to either ‘resist, adapt or die’.

While traditional evaluation approaches suffered a setback, real-time evaluation and real-time learning experienced a renaissance, as agencies and national governments started demanding insights to navigate the ‘operational fog’. Amidst uncertainty, evaluation helped them adapt to situations as they unfolded.

What can we learn from this? It is only when evaluations are used that their value is recognised and they spark change. Humanitarian evaluations need to be clearer in their purpose – to provide real-time feedback for adaptation, provide accountability to affected populations, peers and donors, or draw lessons for future crises. Evaluations should be carefully timed and designed to meet such expectations, because not all purposes can be met at once.

Fundamentally, evaluators need to understand what is useful to end users and, more specifically, what, when and how they can influence change throughout the process. Only when evaluations are clear in intent and well planned, their focus prioritised, and their methods tailored and understood by users can they make timely contributions to organisational learning, informed decision-making and accountability for results, as well as generate broader knowledge for the humanitarian system.

While evaluations of humanitarian action have more resources than ever, lessons from responses to protracted conflicts, cyclical disasters and escalating crises are not always being systematically used. Why?

Despite the efforts of ALNAP and the Inter-Agency Humanitarian Evaluation, the humanitarian system remains fragmented and suffers from chronic amnesia. Most humanitarian actors still seem to prioritise their own organisational learning and accountability needs. We need more systemic thinking and working in partnership.

Furthermore, humanitarian actors experience high turnover and work under constant pressure, so they have little time to dedicate to learning. We need ways to provide more systematic learning through ‘pearls of wisdom’ before humanitarians dive back into crisis response mode.

Major crises provide clear opportunities to generate learning. The pandemic even helped reposition and retool the evaluation function.

Today, there is more demand for operationally focused adaptive reviews, real-time evaluations and developmental evaluations. To trigger learning, evaluations must engage users iteratively and place affected populations at the centre.

Learning spaces established at the field level – in regional support hubs as well as at the central level – need to be regularly cultivated by engaging senior leadership and other key stakeholders who drive transformation and become evaluation champions.

Finally, evaluators must have a light footprint, be adaptive and be good facilitators. To bring the learning to base, they should swiftly identify allies, entry points and ways to overcome possible stumbling blocks. Otherwise, even with the best of intentions, they will just end up feeding the beast.