Following the London, Geneva and Washington ALNAP Community of Practice workshops held within the context of the Strengthening Humanitarian Evaluation Capacities initiative, we at DARA hosted one in Madrid on January 31st. A small but representative group of the Spanish humanitarian community attended the workshop, including donors, NGOs, institutes, independent consultants, managers, and board members.
Participants openly shared their evaluation experience and challenges among a group of peers. When we broke into smaller groups, colleagues exchanged practical advice on how to tackle challenges related to evaluation use. Participants began to reveal how an evaluation culture was being fostered within their own organisations, but also the stumbling blocks they were encountering.
During the day, three main issues stood out clearly:
1) Despite drastic cuts to humanitarian aid budgets, there is a growing interest in learning, accountability and performance, as well as an increasing demand for evaluation rigour. Now more than ever there is a call for value for money. In every crisis there is an opportunity: if the current financial crisis is to serve one purpose, it may be to encourage humanitarian agencies to reflect on how they could do more, and better, with less.
2) Knowledge around learning, accountability and performance among humanitarian practitioners remains largely tacit. There is often not enough exchange among peers, and knowledge is quite uneven within organisations due to high levels of turnover at both headquarters and field levels. These are perennial problems in the humanitarian community, and they need to be overcome if humanitarian response is to become more effective.
3) Learning processes do not necessarily lead to organisational change: leadership does not always engage, incentives are not in place, knowledge management policies are lacking, or the evaluator is perceived as a threat rather than as a stepping stone towards improved performance. The link between learning and organisational development must be addressed as a matter of priority.
To offer some "food for thought", I would like to put forward three questions for further debate within our community of practice:
- To what extent are our evaluations genuinely utilisation- and user-focused, rather than reduced to a box-ticking exercise?
- Are we properly engaging with leadership and fostering an evaluation culture? Is leadership sufficiently convinced of, and committed to, organisational learning and development?
- To what extent is the quality of humanitarian evaluations improving? Are we successfully moving from the "methodological anarchy" that predominated some years ago towards a more disciplined, measurable and evidence-based approach?
With this in mind, I would be interested to hear from you on how we may, or may not, be effectively contributing to improving evaluation practice.