International Development Research Centre (IDRC)
9 May 2012, 16:00
I have also been thinking about your dilemma for a while. I work with the International Development Research Centre (a funder of international development research based in Canada with regional offices in the developing world). While we are not a funder of humanitarian initiatives, we face the same evaluation dilemma, having to address all of the requests you mentioned while also answering to our own Board and senior management.
My sense is that this tension will always have to be managed and that no solution will be acceptable to all; in other words, there will probably always be people within your senior management and governing board who think you are doing too much of the second imperative.
The way we have traditionally managed it (I say "traditionally managed" as we are now undergoing some rather dramatic changes as a result of the funding environment in Canada and subsequent changes in evaluation at IDRC) has been to develop our evaluation strategy as one that engages with (and serves) three constituencies: IDRC, our grantees in the global South (partners), and evaluation practitioners and theorists. The overarching goal for us as a Unit is to promote and support an IDRC-wide, decentralized evaluation system that guarantees high-quality evaluation in our organization. The "theory" is that this can be achieved by maintaining a balance and by engaging with and serving these three constituencies, which feed into and inform each other.
Since we see the evaluation of research for development as a hard-to-measure area (the attribution problem is acute; research is nested in a wider system; the multiple pathways to outcomes like policy influence, knowledge uptake or research use are hard to trace; etc.), the "argument" is that in order to get a handle on results, the Centre needs to be constantly innovating in evaluation tools and methods, and we get that inspiration from IDRC programs. Some of the support for this line of thinking would come from senior management and Board members who understand evaluation as a "field" of theory and practice, and not simply as a management tool (bringing to mind the tension/confusion/debate about evaluation vs. performance management).
Humanitarian evaluation also has its own particular set of challenges, so an argument might perhaps be made for developing it as a sub-field, with appropriate tools and methods, in interaction with other humanitarian colleagues and evaluation thinkers. Advancing practice for improvement and learning is arguably not something that UNHCR can do alone. (I say all of this without deep knowledge of your organizational culture or the imperatives of your governing body, which may be more about accountability for funds spent and less about organizational learning.)
I guess the question at the bottom of all of this is to what extent you can show that your involvement in the second set of activities adds value to HCR as an organization, and to what extent your shop can make the argument and find supporting evidence to prove it: Does it help you get better at measuring HCR's work? Does it help the organization learn and improve? Does it enhance the visibility and reputation of HCR? Does it help HCR be an accountable, learning organization?
The other thing we have done is to set out evaluation principles (something which might resonate well with your governing body) and then tried to build the evaluation system and our rationale for activities around those. Also, when we have to report annually on our activities as a Unit, we try to ensure that the voices of evaluation champions (program staff and senior management across the organization) are clearly audible, so that it does not look like we are out on our own doing things of questionable use to the organization. We try to be proactive about continually identifying and working with these champions (involving them in evaluations, getting them training opportunities, mentoring them to be utilization-focused in their approach to evaluation, etc.) so that we have a corps of supporters who can speak convincingly and with legitimacy to the importance of our involvement in these "other activities."
At the end of the day, one has to be a bit self-interested when making choices. When I'm approached by people to be involved in these other activities, I always try to ask myself how it will make evaluation at IDRC better before I jump, because sometimes I'm tempted simply because something sounds really interesting (that's the theory, at least: sometimes I'm my own worst enemy!)
Hope that helps!
Evaluation Unit, IDRC