New year, new you? What the WHS can learn from failed new year’s resolutions

By Alice Obrecht on 10 January 2017.

It’s January, a time for new year’s resolutions and self-improvement regimens. Yet many of these resolutions are abandoned by February. For those who manage to succeed, what’s their secret? Apparently, one of the keys to a successful new year’s resolution (or ‘self-change’, as psychologists call it) is measurement: setting realistic and clear goals that can be tracked over time. The ability to monitor progress not only helps a self-changer understand whether the resolution is being achieved, but also acts as a source of motivation to continue.

Eight months ago, an unprecedented gathering of humanitarian actors laid out their own version of ‘self-change’ in the form of commitments at the World Humanitarian Summit in Istanbul. It will take some time before we know whether these commitments are being successfully achieved. But if we think that measurement is important for a successful commitment, then there is much work to be done to ensure that the Agenda for Humanity is a success. Currently, it’s unclear how commitments made under the Agenda for Humanity will be implemented, or how their impact on humanitarian action will be assessed.

How can we ensure that the WHS commitments do not end up like so many new year’s resolutions? Good data and evidence will play a fundamental role in delivering on the Agenda for Humanity and there are at least three areas that agencies might consider as they implement these reforms:

1) Use well-defined indicators and take baseline measurements

The written commitments submitted to the Summit were made available online last September via the ‘PACT’ portal. On 6 January a self-reporting function was launched through the PACT that will allow organisations to provide details on the implementation of their commitments and the challenges they are facing. For this to offer a meaningful picture of where progress is or is not happening, organisations will need to be clear on the indicators and methods they are using to monitor and assess their commitments. These reports will be publicly available, allowing organisations to do their own monitoring and analysis of how commitments are being advanced (or where there are gaps) according to different themes in the Agenda for Humanity. Hopefully, by reporting through the PACT, organisations will also be able to see how other actors are monitoring progress on similar commitment areas, and use these ideas to collaborate and improve their own monitoring practices.

2) Identify common indicators and share data

Humanitarians have often given themselves a pass on having better data and evidence on the basis that humanitarian contexts are more challenging. However, as seen in ALNAP’s new webinar series, as well as the excellent Evidence Aid conference and What Works Summit last year, many humanitarian actors are demonstrating time and again that these challenges are no longer an excuse for failing to share data or generate better evidence.

Looking at two of the other major 2030 Agenda frameworks, Sendai and the Sustainable Development Goals (SDGs), both processes delivered a set of clear and shareable indicators to monitor progress, which have been assessed and ranked based on the quality of the existing data and methods available to measure them. This means that the many actors who invested time and resources in negotiating Sendai and the SDGs have a basis for measuring and monitoring their progress, and a good idea of where the gaps in their data lie. The World Humanitarian Summit differs from these processes in a significant way, as it was not an inter-governmental process. But this does not mean that it cannot draw on the lessons learned from them and work toward a more formalised and focused approach to measuring progress.

There is hope that the sector is moving in this direction: one of the key outcomes from the WHS related to data and evidence is the establishment of the new Centre for Humanitarian Data in The Hague, which will build on the work of the Humanitarian Data Exchange to improve standardisation of data sets and the data literacy of humanitarian workers. Several of the Grand Bargain work streams are targeting ways to improve data sharing and harmonisation. At a broader level, there continues to be important work to map the evidence of intervention efficacy in the humanitarian system and to identify key gaps that require further research.

3) Take performance measurement seriously and devote the necessary attention and resources to this

The indicator frameworks developed for Sendai and the SDGs are far from perfect. But at least they provide a basic tool for understanding and assessing implementation and the tangible improvements that such frameworks are bringing about in people’s lives. As with the frameworks themselves, the indicators for Sendai and the SDGs were developed and approved through negotiations with Member States. Yet they also had significant input and leadership from scientists, researchers and other experts: the Scientific and Technical Advisory Group (STAG) for UNISDR has provided independent and scientifically informed advice to the Sendai framework negotiations, ensuring that the framework recognises the importance of evidence for effective disaster risk reduction. The SDG indicators were developed by UN Statistics, which drew on a wide range of inputs from academics and researchers.

In line with this, OCHA could consider establishing an advisory group similar to the STAG, made up of technical experts from humanitarian agencies as well as academics and independent researchers. The humanitarian system would benefit from a more formal hub for discussing and agreeing on shared indicators, whether these are general indicators of well-being or outcome indicators specific to different humanitarian sectors. The diverse collective of academics who submitted a statement to the World Humanitarian Summit could be a good starting point for this, or the members of the Evidence Lounge coalition that organised a standing-room-only side event on evidence at the WHS.

We’ve heard a lot of talk about the need to change how humanitarian action is conceptualised and approached, and how it is funded. These changes are important. But all the words and money in the world will simply go to waste if we don’t know what progress looks like or how to tell if we are getting there. Like any good new year’s resolution, commitments to change require the right intention, the right resourcing, and the right tool to check if you are achieving any change at all.

John Borton (John Borton Consulting) 17 January 2017, 10:19

Many thanks for sharing your thoughts Alice.

In September 2015 I presented a paper “What would it take to establish a system for monitoring the effectiveness of the humanitarian sector?: Two suggestions” at the 'Quest for Humanitarian Effectiveness' conference in Manchester hosted by HCRI and Save the Children.

The first suggestion (first mooted by the Joint Evaluation of Emergency Assistance to Rwanda 21 years ago) is that an independent body to champion the voice and rights of crisis-affected populations should be a key component of efforts to assess and monitor the performance of the humanitarian sector.

The second suggestion is that the widespread use of the Sphere Standards offers the possibility of measuring performance against the technical standards to show those programmes, sectors and geographical areas where the minimum standards are being ‘met or exceeded’ and those where they are ‘not being met’. This would enable attention, effort and resources to be focused on addressing the reasons why the standards were not being met in the affected areas and programmes.

I would be happy to share the paper with those who are interested. If so please email me at

I hope these thoughts are helpful.

All the best

Carlisle Levine (self-employed) 17 January 2017, 15:45

This is a great conversation! Thank you, Alice.

I realize that developing common indicators is actually a small part of this: a potential tool that, used well, may address a larger challenge. That said, two colleagues and I were recently commissioned by a number of US-based international NGOs and InterAction to write a white paper on agency-level measurement systems: what they are, why they are developed, what they are intended to accomplish, what is required to develop, maintain and use them, what challenges they face, and what recommendations we identified for developing systems that will be useful and sustained. Since common indicators are part of most of the systems we studied (17 in-depth, mainly US-based INGO systems, although we also included CARE International), this paper could have relevance to this discussion. You can find it on InterAction's website here:

I hope this is helpful!

Best wishes,
