Commentary

New year, new you? What the WHS can learn from failed new year’s resolutions

It’s January, a time for new year’s resolutions and self-improvement regimens. Yet many of these resolutions are abandoned by February. For those that manage to succeed, what’s their secret?

Apparently, one of the keys to a successful new year’s resolution (or ‘self-change’, as psychologists call it) is measurement: setting realistic, clear goals that can be tracked over time. The ability to monitor progress not only helps a self-changer understand whether the resolution is being achieved, but also acts as a source of motivation to continue.

Secretary-General Ban Ki-moon makes introductory remarks at the Opening Ceremony of the first-ever World Humanitarian Summit, being held in Istanbul, Turkey, from 23-24 May, 2016. UN Photo/Eskinder Debebe

1. Use well-defined indicators and take baseline measurements

The written commitments submitted to the Summit were made available online last September via the ‘PACT’ portal. On 6 January a self-reporting function was launched through the PACT that will allow organisations to provide details on the implementation of their commitments and the challenges they are facing. For this to be a meaningful exercise in tracking where progress is, or is not, happening, organisations will need to be clear about the indicators and methods they are using to monitor and assess their commitments. These reports will be publicly available, allowing organisations to do their own monitoring and analysis of how commitments are being advanced (or where there are gaps) across the different themes of the Agenda for Humanity. Hopefully, by reporting through the PACT, organisations will also be able to see how other actors are monitoring progress on similar commitment areas, and use these ideas to collaborate and improve their own monitoring practices.

2. Identify common indicators and share data

Humanitarians have often given themselves a pass on better data and evidence on the grounds that humanitarian contexts are more challenging. However, as seen in a previous ALNAP webinar series, as well as the excellent Evidence Aid conference and What Works Summit last year, many humanitarian actors are demonstrating, over and over again, that these challenges are no longer an excuse for failing to share data or generate better evidence.

Two of the other major 2030 Agenda frameworks, Sendai and the Sustainable Development Goals (SDGs), both delivered a set of clear and shareable indicators for monitoring progress, which have been assessed and ranked based on the quality of the existing data and methods to measure them. This means that the many actors who invested time and resources in negotiating Sendai and the SDGs have a basis for measuring and monitoring their progress, and a good idea of where the gaps in their data lie. The World Humanitarian Summit differs from these processes in a significant way, as it was not an inter-governmental process. But this does not mean that it cannot draw on their lessons and work toward a more formalised and focused approach to measuring progress.

There is hope that the sector is moving in this direction: one of the key outcomes from the WHS related to data and evidence is the establishment of the new Centre for Humanitarian Data in The Hague, which will build on the work of the Humanitarian Data Exchange to improve standardisation of data sets and the data literacy of humanitarian workers. Several of the Grand Bargain work streams are targeting ways to improve data sharing and harmonisation. At a broader level, there continues to be important work to map the evidence of intervention efficacy in the humanitarian system and to identify key gaps that require further research.

3. Take performance measurement seriously and devote the necessary attention and resources to it

The indicator frameworks developed for Sendai and the SDGs are far from perfect. But at least they provide a basic tool for understanding and assessing implementation and the tangible improvements these frameworks are bringing about in people’s lives. As with the frameworks themselves, the indicators for Sendai and the SDGs were developed and approved through negotiations with Member States. Yet they also had significant input and leadership from scientists, researchers and other experts: the Scientific and Technical Advisory Group (STAG) for UNISDR provided independent and scientifically informed advice to the Sendai framework negotiations, ensuring that the framework recognises the importance of evidence for effective disaster risk reduction. The SDG indicators were developed by UN Statistics, drawing on a wide range of inputs from academics and researchers.

In line with this, OCHA could consider establishing an advisory group similar to the STAG, made up of technical experts from humanitarian agencies as well as academics and independent researchers. The humanitarian system would benefit from a more formal hub for discussing and agreeing on shared indicators, whether these are general indicators of well-being or outcome indicators specific to different humanitarian sectors. The diverse collective of academics who submitted a statement to the World Humanitarian Summit could be a good starting point, as could the members of the Evidence Lounge coalition that organised a standing-room-only side event on evidence at the WHS.

We’ve heard a lot of talk about the need to change how humanitarian action is conceptualised and approached, and how it is funded. These changes are important. But all the words and money in the world will simply go to waste if we don’t know what progress looks like or how to tell whether we are getting there. Like any good new year’s resolution, commitments to change require the right intention, the right resourcing, and the right tools to check whether you are achieving any change at all.
