3.1 Putting a monitoring and evaluation system in place

A monitoring and evaluation system for the emergency should be put in place at the very outset of the response. In some cases, this will simply require adapting existing CO monitoring and evaluation systems to the emergency context. Emergency response programmes are often characterised by quickly changing conditions, multiple donor sources (not necessarily managed on a clear project-by-project basis) and large-scale activities implemented in very short time frames. The monitoring and evaluation system must be able to deliver real-time information on the emergency response under these conditions. Information management is a critical function in emergency responses and is complementary to M&E; good working relationships and a clear understanding of the distinct functions of M&E and information management are important. Section 3.1.1 outlines the basic steps in designing a monitoring and evaluation system.

Step 1: Programme design and establishment of indicators
  • Review and revise the programme design and, if necessary, prepare a logical framework.
  • Ensure that objectives, purpose, outputs and risks/assumptions are stated clearly and are measurable.
  • Ensure that indicators are adequately specified in terms of quantity, quality and time.
  • In an emergency response programme, this may need to be done at the programme level rather than the project level.
Step 2: Assess M&E capacity
  • Identify what human resources and funding are available for M&E activities.
  • Assess and specify capacity-building requirements for M&E staff.
Step 3: Plan for data collection and analysis
  • Determine what data is available and check information sources for reliability and accuracy.
  • Decide what additional information needs to be collected for baseline, monitoring and evaluation purposes.
  • Set a time frame for data collection and processing, and agree on roles and responsibilities.
Step 4: Plan for reporting, feedback and use of results
  • Design a reporting system and specify formats.
  • Devise a system for providing feedback and incorporating results into management decision-making.

Source: Adapted from IFRC M&E handbook, 2002

The main tool for measuring the performance of an emergency response programme against standards is the CARE Humanitarian Accountability Framework and Benchmarks for Humanitarian Responses (often referred to as the 'Benchmarks'), available at Chapter 32 Quality and accountability. These benchmarks represent a mixture of core standards that CARE views as priorities and common 'lessons unlearned': critical gaps that appear frequently in evaluations or after-action reviews (AARs) of CARE emergency operations. The benchmarks should be used to inform the development of monitoring and evaluation systems.