3.1 Putting a monitoring and evaluation system in place
Monitoring and evaluation systems for the emergency should be put in place at the very outset of the response. In some cases, this will simply require some adaptation of existing CO monitoring and evaluation systems to the emergency context. Emergency response programmes are often characterised by quickly changing conditions, large programmes with many donor sources (not necessarily managed on a clear project-by-project basis) and large-scale activities implemented in very short time frames. The monitoring and evaluation system put in place needs to be able to deliver real-time information on what is happening in the emergency response under these conditions.

Information management is a critical function in emergency responses, and is complementary to M&E. Good relationships and a clear understanding of the different functions of M&E and information management are important. Section 3.1.1 outlines the basic steps in designing a monitoring and evaluation system:
1. Programme design and establishment of indicators
2. Assess M&E capacity
3. Plan for data collection and analysis
4. Plan for reporting, feedback and use of results
Source: Adapted from the IFRC M&E handbook, 2002.
The main tool for measuring the performance of an emergency response programme against standards is the CARE Humanitarian Accountability Framework and Benchmarks for Humanitarian Responses (often referred to as 'Benchmarks') (available at Chapter 32 Quality and accountability). These benchmarks represent a mixture of core standards viewed by CARE as a priority, as well as common 'lessons unlearned' or critical gaps that appear frequently during evaluations or after-action reviews (AARs) of CARE emergency operations. These benchmarks should be used to inform the development of monitoring and evaluation systems.