3.1.1 Steps in designing a monitoring and evaluation system

Step 1. Programme design and establishment of indicators
  • Review and revise programme design and, if necessary, prepare a logical framework.
  • Ensure that objectives, purpose, outputs and risks/assumptions are stated clearly and are measurable.
  • Ensure that indicators are adequately specified in terms of quantity, quality and time (i.e. how much, to what standard, and by when).
  • In an emergency response, this may need to be done at the programme level rather than the project level.

Step 2. Assess M&E capacity
  • Identify what human resources and funding are available for M&E activities.
  • Assess and specify capacity-building requirements for M&E staff.

Step 3. Plan for data collection and analysis
  • Determine what data is available and check information sources for reliability and accuracy.
  • Decide what additional information needs to be collected for baseline, monitoring and evaluation.
  • Set a time frame for data collection and processing, and agree on roles and responsibilities.

Step 4. Plan for reporting, feedback and use of results
  • Design a reporting system and specify formats.
  • Devise a system for providing feedback and incorporating results into management decision-making.

Source: Adapted from IFRC M&E handbook, 2002

The main tool for measuring the performance of an emergency response programme against standards is the CARE Humanitarian Accountability Framework and its Benchmarks for Humanitarian Responses (often referred to simply as the ‘Benchmarks’); see Chapter 32 Quality and accountability. The Benchmarks represent a mixture of core standards that CARE views as priorities, together with common ‘lessons unlearned’: critical gaps that appear repeatedly in evaluations and after-action reviews (AARs) of CARE emergency operations. They should be used to inform the development of monitoring and evaluation systems.