3. Methodologies for monitoring and evaluation in emergencies

Checklist

  • Design an appropriate monitoring and evaluation system for the emergency response.
  • Ensure monitoring and evaluation systems consider all aspects of response management.
  • Establish appropriate objectives and indicators at the design phase.

Monitoring and evaluation systems for the emergency should be put in place at the very outset of the response. In some cases, this will simply require some adaptation of existing CO monitoring and evaluation systems to the emergency context. Emergency response programmes are often characterised by quickly changing conditions, large programmes with many donor sources (not necessarily managed on a clear project-by-project basis) and large-scale activities implemented in very short time frames. The monitoring and evaluation system put in place needs to deliver real-time information on what is happening in the emergency response under these conditions. Information management is a critical function in emergency responses and is complementary to M&E; good working relationships and a clear understanding of the different functions of M&E and information management are important. Section 3.1.1 outlines the basic steps in designing a monitoring and evaluation system.

3.1.1 Steps in designing a monitoring and evaluation system

Step 1: Programme design and establishment of indicators
  • Review and revise programme design and, if necessary, prepare a logical framework.
  • Ensure that objectives, purpose, outputs and risks/assumptions are stated clearly and are measurable.
  • Ensure that indicators are specified adequately with quantity, quality and time (see the illustrative example after this table).
  • In an emergency response programme, this may need to be done at the programme level rather than project level.
Step 2: Assess M&E capacity
  • Identify what human resources and funding are available for M&E activities.
  • Assess and specify capacity-building requirements for M&E staff.
Step 3: Plan for data collection and analysis
  • Determine what data is available and check information sources for reliability and accuracy.
  • Decide what additional information needs to be collected for baseline, monitoring and evaluation.
  • Set a time frame for data collection and processing, and agree on roles and responsibilities.
Step 4: Plan for reporting, feedback and use of results
  • Design a reporting system and specify formats.
  • Devise a system for feeding results back and incorporating them into management decision-making.

Source: Adapted from IFRC M&E handbook, 2002
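
The table's requirement that indicators be specified with quantity, quality and time can be made concrete by recording each indicator in a consistent structure. The sketch below is purely illustrative (written in Python; all field names and sample values are assumptions for this example, not a CARE or IFRC format) and shows how one shelter indicator might be specified so that its quantity, quality, time frame, data source and responsibilities are all explicit:

```python
# Illustrative sketch only: one way to record an indicator so that its
# quantity, quality and time (QQT) elements, data source and
# responsibilities are explicit. Field names and sample values are
# assumptions for this example, not a CARE or IFRC format.
from dataclasses import dataclass

@dataclass
class Indicator:
    objective: str    # programme objective the indicator links to
    description: str  # what is being measured
    quantity: str     # target quantity
    quality: str      # quality standard the assistance must meet
    time: str         # time frame for achievement
    data_source: str  # where the data comes from
    responsible: str  # who collects and reports the data
    frequency: str    # how often the data is collected

# Hypothetical example: a shelter distribution indicator specified with QQT
shelter = Indicator(
    objective="Affected households have adequate emergency shelter",
    description="Households receiving two plastic sheets and fixings",
    quantity="5,000 households",
    quality="Items meet Sphere minimum standards for covered living space",
    time="Within 8 weeks of the emergency onset",
    data_source="Distribution records cross-checked by post-distribution monitoring",
    responsible="Field M&E officer",
    frequency="Weekly during distributions",
)
print(f"{shelter.description}: {shelter.quantity}, {shelter.time}")
```

Recording indicators in a consistent structure like this makes it easy to check, at the design stage, that no indicator is missing its quantity, quality or time element, or a data source and responsible person.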

The main tool for measuring the performance of an emergency response programme against standards is the CARE Humanitarian Accountability Framework and its Benchmarks for Humanitarian Responses (often referred to as ‘Benchmarks’) (available at Chapter 32 Quality and accountability). These benchmarks are designed to represent a mixture of core standards viewed by CARE as a priority, as well as common ‘lessons unlearned’ or critical gaps that appear frequently during evaluations or after-action reviews (AARs) of CARE emergency operations. These benchmarks should be used to inform the development of monitoring and evaluation systems.

A common gap in monitoring and evaluation systems is that only output data is collected. For example, it is common to collect data on the numbers of plastic sheets and other relief items distributed without understanding how the assistance is actually being used or whether significant gaps remain. For monitoring to be useful in addressing problems in real time, monitoring systems need to take account of the factors in section 3.2.1.
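
To make this distinction concrete, the hypothetical sketch below (in Python; all names and figures are invented for illustration) pairs output counts with post-distribution monitoring data on actual use, the kind of outcome information that output data alone cannot provide:

```python
# Hypothetical illustration: output data alone versus outputs paired with
# outcome data from post-distribution monitoring (PDM). All names and
# numbers are invented for this example, not real monitoring data.

# Output data: counts of relief items distributed
outputs = {"plastic_sheets_distributed": 10_000, "households_reached": 5_000}

# Outcome data: a small PDM sample recording how the items are actually used
pdm_sample = [
    {"household": "HH-001", "sheets_received": 2, "sheets_used_for_shelter": 2},
    {"household": "HH-002", "sheets_received": 2, "sheets_used_for_shelter": 0},
    {"household": "HH-003", "sheets_received": 2, "sheets_used_for_shelter": 1},
]

received = sum(h["sheets_received"] for h in pdm_sample)
in_use = sum(h["sheets_used_for_shelter"] for h in pdm_sample)

# A usage rate well below 100% flags a gap (items sold, stored or unsuitable)
# that the distribution counts alone would never reveal.
print(f"Distributed: {outputs['plastic_sheets_distributed']:,} sheets")
print(f"Sampled usage rate: {in_use / received:.0%}")
```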

3.2.1 What to look for when monitoring an emergency response programme

  • Achievement: What has been achieved? How do we know that the project caused the results?
  • Assessing progress against standards: Are the objectives being met? Is the level of assistance per individual or per family meeting standards (e.g. Sphere), or do significant gaps remain? Is the project doing what the plans said it would do, or are there unintended impacts?
  • Monitoring of project management: Is the project well managed? What issues or bottlenecks should be addressed?
  • Identifying strengths and weaknesses: Where does the project need improvement and how can it be done? Are the original objectives still appropriate?
  • Checking effectiveness: What difference is the project making? Can the impact be improved?
  • Identifying any unintended impacts: Are there any unintended issues or negative consequences arising as a result of the response? How should these be addressed?
  • Cost effectiveness: Are the costs reasonable?
  • Sharing learning: Can we help to prevent similar mistakes or to encourage positive approaches?

Appropriate objectives and indicators should be established at the outset of the emergency response programme. Indicators should be established at the individual project level as well as at the overall emergency programme level. A programme strategy with overall programme objectives should be developed at the outset of the response (refer to Chapter 32 Quality and accountability); this is also a good place to establish overall programme-level monitoring indicators for the emergency response.

CARE’s humanitarian benchmarks and associated standards, such as the CI programming framework and the Sphere standards, should be used as a basis for indicators. Tool 9 of the Good Enough Guide (Annex 9.5 The Good Enough Guide) also provides guidance on developing indicators. Most critically, the standards for indicators shown in section 3.3.1 should be considered.

Simple monitoring frameworks should be put in place for emergency projects. See Annex 9.6 Sample monitoring and evaluation framework, Annex 9.7 Sample indicators, and Annex 9.8 Sample monitoring checklist.

3.3.1 Checklist for indicators

  • Relevant: The indicators should be linked directly to the programme objectives and to the appropriate levels in the hierarchy.
  • Technically feasible: The indicators should be capable of being assessed (or ‘measured’ if they are quantitative).
  • Reliable: The indicators should be verifiable and (relatively) objective, meaning that conclusions based on them should be the same if they are assessed by different people at different times and under different circumstances.
  • Usable: People in the emergency response programme should be able to understand and use the information provided by the indicators to make decisions or to improve their work and the performance of the response.
  • Easily communicated: Using indicators based on common standards, such as Sphere and/or those used by the host government, makes monitoring information easier for peer agencies to understand, which is particularly useful for coordinating agencies.
  • Participatory: The steps for working with the indicator should be capable of being carried out with the target community and other stakeholders in a participatory manner during data collection, analysis and use (refer also to Chapter 30 Participation).

Other criteria that can also be helpful in selecting indicators include:

  • Comprehensible: The indicators should be worded simply and clearly so that people involved in the project will be able to understand them.
  • Valid: The indicators should actually measure what they are supposed to measure, e.g. measuring effects due to project interventions rather than outside influences.
  • Sensitive: They should be capable of demonstrating changes in the situation being observed, e.g. measuring the gross national product of Uganda does not tell us much about the individual households in one district.
  • Cost-effective: The results should be worth the time and money it costs to collect, analyse and apply them.
  • Timely: It should be possible to collect and analyse the data reasonably quickly, i.e. in time to be useful for any decisions that have to be made.
  • Ethical: The collection and use of the indicators should be acceptable to the communities (target populations) providing the information.

When deciding which method to use and which indicators are appropriate, it is important to double-check that the information you are gathering will give a realistic appraisal of what is actually happening, and to consider how it will be used and presented.