1. Role and responsibilities of monitoring and evaluation in humanitarian programming

Monitoring and evaluation help us understand how the assistance and support that CARE provides to disaster-affected communities affects them. Monitoring and evaluation are a critical part of our accountability: they allow us to compare the results of our humanitarian actions with our strategic intent (e.g. the CARE Program Strategy, the Humanitarian & Emergency Strategy and the Core Humanitarian Standard), with technical standards (such as Sphere and its companion standards), and with the expected outcomes and benchmarks for the response (from the response strategy, proposal frameworks, etc.). Monitoring and evaluation serve a range of purposes, but the critical one is this: better outcomes for disaster-affected populations. While monitoring and evaluation are used together and therefore overlap, they are separate processes:

Monitoring is a way to find out what is actually happening, by regularly assessing progress through the life of an emergency response programme. Frequent or continuous monitoring allows us to discover any problems or opportunities early so that they can be addressed. Monitoring is mostly internal and findings should be shared quickly in a user-friendly format to be useful. This means that management processes should be explicitly designed to consider and respond to monitoring data.

Evaluation is an episodic process of identifying and reflecting on the effects of what has been done, and judging the value of the response in terms of its contribution to the overall performance of the humanitarian sector. It is normally led by an external consultant to promote objectivity and accountability, and is often conducted jointly with peers or by donors. A ‘real-time evaluation’ is a particular category of evaluation conducted (internally) while the response is still ongoing, to support field management decision-making in ‘real time’ with a more in-depth analysis of the appropriateness of approaches and interventions.

Soliciting feedback from crisis-affected people: While management and donors tend to be the main users of the information generated by monitoring and evaluation, it is important that people affected by humanitarian crises and by our actions are involved in the collection and analysis of information, as well as in the decisions informed by that analysis. People have a right to have their voices heard in judging our response to their crisis (see also CHS Commitments 4 and 5). Asking for the views of the affected population can help us understand the difference we are making during the course of the response, and can improve humanitarian policy and practice at a more strategic level.

Key M&E responsibilities by position:
Monitoring and Evaluation Coordinator
  • Ensure an appropriate monitoring and evaluation (M&E) system is in place and is functioning satisfactorily. Periodically review and revise the system so that it is adapted appropriately to changing operating contexts.
  • Ensure relevant and timely M&E information is provided in user-friendly formats to key stakeholders, including beneficiary communities, CARE senior management and donors.
  • Provide training and mentoring for CO staff.
  • Act as a focal point to organise and manage monitoring reviews, evaluations and/or After Action Reviews (AARs).
RRT and RED roster M&E Specialists
  • Provide temporary support to the CO to establish baselines and set up M&E systems suited to the operating context.
  • Provide training and mentoring for CO staff.
  • May participate in a monitoring review, evaluation and/or facilitate an AAR.
Emergency Team Leader/Senior Management Team (SMT)
  • Ensure application of CARE’s Humanitarian Accountability Framework.
  • Ensure adequate resources are allocated in project budgets to cover M&E-related activities, including monitoring reviews, external evaluations and AARs.
Lead Member Quality and Accountability Focal Point
  • Monitor implementation of M&E systems for the emergency response and provide technical advice where necessary.
Regional Emergency Coordinator
  • Promote and guide quality in the emergency programme, and ensure critical gaps are identified and addressed.
Crisis Coordination Group
  • Determine whether the incident is a Type 2, 3 or 4 emergency; if so, the CO is required to fund and organise an AAR.
  • Agree on the need for an external evaluation and/or CARE monitoring visit(s).
CI Coordinator for Quality and Accountability
  • Provide technical support to COs to help them comply with CARE’s humanitarian benchmarks.
  • Support ‘learning in’ (where lessons learned are applied in CARE’s emergency responses) and ‘learning out’ (where lessons learned from new emergencies are captured and shared beyond the CO).

Many team members will have responsibility for monitoring and evaluation activities in an emergency response, in particular project managers and field officers who collect data on response activities. It is important that one member of the emergency team is given overall responsibility for coordinating monitoring and evaluation activities. This is usually the head of the CO’s monitoring and evaluation unit, if one exists, and it is critical that they are closely involved with the emergency response team from the very outset of the response. Where this capacity does not exist, it is important that the CO appoint or recruit a Monitoring and Evaluation Coordinator for the emergency response operation as quickly as possible.

The key responsibilities of the Monitoring and Evaluation Coordinator in relation to the emergency response programme are to:

  • help establish appropriate indicators at the outset of the emergency response (drawing on benchmarks, Sphere and other available tools)
  • establish and coordinate monitoring systems, including data collection, analysis and review
  • work closely with the CO Information Manager to prepare specific data collection methods and tools
  • coordinate monitoring activities and the inputs required of other team members
  • anticipate, plan and support reporting requirements
  • ensure information gathered through monitoring activities is shared quickly, and in an appropriate format, with senior managers so that any problems arising can be addressed
  • organise evaluation activities in line with CARE’s learning policy (refer to Annex 9.1 Policy on Learning for Humanitarian Action).

Monitoring and evaluation advisors are available through the RED roster for deployment to emergencies, to assist with setting up monitoring and evaluation systems and with training staff in their use.

Annex 9.2        Job Description for Monitoring and Evaluation Coordinator

Term | What is measured | Definition
Baseline | Indicators at the start of the project | Information about the situation a project is trying to affect, showing what it is like before the intervention(s)
Benchmark | Standard of achievement | A standard of achievement that a project has reached, which it can compare with other achievements
Bias |  | A tendency to make errors in one direction. For example, is there potential for errors because not all key stakeholder groups have been consulted? Are there incentives that reward incorrect information? Does reporting a death in the family mean that food ration levels might be reduced?
Outcomes | Effectiveness | Use of outputs and sustained benefits, e.g. how many litres of clean water are available in each household, or how many participants show evidence of using their training
Outputs | Effort | Implementation of activities, e.g. the number of water containers distributed, or the number of participants trained
Impact | Change (can be positive or negative) | Difference from the original problem situation. At its simplest, impact measurement means asking the people affected, ‘What difference are we making?’ Examples of impact may be a significant reduction in the incidence of water-borne disease, or evidence that what trainees have learned is having a tangible effect on project/programme delivery
Milestone | Performance at a critical point | A well-defined and significant step towards achieving a target, output, outcome or impact, which allows people to track progress
Qualitative information | Performance indicators | Information that describes characteristics in terms of quality (as opposed to quantity), often including people’s opinions, views and other subjective assessments. It is gathered with qualitative tools such as focus groups, key informant interviews, stakeholder mapping, ranking, analysis of secondary data and observation. Qualitative data collection requires skill to obtain a credible and relatively unbiased assessment; the key question is whether the tools provide reliable and valid data of sufficient quantity and quality
Quantitative information | Performance indicators | Information about the number of things someone is doing, providing or achieving, the extent of those things, or the number of times they happen
Triangulation | Consistency between different sets of data | Use of three or more sources or types of information to verify an assessment