For Help Contact:
CARE Emergency Group (CEG) Programme Quality and Accountability Coordinator
Tel: +41 22 795 1035
Email: emergencyQA@careinternational.org

9. Monitoring and Evaluation

Monitoring and evaluation help us understand how the assistance and support that CARE provides affects disaster-affected communities. Monitoring and evaluation are a critical part of our accountability, as they allow us to compare the results of our humanitarian actions with our strategic intent (e.g. CARE Program Strategy, Humanitarian & Emergency Strategy, Core Humanitarian Standard), with technical standards (such as Sphere and its companion standards) and with the expected outcomes and benchmarks for the response (from the response strategy, proposal frameworks, etc.). Monitoring and evaluation serve a range of purposes, but the critical one is this: better outcomes for disaster-affected populations. While monitoring and evaluation are used together and therefore overlap, they are separate processes:

Monitoring is a way to find out what is actually happening, by regularly assessing progress throughout the life of an emergency response programme. Frequent or continuous monitoring allows us to discover problems or opportunities early so that they can be addressed. Monitoring is mostly internal, and to be useful its findings should be shared quickly in a user-friendly format. This means that management processes should be explicitly designed to consider and respond to monitoring data.
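
As an illustration only, the sketch below (in Python, not a CARE tool) shows the kind of short, user-friendly status summary that raw monitoring data can be turned into so that managers spot problems early. The indicator names, targets and alert threshold are hypothetical examples.

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        target: float   # planned value for the reporting period
        actual: float   # value observed through monitoring

    def status_line(ind: Indicator, alert_threshold: float = 0.75) -> str:
        """Flag indicators falling well behind target so problems surface early."""
        progress = ind.actual / ind.target if ind.target else 0.0
        flag = "ON TRACK" if progress >= alert_threshold else "ATTENTION NEEDED"
        return f"{ind.name}: {ind.actual:.0f}/{ind.target:.0f} ({progress:.0%}) - {flag}"

    # Hypothetical indicators for a shelter and WASH response.
    indicators = [
        Indicator("Households receiving shelter kits", target=2000, actual=1850),
        Indicator("Latrines constructed", target=150, actual=60),
    ]

    for ind in indicators:
        print(status_line(ind))

The design choice here is speed and readability: one line per indicator with a clear flag, rather than a long report that managers will not have time to read during a response.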

Evaluation is an episodic process of identifying and reflecting on the effects of what has been done, and judging the value of the response in terms of its contribution to the overall performance of the humanitarian sector. It is normally led by an external consultant to promote objectivity and accountability, and is often conducted jointly with peers or by donors. A ‘real-time evaluation’ is a particular category of evaluation that is conducted internally while the response is still ongoing, to support field management decision-making in ‘real time’ with a more in-depth analysis of the appropriateness of approaches and interventions.

Soliciting feedback from crisis-affected people: While management and donors tend to be the main users of the information generated by monitoring and evaluation, it is important that people affected by a humanitarian crisis and by our actions are involved in the collection and analysis of information, as well as in the decisions informed by that analysis. People have a right to have their voices heard in judging our response to their crisis (see also CHS commitments 4 and 5). Asking for the views of the affected population can help us understand the difference we are making during the course of the response, and can improve humanitarian policy and practice at a more strategic level.

1.1 Roles and responsibilities of monitoring and evaluation

1.2 Role of the Monitoring and Evaluation Coordinator in emergency team

1.3 Definition of key terms relating to monitoring and evaluation

Checklist

  • Assess CO capacity for monitoring and accountability.
  • Establish monitoring and evaluation systems from the very outset of the emergency response.
  • Use CARE’s Humanitarian Accountability Framework to inform the development of monitoring and evaluation systems.
  • Establish appropriate objectives and indicators at the individual project level as well as the overall emergency programme level during the design phase of the response.
  • Ensure that the monitoring and accountability system in place is capable of delivering real-time information on what is happening in emergency response conditions.
  • Plan for data collection and analysis. Double-check that the information to be gathered will give a realistic picture of what is actually happening (see the data-quality sketch after this list).
  • Plan for reporting, feedback and use of results.
  • Ensure that the Monitoring and Evaluation Coordinator coordinates data collection and analysis for monitoring purposes across the programme.
  • Employ a range of appropriate and participatory data collection methods.
  • Confirm that all monitoring data collected is analysed and presented in a timely and user-friendly way.
  • Ensure that appropriate managers review and act on monitoring data.
  • Include resources for monitoring and evaluation activities in project budgets.
  • Ensure monitoring includes feedback to communities.
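
Where monitoring data arrives as spreadsheets or form submissions, even very simple automated checks can help confirm that the information gathered gives a realistic picture. The sketch below is a minimal Python illustration; the field names, sites and values are invented for the example.

    # Hypothetical monitoring records; field names and sites are invented.
    records = [
        {"site": "Camp A", "date": "2024-05-01", "hh_reached": 120},
        {"site": "Camp B", "date": "", "hh_reached": 95},
        {"site": "Camp C", "date": "2024-05-02", "hh_reached": -4},
    ]

    REQUIRED = ("site", "date", "hh_reached")

    def problems(record: dict) -> list:
        """Return a list of data-quality issues found in a single record."""
        issues = [f"missing {f}" for f in REQUIRED if record.get(f) in ("", None)]
        value = record.get("hh_reached")
        if isinstance(value, int) and value < 0:
            issues.append("negative 'households reached' value")
        return issues

    for r in records:
        for issue in problems(r):
            print(f"{r.get('site', 'unknown site')}: {issue}")

Catching gaps and implausible values before analysis means they can be queried with field teams early, rather than quietly distorting the picture of progress.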

Checklist

  • Design an appropriate monitoring and evaluation system for the emergency response.
  • Ensure monitoring and evaluation systems consider all aspects of the response management.
  • Establish appropriate objectives and indicators at the design phase.

3.1 Putting a monitoring and evaluation system in place

3.1.1 Steps in designing a monitoring and evaluation system

3.2 Aspects of the response to consider

3.2.1 What to look for when monitoring an emergency response programme

3.3 Objectives and indicators

3.3.1 Checklist for indicators

Checklist

  • Coordinate data collection and analysis responsibilities across the programme.
  • Select a range of appropriate and participatory data collection methods.
  • Conduct timely data analysis.
  • Ensure timely management review of monitoring results and correct any issues arising.

4.1 Data collection and analysis responsibilities

4.2 Data collection methods

4.2.1 Participatory data collection methods

4.3 Data analysis

4.3.1 Quality of information

4.4 Management review of monitoring results

4.5 Proposal tracking and documentation

The term ‘accountability monitoring’ is used to mean the monitoring of our performance on accountability, such as through our compliance with the CARE Humanitarian Accountability Framework.

Accountability monitoring can help CARE to:

  • Check that the accountability systems that have been set up are working effectively.
  • Focus our monitoring on approach, processes, relationships and behaviours, quality of work and satisfaction, as well as on outputs and activities.
  • Prioritise listening to the views of disaster-affected people to assess our impact and identify improvements.
  • Provide a feedback opportunity for staff, communities and other key stakeholders to comment on our response and how we are complying with our standards and benchmarks.

Accountability monitoring contributes to CARE’s overall monitoring and evaluation activities. Aspects can be integrated into other project monitoring tools, or carried out as a specific activity, e.g. a beneficiary satisfaction survey or a focus group discussion (FGD) to solicit feedback and complaints from vulnerable groups in isolated communities as part of a formal complaints mechanism. Ideally, accountability should be built into project proposals from the outset.
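
As a minimal illustration of such a specific activity, the Python sketch below tabulates hypothetical satisfaction survey responses by group, so that the views of smaller or more vulnerable groups remain visible rather than being averaged away. All field names and groups are invented for the example.

    from collections import defaultdict

    # Hypothetical survey records; in practice these would come from
    # questionnaires or a mobile data collection tool.
    responses = [
        {"group": "female-headed households", "satisfied": True},
        {"group": "female-headed households", "satisfied": False},
        {"group": "host community", "satisfied": True},
        {"group": "persons with disabilities", "satisfied": False},
    ]

    by_group = defaultdict(list)
    for r in responses:
        by_group[r["group"]].append(r["satisfied"])

    # Report satisfaction per group, with sample size shown so that
    # results from small groups are interpreted with caution.
    for group, answers in sorted(by_group.items()):
        rate = sum(answers) / len(answers)
        print(f"{group}: {rate:.0%} satisfied (n={len(answers)})")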

CARE’s HAF and Sphere can be used to help staff design monitoring tools for use with staff, partners, communities and local authorities.

Some examples can be found at Annex 9.17 Sample of accountability monitoring tools, including:

  • Checklists.
  • Simple questionnaires.
  • Focus group discussion tools.
  • Staff review tools.
  • Monitoring tools to support research into local communities’ views.

Accountability data (including complaints data) needs to be incorporated into monitoring reporting, alongside monitoring of project progress.

Most staff will have experience of meeting people who are not fully happy with the work or behaviour of CARE or its partners in their community or region. Most of this feedback is received informally, e.g. people approach staff who are visiting the community, or visit CARE’s office in search of assistance or resolution of their problems or grievances. It is also not unusual for the staff of one agency to receive a complaint about another agency. Receiving feedback, suggestions and complaints about our work is normal and important, and should be welcomed.

While there are occasions when complaints are handled well by field staff, there are many examples when they are not. Staff who are already overwhelmed with day-to-day emergency activities may find it difficult to manage the informal feedback and complaints they receive, may not prioritise complaints, or may forget or lose them. Tensions can also arise when a complaint is received about a member of staff and it is not clear how, and by whom, the complaint will be dealt with.

To improve this, CARE offices should put in place a more formalised system for soliciting, receiving, processing and responding to the feedback and complaints we receive. These systems should aim to provide a safe, non-threatening and easily accessible mechanism that enables even the most powerless to make a suggestion or complaint. On CARE’s part, this requires us to address and respond to all complaints, and to be timely and transparent in our decisions and actions.
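
As an illustration of what ‘formalised’ can mean in practice, the Python sketch below models a simple complaints register. The fields, statuses and 14-day response target are hypothetical examples, not CARE policy; the point is that every complaint is logged with a unique reference, tracked through a defined set of statuses, and flagged if a response is overdue.

    from dataclasses import dataclass
    from datetime import date, timedelta

    RESPONSE_TARGET = timedelta(days=14)  # illustrative benchmark only

    @dataclass
    class Complaint:
        ref: str                 # unique reference shared with the complainant
        received: date
        channel: str             # e.g. help desk, hotline, suggestion box
        summary: str
        status: str = "open"     # open -> investigating -> resolved -> feedback given
        sensitive: bool = False  # fraud or abuse cases follow a separate confidential route

        def overdue(self, today: date) -> bool:
            return self.status != "feedback given" and (today - self.received) > RESPONSE_TARGET

    register = [
        Complaint("C-001", date(2024, 5, 1), "hotline", "Two households missing from distribution list"),
        Complaint("C-002", date(2024, 5, 20), "help desk", "Request to relocate a water point"),
    ]

    today = date(2024, 5, 25)
    for c in register:
        if c.overdue(today):
            print(f"{c.ref} ({c.channel}) awaiting response since {c.received}: {c.summary}")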

Experience shows that complaints mechanisms can have enormous benefits for both communities and CARE staff. They can help to establish a relationship of trust between staff and communities and improve the impact of our response. They can save time and money that would otherwise be wasted. They can help build a safer organisation and a safer environment for our staff and for our beneficiaries, especially the most vulnerable among them. On the other hand, setting up a mechanism that does not function well (for example, if complaints are not followed up) may contribute to frustration and worsening relationships with communities and local stakeholders.

Complaints procedures can be simple, although they need to be carefully planned and follow certain key principles. A badly designed or managed complaints procedure can be harmful. Here are 10 discussion points and suggestions for good practice to help establish a complaints mechanism that is:

      • appropriate
      • safe
      • well understood
      • transparent
      • timely
      • effective
      • accessible to all

Checklist to help establish a complaints mechanism

      1. Plan and budget for a complaints mechanism from the beginning of an emergency
      2. Build staff awareness and commitment to a complaints mechanism
      3. Provide a range of ways people can complain
      4. Make sure it can handle extreme cases of fraud and abuse
      5. Be clear about the scope of the complaints mechanism and communicate this clearly
      6. Develop a complaints mechanism procedure document and always follow the established procedure
      7. Clearly communicate the complaints mechanism to all key stakeholders as part of overall information sharing systems
      8. Complete the feedback loop: use the complaints data to improve overall performance and to provide feedback to communities (two way communication and feedback)
      9. Be clear on roles and responsibilities in managing complaints, and provide adequate training and support to staff
      10. Monitor the complaints mechanism to verify that it is effective

See Annex 9.18 Feedback, complaints and response mechanisms for further detail.

Feedback to communities on our monitoring and evaluation results (including complaints data and results from monitoring of our accountability) should be part of CARE’s overall information sharing with communities. An important part of this is to make reports reader-friendly and share them as widely as possible with all staff. Provide key complaints data in public places, e.g. on websites and community noticeboards.
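
When publishing complaints data, only aggregated, anonymised figures should be shared. The sketch below (Python, with invented categories) illustrates one way to reduce a complaints register to category counts suitable for a noticeboard or website, with no personal details.

    from collections import Counter

    # Hypothetical complaints records, already stripped of identifying details.
    complaints = [
        {"category": "targeting", "resolved": True},
        {"category": "targeting", "resolved": False},
        {"category": "quality of goods", "resolved": True},
        {"category": "staff conduct", "resolved": True},
    ]

    received = Counter(c["category"] for c in complaints)
    resolved = Counter(c["category"] for c in complaints if c["resolved"])

    print("Complaints this month (aggregated; no personal details):")
    for category, n in received.most_common():
        print(f"  {category}: {n} received, {resolved[category]} resolved")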

Providing such information to affected communities in an accurate and timely way is a fundamental ingredient of building trust. Trust is in turn a fundamental ingredient of participation. People will only engage meaningfully with individuals or institutions that they believe they can trust.

See Annex 9.19 – Information provision to affected communities for further detail, including a chart showing how this flow of communication should work.

Checklist

  • Organise an After Action Review.
  • Conduct an evaluation when required.

CARE’s Policy on Evaluations is available at Annex 9.1. This policy highlights CARE’s commitment to learning from humanitarian responses with a view to improving our practices and policies for future responses. All CARE COs are required to comply with this policy, and CEG can provide support and advice for learning activities.

8.1 Organising an After Action Review

8.2 Commissioning and managing an evaluation

Checklist

  • Include monitoring and evaluation line items in project budgets.

9.1 Monitoring and evaluation costs

Lessons learned from previous CARE emergency operations show that CARE COs often lack the capacity to design and implement monitoring and evaluation systems during emergency responses. In particular, COs have difficulty adapting their regular monitoring and evaluation systems (used for longer-term programming) to rapidly changing and unpredictable emergency situations.

As both a relief and development agency, CARE has determined that programme and project standards should apply to all CARE programming, including emergencies, post-conflict rehabilitation and development, whether CARE is directly providing assistance, working with or through partners, or conducting advocacy campaigns. 

CARE’s Humanitarian Accountability Framework should be used to inform the development of monitoring and evaluation systems; it is outlined in detail in Chapter 6 Quality and accountability. CARE’s own standards are intentionally consistent with the Sphere minimum standards, Humanitarian Accountability principles and the Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief.

CARE’s Evaluation Policy describes CARE’s commitment to using evaluations to promote systematic reflective practice, organisational learning and accountability, helping to contribute to significant and sustainable changes in the lives of the people we serve.

ALNAP-Active Learning Network for Accountability and Performance in Humanitarian Action

ALNAP 2006. The participation handbook. Oxfam.

ALNAP-Training materials for Evaluation of Humanitarian Action

ALNAP-Summaries of lessons from previous disaster types

Digital Resources for Evaluators

ECB Project-Accountability and Impact Measurement

ECB Joint Needs Assessment/Evaluation Database-Summaries of key learning from evaluations and AARs

Humanitarian Accountability Partnership 2007. A guide to the HAP Standard.

IFRC-International Federation of Red Cross and Red Crescent Societies 2002. Handbook for monitoring and evaluation.

IFRC-International Federation of Red Cross and Red Crescent Societies 2005. Guidelines for emergency assessment.

ListenFirst - a draft set of tools and approaches that NGOs can use to make themselves more accountable to the people they serve, including a list of 25 examples of putting accountability into practice.

MandE NEWS-Monitoring and Evaluation NEWS