1.4.5 EAR and Monitoring and Evaluation (M&E)

Monitoring and Evaluation (M&E) is an approach to assessing the activities and effectiveness of community-based interventions. Monitoring refers to the continual review of an initiative's activities, and to any resulting improvements in implementation. Evaluation refers to the analytical assessments that determine to what extent your intervention has been effective, measured against predetermined goals; it can also be used to test the validity and local relevance of those goals themselves. Community-based ICT initiatives, especially if they receive donor funding, are often required to conduct some M&E. Here we consider how ethnographic action researchers can monitor and evaluate the work of a community-based ICT initiative.

Monitoring

EAR researchers monitor an ICT initiative by keeping track of its activities and of the uses made of its facilities. All activities need to be monitored and counted, including instances of access to ICTs, numbers of people involved, numbers of trainings held, and so on. Systems need to be put in place to maintain such records, and all initiative staff and volunteers can play a role in keeping these up to date. An EAR researcher's choice of tools to monitor activities will be guided by the initiative's basic aims and objectives, so it is important that there is agreement on what these aims and objectives are before an EAR researcher sets out to monitor activities.
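As one illustration only (a paper logbook or a shared record sheet would serve just as well), the short Python sketch below shows what a minimal activity log might look like; the file name and field names (date, activity, group, participants) are invented for the example.

    import csv
    from datetime import date

    LOG_FILE = "activity_log.csv"                           # hypothetical file name
    FIELDS = ["date", "activity", "group", "participants"]  # illustrative fields

    def record_activity(activity, group, participants):
        # Append one monitoring record, e.g. a training session or an
        # instance of access to the initiative's ICT facilities.
        try:
            with open(LOG_FILE, "x", newline="") as f:
                csv.writer(f).writerow(FIELDS)              # header row on first use
        except FileExistsError:
            pass
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow(
                [date.today().isoformat(), activity, group, participants])

    # Entries that staff or volunteers might log over a day:
    record_activity("internet access", "youth", 3)
    record_activity("training", "women's group", 12)

Whatever form the records take, the point is the same: they are kept consistently, by everyone, so that they can be counted and compared later.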

Constant monitoring will help to identify and resolve problems. For example, monitoring may reveal that certain community groups are not accessing your facilities. Once such a problem is identified, EAR researchers can work with others to think about and plan the adjustments that can be made in order to resolve it. Monitoring can also help you to identify and resolve simple problems, such as defective equipment (a computer may regularly break down). Noting how regularly this happens, how many people it affects and how many activities it delays, and considering how important this is to the achievement of an initiative's goals, can help to prioritise finding solutions.
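To make this concrete, the sketch below (building on the hypothetical activity log above) tallies records by community group, so that a group that appears rarely or not at all stands out; the same pattern could count 'equipment breakdown' entries to show how often a computer fails and how many activities were delayed.

    import csv
    from collections import Counter

    def usage_by_group(log_file="activity_log.csv"):
        # Count logged activities per community group,
        # to spot groups that are not accessing facilities.
        counts = Counter()
        with open(log_file, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["group"]] += 1
        return counts

    for group, n in usage_by_group().most_common():
        print(f"{group}: {n} logged activities")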

Evaluation

Evaluating an ICT initiative means determining to what extent it has been effective and what impact it has had. Typically, evaluation is conducted either prior to beginning an initiative or as an initiative comes to an end. These forms of evaluation are called formative evaluation (at the beginning) and summative evaluation (at the end). Whilst formative and summative evaluations are typical of many development initiatives, EAR is different because research continues throughout the life of the initiative. From this perspective EAR provides evaluation tools to assess all stages in the initiative cycle - start, middle and end. EAR is designed to provide embedded and ongoing research and action, and so it is well suited to evaluation, which requires a specific focus on a stated goal and targeted research into how effectively an initiative is meeting that goal.

Through evaluation an EAR researcher can specifically record and comment on how changes have occurred that relate to initiative goals, amongst which groups these changes have occurred, and what aspects of an initiative's activities contributed most to these changes. An ICT initiative can use evaluations to inform the development of strategies to bring about further changes, or more effectively reach its goals if it is not yet doing so.

There are two key audiences and uses for evaluation outputs:

External agencies such as donors may require evaluation so they can assess the impact the initiative is making, to enable them to decide whether further support is warranted. Many donors have a preference for statistical data that will tell them which pre-set project objectives have been achieved. An EAR researcher might choose to use short questionnaire surveys to generate the kinds of statistics donor agencies often request, but would always seek to gain a richer understanding through the use of more qualitative tools. In designing evaluation, the EAR researcher must clearly understand in advance what it is that s/he wants to measure, and how measuring it will help demonstrate the effectiveness of the ICT initiative.
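As a simple, hypothetical illustration of turning short questionnaire answers into the kind of headline figure donors often ask for, the sketch below computes the percentage of respondents reporting a given outcome; the question wording and response codes are invented for the example.

    def percentage_reporting(responses, outcome):
        # Share of respondents (as a percentage) who reported a given outcome.
        if not responses:
            return 0.0
        return 100.0 * sum(1 for r in responses if r == outcome) / len(responses)

    # Invented answers to 'Did the training improve your computer skills?'
    answers = ["yes", "yes", "no", "yes", "unsure", "yes"]
    print(f"{percentage_reporting(answers, 'yes'):.0f}% reported improved skills")

A figure like this can headline a donor report, while interviews and observation supply the richer qualitative picture behind it.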

Internal audiences such as project staff may require a specific piece of evaluation to see how well they are doing in relation to specified goals. They would use this evaluation to adjust activities in an effort to be more effective. Within the context of EAR, each time new activities are planned, the cycle of plan, do, observe and reflect comes into operation, and research continues to evaluate how effective any changes have been. Within EAR, targeted research can be conducted for a number of reasons, including the evaluation of specific activities, and is ongoing throughout the life of an initiative.

Conventional M&E and M&E within EAR are somewhat different, as this table shows:

Who
  Conventional M&E: External experts.
  M&E within EAR: The EAR researcher, working with staff, volunteers and local communities, facilitating and animating dialogue and participation.

What
  Conventional M&E: Predetermined indicators to measure inputs and outputs, decided at the beginning of the M&E process.
  M&E within EAR: Indicators change over time as initiatives change their activities and refine their goals, responding to research conducted with local communities and in an effort to be more effective.

How
  Conventional M&E: Questionnaire surveys, conducted by outside 'neutral' evaluators distanced from the project.
  M&E within EAR: A range of tools, used by an EAR researcher who is embedded in the initiative and the communities.

Why
  Conventional M&E: Donor-driven: to make the project and staff accountable to the funding agency.
  M&E within EAR: Appropriateness-driven: to enable the initiative to develop appropriate activities and become more effective, and to produce evaluations for external agencies.

When
  Conventional M&E: While monitoring may be ongoing, evaluations are usually conducted through a baseline survey at the beginning of an intervention and a follow-up survey some time later (perhaps 12 months) - evaluation is retrospective in nature.
  M&E within EAR: EAR research, including monitoring and evaluation, is ongoing throughout the life of an initiative.

Indicators and EAR

It is important that an EAR researcher designs M&E in ways that best suit their own needs and resources. Given the ongoing EAR research process and the data being generated, it will be important to think about how this can inform any M&E plans and reports - internal and external. Like anything that an EAR researcher does, M&E is not undertaken in isolation, but in relation to all of the other activities that s/he is involved in.

For a specific M&E activity, it is essential to clearly define which key goals and objectives the initiative wants to monitor and evaluate. Once this is clarified, you can think about what to look for that would indicate that these goals and objectives have been achieved. This involves the development of 'indicators'. Unlike conventional M&E indicators, which are decided and fixed at the beginning of an intervention, within EAR indicators may change over time as research feeds into activities and helps to refine goals.

Nevertheless, some useful qualitative indicators that can be used in evaluation might include:

  • increased participation in ICT initiatives - facilities and activities;
  • increased public awareness about the initiative;
  • increased public discussion about the initiative;
  • increased public awareness of the issues the initiative seeks to address;
  • increased public debate about the issues the initiative seeks to address;
  • increased levels of participation amongst targeted groups;
  • instances of positive action taken as a result of activities;
  • emergence of networks of individuals or groups that link together as a result of activities;
  • community ownership of the initiative in terms of increased active voluntary participation in decision-making; and,
  • community ownership of the initiative in terms of financial contributions (where appropriate).

Some useful quantitative indicators that can be used in monitoring might include the following (a short counting sketch follows the list):

  • number of trainings held;
  • number of broadcasts produced;
  • number of meetings held;
  • number of print materials distributed;
  • number of enquiries; and,
  • amount of budget spent.
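
If monitoring records are kept in something like the hypothetical activity log sketched earlier, several of these counts can be read straight off the records. The sketch below tallies one such indicator (number of trainings held); the same pattern applies to broadcasts, meetings or enquiries.

    import csv

    def count_activity(kind, log_file="activity_log.csv"):
        # Count monitoring records of one kind, e.g. trainings held.
        with open(log_file, newline="") as f:
            return sum(1 for row in csv.DictReader(f) if row["activity"] == kind)

    print("Trainings held:", count_activity("training"))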