Monitoring and Evaluation: What are we talking about?
Module 5
Session 8
1. What is it?
2. Why do it?
3. Who does it?
4. When do we do it?
5. How do we do it?
6. What does this mean for me?
Monitoring and Evaluation
1. What is it? Monitoring
Management’s continuous examination of progress achieved during the implementation of an undertaking to track compliance with the plan and to take necessary decisions to improve performance.
Key elements:
- continuous examination of implementation progress
- tracking compliance against planned objectives
- generating data and information on performance to enable corrective measures to be taken
1. What is it? Evaluation
Assessment, as systematic and impartial as possible, of an activity, project, programme, strategy, policy, topic, theme, sector, operational area, institutional performance, etc.
It focuses on expected and achieved accomplishments, examining the results chain, processes, contextual factors and causality, in order to understand achievements or the lack thereof.
It aims at determining the relevance, impact, effectiveness, efficiency and sustainability of the interventions and contributions of the organizations.
1. What is it? Evaluation (cont.)
Key elements:
- assessment conducted at a single point in time (before, during or after).
- focuses on determining whether what was planned actually happened, and why it did or did not happen.
- assesses:
relevance – whether the intervention was appropriate
impact – whether it made a difference in the lives of people
effectiveness – whether it achieved what it set out to
efficiency – whether it did so at the lowest cost
sustainability – whether it will lead to lasting change.
Other similar functions that are not monitoring and evaluation
Inspection: a general examination that seeks to identify vulnerable areas and malfunctions and to propose corrective action.
Investigation: a specific examination of a claim of wrongdoing and provision of evidence for eventual prosecution or disciplinary measures.
Other similar functions that are not monitoring and evaluation (cont.)
Audit: an assessment of the adequacy of management controls to ensure the economical and efficient use of resources; the safeguarding of assets; the reliability of financial and other information; the compliance with regulations, rules and established policies; the effectiveness of risk management; and the adequacy of organizational structures, systems and processes.
Research: a systematic examination designed to develop or contribute to knowledge.
Different oversight functions
[Diagram: the oversight functions – Research, Evaluation, Monitoring, Audit, Inspection – shown against their respective focus areas: Modelling, Performance, Continuous Assessment, Compliance, Wrong-doing]
Where M&E fits in the cycles
[Diagram: M&E within the management cycle at the Policy, Strategy, Program and Project levels – a cycle of Programming, Planning, Implementing, Monitoring and Evaluating]
2. Why do it? Why monitor?
What gets monitored is more likely to get done.
If you don’t monitor performance, you can’t tell success from failure.
If you can’t see success, you can’t reward it.
If you can’t recognise failure, you can’t correct it.
If you can’t demonstrate results, you can’t sustain support for your actions.
Why evaluate?
Understand why, and the extent to which, intended and unintended results are achieved, and their impact on stakeholders.
An important source of evidence on the achievement of results and institutional performance, and thus one basis for corporate accountability.
An important contributor to building knowledge and organizational learning.
What’s the difference?
They are different, but interrelated functions, as they both contribute knowledge as a basis for accountability and enhanced performance.
Monitoring is an internal, repetitive, operations and management function. Evaluation is often external, periodic/ snapshot, in greater depth and asking different questions.
Monitoring asks the question “Are we doing things right?”
Evaluation asks “Are we doing the right things?” and “Are there better ways of achieving the results?”
Development project example
Problem: Low access to and use of Information and Communications Technology (ICT) by civil servants, youth and women in rural Uganda.
Diagnose: Government study to identify why access to and use of ICT is low.
Response: Government sets up a two-year project with the objective of improving the use of ICT among these groups.
Project aims (or outcomes): public access to ICT in Uganda improved; basic computer knowledge increased.
Project actions and deliverables: establish 50 telekiosks across the country; rehabilitate 160 ICT training centres; train Ministry of ICT staff; provide technical assistance in policy formulation.
Development project example…continued…
Monitor:
Inputs – money spent, computers bought, consultants hired.
Activities and outputs (actions) – number of telekiosks operational; number of ICT training centres open to the public; number of training sessions held; number of registered domain names.
Aims (outcomes) – number of hours per month telekiosks are used; number of graduates from ICT training centres; number of registered ISPs.
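Purely as an illustration, the kind of indicator tracking described above can be sketched as a short script that compares reported values against planned targets. The indicator names, targets and reported figures below are hypothetical, not taken from the project:

```python
# Minimal sketch of indicator monitoring for a hypothetical ICT project.
# Each indicator pairs a planned target with the value reported this quarter;
# monitoring compares the two to flag where corrective action may be needed.

indicators = {
    "telekiosks operational":    {"target": 50,  "actual": 32},
    "ICT training centres open": {"target": 160, "actual": 150},
    "training sessions held":    {"target": 40,  "actual": 44},
}

def progress_report(indicators):
    """Return (name, percent of target achieved, on-track flag) per indicator."""
    report = []
    for name, values in indicators.items():
        pct = 100.0 * values["actual"] / values["target"]
        # The 90% threshold is an arbitrary example, not a standard.
        report.append((name, round(pct, 1), pct >= 90.0))
    return report

for name, pct, on_track in progress_report(indicators):
    status = "on track" if on_track else "needs attention"
    print(f"{name}: {pct}% of target ({status})")
```

In practice such figures would come from quarterly monitoring reports; the point of the sketch is only that monitoring is a routine comparison of actuals against the plan.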
Development project example…continued…
Evaluate:
Was the problem correctly identified?
Was the response (project) correctly designed to address it?
Has it worked – has access to ICTs improved in these target groups? If not, why not?
What could be done better in a future project of this type?
Follow-up: Adapt the existing project to respond to the evaluation findings; close down the project; or start a new project.
Discussion space (10 mins)
Discuss the example of the ICT project.
Does the example make sense to you?
Do you monitor and evaluate your projects? How do you do it?
3. Who does M&E?
Monitoring: typically done by the subject, i.e. self-monitoring.
Evaluation: typically done by an external party, to ensure the integrity of the findings (i.e. remove bias) and to provide insight (expertise).
But… it depends on the objectives:
Self-evaluation – done by the subject, when the primary objective of the evaluation is to learn about the strengths and weaknesses of what is being evaluated.
External monitoring – where there is a need for very exact standards, or there is a lack of trust, external parties may monitor the actions, e.g. elections.
4. When is it done and who decides?
Monitoring is a continuous activity; monitoring reports are typically produced quarterly.
Evaluation is done at different phases:
Before an initiative is planned. This is called a pre-appraisal or baseline.
During an initiative. Typically mid-way through, to check how it is going and whether it is still the right thing to be doing.
At the close of an initiative. To determine whether the intended actions have been completed, and what difference it has made.
Some time after the initiative. Change is often slow, so it may be necessary to evaluate some years later to look at the effects. E.g. education – build a school, train teachers, provide meals; the aim (increased enrolment of children) may only be realised six months, a year or more down the line.
4. When is it done and who decides?
Evaluation is also done to address wider concerns:
Where actions in a sector or area have been present for a long time, e.g. Evaluation of the PEAP 1997-2007
Where those involved, managing or governing have a particular concern, e.g. Evaluation of the National Agricultural Advisory Services (NAADS)
Where those involved or managing are interested to learn more, e.g. Evaluation of Budget Support
[Cartoon: “We did a full work-up - heart, lungs, bank account, credit... you can afford to live another 19 years.”]
But evaluation does not always ask the right questions… or tell the subject what it wants to hear!
5. How is evaluation done?
Who is involved, and when? It depends on the intentions.
Participatory evaluation involves users fully at all phases: design, implementation, write-up and follow-up.
Why not always use a fully participatory approach? Time, independence, intention.
5. How is evaluation done?
What methods are used?
Desk review of relevant documents (project documents, annual reviews, donor-specific reports, etc.).
Key informant interviews: with key partners and information stakeholders at both central and field levels, drawing on specific check-listed questions.
Focus group discussions: with internal and external parties at both central and field levels, to gain consensus on key issues.
Sample surveys: of the effects and impacts of initiatives, as and where necessary.
Discussion space (30 mins)
6. What does this mean for me?
Break into groups of 4 or 5 persons.
Share your own examples of monitoring and evaluation.
Identify and discuss any good practices and challenges to conducting M&E in your district.
Report back in plenary.