
The Role of the Board in Monitoring and Evaluation

Professor Benon C Basheka, PhD, FCIPS

Uganda Technology and Management University

LEARNING OUTCOMES

By the end of the training, board members should be able to:

1. Describe the importance of monitoring and evaluation in organizational effectiveness

2. Apply M and E knowledge in the day-to-day activities of their work in aBi Trust

3. Relate M and E requirements to the DAC standards

4. Oversee the establishment of an effective M and E system for the organization

TRAINING METHODOLOGY

1. Presentations

2. Individual experiences

3. Discussions

Session coverage

The training will generally cover three units:

1. General Introduction and Role of Boards of Trustees (with relevance to M and E)

2. Requirements for establishing an M and E system (and role of the board)

3. The Evaluation Process and the Roles of the Board

UNIT 1: GENERAL INTRODUCTION AND ROLE OF THE BOARD IN M & E

INTRODUCTION

• Monitoring and evaluation are increasingly being emphasized in all endeavors

• The need to determine which interventions are working or not, and why, now necessitates that organizations invest in building sound monitoring and evaluation systems

• In almost all organizations where boards exist, boards generally have oversight functions: overseeing all activities of the organization (M and E inclusive)

• The range of things to which evaluation applies is not limited, and can include projects, programmes, processes, systems, etc.

Monitoring/evaluation is to be seen:

• As a profession with a set of standards

• As a field of practice (monitoring and evaluation units and the staff therein)

• As a discipline of study, an academic field now declared to be in its adolescent stage (taught in universities and other higher educational institutions)

Organizations have advisory or policy organs, variously named:

1. Board of governors

2. Board of managers

3. Board of regents

4. Board of trustees, and

5. Board of visitors

Different Modes of Board Governance

1. Advisory Board Model

2. Patron Model

3. Co-operative Model

4. Management Team Model

5. Policy Board Model [aBi Trust?]

aBi Trust Governance (organogram):

Founders Committee → Board of Trustees → Management Staff → M and E Unit

Typical duties of boards of directors include:

1. Governing the organization by establishing broad policies and objectives

2. Selecting, appointing, supporting and reviewing the performance of the chief executive

3. Ensuring the availability of adequate financial resources

4. Approving annual budgets

5. Approving annual plans

6. Approving and reviewing reports on management activities

Working Methodology of Boards

• In larger organizations, the structure of the board and its committees usually mirrors the structure of the organization's administration.

• Just as there are staff responsible for human resources, fund-raising, finance, planning, and programs, the board creates committees with responsibility for these areas.

• One of these board committee areas will be monitoring and evaluation

• Even the performance of the board or its committees needs to be measured

Strategically

1. Mobilize resources

2. Look for networks

3. Design and approve policy

4. Provide oversight

5. Participate in evaluation

A board's strength lies in:

1. Composition

2. Expertise

3. Experience

4. Qualifications

5. Networks

CONTEXTUALLY

• The Board of Trustees of aBi Trust has a four-fold mandate:

1. To protect the Trust’s assets over time and ensure the survival and prosperity of the Trust in a transparent, accountable and responsible manner

2. To guide the Trust in fulfilling its Vision, mission and objectives

3. To give strategic direction to aBi Trust Management

4. To protect the Trust’s interests.

What is the implication?

1. The mandates and functions of the board cannot be efficiently and effectively achieved without sound knowledge of M and E

2. M and E supplies knowledge for oversight and decision making

3. M and E supplies the board with necessary tools and methodologies

4. M and E provides the right attitude and mindset for involvement

5. M and E shapes the decision making to support building required systems

Is Evaluation new?

Evaluation is as old as the world itself and has moved side by side with the journey of human civilization.

The scriptures tell us in Genesis 1:31 that when God created the earth, the light in the darkness, the firmament in the midst of the waters, the plants, the animals, and finally man, at the end of the sixth day "…God saw everything that he had made, and behold, it was very good."

He used criteria unknown to us, and this enabled him to make an assessment on whose findings he was able to take the fundamental decision not to scrap what he had done.

Michael Q. Patton continues the story:

God’s archangel came then, asking, “God, how do you know that what you have created is ‘very good’? What are your criteria? On what data do you base your judgment? Just what results were you expecting to attain? And aren’t you a little close to the situation to make a fair and unbiased evaluation?”

God thought about these questions all that day, and God’s rest was greatly disturbed. On the eighth day God said, “Lucifer, go to hell.”

The second example

• From the philosophical works of Socrates, Plato and Aristotle to the mathematical methodologies of Pythagoras and Euclid, the ideas of the ancient Greeks shaped many institutions and contributions to many fields including evaluation.

• Existing scholarly accounts inform us that the Delphic oracle of the ninth to the third centuries BC was the first central intelligence database of the ancient world, an interdisciplinary think tank of approximately 90 priests, deemed the best-educated experts of antiquity.

• They collected and evaluated information and advised ordinary people and leaders, among them Alexander the Great. Major project management existed in the fourth century BC.

In ancient Greece,

• the practices of the 12 gods, called Olympians because they were stationed on Olympus, the highest mountain in Greece, shed some light on how evaluation was done.

• The council of the Olympian gods and goddesses made collective decisions with input from an expert panel, which consisted of Zeus (the president of the gods), Athena (the goddess of wisdom), Hermes (the god of information and commerce), and any other god whose area of expertise would be pertinent to the subject in question.

In their working methodology:

• Meetings were problem-oriented participatory sessions, characterized by intense discussions and searches for the best solution.

• The gods' decisions were persuasively communicated to mortals and powerfully implemented with follow-up reports (Theofanides 1999).

• The Olympian style of management and decision making is illustrated in the steps below:

1. Identify the problem or theme for action. Collect all relevant information and data, through the intelligence work by Hermes, the god of informatics.

2. Search for solutions via dialogue with all participants. Discuss step 1 with all concerned parties and propose alternative solutions.

3. Select the best problem solution or action theme, mainly by conferring with the concerned party or parties

4. Announce the decision of the gods to all mortals concerned through the god of informatics, Hermes. Send Peitho, the goddess of persuasion, to illuminate the best solution in step 3 as the decision of the gods of Olympus

5. Use lightning and thunderbolts to implement the Olympian decisions in step 4 to achieve the desired goals identified in steps 1 and 3.

6. Implement all decisions, supervised by Hermes, the god of informatics, who announces to the Olympian gods the results of the action taken in step 5.

In contemporary times…

• Monitoring and evaluation (M&E) is an essential part of any program, project or policy and is used in all kinds of contexts.

• Development partners increasingly expect their partners to have sound M and E systems.

• Monitoring and evaluation can tell us:

• Whether a program, policy or project is making a difference, and for whom;

• It can identify program areas that are on target, or aspects of a program that need to be adjusted or replaced;

• Information gained from M&E can lead to better decisions about program investments;

• It can demonstrate to program implementers and funders that their investments are paying off;

• It is a tool for ensuring accountability to other stakeholders.

Monitoring and evaluation can:

• Help identify problems and their causes;

• Suggest possible solutions to problems;

• Raise questions about assumptions and strategy;

• Push you to reflect on where you are going and how you are getting there;

• Provide you with information and insight;

• Encourage you to act on the information and insight;

• Increase the likelihood that you will make a positive development difference.

We conduct evaluations…

• To help people make better decisions and achieve better outcomes

• To provide better services (public and private)

By:

• Comparing policy options objectively and rigorously

• Calculating empirically the size of likely impacts

• Calculating empirically the diversity/variance of impacts

• Getting more precise estimates of risk and bias

• Establishing a cumulative evidence base for decision making

An evaluation answers questions such as…

1. Does it work?

2. How well does it work?

3. Does it do what we want it to?

4. Does it work for the reasons we think it does?

5. Is it cost-effective?

6. Are the benefits worth it?

7. What are the unintended consequences?

Challenges facing developing countries in M and E

Masuku, Ngengeezi and Ijeoma (2015: 5) report the following:

1. Designing M and E

2. Context Challenges

3. Cooperation and coordination

4. Institutional challenges


5. Lack of stakeholder involvement

6. Compliance

7. Linking planning, budget, priorities and M and E

8. Lack of integration with other strategic approaches

Other Challenges include:-

1. Capacity challenges

2. Poor coordination

3. Lack of legislative structures

4. Locus and focus problems

5. Elite capture vs stakeholder involvement

6. Absence of theory of change

7. Lack of evidence and truth

Types of Evaluation

• Evaluation Type by Level

1) Project-level evaluation

2) Program-level evaluation

3) Sector program evaluation

4) Thematic evaluation

5) Policy-level evaluation

• Evaluation Types by Stages of the Project Cycle

1) Ex-ante evaluation

2) Mid-term evaluation

3) Terminal evaluation

4) Impact evaluation

5) Ex-post evaluation

Types of evaluation based on the results chain

1. Context evaluation

2. Input evaluation

3. Process evaluation

4. Output evaluation

5. Outcome evaluation

6. Impact evaluation

Professionalization of evaluation

• By 2010, there were more than 65 national and regional evaluation organizations throughout the world, most in developing countries

• Although specialized training programs have existed for several decades, graduate degree programs in evaluation have emerged only recently, for example in:

– Australasia

– Africa

– Canada

– Central America

– Europe (not every country)

– Japan

– Malaysia

– United Kingdom

Professional Standards

• Utility

• Feasibility

• Propriety

• Accuracy

• Evaluation Accountability

DISCIPLINES FROM WHICH EVALUATION BORROWS

• ‘Social Research Methods’

• Sociology

• Economics

• Statistics

• Development studies

• Public Administration

• Social Anthropology

• Education

• Project Management

• Management

• Engineering

• Policy Analysis

• History

Structure of an evaluation – the Commissioner’s perspective

1. Concept paper

2. RFQ/Proposal

3. Evaluation of EOI/Proposal

4. Contract negotiation

5. Providing contacts and support

6. Quality control

7. Providing information

8. Approval of the report

9. Discussion of results

10. Discussion of consequences

11. Managing implementation of recommendations

Structure of an evaluation – the Evaluator’s perspective

1. EOI/Proposal

2. Contract Negotiation

3. Planning workshop meeting

4. Clarifying organizational questions

5. Inception report

6. Data collection, analysis and interpretation

7. Continuous coordination and exchange of information among parties

8. Draft evaluation report

9. Final evaluation report

10. Closing workshop

11. Follow-up

Policy Maker’s Perspective?

1. Emerging questions

2. Directives on how to address the questions

3. Participation as respondents

4. Participation in workshops discussing results

5. Utilization of evaluation results

6. Change in policy as a result of the evaluation

Tasks

1. As board members, what comes to mind when you hear or read about monitoring and evaluation?

2. What came to your mind when you were told you were going to undertake training in M and E?

Principles of Evaluation

In every evaluation, certain questions clearly have to be addressed:

1. What is to be evaluated? (the evaluand)

2. Why should the evaluation be conducted? (purpose)

3. What criteria should be applied?

4. Who should conduct the evaluation?

5. When should the evaluation be conducted?

6. How should the evaluation be conducted?

1. What do we evaluate?

• The things to be evaluated (evaluands) nowadays range widely and include:

• Laws

• Products

• Services

• Organizations

• People

• Processes

• Social states of affairs of any kind (Stockmann & Meyer 2013: 67)

Different Kinds of Interventions to be Evaluated

• On the global political level:

1. Global Goals (the Millennium Development Goals)

2. International Conventions (the Geneva Conventions)

3. OECD Standards?

Millennium Development Goals

1) Eradicate extreme poverty and hunger

2) Achieve universal primary education

3) Promote gender equality and empower women

4) Reduce child mortality

5) Improve maternal health

6) Combat HIV/AIDS, malaria, and other diseases

7) Ensure environmental sustainability

8) Develop a global partnership for development

• On the African political level

1. African political federation

2. African Union peace and military initiatives

3. African participation in ICC

4. African Governance Mechanisms (APRM)

5. New Partnership for Africa's Development (NEPAD)

• On the regional political level (East African Community level)

1. East African political federation

2. East African Customs union

3. Northern Corridor interventions

4. Oil pipeline interventions

5. Standard-gauge railway interventions

• On the country political level

1. National Development Plans

2. Vision 2040

3. The NRM Manifesto (2011-2016)

4. Strategies (NAADS interventions, basket funding strategies)

• The Programme Level

• Things become a bit more concrete when we move on to the programme level.

• Interventions take place and are followed up by Monitoring & Evaluation.

• The idea here is to assess how programmes work.

• The following are the most common examples of evaluation objects:

• Programmes

• Projects

• Single Measures/Activities/Outputs

• Competences/Resources/Inputs

• The System Level (the board plays a leading role)

• At the system level, things again become more abstract and less easy to handle.

• Typical objects are:

• Structures/Systems

• Networks

• Organisations

• Institutions

• Rules/Norms/Curricula/Agreements

• The Performance Level

• The question shifts to the way a policy/programme/system/intervention etc. evolves. Nowadays there is a much greater focus on the performance aspect of programmes (and also on the systemic view) than in former times. The main objects of performance evaluations are:

1. Results/Impacts

2. Performances/Processes

3. Management/Governance

4. Operations/Actions

• The Individual Level

• The assessment focuses on either group processes or individual behaviour and the attitudes behind them.

• So the main objects are:

1) Interaction/Group Behaviour

2) Communication/Exchange

3) Individual Behaviour

4) Cognitive Processes/Attitudes

2. Why evaluation?

1. Providing gainful insights

2. Exercising control

3. Initiation of development and learning processes

4. Legitimization of measures, projects or programmes implemented

5. Accountability roles

6. Observing implementation processes

7. Assessing the feasibility of a programme (programme development phase – formative)

8. Supporting managers in management (during implementation phases)

3. What assessment criteria?

• The Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) has developed criteria towards which many national organizations orient themselves:

A. Relevance

B. Effectiveness

C. Efficiency

D. Impact

E. Sustainability

Note

• If there are standards like those of the DAC, they are stated directly by the client

• Sometimes it is left to the evaluator to determine the criteria, as he or she is considered to be an expert who ought to know which criteria best fit what is to be evaluated; this is knowledge- or experience-based

• It is rare for the criteria to be set by the target group

4. Who should conduct the evaluation?

• Evaluation is best thought of as a team effort.

• Although one person heads an evaluation team and has primary responsibility for the project, this individual will need assistance from others on the staff.

• An evaluation team will work together on the following tasks:

1. Determining the focus and design of the evaluation.

2. Developing the evaluation plan, performance indicators, and data collection instruments.

3. Collecting, analyzing, and interpreting data.

4. Preparing the report on evaluation findings.

Options for who should conduct the evaluation

1) Hiring an outside evaluator (option 1).

2) Using an in-house evaluation team supported by an outside consultant and program staff (option 2).

3) Using an in-house evaluation team supported by program staff (option 3).

Note:

• Evaluators are diverse. They might be:

• Economists concerned with efficiency and costs;

• Management consultants interested in the smooth running of the organization;

• Policy analysts with a commitment to public sector reforms and transparency;

• Scientists concerned to establish truth, generate new knowledge and confirm or disconfirm hypotheses.

5. When should the evaluation be conducted?

1. Before the intervention

2. During the implementation

3. Midway through the implementation process

4. After the implementation

6. How will the evaluation be conducted?

1. Scientific-oriented approaches

2. Management-oriented approaches

3. Participant-oriented approaches

4. Qualitative-oriented approaches

Session 2: Building Monitoring and Evaluation systems

Board establishes M and E System

• One may define an M&E system as a collection of people, procedures, data and technology that interact to provide timely information for authorized decision-makers

• M and E systems are used to monitor and evaluate a project, program or organization, to see whether it is on track to achieve its overall outcomes

If you have a project, for example…


A good M&E system will:

1. Monitor the use of project inputs

2. Monitor the effectiveness of the project implementation process

3. Monitor the production of project outputs

4. Assess project impacts on the target communities

5. Assess the effectiveness of project outputs in producing the intended short-term and long-term impacts

6. Assess the extent to which these impacts can be attributed to the effects of the project
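To make the monitoring items above concrete, here is a minimal illustrative sketch in Python; the indicator names, targets, actuals and the 90% threshold are all invented for illustration and are not aBi Trust data or a prescribed tool.

```python
# Hypothetical sketch of routine project monitoring: each indicator
# compares the value observed during monitoring against its target.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # what is measured, e.g. an input or output
    target: float  # planned value for the reporting period
    actual: float  # value observed during monitoring

    def on_track(self, threshold: float = 0.9) -> bool:
        """Treat an indicator as 'on track' if actuals reach at least
        `threshold` (here 90%) of the target -- an invented rule."""
        return self.actual >= threshold * self.target

# Invented indicators for an imaginary training project
indicators = [
    Indicator("Funds disbursed (UGX m)", target=500, actual=470),  # input
    Indicator("Farmers trained", target=1200, actual=950),         # output
]

for ind in indicators:
    status = "on track" if ind.on_track() else "needs attention"
    print(f"{ind.name}: {ind.actual}/{ind.target} -> {status}")
```

A board would not run such code itself; the point is that routine monitoring reduces to comparing observed values against planned ones, indicator by indicator.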

Why build an M and E system?

1) Supports planning activities at the sectoral and program level

2) Provides information for a more efficient allocation of public funds

3) Facilitates program management

4) Helps redesign and improve programs

5) Promotes transparency and accountability

6) Enriches policy discussion by incorporating rigorous evidence

M and E systems have 12 key features

1. Organizational Structures with M&E Functions

An M&E unit whose main purpose is to coordinate all the M&E functions. Some organizations prefer to outsource such services. The M&E unit should have its roles defined, those roles should be supported by the organization's hierarchy, and other units within the organization should be aligned to support the M&E functions.

2. Human Capacity for M&E

Effective M&E implementation requires not only adequate staff; they must also have the necessary M&E technical know-how and experience. It is necessary to have human resources that can run the M&E function by hiring employees who have adequate knowledge and experience in M&E implementation. Ensure that the M&E capacity of these employees is continuously developed through training and other capacity-building initiatives, so that they keep up with current and emerging trends in the field.


3. Partnerships for Planning, Coordinating and Managing the M&E System

A prerequisite for successful M&E systems, whether at organizational or national levels, is the existence of M&E partnerships. Partnerships for M&E systems complement the organization's M&E efforts in the M&E process, and they act as a source of verification of whether M&E functions align with intended objectives. Partnerships also serve auditing purposes, where line ministries, technical working groups, communities and other stakeholders are able to compare M&E outputs with reported outputs.


4. M&E Frameworks/Logical Framework

The M&E framework outlines the objectives, inputs, outputs and outcomes of the intended project, and the indicators that will be used to measure all of these. It also outlines the assumptions that the M&E system will adopt. The M&E framework is essential as it links the objectives with the process and enables the M&E expert to know what to measure and how to measure it.
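As an illustration of how such a framework links the results chain to measurable indicators, here is a hedged sketch in Python; the objective, assumptions and indicators are invented examples, not a required logframe format.

```python
# Hypothetical sketch: a logical framework as a nested structure.
# Each level of the results chain carries its own indicators, so the
# M&E expert knows what to measure and how.
logframe = {
    "objective": "Increased smallholder incomes",  # invented example
    "assumptions": ["Stable market prices", "Normal rainfall"],
    "results_chain": {
        "inputs":   {"description": "Funds and trainers",
                     "indicators": ["Budget disbursed on time"]},
        "outputs":  {"description": "Farmers trained",
                     "indicators": ["Number of farmers trained"]},
        "outcomes": {"description": "Improved farming practices",
                     "indicators": ["% of trainees adopting practices"]},
    },
}

# The framework ties each level of the chain to something measurable:
for level, entry in logframe["results_chain"].items():
    print(f"{level}: {entry['description']} -> measure {entry['indicators']}")
```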

5. M&E Work Plan and Costs

Closely related to the M&E framework is the M&E work plan and costs. While the framework outlines the objectives, inputs, outputs and outcomes of the intended project, the work plan outlines how the resources that have been allocated for the M&E functions will be used to achieve the goals of M&E. The work plan shows how personnel, time, materials and money will be used to achieve the set M&E functions.


6. Communication, Advocacy and Culture for M&E

This refers to the presence of policies and strategies within the organization to promote M&E functions. Without continuous communication and advocacy initiatives within the organization to promote M&E, it is difficult to entrench the M&E culture within the organization. Such communication and strategies need to be supported by the organization's hierarchy. The existence of an organizational M&E policy, together with the continuous use of M&E system outputs on communication channels, are some of the ways of improving communication, advocacy and culture for M&E.


7. Routine Programme Monitoring

M&E consists of two major aspects: monitoring and evaluation. This component emphasizes the importance of monitoring. Monitoring refers to the continuous and routine data collection that takes place during project implementation. Data need to be collected and reported on a continuous basis to show whether the project activities are driving towards meeting the set objectives. They also need to be integrated into the program activities for routine gathering and analysis.


8. Surveys and Surveillance

This mainly involves national-level M&E plans and entails how frequently relevant national surveys are conducted in the country. National surveys and surveillance need to be conducted frequently and used to evaluate the progress of related projects. For example, for HIV and AIDS national M&E plans, there need to be HIV-related surveys carried out at least biannually and used to measure HIV indicators at the national level.

9. National and Sub-national Databases

The data world is gradually becoming open source, and more and more entities are seeking data that are relevant for their purposes. The need for M&E systems to make data available can therefore not be over-emphasized. This implies that M&E systems need to develop strategies for submitting relevant, reliable and valid data to national and sub-national databases.

10. Supportive Supervision and Data Auditing

Every M&E system needs a plan for supervision and data auditing. Supportive supervision implies that an individual or organization regularly supervises the M&E processes in such a way that the supervisor offers suggestions on ways of improvement. Data auditing implies that the data are subjected to verification to ensure their reliability and validity. Supportive supervision is important since it ensures the M&E process is run efficiently, while data auditing is crucial since all project decisions are based on the data collected.

11. Evaluation and Research

One aspect of M&E is research and the other is evaluation. Evaluation of projects is done at specific times, most often mid-term and at the end of the project. Evaluation is an important component of M&E as it establishes whether the project has met the desired objectives. It usually provides for organizational learning and the sharing of successes with other stakeholders.


12. Data Dissemination and Use

The information that is gathered during the project implementation phase needs to be used to inform future activities, either to reinforce the implemented strategy or to change it. Additionally, the results of both monitoring and evaluation outputs need to be shared with relevant stakeholders for accountability purposes. Organizations must therefore ensure that there is an information dissemination plan, either in the M&E plan, the work plan or both.

Session three: The Evaluation Process and Role of the Board

The evaluation manager’s role, in consultation with the steering committee:

• Clarify policy/programme objectives and intended outcomes

• Clarify the intended evaluation purpose, users and uses

• Develop relevant evaluation questions

• Select the evaluation approach and methods

• Identify data sources and collection and analysis procedures

• Identify the necessary resources and governance arrangements

• Prepare the TOR; commission (and possibly tender) the evaluation

The evaluator’s role:

• Conduct and manage the evaluation (our main focus in this session)

The evaluation process – an overview

START: Terms of Reference

1. Evaluation Assessment (2-3 months)

2. Contracting (3-4 months)

3. Field Work/Analysis (6-8 months)

4. Report & Recommendations (2-4 months)

5. Management Responses (1 month)

6. Executive Approval; Internet Posting (2-3 months)

7. Implementing Change/Follow-Up

Large evaluations typically take 12-18 months to complete. Some phases may overlap.
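A quick arithmetic check on the phase durations above (a sketch; durations taken as stated in the overview) shows why the note about overlap matters: run strictly one after another, the phases would take 16-23 months, longer than the quoted 12-18 months.

```python
# Phase durations in months (low, high), as stated in the overview above.
phases = {
    "Evaluation Assessment": (2, 3),
    "Contracting": (3, 4),
    "Field Work/Analysis": (6, 8),
    "Report & Recommendations": (2, 4),
    "Management Responses": (1, 1),
    "Executive Approval; Internet Posting": (2, 3),
}

low = sum(lo for lo, hi in phases.values())
high = sum(hi for lo, hi in phases.values())
print(f"Strictly sequential total: {low}-{high} months")  # 16-23 months

# The quoted 12-18 months is shorter than this sum, which is consistent
# with the note that some phases overlap rather than run in sequence.
```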

Cycle of conducting and managing evaluations

Standards (at the centre of the cycle): Utility, Feasibility, Propriety, Accuracy

Steps of the evaluation process:

1. Engage stakeholders

2. Describe the program

3. Focus the evaluation design

4. Gather credible evidence

5. Justify conclusions

6. Use and share lessons learned

Step 1 - Engage Stakeholders

Who are the stakeholders? Those involved in program operations, those affected by the program operations, and the users of evaluation results.

Step 2 - Describe the Program

What are the goals and specific aims of the program? What problem or need is it designed to address? What are the measurable objectives? What are the strategies to achieve the objectives? What are the expected effects? What are the resources and activities? How is the program supposed to work?

Step 3 - Focus the Evaluation Design

What do you want to know? Consider the purpose, uses, questions, methods, roles, budgets, deliverables, etc. An evaluation cannot answer all questions for all stakeholders.

Step 4 - Gather Credible Evidence

Data collected must address the evaluation questions. Evidence must be believable, trustworthy and relevant. Consider the information scope, sources, quality, logistics, methodology and data collection. Who is studied, and when?

Step 5 - “Justify” Conclusions

Consider the data you have:

• Analysis and synthesis – determine findings.

• Interpretation – what do the findings mean?

• Judgments – what is the value of the findings based on accepted standards?

• Recommendations – what claims can be made? What are the limitations of your design?

Step 6 - Use and Share Results

Share lessons learned with stakeholders! Provide feedback, offer briefings, disseminate findings. Implement evaluation recommendations. Develop a new or revised implementation plan in partnership with stakeholders.

Dealing with lack of baseline data

Several options (not mutually exclusive):

• Reconstruct baseline data ex post: the recall method (more later)

• Use key informants and triangulate (mostly qualitative)

• Reconstruct a baseline “scenario” with secondary data (not always practical, given the absence and quality of baseline studies)

• Single difference with econometric techniques: some practical obstacles (workload, time constraints, availability of trained specialists)
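To illustrate the recall and single-difference options listed above, here is a hedged sketch in Python with entirely invented numbers; a real evaluation would also have to address recall bias, sampling error and attribution, as the practical obstacles just noted imply.

```python
# Hypothetical sketch: reconstruct a baseline ex post from respondents'
# recall, then compute a single (before-after) difference. The figures
# below are invented and carry no claim about any real project.
recalled_baseline = [120, 150, 100, 130]  # e.g. recalled yields before
endline = [160, 170, 140, 150]            # yields measured at evaluation

def mean(values):
    return sum(values) / len(values)

single_difference = mean(endline) - mean(recalled_baseline)
print(f"Single-difference estimate: {single_difference:.1f}")  # 30.0
```

Note that a single difference attributes the whole before-after change to the intervention; that is exactly the attribution problem the listed obstacles warn about.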

Best practices in managing evaluations

• Identify the implementation logic and theory of change

• Allow for the inception report phase

• Deal with missing baseline data and other gaps

• Gather data

• Examine the effort using various criteria

• Draw conclusions and recommendations

• Conduct reporting

• Ensure quality

• Obtain feedback on the evaluation

• Management response

• Disseminate findings

• Feedback and lessons learnt

“If I had known too much information would make it complicated, I wouldn’t have asked for it!”

Evaluation budget

• Value for money is a key concern

• Underfunding is as wasteful as over-funding

• Balance between cost and quality

• Quality is ultimately more important

• But so is relevance for purpose

• Make sure all aspects are adequately funded, including consultation with stakeholders, reporting and dissemination

• Ensure the evaluation design is appropriate to the budget as well as to the aims of the programme

Conclusions

1) Monitoring and evaluation is now a condition for collaboration with development partners and government

2) The board’s oversight role is performed well only if there are functioning M and E systems

3) International standards need to be integrated into the local M and E systems

4) M and E units need to be adequately staffed and their capacity enhanced

5) M and E budget needs to be supported by the board

Recommended