
Page 1: Evaluation introduction

What is ‘Evaluation’?

Nathan Loynes

Page 2: Evaluation introduction

In this presentation:

1. Definitions and disagreements about evaluation.
2. Logic Models.
3. Outcomes, Indicators and Targets.
4. Measuring Outcomes.
5. Mark Friedman: Outcomes Based Accountability.

Page 3: Evaluation introduction


Working Definition of Programme Evaluation

The practice of evaluation involves thoughtful, systematic collection and analysis of information about the activities, characteristics, and outcomes of programmes, for use by specific people, to reduce uncertainties, improve effectiveness, and make decisions.

Page 4: Evaluation introduction

Scott & Morrison (2005)

Evaluation Focuses on:

• Value & Worth

• Education or Social Programmes

• Activities, Characteristics and Outcomes

• Policy Implications (what should happen next?)

Page 5: Evaluation introduction

Pawson & Tilley, 1997 (in Scott & Morrison)

• Realistic Evaluation:

1. Takes into account the 'institutional' nature of programmes.
2. Should be scientific.
3. Should not be self-serving.

Page 6: Evaluation introduction

Chen (1996) (in Scott and Morrison, 2005)

4 Types of Evaluation:

1. Process-Improvement

2. Process-Assessment

3. Outcome-Improvement

4. Outcome-Assessment

Page 7: Evaluation introduction

(This slide repeats the working definition of programme evaluation from Page 3.)

Page 8: Evaluation introduction


Evaluation Strategy Clarification

All evaluations are:

• Partly social
• Partly political
• Partly technical

Both qualitative and quantitative data can be collected and used, and both are valuable.

There are multiple ways to address most evaluation needs.

Different evaluation needs call for different designs, types of data and data collection strategies.

Page 9: Evaluation introduction


Purposes of Evaluation

Evaluations are conducted to:

• Render judgement
• Facilitate improvements
• Generate knowledge

The purpose of an evaluation must be specified at the earliest stages of evaluation planning, with input from multiple stakeholders.

Page 10: Evaluation introduction

What are Logic Models?

Page 11: Evaluation introduction


To Construct a Logic Model You Must Describe:

Inputs: resources, money, staff/time, facilities, etc.

Outputs: how a programme uses inputs to fulfill its mission – the specific strategies and service delivery.

Outcomes: changes to individuals or populations during or after participation.

Inputs → Outputs → Outcomes

Page 12: Evaluation introduction


Here is an illustration that will help you create your own Logic Model.

Inputs
Resources dedicated to or consumed by the programme, e.g.:
• money
• staff and staff time
• volunteers and volunteer time
• facilities
• equipment and supplies

Outputs
What the programme does with the inputs to fulfill its mission, e.g.:
• provide x number of classes to x participants
• provide weekly counseling sessions
• educate the public about signs of child abuse by distributing educational materials to all agencies that serve families
• identify 20 mentors to work with youth and opportunities for them to meet monthly for one year

Outcomes
Benefits for participants during and after programme activities, e.g.:
• new knowledge
• increased skills
• changed attitudes
• modified behavior
• improved condition
• altered status

Contextual Analysis
Identify the major conditions and reasons why you are doing the work in your community.
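Although the slides present the Logic Model as a diagram, the same structure can be written down directly. Below is a minimal, illustrative Python sketch (not part of the original presentation); the mentoring programme, its context description and the field values are hypothetical, loosely drawn from the examples above.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal sketch of the Inputs -> Outputs -> Outcomes structure."""
    context: str                                        # why the work is being done in this community
    inputs: list[str] = field(default_factory=list)     # resources dedicated to or consumed by the programme
    outputs: list[str] = field(default_factory=list)    # what the programme does with the inputs
    outcomes: list[str] = field(default_factory=list)   # benefits for participants during and after activities

# Hypothetical youth mentoring programme, using examples from the slide above.
mentoring = LogicModel(
    context="Local young people lack adult role models and structured activities",
    inputs=["money", "staff time", "volunteer mentors", "facilities"],
    outputs=["identify 20 mentors to work with youth", "monthly mentor meetings for one year"],
    outcomes=["new knowledge", "increased skills", "changed attitudes"],
)
```

Writing the model out this way also makes the next distinction easier to attach: each entry in outcomes can later be paired with indicators and a target.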

Page 13: Evaluation introduction


Outcomes, Indicators, Targets

Page 14: Evaluation introduction


What is the difference between outcomes, indicators, and targets?

Outcomes are changes in behavior, skills, knowledge, attitudes, condition or status.

Outcomes are related to the core business of the programme, are realistic and attainable, within the programme's sphere of influence, and appropriate.

Outcomes are what a programme is held accountable for.

Page 15: Evaluation introduction


What is the difference between outcomes, indicators, and targets?

Indicators are specific characteristics or changes that represent achievement of an outcome.

Indicators are directly related to the outcome and help define it.

Indicators are measurable, observable, can be seen, heard or read, and make sense in relation to the outcome whose achievement they signal.

Page 16: Evaluation introduction


What is the difference between outcomes, indicators, and targets?

Targets specify the amount or level of outcome attainment that is expected, hoped for or required.
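To make the distinction concrete, here is a small illustrative Python sketch (not from the original slides); the parenting-skills programme, the indicator wording and the 75% target are assumed examples.

```python
from dataclasses import dataclass

@dataclass
class OutcomeSpec:
    outcome: str    # the change the programme is held accountable for
    indicator: str  # the observable, measurable sign that the change has occurred
    target: float   # the level of attainment expected, hoped for or required

# Hypothetical example for a parenting-skills programme.
spec = OutcomeSpec(
    outcome="Participants demonstrate improved parenting skills",
    indicator="Share of participants scoring above the threshold on a post-course observation checklist",
    target=0.75,  # at least 75% of participants expected to reach the threshold
)
```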

Page 17: Evaluation introduction


Why measure outcomes?

To see if your programme is really making a difference in the lives of your clients

To confirm that your programme is on the right track

To be able to communicate to others what you’re doing and how it’s making a difference

To get information that will help you improve your programme

Page 18: Evaluation introduction


Use Caution When Identifying Outcomes

There is no 'right' number of outcomes.

Be sure to think about when to expect outcomes:

1) Initial outcomes: the first benefits/changes participants experience.
2) Intermediate outcomes: link initial outcomes to longer-term outcomes.
3) Longer-term outcomes: the ultimate outcomes desired for programme participants.
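As a rough illustration of sequencing outcomes by when they are expected to appear, here is a short hypothetical Python sketch (not from the slides), continuing the youth mentoring example:

```python
# Hypothetical outcome chain for a youth mentoring programme,
# ordered by when each change is expected to appear.
outcome_chain = [
    ("initial", "Participants build rapport with their mentors and attend sessions regularly"),
    ("intermediate", "Participants report increased confidence and improved school attendance"),
    ("longer-term", "Participants remain in education, training or employment a year after the programme"),
]

for stage, outcome in outcome_chain:
    print(f"{stage:>12}: {outcome}")
```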

Page 19: Evaluation introduction


How do you identify indicators?

Indicators are specific characteristics or changes that represent achievement of an outcome.

Indicators are directly related to the outcome and help define it.

Indicators are measurable, observable, can be seen, heard or read, and make sense in relation to the outcome whose achievement they signal.

Ask the questions shown on the following slide.

Page 20: Evaluation introduction


Questions to Ask When Identifying Indicators

1. What does this outcome look like when it occurs?

2. What would tell us it has happened?

3. What could we count, measure or weigh?

4. Can you observe it?

5. Does it tell you whether the outcome has been achieved?

Page 21: Evaluation introduction


The BIG question is: what evidence do we need to see to be convinced that things are changing or improving?

The "I'll know it (outcome) when I see it (indicator)" rule in action -- some examples:

I'll know that retention has increased among home health aides involved in a career ladder program...

...when I see a reduction in the employee turnover rate among aides involved in the program,

...and when I see survey results that indicate that aides are experiencing increased job satisfaction.
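The turnover-rate indicator above is easy to check against a baseline and a target. The following Python sketch is illustrative only; the headcounts, leaver counts and the five-percentage-point threshold are assumed figures, not data from the presentation.

```python
def turnover_rate(leavers: int, average_headcount: float) -> float:
    """Annual employee turnover rate: leavers as a share of average headcount."""
    return leavers / average_headcount

# Hypothetical figures for the home health aide example.
baseline = turnover_rate(leavers=18, average_headcount=60)  # year before the career ladder programme
current = turnover_rate(leavers=11, average_headcount=62)   # first year of the programme
target_reduction = 0.05  # aim: turnover at least 5 percentage points below the baseline

target_met = current <= baseline - target_reduction
print(f"baseline={baseline:.1%}, current={current:.1%}, target met: {target_met}")
```

This mirrors the point on the next slide about the need for accurate data and baselines: without the baseline figure, the current rate alone says little about whether retention has improved.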

Page 22: Evaluation introduction

Mark Friedman (2005)

• Outcomes Based Accountability.

• Frustrated by social programmes that are 'all talk; no action'.

• Need for a 'common language'.

• Need for accurate data.

• Need for baselines.

• Differentiate between inputs, outputs and outcomes.

Page 23: Evaluation introduction

Summary

• Evaluation is a systematic process.

• Evaluation considers inputs, outputs, and outcomes.

• Evaluation involves making qualitative and quantitative judgements.

• Effective evaluation requires that you are clear about what it is that you are measuring/judging.
