Evaluation in Africa RISING. Pascale Schnitzer and Carlo Azzarri, IFPRI. Africa RISING–CSISA Joint Monitoring and Evaluation Meeting, Addis Ababa, Ethiopia, 11-13 November 2013


Page 1: Evaluation in Africa RISING

Evaluation in Africa RISING

Pascale Schnitzer and Carlo Azzarri, IFPRI

Africa RISING–CSISA Joint Monitoring and Evaluation Meeting, Addis Ababa, Ethiopia, 11-13 November 2013

Page 2: Evaluation in Africa RISING

Outline

• Quantitative (experimental and quasi-experimental)

• Qualitative

• Mixed methods

Page 3: Evaluation in Africa RISING

Quantitative: Experimental

• RCT

• Choice experiments, auctions, games

Quasi-experimental

• Double-Difference (Diff-in-Diff)

• Matching

• RD

• IV and encouragement design

Page 4: Evaluation in Africa RISING

Example: providing fertilizers to farmers

Intervention: provide fertilizer to farmers in district A

Program targets all farmers living in district A

Farmers have to enroll at the local extension office to receive the fertilizer

District B does not receive any intervention

The program starts in 2012 and ends in 2016; we have data on yields for farmers in district A and district B in both years

What is the effect of giving fertilizer on agricultural yields?

Page 5: Evaluation in Africa RISING

Case I: Before & After

(1) Observe only beneficiaries

(2) Two observations in time: yields at T=0 and yields at T=1.

[Chart: yields (Y) over time, T=2012 to T=2016. Yields rise from B = 2,000 in 2012 to A = 2,200 in 2016, so the before-after estimate is IMPACT = A - B = α = 200?]

Page 6: Evaluation in Africa RISING

Case I: What’s the problem?

Other things may have changed over time:

o Unusually good weather/rain: the real impact is A - C, so A - B is an overestimate

o Drought: the real impact is A - D, so A - B is an underestimate

[Chart: the same before-after picture (B = 2,000 in 2012, A = 2,200 in 2016, α = 200), with two unknown counterfactual end points marked "Impact?": C (good weather) and D (drought).]

Page 7: Evaluation in Africa RISING

Case II: Those Enrolled & Not Enrolled

If we have post-treatment data on:

o Enrolled: treatment group

o Not enrolled: “comparison” group (counterfactual)

(a) Those that choose NOT to participate

(b) Those ineligible to participate (e.g. a neighboring community)

Did AR have a negative impact?

[Bar chart: yields in 2016 by participant type. Chose to participate: 2,200; chose not to participate: 2,500; ineligible to participate: 2,800.]

Page 8: Evaluation in Africa RISING

Case I and II

In the end, with these naïve comparisons, we cannot tell whether the program had an impact.

We need a comparison group that is as similar as possible, in both observable and unobservable dimensions, to those receiving the program, and one that will not receive spillover benefits.

Page 9: Evaluation in Africa RISING

We need to keep in mind…

B&A
Compare: the same individuals Before and After they receive P.
Problem: other things may have happened over time.

E&NE
Compare: a group of individuals Enrolled in a program with a group that chooses not to enroll.
Problem: selection bias. We don’t know why they are not enrolled.

Both counterfactuals may lead to biased estimates of the impact.

Page 10: Evaluation in Africa RISING

Quantitative: Experimental

• RCT

Page 11: Evaluation in Africa RISING

RCT

[Diagram:
1. Population (eligible and ineligible units)
2. Evaluation sample, drawn from the eligible population -> External Validity
3. Randomize treatment: Treatment vs. Comparison groups -> Internal Validity]
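The sampling and randomization steps in the diagram can be sketched in code. This is a minimal illustration with hypothetical village names and an arbitrary seed, not the project's actual assignment procedure:

```python
import random

def randomize(eligible_units, sample_size, seed=42):
    """Draw an evaluation sample from the eligible population, then
    randomly assign half to treatment and half to comparison."""
    rng = random.Random(seed)                         # fixed seed: reproducible assignment
    sample = rng.sample(eligible_units, sample_size)  # step 2: evaluation sample
    rng.shuffle(sample)                               # step 3: random assignment
    half = sample_size // 2
    return sample[:half], sample[half:]               # treatment, comparison

# Hypothetical example: 100 eligible villages, evaluation sample of 20.
villages = [f"village_{i}" for i in range(100)]
treatment, comparison = randomize(villages, 20)
```

Because assignment is random, the comparison group is, in expectation, identical to the treatment group in observables and unobservables, which is exactly the counterfactual the naïve Case I and Case II comparisons lacked.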

Page 12: Evaluation in Africa RISING

Quantitative: Experimental

• RCT

• Choice experiments, auctions, games

Page 13: Evaluation in Africa RISING

Choice experiments, auctions, games

• An experiment is a set of observations generated in a controlled environment to answer a particular question or solve a particular problem.

• Subjects make decisions that are not part of their day-to-day decision making (typically in a game environment), know they are part of an experiment, or both.

• Purposes:
1. Test theories
2. Measure what are considered “unobservables” (e.g. preferences, beliefs)
3. Test the sensitivity of experimental results to different forms of heterogeneity

Page 14: Evaluation in Africa RISING

Choice experiments, auctions, games

• Examples:
- behavioral game theory
- ultimatum games
- dictator games
- trust games
- public good games
- coordination games
- market experiments (auctions)
- risk- and time-preference experiments

Page 15: Evaluation in Africa RISING

Quantitative

Quasi-experimental designs

• Double-Difference (Diff-in-Diff)

Page 16: Evaluation in Africa RISING

Impact = (A-B) - (C-D) = (A-C) - (B-D)

[Chart: probability of adoption over time.
Participants: B = 0.60 before (T=0), A = 0.74 after (T=1).
Non-participants: D = 0.78 before, C = 0.81 after.
Impact = (0.74 - 0.60) - (0.81 - 0.78) = 0.11]
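The double-difference arithmetic on this slide is simple enough to verify directly, using the values read off the chart:

```python
def diff_in_diff(a, b, c, d):
    """Double-difference: change for participants (A - B) minus
    change for non-participants (C - D)."""
    return (a - b) - (c - d)

# Values from the chart: participants go from B = 0.60 to A = 0.74;
# non-participants go from D = 0.78 to C = 0.81.
impact = diff_in_diff(a=0.74, b=0.60, c=0.81, d=0.78)  # 0.14 - 0.03 = 0.11
```

Subtracting the non-participants' change (C - D) nets out the common time trend that made the Case I before-after comparison misleading.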

Page 17: Evaluation in Africa RISING

Impact = (A-B) - (C-D) = (A-C) - (B-D)

[Chart: the same picture, but the two groups are self-selected.
Enrolled: B = 0.60 before, A = 0.74 after.
Not enrolled: D = 0.78 before, C = 0.81 after.
With selection into enrollment, the true Impact < 0.11]

Page 18: Evaluation in Africa RISING

Example from Malawi: Total land used (acres)

                               Treatment group             Counterfactual              Impact
                               (randomized to treatment)   (randomized to comparison)  (Y | P=1) - (Y | P=0)
Baseline (T=0) [MARBES] (Y)    3.04                        2.13                        0.91
Follow-up (T=1) [MARBES] (Y)   ??                          ??                          ??

Page 19: Evaluation in Africa RISING

Quantitative

Quasi-experimental

• Double-Difference (Diff-in-Diff)

• Matching

Page 20: Evaluation in Africa RISING

Propensity-Score Matching (PSM)

Comparison group: non-participants with the same observable characteristics as participants. In practice this is very hard: there may be many important characteristics!

Instead, match on the basis of the “propensity score”:

o Compute everyone’s probability of participating, based on their observable characteristics.

o Choose matches that have the same probability of participation as the treated units.

Page 21: Evaluation in Africa RISING

[Chart: density of the propensity score (0 to 1) for participants and non-participants; matching is restricted to the region of common support, where the two distributions overlap.]
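The matching step can be sketched in a few lines. This assumes propensity scores have already been estimated (e.g. from a logit of participation on observables); all numbers below are hypothetical:

```python
def nearest_neighbor_att(treated, controls):
    """Match each treated unit to the control with the closest propensity
    score (with replacement) and average the outcome differences:
    a simple ATT estimate. Units are (score, outcome) pairs."""
    diffs = []
    for p_t, y_t in treated:
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))  # nearest control
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# Hypothetical (propensity score, yield) pairs:
treated  = [(0.80, 2400), (0.60, 2300), (0.70, 2350)]
controls = [(0.79, 2250), (0.58, 2200), (0.30, 2000)]
att = nearest_neighbor_att(treated, controls)  # average treated-minus-matched-control yield
```

In practice one would also enforce common support, i.e. drop treated units whose scores lie outside the range of the control scores.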

Page 22: Evaluation in Africa RISING

Quantitative

Quasi-experimental

• Double-Difference (Diff-in-Diff)

• Matching

• RD

Page 23: Evaluation in Africa RISING

RD: Effect of a fertilizer program on adoption

Goal: improve fertilizer adoption among small farmers

Method:
o Farms with a score (ha of land) ≤ 2 are small
o Farms with a score (ha of land) > 2 are not small

Intervention: small farmers receive subsidies to purchase fertilizer

Page 24: Evaluation in Africa RISING

Regression Discontinuity: design at baseline

[Chart: the outcome plotted against the eligibility score, with eligible farms (≤ 2 ha) on one side of the cutoff and not-eligible farms on the other; at baseline the relationship is smooth across the cutoff.]

Page 25: Evaluation in Africa RISING

Regression Discontinuity: post intervention

[Chart: after the intervention, the outcome jumps discontinuously at the 2 ha cutoff; the size of the jump is the IMPACT.]
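A minimal sharp-RD sketch: fit a line on each side of the 2 ha cutoff and take the jump in fitted values at the cutoff as the impact. The data below are hypothetical and noise-free so the jump is recovered exactly:

```python
def linfit(xs, ys):
    """Ordinary-least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def rd_impact(scores, outcomes, cutoff=2.0):
    """Sharp RD: fit a separate line on each side of the cutoff and
    return the difference in fitted values at the cutoff."""
    left  = [(x, y) for x, y in zip(scores, outcomes) if x <= cutoff]
    right = [(x, y) for x, y in zip(scores, outcomes) if x > cutoff]
    a_l, b_l = linfit([x for x, _ in left],  [y for _, y in left])
    a_r, b_r = linfit([x for x, _ in right], [y for _, y in right])
    return (a_l * cutoff + b_l) - (a_r * cutoff + b_r)

# Hypothetical data: adoption rises smoothly with farm size (0.1 per ha),
# plus a jump of 0.3 for subsidized farms at or below the 2 ha cutoff.
ha = [x / 10 for x in range(5, 41)]  # farm sizes 0.5 ... 4.0 ha
adoption = [0.1 * x + (0.3 if x <= 2.0 else 0.0) for x in ha]
impact = rd_impact(ha, adoption)     # recovers the jump of 0.3
```

The identifying assumption is that farms just below and just above the cutoff are comparable, so any jump at the cutoff is attributable to the subsidy. In real data one would fit locally, within a bandwidth around the cutoff.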

Page 26: Evaluation in Africa RISING

Quantitative

Quasi-experimental

• Double-Difference (Diff-in-Diff)

• Matching

• RD

• IV and encouragement design
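For the IV/encouragement-design bullet: when only some of those encouraged actually take up the program, the Wald estimator scales the intent-to-treat effect on the outcome by the effect of encouragement on take-up. A minimal sketch with hypothetical numbers:

```python
def wald_estimate(y_encouraged, y_not, takeup_encouraged, takeup_not):
    """IV (Wald) estimator for an encouragement design: the intent-to-treat
    effect on the outcome divided by the effect of encouragement on take-up."""
    itt = y_encouraged - y_not                    # ITT effect on the outcome
    first_stage = takeup_encouraged - takeup_not  # effect on program take-up
    return itt / first_stage

# Hypothetical numbers: encouragement raises take-up from 10% to 60%
# and mean yields from 2,000 to 2,100.
late = wald_estimate(2100, 2000, 0.60, 0.10)  # effect on compliers
```

Randomized encouragement serves as the instrument: it shifts take-up but, by assumption, affects yields only through take-up.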

Page 27: Evaluation in Africa RISING

Babati (WP2): Timeline and design of an evaluation

- Feb 13: initial planting at demonstration plots
- July 13: follow-up field day: farmers rank preferred seeds
- Aug-Oct 13: survey
- Nov 13 - Mar 14: fertilizer and seed distribution to 800 farmers in 11 villages:
  - 200 receive improved seeds
  - 200 receive improved seeds and fertilizer
  - 200 receive seeds, fertilizer and contracts
  - 200 receive no additional intervention
- Mar 2016: end-line survey: measure impacts

Page 28: Evaluation in Africa RISING

Outline

• Qualitative

Page 29: Evaluation in Africa RISING

Qualitative

• Semi-structured or open-ended in-depth interviews
• Focus groups
• Outcome Mapping
• Participatory Impact Pathways Analysis (PIPA)

Page 30: Evaluation in Africa RISING

Outcome Mapping (OM)

• Contribution of AR to changes in the actions, behaviors, relationships, and activities of the ‘boundary partners’ (individuals, groups, and organizations with whom AR interacts directly and with whom it anticipates opportunities for influence)

• It is based largely on systematized self-assessment

• OM proceeds in three stages:
1. Intentional design (Why? Who? What? How?)
2. Outcome and performance monitoring
3. Evaluation design

Page 31: Evaluation in Africa RISING

Outcome Mapping (OM)

By using OM, AR would not claim the achievement of development impacts; rather, the focus is on its contributions to outcomes. These outcomes, in turn, enhance the possibility of development impacts, but the relationship is not necessarily a direct one of cause and effect.

Page 32: Evaluation in Africa RISING

Qualitative

• Participatory Impact Pathways Analysis (PIPA)

Page 33: Evaluation in Africa RISING

Participatory Impact Pathways Analysis (PIPA)

• PIPA begins with a participatory workshop where stakeholders make explicit their assumptions about how their project will achieve an impact. Participants construct problem trees, a visioning exercise, and network maps to help them clarify their ‘impact pathways’ (IPs).

• IPs are then articulated in two logic models:
1. The outcomes logic model -> the project’s medium-term objectives in the form of hypotheses: which actors need to change, what the changes are, and which strategies are needed to attain them.
2. The impact logic model -> how, by helping to achieve the expected outcomes, the project will impact people’s livelihoods. Participants derive outcome targets and milestones, regularly revisited and revised as part of M&E.

Page 34: Evaluation in Africa RISING

Outline

• Mixed methods

Page 35: Evaluation in Africa RISING

Mixed methods

• Combination of quantitative and qualitative research methods to evaluate programs

Page 36: Evaluation in Africa RISING

Conclusions

• We cannot do everything in every megasite…

• Quantitative surveys are being conducted or planned in every country

• IFPRI has a comparative advantage in quantitative approaches; we shall split the tasks with the research teams on qualitative methods -> mixed methods

• Is IFPRI M&E on the right track? What shall we focus on more? What shall we not be doing?

Page 37: Evaluation in Africa RISING

Africa Research in Sustainable Intensification for the Next Generation

africa-rising.net