Puzzling about what Evaluation is and might be
Thomas D. Cook, Stockholm, October 2015

How I see the Current State of Evaluation Theory and Practice

• Apologies in advance: I live in the USA and see the world of evaluation largely through US spectacles, so I am bound to be provincial.

• Apologies also that this is the most personal talk I have given in probably 30 years: no data, lots of opinions, and only a little theory.

• I am motivated by what I think is a mismatch between what evaluation theory and practice are and what they might be

Consider some professional associations

• AEA (American Evaluation Association): the dominant orientation there is evaluation as management consulting to government rather than to the private sector.

• The emphasis is on being responsive to public managers’ knowledge needs and on generating information they are likely to use.

• The attendees come from all over the world and from many disciplines, but few are micro-economists or researchers in prevention, criminal justice, or public health.

AEA

• Most of the research is qualitative or descriptive quantitative (survey or administrative data) or both (mixed method).

• Little is experimental or high-quality quasi-experimental.

• Some in the leadership are making a concerted attempt to recruit differently and to introduce a broader range of methods for studying “effectiveness”, but not everyone is in favor.

Contrast AEA with APPAM (Association for Public Policy Analysis and Management)

• APPAM deals largely with evaluations of the outcomes of policies, programs or program elements to seek the “truth” about their impact

• There is some, but relatively minor, emphasis on implementation and other forms of process. Why seek to improve these in a program that might not be effective? There is little mixed-methods work.

• Most attendees are micro-economists who deal with accountability for results, plus some political scientists who mostly handle implementation.

APPAM

• They seek evaluation money from government officials but try to stay “independent”

• They also get much research funding from private foundations in order to stay independent

• They lobby for independent evaluation arms within government

• They heavily prefer random-assignment methods but will countenance only a few others – regression discontinuity (RD) and perhaps comparative interrupted time series (CITS). They tend to reject other methods that claim to be “causal”: modeling, longitudinal survey analysis, simple difference-in-differences designs, OLS analyses, and even propensity score matching (PSM)

Bottom Line

• AEA seeks to improve program management and APPAM seeks to improve accountability for outcomes.

• The preferred product for AEA is advice for managers, while for APPAM it is a “cost-benefit” assessment of what a program has achieved.

Two Evaluation Societies, two Worlds

• AEA Annual Convention 2015, Nov 9 to 14, Chicago, Illinois

• APPAM Annual Convention 2015, Nov 12 to 14, Miami, Florida

• The two worlds will not interact this year, and there is no presumption that anyone will attend both.

Same Duality captured within Sectors – e.g., Education

• AERA (American Educational Research Association) is like AEA – much on political realities, outcome monitoring, and implementation; little on effects established with high-quality methods

• SREE (Society for Research on Educational Effectiveness) was deliberately created as a counter to AERA and is paid for by government; its conferences are heavily about accountability by results, with heavy emphasis on RCTs

Activities called “Evaluation of Social Policies or Programs” also go on elsewhere

• The Society for Prevention Research looks much more like APPAM than AEA

• The Society for Social Work and Research looks more like AEA than APPAM

• The Society for Research in Child Development is more like APPAM in its evaluation priorities

• All say they do evaluation, yet they are separate worlds and do not meet at AEA

So what?

• These organizations all claim to do “evaluation” but mean radically different things by it – one speaks to program improvement and the other to program accountability for outcomes

• One is more responsive to managers’ needs and the other to politicians’ needs

• Separated, they are less socially powerful and less theoretically rigorous than they would be combined

• Evaluation will never be the academic sub-discipline many want without a theory of practice that links its disparate practices and priorities

Theory of Programming

• What do we evaluate? Policies, programs, projects and elements, in place or potential

• Policies already in place – monitoring of indicators is immediately useful; accountability by immediate results is rarely useful; accountability by long-term effects is useful

• Possible policies – demonstration experiments are very useful

Theory of Programming

• Programs already in place – monitoring implementation quality is very useful; establishing effects is quite useful.

• Possible Programs – useful to establish their effects

• Elements within Programs. Useful to establish their effects – this is where Nudge Units fit in

Theory of Programming

• Current Projects – no need for outcome evaluation; good for getting insights into program operations.

Theory of Valuing

• Evaluation is about saying how good something is – assessing the value of its consequences. Thus:

• To monitor is not to assign goodness – it is pre-evaluative

• To improve program operations assigns value to process but not to outcomes – one risks improving the ineffective

• To describe policy or program results is evaluation, but only to the extent that the outcome is known to do good things

Theory of Valuing

• To describe cost-benefit is about monetary value

• To describe which social values or interests are promoted or hurt by results is about social value

• The key difference here is that many of the AEA-like activities appear to be pre-evaluative or non-evaluative. They are justified by utility considerations, not formally evaluative ones

Theory of Method Choice

• If evaluation requires judgments about how good some causal result is, then methods for establishing cause are central to evaluation

• Hence the debates about experiments and some quasi-experiments versus longitudinal correlational methods, qualitative methods, and mixed methods

• The less quantitative methodologists feel diminished by the advocates of “rigorous” methods; they see their goals and methods rejected – and they object

Theory of Method Choice

• Much depends on:

• I. Your preferred theory of cause.

• If you believe the average effects of policies and programs are important, then do experiments – a blunt stick

• If you believe programs have highly variable effects that must be known, then RCTs are not the only choice for you. But you must be prepared to give up some clarity of causal claim

Theory of Method Choice

• Or II. You must argue that evaluation is not about placing value but about immediate utility for program improvement – e.g., that the social return from improving programs when some of them are ineffective is greater than the social return from accurate feedback about effects, which typically arrives many years later.

Conclusions

• I have described two worlds claiming to do evaluation that do not speak to each other

• They have different assumptions about the purposes of evaluation and its stakeholders, and thus about the methods they think evaluators need.

• They can exist independently and each can flourish

Conclusions

This means evaluation as a field is less than the sum of its parts.

• Its discussions are about methods and not about the sub-theories of evaluation from which a broad theory and practice of evaluation might be constructed.

• Is this as much of a problem as I see it?

• Do we just “grin and bear it”?

• What can be done about it?