Michael Quinn Patton N.I.H. Sept 14, 2004 The Debate About Randomized Controls in Evaluation: The Gold Standard Question


  • Slide 1
  • Michael Quinn Patton N.I.H. Sept 14, 2004 The Debate About Randomized Controls in Evaluation: The Gold Standard Question
  • Slide 2
  • Political Context: US Dept of Ed "Scientifically Based Evaluation Methods" policy prioritizing RCTs (Institute of Education Sciences), www.ed.gov/programs/edresearch/applicant.html; World Bank critics (Poverty Action Lab, MIT) hold up RCTs as the gold standard in evaluation and claim only 2% of World Bank programs are properly evaluated (NY Times, 28/4/04, p. A4).
  • Slide 3
  • American Evaluation Association position and debate: AEA dissent from RCTs as the gold standard; counter-statement from RCT advocates within AEA.
  • Slide 4
  • AEA position (Nov 4, 2003): (1) Studies capable of determining causality. Randomized control group trials (RCTs) are not the only studies capable of generating understandings of causality. In medicine, causality has been conclusively shown in some instances without RCTs, for example, in linking smoking to lung cancer and infested rats to bubonic plague. The Secretary's proposal would elevate experimental over quasi-experimental, observational, single-subject, and other designs which are sometimes more feasible and equally valid.
  • Slide 5
  • RCTs are not always best for determining causality and can be misleading. RCTs examine a limited number of isolated factors that are neither limited nor isolated in natural settings. The complex nature of causality and the multitude of actual influences on outcomes render RCTs less capable of discovering causality than designs sensitive to local culture and conditions and open to unanticipated causal factors. (A toy simulation of this point follows below.)
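To make the slide's point concrete, here is a minimal, hypothetical simulation; it is not from Patton's talk, and the site names, effect sizes, and outcome scale are all invented. When a treatment's effect depends on local context, the pooled RCT estimate is a single averaged number that conceals the context-specific variation a locally sensitive design would look for.

```python
import random

rng = random.Random(0)
# Invented context-specific effects: the treatment helps in one setting,
# harms in another, and is weak in a third.
site_effects = {"urban": 2.0, "rural": -1.0, "suburban": 0.5}

rows = []
for _ in range(3000):
    site = rng.choice(list(site_effects))
    treated = rng.random() < 0.5          # random assignment
    outcome = rng.gauss(10, 1) + (site_effects[site] if treated else 0.0)
    rows.append((site, treated, outcome))

def mean_diff(subset):
    """Difference in mean outcomes, treatment arm minus control arm."""
    t = [y for _, tr, y in subset if tr]
    c = [y for _, tr, y in subset if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

print(f"pooled RCT estimate: {mean_diff(rows):+.2f}")  # one averaged number
for site in site_effects:                              # the spread it hides
    print(f"{site}: {mean_diff([r for r in rows if r[0] == site]):+.2f}")
```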
  • Slide 6
  • RCTs should sometimes be ruled out for reasons of ethics. For example, assigning experimental subjects to educationally inferior or medically unproven treatments, or denying control group subjects access to important instructional opportunities or critical medical intervention, is not ethically acceptable even when RCT results might be enlightening. Such studies would not be approved by Institutional Review Boards overseeing the protection of human subjects in accordance with federal statute.
  • Slide 7
  • In some cases, data sources are insufficient for RCTs. Pilot, experimental, and exploratory education, health, and social programs are often small enough in scale to preclude use of RCTs as an evaluation methodology, however important it may be to examine causality prior to wider implementation. (A back-of-envelope sample-size illustration follows below.)
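As a back-of-envelope illustration of the scale problem, the standard two-arm sample-size approximation (a textbook formula, not part of the original slides) shows how quickly the required enrollment outgrows a small pilot:

```python
def n_per_arm(d, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per arm to detect a standardized
    effect size d with a two-sided 5% test at 80% power
    (normal approximation): n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.2, 0.5, 0.8):
    print(f"standardized effect d={d}: ~{n_per_arm(d):.0f} subjects per arm")
# d=0.2 -> ~392 per arm; d=0.5 -> ~63; d=0.8 -> ~25.
# Even a large effect needs roughly 50 subjects in total, so a
# 20- or 30-person pilot program cannot support an RCT design.
```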
  • Slide 8
  • (2) Methods capable of demonstrating scientific rigor. For at least a decade, evaluators publicly debated whether newer inquiry methods were sufficiently rigorous. This issue was settled long ago. Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. To discourage a repertoire of methods would force evaluators backward. We strongly disagree that the methodological "benefits of the proposed priority justify the costs."
  • Slide 9
  • (3) Studies capable of supporting appropriate policy and program decisions. We also strongly disagree that "this regulatory action does not unduly interfere with State, local, and tribal governments in the exercise of their governmental functions." As provision and support of programs are governmental functions so, too, is determining program effectiveness. Sound policy decisions benefit from data illustrating not only causality but also conditionality. Fettering evaluators with unnecessary and unreasonable constraints would deny information needed by policy-makers.
  • Slide 10
  • While we agree with the intent of ensuring that federally sponsored programs be "evaluated using scientifically based research... to determine the effectiveness of a project intervention," we do not agree that "evaluation methods using an experimental design are best for determining project effectiveness." We believe that the constraints in the proposed priority would deny use of other needed, proven, and scientifically credible evaluation methods, resulting in fruitless expenditures on some large contracts while leaving other public programs unevaluated entirely. End of AEA position (Nov 4, 2003). Source: www.eval.org
  • Slide 11
  • Foundational Premises: Research and evaluation are different, and are therefore evaluated by different standards.
  • Slide 12
  • Evaluation Standards: Utility, Feasibility, Propriety, Accuracy. For the full list of standards: www.wmich.edu/evalctr/checklists/standardschecklist.htm
  • Slide 13
  • GOLD STANDARD: METHODOLOGICAL APPROPRIATENESS, not methodological orthodoxy or rigidity.
  • Slide 14
  • Conditions for which randomized control trials (RCTs) are especially appropriate: a discrete, concrete intervention (treatment) that is singular and well-specified; implementation can be standardized; implementation can be controlled; valid and reliable measures exist for the outcome to be tested; random assignment is possible; random assignment is ethical. (A minimal design sketch follows below.)
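Where those conditions do hold, the core mechanics of the design are simple, as the minimal sketch below shows: random assignment, one well-specified treatment, one measured outcome. This is an illustrative sketch only; the function and argument names are invented, not drawn from the talk.

```python
import random
import statistics

def run_rct(subjects, treat, measure, seed=1):
    """Randomly assign half the subjects to treatment, apply the single
    well-specified intervention to that arm only, and return the
    difference in mean outcomes (treatment minus control)."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                              # random assignment
    half = len(pool) // 2
    treatment_arm, control_arm = pool[:half], pool[half:]
    treated = [measure(treat(s)) for s in treatment_arm]
    controls = [measure(s) for s in control_arm]
    return statistics.mean(treated) - statistics.mean(controls)

# Hypothetical usage, in the spirit of the single-health-practice examples:
# effect = run_rct(patients, apply_exercise_regimen, measure_blood_pressure)
```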
  • Slide 15
  • Examples of appropriate use of RCTs: drug studies; fertilizer and crop yields; a single health practice (brushing teeth, an exercise regimen).
  • Slide 16
  • Examples where RCTs are not appropriate, in my view: complex, multi-dimensional, and highly context-specific community health interventions. Example: FB
  • Slide 17
  • What is methodologically appropriate and possible: in-depth case studies of particular community interventions and outcomes; comparative pattern analysis of natural variation; systems analysis.
  • Slide 18
  • Example where an RCT is not possible: A Study of the Deaths of Persons who are Homeless in Ottawa: A Social and Health Investigation. Report to the City of Ottawa, August 2003. Vivien Runnels, Coordinatrice/Coordinator, Research and Operations, Centre de recherche sur les services communautaires/Centre for Research on Community Services, Faculté des sciences sociales/Faculty of Social Sciences, Université d'Ottawa/University of Ottawa. www.socialsciences.uottawa.ca/crcs/eng/publ.asp
  • Slide 19
  • What is possible and appropriate: multiple sources of data about each case; triangulation of sources; modus operandi analysis; epidemiological field methods.
  • Slide 20
  • Example where an RCT is not needed, in my view: face validity is high; the observed changes are dramatic; the link between treatment and outcome is direct.
  • Slide 21
  • Individual, everyday examples of causal attribution: sore muscles after exercise; a hangover after drinking; sex with a single partner and a resulting pregnancy or STD; keyboard failure (unplugged).
  • Slide 22
  • Examples of professional expertise and causal analysis: auto mechanics; airline and auto crash investigation; causes of fires; forensic science and autopsies; physicians.
  • Slide 23
  • Example of formative program evaluation: a drop-out problem. Baseline statistics and target group analysis (disaggregated data); interview drop-outs and staff; review case records; make program changes; monitor results. (A small disaggregation sketch follows below.)
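As a small illustration of the disaggregation step, the sketch below breaks a drop-out rate down by target group so that program changes can be aimed and their results monitored. The records and subgroup labels are invented for illustration, not from the talk.

```python
from collections import defaultdict

# Hypothetical case records; a real evaluation would read these
# from the program's administrative data.
records = [
    {"group": "age 16-17", "dropped_out": True},
    {"group": "age 16-17", "dropped_out": False},
    {"group": "age 18-19", "dropped_out": False},
    {"group": "age 18-19", "dropped_out": True},
    {"group": "age 18-19", "dropped_out": True},
]

totals, drops = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    drops[r["group"]] += r["dropped_out"]   # True counts as 1

for group in sorted(totals):                # disaggregated baseline rates
    rate = drops[group] / totals[group]
    print(f"{group}: {rate:.0%} drop-out rate ({drops[group]}/{totals[group]})")
```

Re-running the same tabulation after each program change is the "monitor results" step.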
  • Slide 24
  • Examples of summative program evaluation: treatment of chronic alcoholics; worm medicine and school attendance in LDCs (less developed countries).
  • Slide 25
  • Intervention comparisons: the search for viable alternatives to an existing intervention or program that may produce comparably valuable benefits at comparable or lesser cost, in order to find the best use of resources.
  • Slide 26
  • Comparing the costs and benefits of alternative interventions is typically more important in evaluation than RCT designs. Pragmatically, comparing a treatment to a control is less interesting and meaningful, because doing nothing is seldom the realistic policy alternative. This distinguishes the normal approach in program evaluation from that in experimental design for research purposes. (See Scriven, Evaluation Thesaurus, 1991: 112; a toy cost-per-benefit comparison follows below.)
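A toy comparison in the spirit of Scriven's point, with all cost and benefit figures invented: rank candidate interventions by cost per unit of benefit rather than testing one of them against a do-nothing control.

```python
# Hypothetical alternatives to an existing program; "benefit_units"
# stands in for whatever outcome measure the evaluation uses.
interventions = {
    "current program": {"cost": 500_000, "benefit_units": 1000},
    "alternative A":   {"cost": 350_000, "benefit_units": 800},
    "alternative B":   {"cost": 500_000, "benefit_units": 1300},
}

def cost_per_unit(d):
    return d["cost"] / d["benefit_units"]

for name, d in interventions.items():
    print(f"{name}: ${cost_per_unit(d):,.0f} per unit of benefit")

# The policy-relevant question is which alternative makes the best
# use of the resources, not whether the program beats doing nothing.
best = min(interventions, key=lambda n: cost_per_unit(interventions[n]))
print("best use of resources:", best)
```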
  • Slide 27
  • Matching the evaluation design to the stage of program or intervention development and the state of knowledge.
  • Slide 28
  • William Trochim, Cornell University (EvalTalk, Dec 4, 2003): Medical research arguably has the strongest tradition of randomized experimental design for causal assessment. But that can only