Programs, Policies, and Evaluations – Chapter 1 SOCI 4466EA – Program and Policy Evaluation Professor Jeff Wright




Page 1

Programs, Policies, and Evaluations – Chapter 1

SOCI 4466EA – Program and Policy Evaluation

Professor Jeff Wright

Page 2

What is evaluation research?

Use of social research procedures to systematically investigate the effectiveness of social intervention programs (social programs/human service definition)

Program evaluation is often applied to other arenas (e.g., business planning, military procurement, water quality control)

Page 3

Consumers of evaluation research include: policy makers, funding organizations, program managers, taxpayers, and program clientele

Consumers expect answers to questions of program continuance, modification, and termination, such as: What are the appropriate target populations for intervention? Is the intended intervention being implemented well? Are the intended services being provided? Is the intervention effective in attaining the desired outcomes? How much does the program cost? Is the program cost reasonable in relation to its effectiveness and benefits? (Rossi, Freeman and Lipsey, 1999)

Page 4

A brief history of evaluation research

Despite historical roots that extend to the 17th century, systematic evaluation research is a relatively modern development

Prior to WWI – focus upon assessing literacy and occupational training, public health initiatives, mortality and morbidity

1930s – advocacy for rigorous research methods to assess social programs (e.g., New Deal programs) (Rossi, Freeman and Lipsey, 1999)

Page 5

1960s – evaluation becomes a “growth industry in the U.S.”, largely owing to interest in evaluating the federal War on Poverty initiative

1970s – evaluation emerges as a distinct specialty field (e.g., development of journals, professional associations, think-tanks)

1980s – According to Cronbach et al., “Evaluation has become the liveliest frontier of American social science”

Present – rapid growth in U.S. evaluation is over, but still an important social science specialty area (Rossi, Freeman and Lipsey, 1999)

Page 6

“Evaluation-Lite” – 1980s to Present

Despite considerable advancement in social science research methods, data maintenance, data retention, and data analysis, governments have most often opted for “evaluation lite”

Why? Cutbacks to social programs to control inflation and reduce deficits; disenchantment with the modest effects of social programs; fear of the power of evidence

Most often within government, evaluation is defined generically to be similar in meaning to assessment, review, and research. Less often it is used in its more “traditional” or “academic” sense, where it is associated with a study done by an external evaluator with a research methodology derived from social policy research methods (Segsworth, 2001)

Page 7

Evaluation policy and practice in Ontario – 1980s to Present

There have been 3 phases of evaluation policy and practice since the mid-1980s; each corresponds with a change of government

The Peterson Liberals introduced “activity review”

The Rae NDP government introduced “program review”

The Harris government introduced “performance measurement” (Segsworth, 2001)

Page 8

Activity Review

Prior to 1988, evaluation did not have a set of specific procedures prescribed for it; rather, it was an attitude, an awareness of results

Activity review defined: “systemic studies assessing: (1) the adequacy and relevance of program objectives; (2) the effectiveness of the programs used to achieve objectives; (3) the appropriateness of efficiency mechanisms used to deliver programs to the intended recipients” (Management Board of Cabinet, 1988, p. 1)

Page 9

Activity review was only to be applied to programs where there was considerable uncertainty regarding their impact

In addition, programs that for “political or other reasons” were unlikely to be altered regardless of the results of activity review could be avoided

100 such reviews occurred by 1989 and garnered Provincial Auditor support

By 1989 increased attention to fiscal exercises (i.e., base review) moved evaluation to program review (Segsworth, 2001)

Page 10

Program Review

Defined as: the development of budget planning that looks beyond a single year and the introduction of an expenditure review and evaluation process that integrates the policy priorities of Government into allocation decisions

3 different program review models were used:

1. An examination of the province’s current base expenditures to see “whether they are achieving their intended objectives and whether funds are being spent efficiently”

2. A Corporate Review Committee of Deputy Ministers given 4 weeks to find savings of $1B

3. Development of Ministry expenditure control targets, and meetings of DMs and Cabinet to find savings of $2.4B (all within 1 month)

Kaufman (1996) sees the process as being “transformed from an original model of in-depth review and reform based largely within ministries to a more centrally co-ordinated exercise in expenditure control” (Segsworth, 2001)

Page 11

Performance measurement and accountability

PC focus on improving Ontario’s financial management, thereby making financial reporting easier to understand and strengthening accountability

The mandate led to the preparation of annual business reports that would: outline goals and priorities; explain targets for assessing effective performance and how progress toward them would be measured; and report on progress towards the established goals, explaining reasons for changes in the plans (Segsworth, 2001)

Page 12

The Progressive Conservative government may be melding several program review approaches

In March 2000, cabinet approved a new policy of base budget and policy-based reviews

“some elements of the policy, such as a concern to assure the government of continued program effectiveness, suggest a possible role for evaluation research” (Kaufman, 1996)

Because the policy is driven by concerns of cost neutrality and program affordability, it may result in exercises aimed at redesigning the program with a smaller target population or at identifying savings to be achieved (Segsworth, 2001)

Page 13

Example: Performance Measurement – MCSS 1999-2000

Core Business: Income and Employment Supports

Goals/Outcomes – recipients of Ontario Works achieve self-reliance and return to work

Measures – # of people receiving social assistance (Ontario Works) is reduced

Targets/Standards – continue to increase self-reliance by reducing welfare dependency by 4% in 1999-2000

1999-2000 – Ontario Works caseload is reduced by 4%

(Segsworth, 2001)
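The measure-against-target check in this example amounts to simple percentage arithmetic. A minimal Python sketch of that check follows; the caseload counts are hypothetical, invented for illustration, and only the 4% target comes from the slide:

```python
def caseload_reduction(baseline: int, current: int) -> float:
    """Percent reduction in caseload relative to a baseline year."""
    return (baseline - current) / baseline * 100

# Hypothetical caseload counts -- not actual MCSS figures.
baseline_1998 = 500_000
current_2000 = 480_000

reduction = caseload_reduction(baseline_1998, current_2000)
target = 4.0  # the 1999-2000 target stated on the slide

print(f"Reduction: {reduction:.1f}% (target met: {reduction >= target})")
```

Note that a measure defined this way says nothing about *why* the caseload fell, which is precisely the gap between performance measurement and evaluation research that Segsworth highlights.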

Page 14

Adapting Evaluation to the Political and Organizational Context of the Program

Evaluation is not a “cut and dried” activity but rather one where a plan is tailored specifically to the individual program and the outcomes which the program should reasonably be expected to generate

Typically the most challenging part of the evaluation is the negotiations which must occur with stakeholders

These negotiations must be sensitive to the political context of the program, natural resistance of evaluees toward having their performance measured, and the need for “in-flight” plan correction

Ideally, evaluators should try to avoid being drawn into ritualistic evaluation – namely by thoroughly assessing the motives of the evaluation sponsor(s) (see Exhibit 1-H) (Rossi, Freeman and Lipsey, 1999)

Page 15

Evaluation research in practice

Balancing act between competing forces (e.g., clients, taxpayers, administrators, politicians)

Data collection places unusual and unexpected demands upon operationally oriented evaluees

The evaluation should seek optimal validity in the context of minimizing obstructions to day to day program operations

Need for collaborative planning between the evaluator and other stakeholders

Evaluation planning is continually modified to reflect the constantly changing decision-making milieu of the social programs being evaluated (Rossi, Freeman and Lipsey, 1999)

Page 16

Scientific vs. pragmatic evaluation postures

Is evaluation an art, or should it rigidly adhere to scientific social experimentation methods? Cronbach argues the former while Campbell argues the latter.

According to Cronbach, “whereas scientific studies strive principally to meet research standards, evaluations should be dedicated to providing maximally useful information for decision-makers given the political circumstances, program constraints, and available resources” (Rossi, Freeman and Lipsey, 1999)

Page 17

“in practice therefore, the evaluator must struggle to find a workable balance between the emphasis to be placed on procedures that help ensure the validity of the evaluation findings and those that make the findings timely, meaningful and useful to the consumers”

“in many cases, evaluations will justifiably be undertaken that are “good enough” for answering relevant policy and program questions even though program conditions or available resources prevent them from being the best possible designs from a scientific standpoint”

There will also be critics who assail the end-product for being “too academic or ivory tower” or conversely for being methodologically sloppy (Rossi, Freeman and Lipsey, 1999)

Page 18

SOCI 4466EA – Introduction to Program and Policy Evaluation

Professor Jeff Wright

Page 19

1. Background – programs, policies, and evaluations

What is evaluation research?

History of evaluation

Overview of key concepts

Discussion of evaluation in practice

Page 20

2. Tailoring evaluations

What is tailoring (and is this a good thing or a bad thing?)

What should be considered with tailoring?

The nature of the evaluator, stakeholder, and evaluee relationship

Integration

Page 21

3. Identifying issues and formulating questions

What questions should be asked and how should they be prioritized?

Page 22

4. Program needs assessment

Role of the evaluator

Types of needs assessment

Defining program targets

Describing service needs

Page 23

5. Expressing and assessing program theory

Evaluability assessment

Defining program theory

Assessing program theory

Page 24

7. Strategies for outcome evaluation

Key concepts

Controlling for confounds

Description and analysis of varying designs

Quantitative and qualitative data

Page 25

8. Randomized outcome designs

What are they?

Analysis and limitations of randomized experiments

Page 26

9. Quasi-experimental outcome evaluation

What is it and how does it differ from a pure experiment?

Constructing comparisons and limitations

Page 27

10. Assessing full-coverage programs

Non-uniform full-coverage programs

Reflexive and shadow controls

Page 28

11. Measuring efficiency

Key concepts

Cost-benefit analysis

Cost-effectiveness analysis
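The two efficiency measures named above (covered in detail in the chapter) reduce to simple ratios, which a short Python sketch can preview. All program figures below are invented for illustration:

```python
def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Cost-benefit analysis: monetized benefits per dollar of program cost."""
    return total_benefits / total_costs

def cost_per_outcome(total_costs: float, outcome_units: float) -> float:
    """Cost-effectiveness analysis: dollars spent per unit of outcome
    (e.g., per client who returns to work)."""
    return total_costs / outcome_units

# Hypothetical program figures -- invented for illustration only.
costs = 2_000_000.0
benefits = 3_000_000.0
clients_returned_to_work = 500

print(benefit_cost_ratio(benefits, costs))                # 1.5
print(cost_per_outcome(costs, clients_returned_to_work))  # 4000.0
```

The practical difference: cost-benefit analysis requires monetizing outcomes, while cost-effectiveness analysis leaves outcomes in their natural units.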

Page 29

12. Maximizing the influence of evaluation

Reporting results effectively

Maximizing the use of results

Making evaluation units effective

Page 30

Chapter 2 – Tailoring Evaluations

SOCI 4466EA – Program and Policy Evaluation

Professor Jeff Wright

Page 31

Introduction

The nature of a program dictates its evaluation design

Evaluation tasks are dependent upon the purposes of the evaluation; the conceptual and organizational structure of the program; and the resources available

Evaluation is both a creative and collaborative endeavor

“the evaluation designs that result from the tailoring process tend to be adaptations of one or more of a set of familiar evaluation approaches or schemes” (Rossi, Freeman and Lipsey, 1999)

Page 32

What aspects of the evaluation must be tailored?

The questions the evaluation is to answer (focus)

The nature of the evaluator-stakeholder relationship (to promote effective interaction)

The methods and procedures the evaluation will use to answer the questions (feasible yet rigorous) (Rossi, Freeman and Lipsey, 1999)

Page 33

What considerations should guide evaluation planning?

The purpose of the evaluation

The program structure and circumstances

The resources available for the evaluation

Page 34

Purposes of the evaluation

Purposes can be quite varied (e.g., managerial decisions, political pressures, academic interest, advocacy)

Key for the evaluator to clearly establish purpose at the very outset

Methods include: source document review, interviews with different key informants, and other review of pertinent history and background

Page 35

Common purposes include:

Program improvement

Accountability

Knowledge generation

Political ruses or public relations exercises (Rossi, Freeman and Lipsey, 1999)

Page 36

Program structure and circumstances

Key program particulars:

The stage of program development (i.e., test, stable)

Administrative and political context (degree of consensus, conflict, or confusion amongst stakeholders)

The structure of the program – its conceptual and organizational makeup

Page 37

Stages of program development include:

New programs – where it is often premature to consider outcome questions and more appropriate to evaluate process

Established programs – these are the most common evaluations and tend to focus on issues of coverage, effective service delivery, and the impact and efficiency of those services

Page 38

Administrative and political context of the program

Each stakeholder perspective entails “different objectives, and procedures to attain those objectives, that have correspondingly different conceptions of program effectiveness associated with them”

Sometimes it is appropriate to plan and implement an evaluation that does not consider the views of all stakeholders

Regardless, the program sponsors may turn on the evaluator

It is the nature of political environments to discredit information that is philosophically misaligned (Rossi, Freeman and Lipsey, 1999)

Page 39

The conceptual and organizational structure of the program

A program needs to clearly articulate its purpose in order to be evaluated

Program theory – the explicitness of the plan of operation, the logic connecting program activities to outcomes, and the rationale for why it does what it does

It is best to have an evaluator involved with the program planners at the initial program planning stages

Organizational structure considerations – number of target populations, diffusion of service sites, level of partnership with other service providers (Rossi, Freeman and Lipsey, 1999)

Page 40

The resources available for the evaluation

Evaluation tasks and timelines need to be determined for requisite personnel, materials, etc.

Outsourcing may be required for parts or even all of the evaluation

Need the support of program management, staff, and other closely related stakeholders (active resistance is costly)

Issue of records accessibility

Issue of quality of data

The evaluator should pilot test the draft data collection methodology (Rossi, Freeman and Lipsey, 1999)

Page 41

“All but the most sophisticated evaluation sponsors usually want evaluations that have breadth, depth, and rigor and are cheap and quick. The result is all too often both overburdened evaluators working frantically against deadlines with inadequate resources and frustrated evaluation sponsors perturbed about shortfalls and delays in receiving the product they have paid for”

Page 42

Evaluation questions and methods

General process:

Sponsor puts forward general/abstract questions

Evaluator refines the questions through consultation

Evaluator may offer up additional questions based upon independent review

Page 43

Common evaluation questions relate to:

Need for service(s)

Program conceptualization or design

Program operations and service delivery

Program outcomes

Program cost and efficiency

Page 44

Common methodologies related to the aforementioned research questions include:

Needs assessment

Assessment of program theory

Assessment of program process

Impact assessment or outcome evaluation

Efficiency assessment

Page 45

Stitching it all together – a 10-step systematic approach

1. What is the program to be evaluated?

2. Why is the program being evaluated?

3. How are people to be prepared for the evaluation?

4. What are the main issues/questions with which the evaluation is to deal?

5. Who will do what?

Page 46

6. What are the resources for the evaluation?

7. What data need to be collected?

8. How will the data be analyzed?

9. What will be the reporting procedure?

10. How will the report be implemented?

(Rossi, Freeman and Lipsey, 1999).