
© 2008, MJ Bober and JM Marshall; all rights reserved


What This Course explores …

Evaluation as a subspecialty of performance technology … vs. the “e” in ADDIE


What is inquiry anyway?

A studious, systematic examination of facts or principles; research

Investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in light of new facts, or practical application of such new or revised theories or laws


Inquiry -- from three different perspectives

Basic: scientific investigation to develop or enhance theory

Applied: testing theory to assess its “usefulness” in solving (instructional or educational) problems

Evaluation: determining whether a program, product, or process warrants improvement, has made a difference or impact, or has contributed to general knowledge or understanding


“Defining” evaluation (culled from sources other than Owen)

Merriam-Webster Dictionary (online):

“… to determine the significance, worth, or condition of something, usually by careful appraisal and study.”

Patton (1996): “… the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming” (p. 23).


“Defining” evaluation (culled from sources other than your text)

Weiss (1998):

“… the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” (p. 4).

Fitzpatrick, Sanders, & Worthen (2004): “… the identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness, or significance in relation to those criteria” (p. 5).


“Defining” evaluation

Core to these definitions:

They tackle HOW evaluation is to be conducted (e.g., systematically and purposefully … through thoughtfully planned data collection).

They tackle WHY it is to be conducted (e.g., to improve program outcomes or program effectiveness).

They attend to PROCESS and METHODOLOGY (e.g., assessment compared to a set of explicit or implied standards).

They aim to enhance KNOWLEDGE and DECISION-MAKING. They are NOT activity driven.

Scriven: Evaluation as a transdiscipline …


The research/evaluation dichotomy: real or contrived?

Owen emphasizes that:

What distinguishes evaluation from “applied research” (as we tend to define it) is not method or subject matter, but intent--the purpose for which it is done.

What distinguishes evaluation from “applied research” is that it leads to conclusions and recommendations/action items … and getting to them requires identifying standards and performance data, and then integrating the two.


The research/evaluation dichotomy: real or contrived?

• We argue that evaluation also involves studious inquiry … but its intents are far different.

• Well-constructed evaluations inform theory -- evaluation helps “researchers” understand how theory translates to practice.

• Evaluation research is guided by explicit (not merely implicit) standards of conduct (Guiding Principles, Program Evaluation Standards)


The research/evaluation dichotomy: real or contrived?

Weiss (1998) identifies several areas of difference:

Utility
Program-derived questions
Judgmental quality
Action setting
Role conflicts
Publication
Allegiance


The research/evaluation dichotomy: real or contrived?

Weiss also identifies areas of similarity, including the researcher’s attempt to:

describe
understand relationships between and among variables
trace out the causal sequence from one variable to another

Evaluation and research may also share methods and data collection strategies.


The research/evaluation dichotomy: real or contrived?

In summary, then, evaluation differs from other kinds of research in that ...

central questions are derived from policymakers and practitioners,
results are generally used to improve programs, projects, products, or processes,
it tends to occur in turbulent action settings,
results are often reported to nonresearch audiences.


Working with complex terminology

Theory
Approach
Model
Principle
Guideline
Heuristic
Framework
Frame of reference
Orientation


Working with complex terminology

One can be dedicated to inquiry … while not invested in developing or enhancing theory, per se.


Why evaluate?

To provide evidence regarding the short- and long-term effects of programs or projects

To determine a program or product’s cost-effectiveness

To improve existing programs or products

To document successes and mistakes

To assure stakeholder buy-in


Why evaluate?

According to Owen, evaluation helps people make a wide array of instrumental action decisions, e.g.:

making midcourse corrections (content, facilitation, deliverables, goals/objectives/outcomes)
continuing, expanding, or institutionalizing a program … or cutting, ending, or abandoning it
testing a new program idea (integrating a new component)
choosing the best of several alternatives (from delivery to target audience)
determining whether funding levels are appropriate … or how to “cut back” on costs without affecting “integrity”


Why evaluate?

According to Owen, evaluation helps people make a wide array of “organizational” decisions, e.g.:

recording program history
providing feedback to practitioners
highlighting program goals
establishing accountability
understanding social intervention


Why evaluate?

Many argue evaluations should be judged by their utility and actual use (Patton, 1996)

Evaluation that is utilization-focused helps us think differently about which forms or approaches we choose. This means

emphasizing primary intended users of the information
making the effort “personal”
recognizing that the effort is “situational”
thinking creatively about how to organize and conduct the effort


Why evaluate?

“Since no evaluation can be ‘value-free,’ utilization-focused evaluation answers the question of whose values will frame the study by working with clearly identified, primary intended users who have responsibility to apply evaluation findings and implement recommendations” (Patton, p. 21).


Fostering evaluation use

Typical barriers to evaluation:

Fear of being judged
Cynicism about whether anything can ever change
Skepticism about the worth of evaluation
Concern about the time and money costs of a study
Frustration from previous experiences

Evaluation is most worthwhile when its uses, areas of focus, and questions are stakeholder generated.


Sponsors v. stakeholders

An evaluation sponsor is generally the person or agency funding or commissioning the study.

Evaluation stakeholders are those with a vested interest in the study’s results, e.g.:

instructional designers
program managers
instructors
product users
program participants


Sponsors v. stakeholders

Evaluation stakeholders: more to consider …

There may be multiple levels of stakeholders, some of whom have an “indirect” interest in program, product, or process effectiveness.

Stakeholders typically have diverse and often competing interests.

When possible: limit the number of “true” stakeholders; consider others “interested audiences.”


Making evaluation matter

Three factors can drive the success of an evaluation:

human factors – reflect evaluator and user characteristics (e.g., attitudes toward and interest in the program, professional experience, organizational position)

context factors – consist of the requirements and fiscal constraints facing the evaluation

evaluation factors – refer to the actual conduct of the evaluation (e.g., the procedures used) and the quality of the information provided


Thinking about personal conduct

As an evaluator, you are expected...

to be competent
to be honest and demonstrate integrity
to show respect for people
to be politically savvy
to work systemically
to make data-based decisions

As an evaluator, you have choices to make from the start…

Utility: don’t evaluate without a good reason for doing so
Feasibility: can this evaluation really be performed; have all political overtones and ramifications been considered
Propriety: have potential conflicts of interest been considered; will it be “bias-free”
Accuracy: will you be able to analyze the data so that a “true” picture is presented


Evaluators deal with several common constraints

Time

Budget

Data

Politics