
© 2013 by Nelson Education Ltd.

CHAPTER ELEVEN

Training Evaluation


LEARNING OUTCOMES

Define training evaluation and the main reasons for conducting evaluations

Discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted

Describe the different types of evaluations

Describe the models of training evaluation and the relationship among them


LEARNING OUTCOMES

Describe the main variables to measure in a training evaluation and how they are measured

Discuss the different types of designs for training evaluation as well as their requirements, limits, and when they should be used


INSTRUCTIONAL SYSTEMS DESIGN MODEL


INSTRUCTIONAL SYSTEMS DESIGN MODEL

Training evaluation is the third step of the ISD model and consists of two parts:
1. Evaluation criteria (what is being measured)
2. Evaluation design (how it will be measured)

These concepts are covered in the next two chapters; each has a specific and important role to play in the effective evaluation of training and the completion of the ISD model


TRAINING EVALUATION

Process to assess the value – the worthiness – of training programs to employees and to organizations


TRAINING EVALUATION

Not a single procedure; a continuum of techniques, methods, and measures

Ranges from simple to elaborate procedures

The more elaborate the procedure, the more complete the results, yet usually the more costly (time, resources)

Need to select the procedure based on what makes sense and what can add value within the resources available


WHY A TRAINING EVALUATION?

Improve managerial responsibility toward training

Assist managers in identifying what, and who, should be trained

Determine the costs and benefits of a program (see the sketch below)

Determine if the training program has achieved expected results

Diagnose strengths and weaknesses of a program and pinpoint needed improvements

Justify and reinforce the value of training
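To make the cost–benefit item concrete, here is a minimal sketch in Python; the dollar figures and the roi_percent helper are hypothetical illustrations, not taken from the text.

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Return on investment: net benefit as a percentage of cost."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $40,000 in design and delivery costs against
# $58,000 in estimated benefits (e.g., productivity gains after training).
costs = 40_000
benefits = 58_000

print(f"Net benefit: ${benefits - costs:,}")        # Net benefit: $18,000
print(f"ROI: {roi_percent(benefits, costs):.0f}%")  # ROI: 45%
```

The arithmetic is trivial; in practice the hard part is attaching credible dollar values to the benefits side of the comparison.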


DO WE EVALUATE?

There has been a steady decline in determining ROI and in conducting Level 3 and Level 4 evaluations


BARRIERS TO EVALUATION

Barriers fall into two categories:

1. Pragmatic
• Requires specialized knowledge and can be intimidating
• Data collection can be costly and time consuming

2. Political
• Potential to reveal ineffectiveness of training


TYPES OF EVALUATION

Evaluations may be distinguished from each other with respect to:

1. The data gathered and analyzed

2. The fundamental purpose of the evaluation


TYPES OF EVALUATION

1. The data gathered and analyzed

a. Trainee perceptions, learning, and behaviour at the conclusion of training

b. Assessing psychological forces that operate during training

c. Information about the work environment
• Transfer climate and learning culture


TYPES OF EVALUATION

2. The purpose of the evaluation

a. Formative: Provide data about various aspects of a training program

b. Summative: Provide data about worthiness or effectiveness of a training program

c. Descriptive: Provide information that describes the trainee once they have completed a training program

d. Causal: Provide information to determine if training caused the post-training behaviours


MODELS OF EVALUATION

A. Kirkpatrick’s Hierarchical Model
Oldest, best known, and most frequently used model

The Four Levels of Training Evaluation:
– Level 1: Reactions
– Level 2: Learning
– Level 3: Behaviours
– Level 4: Results
– ROI


CRITIQUE OF KIRKPATRICK’S MODEL

There is general agreement that the five levels are important outcomes to be assessed

There are some critiques:

• Doubts about the model’s validity

• Insufficiently diagnostic

• Kirkpatrick requires all training evaluations to rely on the same variables and outcome measures


MODELS OF EVALUATION

B. COMA Model
A training evaluation model that involves the measurement of four types of variables:

1. Cognitive

2. Organizational Environment

3. Motivation

4. Attitudes


MODELS OF EVALUATION

The COMA model improves on Kirkpatrick’s model in four ways:

1. Transforms the typical reaction measures by incorporating a greater number of measures

2. Useful for formative evaluations

3. The measures are known to be causally related to training success

4. Defines new variables with greater precision

Note: Relatively new model – too early to draw conclusions as to its value


MODELS OF EVALUATION

C. Decision-Based Evaluation Model
A training evaluation model that specifies the target, focus, and methods of evaluation


MODELS OF EVALUATION

Decision-Based Evaluation Model
Goes further than either of the two preceding models:

1. Identifies the target of the evaluation
– Trainee change, organization payoff, program improvement

2. Identifies its focus (variables measured)

3. Suggests methods

4. Applies generally to any evaluation goal

5. Flexibility: Guided by the target of the evaluation


MODELS OF EVALUATION

As with COMA, the DBE model is recent and will need to be tested more fully

All three models require specialized knowledge to complete the evaluation; this can limit their use in organizations without this knowledge

Holton and colleagues’ Learning Transfer System Inventory (seen in Chapter 10) provides a more generic approach

See Training Today 11.2 for more on its use for evaluation


EVALUATION VARIABLES

Training evaluation requires data be collected on important aspects of training

Some of these variables have been identified in the three models of evaluation

A more complete list of variables is presented in Table 11.1, and Table 11.2 shows sample questions and formats for measuring each type of variable


EVALUATION VARIABLES

A. Reactions

B. Learning

C. Behaviour

D. Motivation

E. Self-efficacy

F. Perceived/anticipated support

G. Organizational perceptions

H. Organizational results

See Table 11.2 in the text


VARIABLES

A. Reactions

1. Affective reactions: Measures that assess trainees’ likes and dislikes of a training program

2. Utility reactions: Measures that assess the perceived usefulness of a training program
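To illustrate how the two reaction types might be scored separately, here is a small sketch; the item wordings and the 5-point scale are invented examples, not the instrument from the text.

```python
from statistics import mean

# Hypothetical 5-point Likert ratings (1 = strongly disagree,
# 5 = strongly agree) from one trainee, grouped by reaction type.
affective_items = {
    "I enjoyed the training program.": 4,
    "The facilitator kept the sessions engaging.": 5,
}
utility_items = {
    "The content is relevant to my job.": 3,
    "I will be able to apply what I learned.": 4,
}

print(f"Affective reaction score: {mean(affective_items.values()):.1f}")  # 4.5
print(f"Utility reaction score: {mean(utility_items.values()):.1f}")      # 3.5
```

Keeping the two scores separate matters because a program can be enjoyable without being seen as useful, and vice versa.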


VARIABLES

B. Learning
Learning outcomes can be measured by:

1. Declarative learning: Refers to the acquisition of facts and information, and is by far the most frequently assessed learning measure

2. Procedural learning: Refers to the organization of facts and information into a smooth behavioural sequence


VARIABLES

C. Behaviour

Behaviours can be measured using three approaches:

1. Self-reports

2. Observations

3. Production indicators


VARIABLES

D. Motivation

Two types of motivation in the training context:

1. Motivation to learn
2. Motivation to apply the skill on the job (transfer)

E. Self-Efficacy

Beliefs that trainees have about their ability to perform the behaviours that were taught in a training program


VARIABLES

F. Perceived and/or Anticipated Support

Two important measures of support:

1. Perceived support: The degree to which the trainee reports receiving support in attempts to transfer the learned skills

2. Anticipated support: The degree to which the trainee expects to be supported in attempts to transfer the learned skills


VARIABLES

G. Organizational Perceptions

Two scales designed to measure perceptions:

1. Transfer climate: Can be assessed via a questionnaire that identifies eight sets of “cues”

2. Continuous learning culture: Can be assessed via the questionnaire presented in Trainer’s Notebook 4.1 in Chapter 4 of the text


VARIABLES

G. Organizational Perceptions (cont'd)

Transfer climate cues include:
• Goal cues
• Social cues
• Task and structural cues
• Positive feedback
• Negative feedback
• Punishment
• No feedback
• Self-control


VARIABLES

H. Organizational Results

Results information includes:

1. Hard data: Results measured objectively (e.g., number of items sold)

2. Soft data: Results assessed through perceptions and judgments (e.g., attitudes)

3. Return on expectations: Measurement of a training program’s ability to meet managerial expectations


DESIGNS IN TRAINING EVALUATION

The manner in which the data collection is organized and how the data will be analyzed

All data collection designs compare the trained person to something


DESIGNS IN TRAINING EVALUATION

1. Non-experimental designs: Comparison is made to a standard and not to another group of (untrained) people

2. Experimental designs: Trained group compared to another group that does not receive the training – assignment is random

3. Quasi-experimental designs: Trained group is compared to another group that does not receive the training; assignment is not random
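To show how such designs are typically analyzed, here is a minimal sketch of a pre-post design with a control group (design E in the figures that follow); the scores are invented, and the simple comparison of mean gains stands in for a proper statistical test.

```python
from statistics import mean

# Hypothetical test scores. In an experimental design the two groups
# are formed by random assignment; in a quasi-experiment they are not.
trained_pre, trained_post = [62, 58, 70, 65], [81, 77, 88, 84]
untrained_pre, untrained_post = [63, 60, 68, 66], [66, 63, 70, 69]

trained_gain = mean(trained_post) - mean(trained_pre)        # 18.75
untrained_gain = mean(untrained_post) - mean(untrained_pre)  # 2.75

# The untrained group's gain captures practice effects and other changes
# over time; subtracting it isolates the change attributable to training.
print(f"Estimated training effect: {trained_gain - untrained_gain:.2f} points")
```

Without the control group, the trained group's 18.75-point gain could not be separated from whatever change would have happened anyway.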


DATA COLLECTION DESIGN


DATA COLLECTION DESIGN

[Diagrams: measurement points before (Pre) and after (Post) training]

A: Single-group post-only design (non-experimental)

B: Single-group pre-post design (non-experimental)


DATA COLLECTION DESIGN

[Diagrams: pre/post measurement points for trained and untrained groups]

C: Time series design (non-experimental)

D: Single-group design with control group


DATA COLLECTION DESIGN

[Diagrams: pre/post measurement points for trained and untrained groups]

E: Pre-post design with control group

F: Time series design with control group


DATA COLLECTION DESIGN

G: Internal referencing strategy

[Diagram: pre/post scores compared for training-relevant items versus training-irrelevant items]
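The logic of the internal referencing strategy can be sketched in a few lines: one trained group is tested before and after on both items covered by the training and comparable items that were not, and a markedly larger gain on the covered items is taken as evidence that training, rather than retesting or maturation, produced the change. The scores below are hypothetical.

```python
from statistics import mean

# Hypothetical percent-correct scores for a single trained group.
relevant_pre, relevant_post = [55, 60, 50], [85, 88, 80]      # items covered in training
irrelevant_pre, irrelevant_post = [52, 58, 49], [56, 61, 52]  # comparable items not covered

relevant_gain = mean(relevant_post) - mean(relevant_pre)        # ~29.3
irrelevant_gain = mean(irrelevant_post) - mean(irrelevant_pre)  # ~3.3

print(f"Gain on relevant items: {relevant_gain:.1f}")      # 29.3
print(f"Gain on irrelevant items: {irrelevant_gain:.1f}")  # 3.3
```

The appeal of this design is that no separate control group is needed; the untrained (irrelevant) items serve as the comparison standard.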


SUMMARY

Discussed the main purposes for evaluating training programs as well as the barriers

Presented, critiqued, and contrasted three models of training evaluation (Kirkpatrick, COMA, and DBE)

Recognized that the Kirkpatrick model is the most frequently used, yet has limitations

Discussed the variables required for an evaluation as well as methods and techniques required to measure them

Presented the main types of data collection designs

Discussed factors influencing the choice of data collection design