Will Thalheimer, PhD
Answer this question in chat while we wait to get started:
Why do You / Your Team / or Organization Evaluate Learning?
The Learning‐Transfer Evaluation Model (LTEM): A Research‐Inspired Alternative to the Four Levels
Will Thalheimer, PhD
Slides available at: www.is.gd/will999stuff
In general, are you able to do the learning measurement you want to do in your organization?
A. I do not know enough to say (or NA).
B. NO. I wish we could substantially change what we are doing now.
C. MOSTLY. I am mostly happy but would like to see improvements.
D. YES. I am generally happy with what we’re doing now.
The Decisive Dozen for Learning Design and Learning Measurement
http://is.gd/ddResearch
Baseline:
1. Content
2. Exposure
Engagement & Understanding:
3. Guiding Attention
4. Creating Correct Conceptions
5. Repetition
6. Feedback
7. Variation
Remembering:
8. Retrieval Practice
9. Context Alignment
10. Spacing
Application:
11. Persuasion
12. Perseverance
“Quite simply, the BEST book on smile sheet creation and utilization. Period!”
Karl M. Kapp, Professor of Instructional Technology, Bloomsburg University
“Thoughtful and sensible advice for feedback tools that will provide valid and actionable data.”
Robert O. Brinkerhoff, Professor Emeritus, Western Michigan University, & Director, Brinkerhoff Evaluation Institute
“Evidence-based practice at the master level.”
Julie Dirksen, Author of Design For How People Learn
Agenda
Ultimate Goals for Learning Evaluation
LTEM – Learning‐Transfer Evaluation Model
Issues in Learning Evaluation
Issues with the Four‐Level Model
Learner‐Feedback Questions (Smile Sheets)
History of Frustration in Learning Evaluation
“The evaluation of training programs in terms of ‘results’ is progressing at a very slow rate.”
Donald Kirkpatrick, 1960
“For the most part, the benefits of industrial training have been taken on faith. Few demands have been made to evaluate it in a rigorous manner.”
Ronald Burke, Researcher, 1969
“With some notable exceptions… relatively little work has been devoted to making evaluation more useful and worthwhile.”
Robert Brinkerhoff, 1981
“ASTD’s research revealed that the actual practice of evaluation doesn’t often follow the strict recommendations of evaluation literature.”
ASTD, 1990
Only 20% were able to do the learning measurement they wanted to do… (2007)
“In every year [from 2005 to 2007], more than 90 percent rated measurement as the number one or number two area they would like to improve.”
Josh Bersin, Principal and Founder, Bersin; Deloitte Consulting LLP, 2008
Dixon, G., & Overton, L (2017). Making an impact: How L&D leaders can demonstrate value. Available at: www.towardsmaturity.org/impact2017
In general, are you able to do the learning measurement you want to do in your organization?
A. I do not know enough to say (or NA).
B. NO. I wish we could substantially change what we are doing now.
C. MOSTLY. I am mostly happy but would like to see improvements.
D. YES. I am generally happy with what we’re doing now.
Given this long history:
We should be both humble
and optimistic!
Hendrick had been CLO for three months and had begun to wrest control of the thrashing octopus that was his company’s learning-and-development department. He started by getting learning evaluation under control, based on the dictum “what gets measured, gets managed.”
The first pilot was with the strategically important Leadership for Hippos course. The new evaluation strategy involved the following:
• Utilizing psychometrically validated questions on a representative sample of the highest-priority content knowledge taught in the course, divided into six competency categories.
• Requiring learners to be correct on 4 of the 5 questions for each competency (30 questions total) on a test given during the last two hours of the four-day workshop.
What do you think Sandra, a world-class learning evaluation expert, will tell Hendrick about the effectiveness of his evaluation design?
A. NOT AT ALL effective.
B. SOMEWHAT effective.
C. MOSTLY effective.
D. VERY effective.
Most importantly, what do you think are the strengths and weaknesses of Hendrick’s approach?
There is no perfect measurement tool!
“Our measures are not perfect, but they should instead be thought of as approximations.”
Deborah L. Bandalos, Author of the 2018 book:
Measurement Theory and Applications for the Social Sciences
“Metric fixation is in fact often counterproductive, with costs to individual satisfaction with work, organizational effectiveness, and economic growth.”
Jerry Muller, in interview with Princeton University Press
Measurement: Let us be humble, skeptical, and wise!
Data & Analysis
Why is this farmer gathering and analyzing growing rates, crop yields, infestation rates, soil quality, amount of fertilizer, rainfall, sunshine, revenue per sales channel, et cetera?
To Help Him Make His Most Important Decisions!
Data & Analysis
What criteria should he have for the data he’s collecting?
• It should be ACCURATE/VALID
• It should be RELEVANT
• It should be HIGHLY PREDICTIVE
• It should be COST EFFECTIVE
• Most importantly, it should enable his organization to make its most important DECISIONS!
Data & Analysis
We also need to collect data that:
• Is ACCURATE/VALID
• Is RELEVANT
• Is HIGHLY PREDICTIVE
• Is COST EFFECTIVE
• Helps us make our most important DECISIONS!
Learning Professionals
What are our most important decisions?
Are we doing enough to get learners support in applying the learning?
Are we sufficiently motivating our learners to inspire them to act?
Is training useful or should we provide other or additional supports?
Is this learning method working or should we use another one?
Is this skill content useful enough to teach?
Three Reasons to Use Measurement in Learning:
#1 Improve the Learning
#2 Demonstrate the Value
#3 Evaluation Advocacy: Using Evaluation to Nudge Us All to Better Learning and Better Outcomes
A More Muscular Approach to Evaluation
Evaluation that helps us make our most important decisionsand helps us get the resources and support we need!
Sending Messages to Nudge Improvements
The Kirkpatrick Four-Level Model
Level 1 -- Reaction
Level 2 -- Learning
Level 3 -- Behavior
Level 4 -- Results
The Kirkpatrick-Katzell Four-Level Model
Read about Raymond Katzell’s role:
https://is.gd/originator
Level 1 -- Reaction
Level 2 -- Learning
Level 3 -- Behavior
Level 4 -- Results
“The Kirkpatrick framework has a number of theoretical and practical shortcomings.”
“[It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…” (p. 91)
https://is.gd/research22review
“Kirkpatrick's framework is not grounded in theory and the assumptions of the model have been repeatedly disproven over the past 25 years…
…(Alliger & Janak, 1989; Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Holton, 1996; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008; see Kraiger, 2002, pp. 333–335 for a critical review of Kirkpatrick's theory).”
Sitzmann and Weinhardt (in press as of 2017)
Now are you depressed?
How much are learner‐feedback questions correlated with learning outcomes?
A. High marks indicate that the training was likely to be VERY SUCCESSFUL in creating learning.
B. High marks indicate that the training was likely to be at least SOMEWHAT SUCCESSFUL in creating learning.
C. High marks on smile sheets tell us VERY LITTLE about the success of our training programs in creating learning.
Smile Sheets to Learning: r = .09
Alliger, Tannenbaum, Bennett, Traver, & Shotland (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
Smile Sheets to Learning: r = .09
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.
1990s and 2000s: a weak relationship is anything below .30, and .09 is VERY WEAK.
So… SMILE SHEETS tell us VERY LITTLE about learning.
2017, University Teaching: r = .20
“Despite more than 75 years of sustained effort, there is presently no evidence supporting the widespread belief that students learn more from professors who receive higher… ratings.”
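A quick way to see why these correlations are so damning is to convert them to r squared, the share of variance in learning outcomes the ratings actually explain. This short Python sketch uses only the r values quoted above; the function name is illustrative.

```python
# r-squared (coefficient of determination) converts a correlation into the
# share of variance in the outcome that the predictor actually explains.
def variance_explained(r: float) -> float:
    """Share of outcome variance explained by a correlation of r."""
    return r ** 2

# r values quoted in the meta-analyses above
for label, r in [
    ("Smile sheets vs. learning (Alliger et al., 1997; Sitzmann et al., 2008)", 0.09),
    ("University teaching ratings vs. learning (2017)", 0.20),
]:
    print(f"{label}: r = {r:.2f}, variance explained = {variance_explained(r):.1%}")
```

With r = .09, smile-sheet scores explain less than one percent of the variance in learning; even the .20 found for university ratings explains only four percent.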
Likert-like Scales Provide Poor Data
A. Strongly Agree (5)
B. Agree (4)
C. Neither Agree Nor Disagree (3)
D. Disagree (2)
E. Strongly Disagree (1)
Sharon Shrock and Bill Coscarelli, authors of the classic text, now in its third edition, Criterion‐Referenced Test Development, offer the following wisdom:
On using Likert‐type Descriptive Scales (of the kind that uses response words such as “Agree,” “Strongly Agree,” etc.):
“…the resulting scale is deficient in that the [response words] are open to many interpretations.” (p. 188)
Research shows that learnersdon’t always know their own learning…
• Learners are Overly Optimistic (Zechmeister & Shaughnessy, 1980).
• Learners Can’t Always Overcome Faulty Prior Knowledge (Kendeou & van den Broek, 2005).
• Learners Fail to Properly Use Examples (Renkl, 1997).
• Learners Fail to Give Themselves Retrieval Practice (Karpicke, Butler, & Roediger, 2009).
• Two Recent Reviews Emphasize Learners’ Lack of Knowledge of Learning (Brown, Roediger & McDaniel, 2014; Kirschner & van Merriënboer, 2013).
Reasons for Smile Sheets (modified from work by Robert Brinkerhoff):
1. Red-flagging training programs that are not sufficiently effective.
2. Gathering ideas for ongoing updates and revision of a learning program.
3. Judging strengths and weaknesses of a pilot program to enable revision.
4. Providing instructors with feedback to aid their development.
5. Helping learners reflect on and reinforce what they learned.
6. Helping learners determine what (if anything) they plan to do with their learning.
7. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the training and/or the instructors.
8. Upholding the spirit of common courtesy by giving learners a chance for feedback.
9. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.
• Effectiveness of the Learning
• Reputation of the Learning
• Supporting Learners
Ultimate Goal → Primary Goals → Secondary Goals → Tertiary Goals
HOW ABLE ARE YOU to put what you’ve learned into practice on the job? Choose One.
A. I am NOT AT ALL ready to use the skills taught.
B. I have GENERAL AWARENESS but will NEED MORE GUIDANCE to put the skills into practice.
C. I need MORE HANDS‐ON EXPERIENCE to be GOOD at using these skills.
D. I am FULLY COMPETENT in using these skills.
E. I am CAPABLE at an EXPERT LEVEL in using these skills.
A Better Smile Sheet Question
[Bar chart: Percentage of Respondents (0–60) choosing each answer to “HOW ABLE ARE YOU to put what you’ve learned into practice on the job?”: NOT AT ALL READY; HAVE AWARENESS, NEED MORE GUIDANCE; NEED MORE HANDS-ON EXPERIENCE; FULLY COMPETENT; COMPETENT AT EXPERT LEVEL. Standards labels shown: Alarming, Unacceptable, Acceptable, Superior, Superior/Overconfident?]
Ultimate Goal → Primary Goals → Secondary Goals → Tertiary Goals
After the course, when you begin to apply your new knowledge at your worksite, which of the following supports are likely to be in place for you? Select as many items as are likely to be true.
A. I will have my PROGRESS MONITORED BY MY SUPERVISOR in applying the learning.
B. I will have someone available TO COACH OR MENTOR ME in applying the learning.
C. I will have easy access to a COURSE INSTRUCTOR to contact for guidance and support.
D. I will have JOB AIDS to guide me in applying the learning to real job tasks.
E. I will be PERIODICALLY REMINDED of key learning concepts/skills over the next few months.
F. I will NOT get much direct support, but will rely on my own initiative.
A Question About Follow-Through…
But won’t learners dislike this new type of question?
Percent saying the NEW QUESTIONS are BETTER than traditional questions: 80% (SmileSheets.com)
Things No Time For
After training, my manager and I will discusshow I will use the learning on my job.
strongly disagree ‐‐‐‐‐ 1 2 3 4 5 6 7 ‐‐‐‐‐ strongly agree
What’s Wrong?
What plans, if any, do you have for talking with your manager in the next 10 days about how you will use the learning in your work?
A. My manager and I have made plans for at least TWO meetings.
B. My manager and I have made plans for at least ONE meeting.
C. My manager HAS MENTIONED the idea, but we don’t yet have firm plans.
D. I will SEEK OUT my manager and ask for at least one meeting on this.
E. We are LIKELY TO DISCUSS my use of the learning as we work together.
F. It is DOUBTFUL that we will spend much time discussing my use of the learning.
Better!
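One reason concrete answer options beat agreement scales is that each option can be reported and judged on its own, rather than blended into a meaningless average. A minimal sketch of that per-option tally (the sample responses and helper name are illustrative, not from the deck):

```python
from collections import Counter

OPTIONS = ["A", "B", "C", "D", "E", "F"]  # the six options in the question above

def option_shares(responses):
    """Return each option's share of all responses, as a fraction of 1.0."""
    counts = Counter(responses)
    total = len(responses)
    return {opt: counts.get(opt, 0) / total for opt in OPTIONS}

# Illustrative data: answers from 10 hypothetical learners
responses = ["B", "C", "C", "D", "B", "C", "D", "F", "C", "B"]
shares = option_shares(responses)
# shares["C"] is 0.4: 40% of learners have only loose plans to talk with their manager
```

Reporting each option’s share lets you attach a standard to each answer (for example, treating a large share of F answers as a red flag) instead of hiding the distribution behind a mean score.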
Smile Sheets should be ONLY ONE PART of our learning evaluation efforts
Smile Sheets
Understanding
Remembering
Motivation to Apply
After Supports
Meeting Target Goals?
Job Performance
Organizational Results
Learner Expectations
Other Expectations
Supports:
Management Support?
Workplace Obstacles?
Reinforcement?
Reminders?
Learners able to:
Understand?
Remember?
Make Decisions?
Apply the Learning?
34‐Page Report
Special Thanks:
Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici
Industry Empathy
Grok Problem
Triggering Event
Generate Quick Solution
Gather Blind‐Spot Feedback
Improve Solution
Reality-Test Solution
ADDIE:
A -- Assessment
D -- Design
D -- Development
I -- Implementation
E -- Evaluation
The Kirkpatrick-Katzell Four-Level Model
Level 1 -- Reaction
Level 2 -- Learning
Level 3 -- Behavior
Level 4 -- Results
What Messages Does the Four‐Level Model Send?
Messaging of Four-Level Model of Learning Evaluation
Harmful Messages (Sent or Missed)
• Does Not Warn Us Against Ineffective Evaluation Practices
• Ignores the Role of Remembering
• Level 2 Learning is Mashed into One Bucket
Beneficial Messages
• Don’t Just Focus on Learning! Focus on Results too!
• Learner Opinions Are Not Most Important
Messaging our Evaluation Model Should Have: A Few Examples
• Just because learners ENGAGE IN LEARNING doesn’t mean they will have learned. Therefore, measuring attendance is an inadequate way of evaluating learning.
• Just because learners PAY ATTENTION doesn’t mean they learned. Measuring attention is an inadequate way of evaluating learning.
• Just because learners ACTIVELY PARTICIPATE in learning doesn’t mean they learned. Therefore, measuring participation is an inadequate way of evaluating learning.
• Just because learners say they LIKE A LEARNING EVENT doesn’t mean they learned. Therefore, surveying learners on their general satisfaction is an inadequate way of evaluating learning.
• Just because learners REPORT THEY HAVE EXPERIENCED EFFECTIVE LEARNING METHODS doesn’t guarantee they learned. Therefore, surveying learners on their experience with learning methods must be augmented with objective measures of learning.
• Just because learners CAN RECITE FACTS AND TERMINOLOGY doesn’t mean they know what to do. Therefore, measuring knowledge recitation is an inadequate way of evaluating learning.
• Just because learners COMPREHEND A CONCEPT doesn’t mean they will be able to use that concept in a work situation. Therefore, measuring comprehension is an inadequate way of evaluating learning.
• Just because learners DEMONSTRATE COMPETENCY during a learning event doesn’t mean they’ll remember how to use the competency later. Therefore, measuring competency during or soon after a learning event is an inadequate way of evaluating learning.
• There are a NUMBER OF GOALS WE SHOULD HAVE as learning designers, including supporting our learners in building: comprehension, remembering, decision making competence, task competence, and perseverance in applying what they’ve learned to their job or other performance situations.
[LTEM diagram: several tiers distinguish Understanding from Remembering, and group measures into Performance In Learning versus Work Performance.]
Compared to the Kirkpatrick-Katzell Four-Level Model of Learning Evaluation:
Level 1 -- Reaction
Level 2 -- Learning
Level 3 -- Behavior
Level 4 -- Results
Two Ways to Use LTEM:
1. Assessing Your Evaluations
2. Learning Design & Development: Working Backward from Your Goals
Two Ways to Use LTEM:
1. Assessing Your Evaluations
2. Learning Design & Development: Working Backward from Your Goals
1. Sales Increase by 5%
2. Managers Coach Better
3. Simulated Coaching
4. Scenario Questions
5. IF‐THEN Decisions
6. Perf‐Focused Questions
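The six-step chain above works backward from the business result to course-level measures. This sketch models that chain as data; the LTEM tier numbers attached to each step are my own tentative mapping (LTEM’s highest tier covers effects of transfer), not assignments stated in the deck.

```python
from dataclasses import dataclass

@dataclass
class EvaluationStep:
    goal: str
    ltem_tier: int  # tentative mapping onto LTEM's eight tiers (assumption)

# Working backward: start from the business result, then ask what evidence
# at each earlier point would predict it.
chain = [
    EvaluationStep("Sales increase by 5%", 8),                       # effects of transfer
    EvaluationStep("Managers coach better on the job", 7),           # transfer
    EvaluationStep("Simulated coaching performance", 6),             # task competence
    EvaluationStep("Scenario questions answered well", 5),           # decision-making competence
    EvaluationStep("IF-THEN decisions made correctly", 5),           # decision-making competence
    EvaluationStep("Performance-focused feedback questions", 3),     # learner perceptions
]

# Sanity check: moving from the business result toward the course,
# the tier of evidence should never increase.
assert all(a.ltem_tier >= b.ltem_tier for a, b in zip(chain, chain[1:]))
```

Modeling the chain this way makes the design logic explicit: each measure is chosen only because it plausibly predicts the step above it.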
There is no perfect measurement tool!
LTEM is not a panacea!
Do your evaluations with wisdom!!
What Questions Do You Have?
Read the Report:https://is.gd/LTEM999
• LTEM Report and Model — https://is.gd/LTEM999
• Katzell’s Contribution — https://is.gd/Katzell
• Updated Smile‐Sheet Questions 2018 — https://is.gd/Questions2018
• A Better Net Promoter Question — https://is.gd/replaceNPS
• Be Careful When Benchmarking — https://is.gd/DoNotBenchmark
• Debate About Kirkpatrick Model — https://is.gd/epicbattle
• Better Responses on Smile Sheets — https://is.gd/betterresponses
• Common Mistakes in Evaluation — https://is.gd/evaluation54mistakes
Subscribe to my Newsletter: — https://www.worklearning.com/sign‐up/
The Learning‐Transfer Evaluation Model (LTEM): A Research‐Inspired Alternative to the Four Levels
Will Thalheimer, PhD
Phone: +1-617-718-0767
Email: [email protected]
Website: WorkLearning.com
Book: SmileSheets.com
Twitter: @WillWorkLearn
https://www.worklearning.com/contact/
Slides available at: www.is.gd/will999stuff