THE CRITICAL ROLE OF AN EFFECTIVE SYSTEMATIC TRAINING EVALUATION PRACTICE ON
LEARNING VALUE WITHIN A STATE OWNED COMPANY: A REVIEW AND CRITIQUE
by
JOYCE RAMIAH
submitted in accordance with the requirements
for the degree of
MAGISTER TECHNOLOGIAE
in the subject
HUMAN RESOURCE MANAGEMENT
at the
UNIVERSITY OF SOUTH AFRICA
SUPERVISOR: PROF A BEZUIDENHOUT
JUNE 2014
ABSTRACT
The Critical Role of an Effective Systematic Training Evaluation Practice on Learning
Value within a State Owned Company: A Review and Critique
Keywords:
Systematic training evaluation; effective; learning and development; the training
cycle; Practice; Addie; Value; Kirkpatrick four levels; Phillips five levels.
Introduction
The critical role that an effective systematic training evaluation practice plays lies in its ability to collect value data systematically. The collected data is utilised to make a judgement on, or to evaluate, learning's contribution. Without the training evaluation practice, learning value is obscured.
The instructional systems design (ISD) model Addie is assessed as the systematic structure that can greatly assist the training evaluation practice. It supports the proposal of collecting value throughout the training lifecycle for a holistic view of learning value. The Kirkpatrick and Phillips (K/P) summative framework will be used to assess the current practice in collecting post-learning data.
Addie was highly valued by the survey participants. The current summative practice in the organisation stops mainly at level two of the Kirkpatrick/Phillips model. The practice lacks comprehensive data collection at recommended best-practice levels and is therefore not effective, efficient or systematic in its approach to declaring learning value.
ACKNOWLEDGEMENTS
To my supervisor, Professor Adele Bezuidenhout: it was an eye-opening experience to be under your direction. Thank you for all the guidance and help; it was greatly appreciated, because without you I was going nowhere. You are a wonderful person and will be of help to all future master's students.
To: Dr. F. Badenhorst for his great support.
iii
DEDICATIONS
To: my Lord and Saviour, Jesus Christ, all the honour and glory for evermore is due
until we meet face to face.
God is love (1 John 4:16). Love is manifested in Christ and the Cross; it is completed in the Resurrection and its power.
To: my husband, Theophilus thank you for all your love, support, encouragement
and help.
To: my children thanks to you all for keeping me humble.
To: my colleagues, thank you for completing the survey questionnaire. Note that the findings are what you all gave me.
To: the late, legendary Dr D. L. Kirkpatrick.
DECLARATION
I, Joyce Ramiah, the undersigned, I.D. no. 5402160061084, student no. 40315231, declare that this mini-dissertation, "The Critical Role of an Effective Systematic Training Evaluation Practice on Learning Value within a State Owned Company: A Review and Critique", is my own work and is submitted in partial fulfilment of the degree Magister Technologiae. All references and citations have been acknowledged by means of a complete reference system.
Signature__________________ Dated _______________
ABBREVIATIONS
ADDIE  Analyse, Design, Develop, Implement and Evaluate
ASTD  American Society for Training and Development
ATR  Annual training report
BHRD  Bachelor in Human Resource Development
BI  Business Intelligence
BTD  Bachelor's in Training and Development
CCFO's  Critical Cross-field Outcomes
CHE  Council for Higher Education
CIPD  Chartered Institute for Personnel Development
EOC  Executive Operational Committee
ER  Employee Relations
ERS  Employee Resource Services
ESI  Evaluation Success Index (ASTD & i4cp)
ETD  Education Training and Development
ETDP  Education, training and development practitioner
ETLD  Education, training, learning and development
ETQA  Education Training Quality Assurance
FET  Further Education and Training
FETC  Further education and training college
HCD  Human capital development
HCM  Human capital management
HET  Higher Education and Training
HETA  Higher Education and Training Act
HRD  Human Resource Development
HRTC  Human Resources Training Committee
ILT/EL  Instructor-led training (classroom) and E-learning
ISD  Instructional systems design
K/P  Kirkpatrick/Phillips
L&D  Learning and Development
LMS  Learning management system
NQF  National Qualifications Framework
NQFA  National Qualifications Framework Amendment Act No. 66 of 2008
NSDS  National Skills Development Strategy
OD  Organisational Development
ODETDP  Occupationally Directed Education, Training and Development Practitioner
QA  Quality assurance system
QMS  Quality management system
ROE  Return on expectation
ROI  Return on investment
RPL  Recognition of Prior Learning
S.A.  South Africa
SAQA  South African Qualifications Authority
SAT  Systematic approach to training
SDA  Skills Development Act
SDF  Skills Development Facilitator
SDLA  Skills Development Levies Act
SKA's  Skills, Knowledge and Attitudes
SOC  State Owned Company
TAT  Traditional Approach to Training
U.K.  United Kingdom
U.S.A.  United States of America
VLX  Very large extent
VOI  Value of Investment
WPLP  Workplace learning professional
TABLE OF CONTENTS
TITLE………………………………………………………………………....................none
ABSTRACT.................................................................................. ................................i
ACKNOWLEDGEMENTS………………………………….............................................ii
DEDICATION……………………………………………………………….......................iii
DECLARATION……………………………………………………………........................iv
ABBREVIATIONS………………………………………………………….....................v-vi
TABLE OF CONTENTS……………………………………………………………….vii-xiv
LIST OF TABLES……………………………………………………………………….....xiv
LIST OF FIGURES..…………………………………………………………………...xiv-xv
BIBLIOGRAPHY……………………………………………………………………………xv
APPENDICES………………………………………………………………………………xv
CHAPTER 1: CONTEXTUALISATION OF THE RESEARCH STUDY AND
PROBLEM STATEMENT
Chapter Section... Page No
1.1 INTRODUCTION AND BACKGROUND....................... .................................1-4
1.1.1 The Purposes of Training Evaluation......................... ..........................4-5
1.1.2 The Value and Benefits of Training Evaluation....................................6-9
1.1.3 Training Evaluations: A Global View……………………………….....10-11
1.1.4 Training Investments Rationale for Seeking Value............................11-12
1.1.5 Global Trends, Challenges and Barriers…………………………......13-15
1.1.6 Training Evaluations in South Africa……………………………….....15-17
1.1.7 Training Evaluations in the Research Organisation…………................17
1.2 RESEARCH PROBLEM ................................................................................18
1.3 RESEARCH STATEMENT.............................................................................18
1.4 RESEARCH OBJECTIVES............................................................................19
1.4.1 Primary Objective.................................................................................19
1.4.2 Secondary Objective....................................................... ................19-20
1.5 LITERATURE REVIEW………………………….............................................20
1.5.1 Theoretical Objectives....................................................................20-21
1.5.2 Empirical Objectives............................................................................21
1.6 RESEARCH DESIGN................................................................................21-22
1.6.1 Research Population……………………………………………………..23
1.7 DATA ANALYSIS.............................................................. ..............................23
1.8 LIMITATIONS OF THE RESEARCH........................................................23-24
1.9 ETHICAL CONSIDERATIONS........................................ ..............................24
1.10 CHAPTER LAYOUT..................................................................................24-25
1.11 CHAPTER CONCLUSIONS...........................................................................25
CHAPTER 2: LITERATURE REVIEW IN THE FIELD OF LEARNING, TRAINING
SYSTEMS AND TRAINING EVALUATIONS
2.1 INTRODUCTION.................................................................................................26
2.2 TRAINING, LEARNING AND DEVELOPMENT SYSTEMS...........................26-27
2.2.1 Definitions of Terminology………………………………………………..27
2.2.1.1 Addie Training Cycle………………………………………………27
2.2.1.2 Training ……….……………………………………………………27
2.2.1.3 Learning ……….…………………………………………………..27
2.2.1.4 Training Effectiveness……………………………………………28
2.2.1.5 Training Measurement and Evaluation.………………………...28
2.2.1.6 Systems Approach to Training…………………………………..28
2.2.2 Training systems practice……………………………………………………..29-33
2.3 THE ADDIE TRAINING CYCLE……………………………………………………34
2.3.1 The Different views of Addie…………………………………………35-36
2.3.2 Critique for and Against Addie………………………………………37-38
2.3.3 The Evaluation Centric Model and Value Expressed………………38-40
2.4 TRAINING EVALUATION, MODELS AND FRAMEWORKS……………........41
2.4.1 Kirkpatrick Foundational Model………………………………………………41-43
2.4.2 Critique and Move to Return on Expectations……………………………...43-44
2.4.3 Phillips Five Level Framework……………………………………………… 44-45
2.4.3.1 The Return on Investment Model………………….....45-46
2.4.3.2 Phillips Learning Value Chain……………………….46-48
2.4.4 Challenges with the Phillips Model……………………………………………48
2.4.5 The Evaluation Implementation Process Map………………………………..49
2.5 RESEARCHER'S VIEW: THE COMBINED ADDIE AND
KIRKPATRICK/PHILLIPS FOR LEARNING VALUE……………………………..50-51
2.5.1 Stage one…………………………………………………………………52
2.5.2 Stage two………………………………………………………………52-53
2.5.3 Stage three…………………………………………………………………53
2.5.4 Stage four………………………………………………………………53-54
2.5.5 Stage five……………………………………………………………….54-55
2.5.6 Stage six………………………………………………………………...55-56
2.6 OVERVIEW OF ALTERNATIVE EVALUATION METHODS……………..56-63
2.7 CHAPTER SUMMARY………………………………………………………...63-64
CHAPTER 3: RESEARCH DESIGN AND METHODOLOGY
3.1 INTRODUCTION…………………………………………………………………..65
3.2 RESEARCH OBJECTIVE…………………………………………………………65
3.2.1 Generic Objective.……………………………………………………..65-66
3.2.2 Specific Objectives………………………………………………………...66
3.2.3 Research Questions ………………………………………………….66-67
3.3 RESEARCH DESIGN……………………………………………………………..68
3.4 RESEARCH METHOD………..……………………………………………….68-69
3.4.1 Research Population and Sampling…………………………………69-71
3.4.2 Data Collection……………………………………………………………..71
3.5 MEASURING INSTRUMENT………………………………………………....72-73
3.5.1 Piloting of Instrument……………………………………………………...74
3.5.2 The Reliability of the Instrument…………………………………………74
3.5.3 The Validity of the Instrument…………………………………………….74
3.6 CHAPTER SUMMARY…………………………………………………………...75
CHAPTER 4: RESEARCH FINDINGS AND RESULTS
4.1 INTRODUCTION……………………………………………………………….76-77
4.2 RESEARCH FINDINGS………………………………………………………77-78
4.3 QUANTITATIVE ANALYSIS......................................................................78-79
4.4 RESULTS AND INTERPRETATION OF FINDINGS………………………….79
4.4.1 Response Rate…………………………………………………………79-80
4.4.2 Description of the Population…………………………………………80-86
4.4.3 Survey Questionnaire Responses……………………………………….87
4.4.3.1 Research question i…………………………………………....87-88
4.4.3.2 Research question ii…………………………………………..88-97
4.4.3.3 Research question iii……………………………………….....97-99
4.4.3.4 Research question iv………………………………………..99-101
4.4.4 Correlation Analysis………………………………………………..101-104
4.4.5 Open-ended questions……………………………………….104-110
4.4.6 Overall Findings…………………………………………………….110-111
4.5 CHAPTER SUMMARY………………………………………………………….111
CHAPTER 5: RECOMMENDATIONS AND CONCLUSION OF THE STUDY
5.1 INTRODUCTION……………………………………………………………112-114
5.2 OVERVIEW OF STUDY…………………………………………………………114
5.3 FINDINGS OF STUDY…………………………………………………………..114
5.3.1 Research question i…………………………………………………114-115
5.3.2 Research question ii……………………………………………………...115
5.3.3 Research question iii……………………………………………………..116
5.3.4 Research question iv………………………………………………..116-117
5.4 VALUE OF THE STUDY........................................................................117-118
5.5 LIMITATIONS OF STUDY………………………………………………….118-119
5.6 FUTURE RESEARCH………………………………………………………119-121
5.7 RECOMMENDATIONS…………………………………………………………..121
5.7.1 Recommendation i………………………………………………….121-122
5.7.2 Recommendation ii………………………………………………….122-123
5.7.3 Recommendation iii……………………………………………………....123
5.7.4 Recommendation iv…………………………………………………123-124
5.7.5 Recommendation v………………………………………………….124-125
5.8 CHAPTER SUMMARY AND CONCLUSIONS...………………………..........125
LIST OF TABLES
Chapter One: None
Chapter Two:
Table 2.1 Types of Approach to Training and Evaluation Value………………....30
Table 2.2 Kirkpatrick Levels and Evaluation Objectives…………………………..43
Table 2.3 Phillips five Levels……………………………………………………...44-45
Table 2.4 Learning Value Chain………………………………………………….47-48
Table 2.5 Models that are based on Kirkpatrick/Phillips Frameworks………......59
Table 2.6 The Researcher's Value View Model ………………………………..60-61
Table 2.7 Researcher's Adapted Compilation of Industry Best Practice…………62
Table 2.8 ASTD (ESI)…………………………………………………………………63
Chapter Three
Table 3.1 National HCM Target Group ………………………………………….70-71
Table 3.2 Survey Questionnaire Description……………………………………….73
Chapter Four
Table 4.1 Comparison of Distribution and Response Rate……………………….79-80
Table 4.2 Frequency Table of Demographic Description of Respondents…… ..81-82
Table 4.3 Addie’s Importance in Training Evaluation……………………………...87-88
Table 4.4 Models Known and In Use……………………………………………...........89
Table 4.5 Best Practice Evaluation Guidelines in comparisons with Research
organisation Evaluation Level practice…………………………….…………………….90
Table 4.6 Current Levels of Kirkpatrick/Phillips Practice……………………………...91
Table 4.7 Location and timing of Level one and Level two Currently Practiced..92-93
Table 4.8 The current Most Popular Assessment tools……………………………….93
Table 4.9a Timing of Level three Evaluation Practice: positive responses………...94
Table 4.9b Timing of Level four Evaluation Practice: negative responses………...95
Table 4.10 Approaches to Level three evaluation…………………………………95-96
Table 4.11 Approaches to Level four evaluation…………………………………..96-97
Table 4.12 The Most Common Problems and Barriers to Training Evaluations……98
Table 4.13 Use of Results of Training Evaluations Currently/future…………………99
Table 4.14 Use of Evaluation Findings ………………………………………….100-101
Table 4.15 The Evaluation Success Index (ESI) of nine items………….........101-102
Table 4.16 Managers views on evaluations……………………………………..104-105
Table 4.17 ETDPs'/practitioners' views on evaluation…………………….105-107
Table 4.18 Developmental needs of respondents………………………………107-109
LIST OF FIGURES
Chapter One: none
Chapter Two:
Figure 2.1 Traditional view of Addie…………………………………………………….35
Figure 2.2 The 5 Stage Addie Recursive training cycle..........................................35
Figure 2.3 Addie with Evaluation Centralised..........................................................39
Figure 2.4 Kirkpatrick's model: four levels……………….………………………………42
Figure 2.5 Phillips five level framework…………………………………………………45
Figure 2.6 The evaluation process flow…………………………………………………49
Figure 2.7 The adapted view combined Addie and summative levels……………..50
Figure 2.8 The Addie Value Chain stars and spokes: Researcher's View………….51
Figure 2.9 The Sloans five Pillar Model…………………………………………………58
Chapter Four
Figure 4.1 Age groups of respondents………………………………………………….83
Figure 4.2 Gender groups of respondents……………………………………………..83
Figure 4.3 Race groups of respondents………………………………………………...84
Figure 4.4 Rank groups of respondents…………………………………………….…..84
Figure 4.5 Regional groups of respondents……………………….…………………...85
Figure 4.6 Tenure of groups of respondents……………………………………….85-86
Figure 4.7 Educational Qualifications of respondents……….……………………….86
Figure 4.8 ESI comparison of findings with research organisation…………………..103
BIBLIOGRAPHY……………………………………………………………………..126-133
APPENDICES
Appendix A..............................................................Approval letter to do the Research
Appendix B…………………………………………………………..Survey Questionnaire
Appendix C……………………………Approval to use ASTD and (i4cp) Research ESI
Appendix D…………………………………………………………………SDF Feedback
Appendix E…………………………………………………………......Statistical Analysis
CHAPTER: 1
CONTEXTUALISATION OF THE RESEARCH STUDY AND
PROBLEM STATEMENT
1.1 INTRODUCTION AND BACKGROUND
This chapter provides the context for the research project on the significant value of an effective systematic training evaluation practice in the search for learning value. The chapter therefore outlines and focuses the study by briefly describing the theoretical and methodological approaches. Finally, the chapter concludes with a discussion of the ethical considerations, limitations and proposed chapter layout of this study.
This study examines the current training evaluation practices at a corporate university in the search for learning value. According to Lui-Abel (2012: 410), Noe (2010: 82-86) and Elkeles and Phillips (2007: 32-33), a corporate university is a centralised learning centre for a large organisation. Corporate universities are funded by organisations, as they primarily serve the employees of that organisation. One of the key characteristics of a corporate university is that it aligns the learning strategy with the business strategy. Investments in corporate universities are large and therefore demand that their value be declared (Lawson, 2009: 286). The training providers are both internal and external stakeholders with whom collaboration and relationships for the benefit of skills development are nurtured. External stakeholders are sector education and training authorities (SETAs), academe/universities and various external providers of specialised workshops and conferences.
Lui-Abel (2012: 411) further states that, in the four-quadrant model of deliverables for a corporate university, "training measurement and evaluation" is a strategic demand. Learning value is attached to an effective systematic training evaluation practice, due to its potential to boost the amount of value data that can be collected on learning. In other words, the more effective and systematic a training evaluation practice is, the more the value of learning is exposed for declaration in either quantitative or qualitative terms.
The Addie training cycle is proposed as the systematic structure from which the research will view and interrogate the current training evaluation practice. The systems approach to training (SAT) and training evaluation has its foundations in the early seventies (Rossi, Lipsey & Freeman, 2004: 1). The Addie training cycle is a systematic approach and is also known as the Instructional Systems Design (ISD), which is discussed further in chapter two. In other words, Addie is a tool or structure that can guide the evaluation practice towards the critical areas where value is created, in the search for learning value.
Bachetta (2012: 1) concurs that the systems approach to training (SAT) could enhance the collection of valuable data with which to make judgements on the value added. The Addie training cycle is a model that recommends the training activities to engage in to meet a training need. Furthermore, Bozarth (2008: 4) agrees that the well-known Addie training cycle or instructional systems design (ISD) is a model that the research investigation could exploit to determine the value of learning. The training cycle provides opportunities for learning to create value. Collecting data by implementing training evaluations is a cornerstone of finding the learning value of training.
Training evaluation needs data to make judgements of value-add. According to Agarwala (2012: 365), data on the value-creating opportunities in the training cycle is collected by investigating the process and outcome criteria. The process criteria consist of the following types of evaluation: confirmative, diagnostic and formative. The outcome criteria, or during- and post-learning data, deal with the summative evaluation elements. The summative criteria will be assessed using the Kirkpatrick/Phillips model and metrics. In legal terms, this implies that there are procedural and substantive processes in the practice of training evaluations that point to the value creation efforts of Learning and Development (L&D).
Biech (ed.) (2008: 196) also promotes the Addie training cycle as a structure or system that will support the training evaluation efforts and practice in the search for the value of learning. The Addie training cycle indicates the major activities through which Learning and Development (L&D) can create learning value. In other words, the steps or stages of the Addie model delineate the training activities which are implemented to meet the training need.
Kirkpatrick and Kirkpatrick (2010: 3) state that the initiator of summative training evaluations was Donald Kirkpatrick. He provided a model called the four-level theory or taxonomy. The taxonomy was developed in 1959 and was one of the first models for training evaluation. Although the theory focused on the summative evaluation of training only, it was welcomed. Other divisions in business operations had been using monitoring and evaluation models for some time before the training evaluation model was developed (Beevers & Rea, 2010: 201). The researcher's assumption is that the business models gave Kirkpatrick the ideas for a training evaluation framework.
According to Phillips and Phillips (2007: viii), Donald Kirkpatrick left the theory unattended for some time and therefore did not make the four-level theory practical for use. After completing a meta-analysis of Kirkpatrick's theory, Jack Phillips changed some aspects to develop an adjusted framework. Levels one to four were reviewed with slight variations to the level outcomes, and a fifth level was incorporated, now known as the return on investment (ROI) methodology. Phillips established the five-level framework with practical tools for use by evaluators of training. Hence the Kirkpatrick/Phillips (K/P) model will be used in this research paper, acknowledging the originator Kirkpatrick for the four levels and Phillips for the subsequent work and the fifth level or return on investment (ROI) framework. Further discussion of the models follows in chapter two.
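For orientation, the fifth-level calculation is conventionally expressed as the ratio of net programme benefits to programme costs. The following is a minimal sketch using hypothetical figures, not the research organisation's actual data:

\[
\text{ROI}(\%) = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
\]

For example, a programme costing R200 000 that yields R260 000 in monetised benefits returns \((260\,000 - 200\,000)/200\,000 \times 100 = 30\%\); the related benefit-cost ratio is \(260\,000/200\,000 = 1.3\).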
The following sections examine the intentions of training evaluations in training organisations in a little more detail, and set the background to this study.
1.1.1 The Purpose of Training Evaluation
Phillips and Phillips (2007: 4) and Opperman and Meyer (2008: 187) agree on
the following principles that encapsulate the objectives of training evaluation for
stakeholders:
To identify and improve dysfunctional processes in L&D, including training evaluations.
To evaluate the diagnostics of the training needs to satisfy client requests.
To evaluate the learning objectives developed and the design of the
learning experience so as to meet the need or to improve the design.
To appraise the effectiveness of learning programs and the value of
investment involved.
To evaluate value of interventions in alignment with business strategy.
To build relationships with all training stakeholders.
To provide feedback for developmental purposes on the trainers/facilitators
and also for performance ratings.
To meet the demands of legislations.
To continue with effective programs and providers.
To discontinue ineffective programs and terminate unreliable, non-performing providers.
Kirkpatrick and Kirkpatrick (2010: 2-3) also suggest that learning professionals should evaluate their deliverables to organisations, because even in the absence of evaluation the value of learning is being judged anyway. Divisions in organisations that create wealth need to prove the value of their contribution to the bottom line; why, then, not training? Training is not exempt from proving the value of its investment in developing employees.
The bottom line entails all types of value, not only monetary value. In other words, in auditing our learning practices and processes the possibility arises of unearthing learning value. In short, measuring value is fundamental to running learning and development as a professional business function (Trolley, 2009: 1). It is not only to justify what L&D has done historically, although that would be a by-product (Noe, 2010: 217; Werner & DeSimone, 2009: 199). In other words, an effective training evaluation practice supports the gathering of data on the value created in all its activities to meet the training need. Training evaluation is about judging, shaping and magnifying the value that L&D adds, by answering questions similar to the following (Phillips & Phillips, 2007a: 44-45; Wick, Pollock & Jefferson, 2010: 10):
How much money to spend on a given program and was it worthwhile?
What vendors to select for content?
What blended learning mix should be used?
How do we measure which groups need help?
How do we drive completions?
How well did people “learn”? Who? Why?
How can we reduce the cost of this program?
What business impact did it have?
The intention of an evaluation activity should be to produce or unearth evidence that depicts its value and benefits, to be discussed next.
1.1.2 The Value and Benefit of Training Evaluations
The value of evaluation needs to be recognised by workplace learning professionals (WPLPs) as a tool for training to genuinely produce results (Elkeles & Phillips, 2007: 4). Training evaluations should be part of the budget discussion when interventions are planned, and not an afterthought (American Society for Training and Development (ASTD) & Institute for Corporate Productivity (i4cp) Research, 2009: 5).
The value of an effective training evaluation practice is the quality and quantity of data that is gathered systematically to make a judgement on the value of learning (Noe, 2010: 215; Opperman & Meyer, 2008: 183). In other words, the more effective and systematic a training evaluation practice is, the more the value of learning is laid bare. Training Journal (2004: 67) cites Kearns' added concepts of value in the four variables below, and states that evaluation should start with the benefits to business.
Quality of products and services that are improved, reinvented and developed to benefit customers.
Quantity of products that is increased during production due to implementing lean manufacturing or six sigma methods.
Pricing benefits that increase due to market share.
Low production costs due to reduced rework and optimal staff numbers.
Phillips and Phillips (2010: 5) state that there is a new definition of "value", or of how training evaluations contribute value. Value has a unique set of characteristics, stated as follows:
Data must have elements of quality and quantity.
Financial and non-financial indicators are present.
The time frames must be varied.
It must meet the needs of all stakeholders.
The methods and analysis must be consistent and of conservative standards.
The providers must be well trusted as sources of trustworthy information.
The processes must perform or accomplish the right thing.
It must create a need for movement.
Noe (2010: 13) alludes to the creativity and innovation that lie dormant in individual employees and need training to be brought to the fore. Human assets therefore need investment, time and nurturing to create value. To create value for an organisation, the quality of Learning and Development (L&D) must be well strategised, aligned and managed. The learning function creates value through its activities, systems and professionals. The function must be led by a Chief Learning Officer (CLO) who knows the business, the function of L&D in an organisation and its strategic capabilities (Elkeles & Phillips, 2007: 10).
Phillips and Phillips (2009: 44) concur that C-level executives know and believe in the value of learning and want practitioners to measure impact and results for reporting. They state, however, that L&D mostly measures at K/P levels one and two only. The consternation with the measurement of training is that the levels that are measured mean nothing to C-suite executives. The accusation against evaluation measurement is that it caters for the needs of the learning division and is of no use to business. According to Wick et al. (2010: 9), training concentrates on learning objectives; but if these are aligned to business strategy, learning objectives are in fact business objectives. It can therefore be concluded that training must measure all levels for interventions, but report the higher-level evaluation results to decision-makers. The lower levels can be used to improve what matters to the improvement of L&D activities.
Agarwala (2012: 364) indicates that training evaluation supports the value determination of learning. The search for data in the training systems to make an assertion of value is an imperative. Erasmus, Loedolff, Mda and Nel (2012: 246) state that training evaluations have a few vital purposes, one being to provide value to the organisation, whilst Biech (2008: 50) states that value is expressed or dictated by the clients requesting the training. Coetzee, Botha, Kiley and Truman (2007: 296) also agree that stakeholders know what value is; as with quality, it is the receiver who determines it. Simply construed, it is the stakeholders or clients who constantly weigh the benefits and the value of items or services and make a judgement on the value or quality received.
Kirkpatrick and Kirkpatrick (2012: 124) state that training evaluation is implemented based on the inputs and expectations of the client. The learning intervention must address the identified gaps in Skills, Knowledge and Attitudes (SKA's) to provide appropriate solutions, and then all contributing efforts must be evaluated. The inputs include financial and non-financial investments, both tangible and intangible, both hard and soft, both quantitative and qualitative (Beevers & Rea, 2010: 175).
The benefits of implementing training evaluations, according to Opperman and Meyer (2008: 187), include data on the value of a learning program and the verification of results for management reporting. Phillips and Phillips (2007a: 7-10) state a number of benefits of measurement and evaluation; if the training budget is competing for attention, one of the benefits could be justification of the required investment (Wick et al., 2009: 5). Bersin (2008: 31) states that measurement supports the judgement to be made. Meyer and Orpen (2012: 278) further state that evaluation requires instruments that collect valid data for a judgement to be made on the effectiveness of a training intervention. Simply construed, training evaluation practices are synonymous with data collection tools, procedures and processes.
Beevers and Rea (2010: 161) also agree that, as professionals, HRD practitioners need to be well versed in evaluating training provision, and will then reap the following benefits of training evaluations:
The L&D programs provided are continuously reviewed and improved for customers, with no training for the sake of numbers only.
The support of learning stakeholders is obtained, because training is aligned to business needs and will therefore have an impact on individuals, teams and the organisation.
Confidence in the L&D team is built.
L&D garners respect from business and is seen as a credible business partner.
Training evaluation provides evidence of learning value.
Evaluation results are used by C-suite executives and L&D to make critical decisions.
Biech (2008: 695) discusses the ASTD 2004 competency model, which makes the implementation of training evaluation a key skill for learning professionals. The ASTD 2013 competency model is much the same, but enhanced for social media. Surveyed organisations have affirmed the role of training in value creation, but report challenges in collecting value data from the evaluation process and in aligning it with results (CIPD survey on talent development, 2013: 33). The preceding statements indicate that organisations are searching for value in learning via evaluations, but challenges are apparent. These challenges and barriers require new, innovative thinking, especially the standardisation of measurement tools or new models for a new era (Allen & Sites, 2013: 1). The following examines the global view of training evaluations first, before moving on to the challenges and barriers.
1.1.3 Training Evaluations: A Global View
Beevers and Rea (2010: 167) indicate that the primary purposes of training evaluations are to provide feedback in order to prove, improve, review and learn. The quality imperative demands that practices and processes be scrutinised for areas that require improvement, to make the practice fit for use. Generally, individual programs or single training interventions are evaluated for effectiveness only, and mostly only summatively; the practice, however, needs to be efficient, effective and systematic to support the value search.
Guerra-Lopez (2008: 118) concurs that, in the analysis of the training evaluation practice, it is crucial to assess the processes that support and enhance, or thwart and hamper, evaluation efforts. The assessment of the training evaluation practice and processes is required for continuous improvement of the HRD management function. In other words, evaluation practices that are ineffective and inefficient require scrutiny, for the sake of credible evaluation results and future value.
Agarwala (2012: 365) concurs that organisations that subscribe to training evaluation as a tool within the L&D function are successful in delivering value. The project can be deemed a meta-practice analysis of training evaluation. This investigation will focus on the effectiveness and efficiency of the current training evaluation system; both strengths and problem areas within the organisation's training evaluation system will be identified (Guerra-Lopez, 2008: 26). Agarwala (2012: 367) states that the management of evaluation towards a more effective practice is paramount for L&D survival and reputation. Systematic training evaluation entails evaluating the whole system, from needs analysis to evaluation (Bozarth, 2008: 5). A summative or end-of-training evaluation alone will lack the rich data that can be collected during the needs assessment, the development of learning objectives and course design. Opperman and Meyer (2008: 186) also promote an integrated approach to training evaluations.
Kirkpatrick and Kirkpatrick (2010: ix) state that the Kirkpatrick model provides a process for summative evaluation, whilst the Phillips framework provides the metrics for each type of training evaluation, viz. diagnostic, formative and summative evaluations.
Kirkpatrick and Kirkpatrick (2010: 168) also emphasise that training evaluations need to build a "strong chain of evidence" to make value judgements credible. In other words, there is a need to establish the final goals upfront, before embarking on the evaluation route for value demonstration. Kirkpatrick and Kirkpatrick (2010: 1) also state that strong documented evidence is necessary for return on expectation (ROE) or return on investment (ROI) to be credible.
According to Stone (2009: 2), training enhances performance and productivity and so contributes towards competitive, sustainable individuals and enterprises. High-performing individuals and organisations are a current imperative for the sustained survival of organisations facing tremendous challenges, exacerbated by global economic woes (KPMG Report, 2012: 1). Next is a discussion on seeking value for the investment.
1.1.4 Training Investments Rationale for Seeking Value
Globally, the training investment is huge, and the value of the learning that utilised those budgets therefore needs to be realised (Erasmus et al., 2012: 239). Based on economies of scale, all investments demand accountability, more so after the recent economic downturn. In other words, investors seek returns. Kirkpatrick and Kirkpatrick (2010: 5) also note that organisations globally invest annually in training, learning and development, and that all investments demand accountability, return and value demonstration. Wick et al. (2010: 256) agree that systematic training evaluation practices provide answers to questions of value, accountability and return on investment.
Elkeles and Phillips (2007: 187) contend that learning investment demands an evaluation of the training efforts to declare its value to stakeholders. This research project will therefore investigate the current evaluation practices to determine whether they are effective in demonstrating the value of learning. In other words, there is a need to assess the efficiency of the evaluation system before effectiveness, value and value-add can be determined. The evaluation of training interventions and projects, and the value of those interventions, depend on an effective and systematic practice to realise value.
Trolley (2009: 1) and Phillips and Phillips (2007: 10) agree that training needs to be businesslike and to manage budgets circumspectly to create value. Business routinely produces quarterly reports and statements of results for decision-makers to review against projections and proposed targets. Simply stated, training is competing for scarce resources in organisations, and an effort to refocus and change is therefore imperative (Wick et al., 2010: 256). According to an ASTD report (2011), the United States of America (USA) has spent over $150 billion on training.
In his webinar, Slovovitch (2012: 1) makes a deliberately provocative statement to get the attention of training functions that spend with impunity, by requesting that training stop wasting money on training. He further states that between 1991 and 2013 investment in training rose by a staggering 290 percent. Training, as a strategic business partner, needs to support the business in its endeavour to be sustainable and relevant by being part of the savings drive. Pervaiz (2012: 1) states that training has an impact on employee turnover, which alludes to the value proposition in talent management.
Whatever the models and practices, there are always trends, challenges and barriers to their application; these come under the spotlight next.
1.1.5 Global Trends, Challenges, and Barriers to Training Evaluation
Phillips and Phillips (2007a: 3) and Agarwala (2012: 365) identify the following global trends that are in favour of training evaluations:
Organisations have increased investment not only in training but in evaluation.
CEOs are definitely seeking training evaluations that show results and impact.
Organisations are seriously moving to the higher levels, viz. three to five.
The increase in evaluation is driven by sponsors and clients.
A systematic and proactive evaluation approach is making new inroads, which means that evaluation starts at the needs analysis stage and is not a summative activity only.
Line management forms a vital link in the training and evaluation efforts.
Training must drive training evaluation efforts.
Evaluation forms part of the needs analysis and is designed into the delivery of learning.
Learning Management Systems (LMS) and other technologies are supporting evaluation with data collection and collation.
Planning and strategy are critical to the evaluation cycle.
Comprehensive evaluation and measurement have enhanced training budgets for organisations, and the opposite is true for organisations lagging behind in comprehensive evaluation practices.
ROI/ROE metrics are growing in demand.
Elkeles and Phillips (2007: 18-27) indicate that the Chief Learning Officer (CLO) or the heads of learning face a number of challenges in the current economic climate. Managing learning expectations and the budgets spent on employee development will contribute to the sustainability of the organisation, but is just one of them. The American Society for Training and Development (ASTD) (2009: 3) and Phillips (2010: 1) surveyed Chief Executive Officers (CEOs) and Chief Learning Officers (CLOs), who agree that learning does add value, but that learning functions are measuring levels that do not show the impact or results for the business.
There are a number of application or process problems, barriers and challenges facing evaluation endeavours globally as well. Phillips and Phillips (2007a: 4-7) state the following:
Lack of emphasis by top management on training evaluation.
Business complains about the lack of alignment of learning outcomes, whilst learning complains about the lack of support for training at the critical stages of before, during and after.
Trainers lack the skills to conduct training evaluation.
Training staff are not clear on evaluation criteria; in other words, because training is reactive, no baseline data is collected and, according to Kearns (2005), no needs assessment is done, so outcome criteria are not set well ahead of the training implementation.
Training evaluation is seen as time-consuming and a waste of money.
There are too many complex theories and models, so practitioners lack understanding of them and shy away from them.
The value of evaluation is not known and no strategy is set, therefore no planning is done.
Stakeholders do not support the process; it is not only a training activity.
Evaluation data does not provide value for funding.
The data is used inappropriately or not at all.
Lack of consistency, standards and sustainability of an approach or metrics; not a new model every month.
Kirkpatrick and Kirkpatrick (2010: 23) state that the Addie stages from needs analysis to the implementation of interventions are attempted enthusiastically, but evaluation falls short of holistic implementation. Phillips and Phillips (2007: 13) also allude to the fact that most of the areas on the training cycle receive the necessary attention, except for training evaluations. Training evaluation is often seen as an afterthought, or is summative only and stops at level two (Kirkpatrick & Kirkpatrick, 2010: 13).
Wick et al. (2010: 260) allude to the fact that evaluation is an important tool for building capacity and abilities. Nonetheless, the evaluation process must be managed to be effective, efficient, sufficient and systematic in producing measurements with appropriate tools to make judgements about the training. The global view is similar for all L&D functions, but the following brings home the reality of training evaluation for South Africa (Meyer et al., 2012: 4). Discussion follows.
1.1.6 Training Evaluation in South Africa (SA)
Training evaluation in South Africa falls within the domain of Higher and Further Education and Training, the South African Qualifications Authority (SAQA) and the National Qualifications Framework (NQF) under the National Qualifications Framework Act 67 of 2008 (Meyer & Orpen, 2012: 281). The SAQA Act 58 of 1995 has, since its inception, made quality, access, training evaluations, assessments and competency the central pillars of producing quality qualifications for the SA workforce. Assessments, evaluation, moderation and verification of competency are unit-standard based and can therefore be learnt. The process requires competent assessors, moderators and verifiers to collect valuable data for learning and individuals. SAQA also prescribes percentages for each of the K/P levels for the establishment of quality education and training systems.
Coetzee et al. (2007: 258) state that, ideally, the trainer should not be part of the assessment process, which actually leads to the notion that training evaluation requires the same. In South Africa, training evaluation currently has a set of unit standards (Meyer & Orpen, 2012: 280). The management of evaluation practices towards an effective, efficient and credible state is the responsibility of all training stakeholders. Training practitioners, though, must take the lead in the evaluation process. Managers and their learners also contribute to the process in a number of ways (Noe, 2010: 196).
Training evaluation is a quality control process for the qualifications and certification of trained people. The quality assurance system in the organisation must own the training assessment and evaluation processes, amongst others. Simply stated, quality assurance in the training division must provide the training policy, procedures and processes, which include strategies for each activity on the Addie cycle. In other words, the evidence-collecting procedures and tools must be available for application in practice (Meyer & Orpen, 2012: 282). Quality assurance and management in education and training in South Africa have gained prominence because of the Skills Development Act of 1998 and the Skills Development Levies Act of 1999 (SDA/Levy) (Erasmus et al., 2012: 321). The unit standard for conducting training evaluations states that the tools are provided.
The South African Minister of Higher Education and Training (DHET), Nzimande, alluded to the fact that from the inception of the SDA and Levies Acts in 2000 to the current time, over R37.5 billion has been collected and R23 billion spent on skills development, but no value or return has been demonstrated, as the money was spent mainly on short courses. The Minister is concerned about value demonstration for the skills development endeavours in the country over the past ten years. Anecdotal value was created by association with expenditure, but not demonstrated scientifically. In other words, there are challenges in demonstrating the value of training, or value is simply not requested as part of the reporting systems of the Skills Development and SETA Acts.
Corporate governance and the Public Finance Management Act (PFMA, 2008) have also demanded holistic management practices, especially accountability for expenditure in State Owned Companies (SOCs).
1.1.7 Training Evaluation in the Research Organisation
Organisations are continuously searching for value in all investments, or the need to "show the money" (Phillips & Phillips, 2007b: 1). According to the strategic roadmap 2012 to 2015 and Ndlovu (2013: 1) (feedback on a structured questionnaire developed by the researcher to obtain input from the skills development facilitator (SDF); see Appendix D), the research organisation has spent approximately R110 million on training in the last three years. So what is its value declaration? What is the cost-benefit analysis?
The organisation is currently experiencing a number of challenges. All divisions are under scrutiny, not least the corporate university or learning and development (L&D) division. Training evaluation is a tool that can provide proof of the value of learning to its stakeholders. The stakeholders are decision-makers, training, business managers, employees (learners), labour associations and providers. The value declaration of learning, using training evaluation as a management tool for accountability, is the answer. The research organisation has a mandate to evaluate training and report on return on investment (ROI). Noe (2010: 216) and Wick et al. (2010: 185-186) agree that the investment of money, time and resources in training in an organisation requires accountability. According to Wick et al. (2010: 256), training is competing for scarce resources. An effective systematic training evaluation practice is critical to learning value extraction. Simply stated, the value of learning depends on an effective systematic training evaluation practice for it to be exposed.
1.2 RESEARCH PROBLEM
The problem facing learning and development (L&D) is to declare the value of
learning. The training evaluation practice is therefore a tool that learning and
development has at its disposal. The training evaluation practice collects
evidence or data to prove that learning is adding value to the organisation. The question, however, is: "Is the current training evaluation practice effective, efficient and systematic in providing evidence to declare the value of learning?"
1.3 RESEARCH STATEMENT
An effective systematic training evaluation practice has a critical role to play in the search for learning value, as it is the tool that collects evidence and data on the activities of L&D, when based on the Instructional Systems Design (ISD) known as Addie and on the summative models of Kirkpatrick/Phillips.
The main question that needs to be addressed is therefore:
"Is the current training evaluation practice effective, efficient and systematic in providing evidence to determine the value of learning?"
The research proposition for this study can be formulated as follows:
An effective systematic training evaluation practice plays a critical role in the declaration of learning value in an organisation.
The opposite will be true as well: a training evaluation practice that is ineffective, inefficient and non-systematic, or absent, will hamper the declaration of learning value in an organisation.
1.4 RESEARCH OBJECTIVES
1.4.1 Primary Objective
The primary objective of the research project is to investigate the current training evaluation practice at the corporate university of a state owned company (SOC). The objective is to review and critique the current practice of training evaluation to establish whether it is an effective systematic practice that supports the search for the value of learning. The current practice of training evaluation will be examined to determine the degree of effectiveness of the practice in providing evidence of the value of learning, and the depth and richness of that evidence for declaring holistic value systematically, and therefore whether all types of evaluation are considered in collecting data on the value created.
1.4.2 Secondary Objectives
The secondary objectives, which will support the research in answering the main query, are formulated as sub-questions as follows:
Sub-question 1: How does the Addie training cycle support the systematic
training evaluation process if used in assessing, developing, designing,
implementing and evaluating?
Sub-question 2: How does the Kirkpatrick/Phillips model support the effective
summative data collection process; the percentage attempted per level, what is
used to evaluate the levels, when is it evaluated and the value of the evaluation
level?
Sub-question 3: What are the problem areas, barriers or challenges to effective
training evaluations?
Sub-question 4: How can the training evaluation process be improved in future to be a systematic, effective and efficient practice, in comparison with the nine items of the Evaluation Success Index (ESI), which are findings from internationally surveyed organisations on what is deemed to be an effective evaluation practice (ASTD, 2009)?
1.5 LITERATURE REVIEW
According to Leedy and Ormrod (2013: 51), the literature study is important when embarking on a research project because it provides context for the research problem. Further, it forms the basis for unveiling current and past information related to the research topic. The basis of a research project is to establish all previous information, in the form of sources both in hard copy and electronic, on the subject under investigation. The investigation helps to focus the study and provides guidance for completing the project.
1.5.1 Theoretical Objectives
A theoretical literature review will aim to:
Review training, learning and development (TLD) practices locally and internationally.
Review the Addie training cycle or instructional systems design (ISD) and its impact on training evaluation systems.
Review evaluation models, frameworks and value metrics in the field of training, especially the Kirkpatrick and Phillips models, and others.
Review how to manage an effective systematic training evaluation practice for learning value.
Review the impact of training and evaluations on the value of learning globally and in South African organisations.
Find practical studies of systematic evaluation of training in organisations, locally or internationally.
Review reports on systematic training evaluations.
Review research on effective evaluation success indicators.
Establish what makes evaluations valuable and successful.
Review surveys and questionnaires on effective evaluation practices in evaluating evaluation, and the data collection instruments for each level; the where, when, what, why and how will be in focus.
1.5.2 Empirical Objectives
The empirical objectives that this study aims to achieve include:
To appraise the support of the Instructional Systems Design (ISD), known as the Addie training cycle, in the systematic approach to training evaluations for diagnostic, formative and summative evaluations.
To appraise the support of the Kirkpatrick/Phillips five-level framework for summative training evaluations.
To appraise the problems in the process, procedures and data collection methods.
To appraise what the future state of an effective practice would be, to provide data to prove the value of learning.
To appraise what is deemed effective and systematic training evaluation against the ASTD Evaluation Success Index (ESI) (ASTD & i4cp Evaluation Research, 2009: 1-65).
1.6 RESEARCH DESIGN
The research design includes both the design and the methods to be applied in reaching a conclusion. According to Leedy and Ormrod (2013: 184), the design most suitable for coming to a conclusion about the problem is a quantitative descriptive design, because a quantitative survey will elicit responses to a large number of questions based on the variables of the study. The design follows an applied approach, which means that it addresses a problem within an organisation that is dynamic in nature. The developmental design will be a cross-sectional study, as it will collect data in a single period, November 2012 to April 2013 (Leedy & Ormrod, 2013: 188).
According to Leedy and Ormrod (2013: 191), the descriptive quantitative research survey method is a design that describes information collected from a large population. It measures one or more variables by collecting data using a survey questionnaire. The survey design will require responses to the questions posed and will collect data that is easily quantifiable. A Cronbach's alpha above 0.700 will be judged acceptable as a reliability score. Because there are a number of questions and sub-items, a larger response will be required: at least a thirty percent survey response rate. The larger number of questions requires a larger number of responses, which will help to determine consistency of response. In other words, a question will be deemed reliable if a large group of participants answer it with the same response.
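For reference, Cronbach's alpha for a scale of k items is conventionally computed as

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) the variance of the total score. Values closer to 1 indicate greater internal consistency; the 0.700 threshold adopted here is the widely cited rule of thumb, not a value prescribed by the cited authors.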
A five-point Likert scale will be used for the majority of questions, which are mainly closed or forced-selection questions, except for the rare open-ended ones (Maree, Cresswell, Ebershöhn, Eloff, Ferreira, Ivankova, Jansen, Niewenhuis, Pietersen, Plano Clark & van der Westhuizen, 2007: 167). There will be approximately thirty-eight primary questions, with between one and ten sub-questions each, in the questionnaire. The services of an on-line survey company will be employed to convert the questionnaire to an on-line format and to distribute it to the given list of practitioners, so as to maintain anonymity.
1.6.1 Research Population
The target population for primary data collection will be all the members of the human capital team that provide training to employees, viz. the national Learning and Development (L&D) team, the Talent/Organisational Development (OD) team and the Employee Relations (ER) team. The census method of data collection will therefore be applied, which means that all members will be afforded an opportunity to be part of the research population. The survey will request participants to provide their perceptions and opinions on the current training evaluation practice in the organisation (Leedy & Ormrod, 2013: 189).
1.7 DATA ANALYSIS
The data analysis will be a descriptive quantitative analysis for all questions, except for the qualitative questions and the correlations used to assess the training evaluations practice implemented. The qualitative questions are open-ended questions that respondents answer in sentence form; the answers are therefore unique in nature, but provide views and opinions that enrich the close-ended forced responses. The preliminary analysis will be provided by the on-line survey company. The in-depth data analysis will be accomplished using the SAS JMP (2010: 1) software package, which can make the inferences, with guidance and support from a specialist technician (Leedy & Ormrod, 2013: 302).
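As an illustration only (the study's actual analysis used the on-line survey company's reports and SAS JMP), the descriptive analysis and the reliability check described above could be sketched as follows, using hypothetical five-point Likert responses and hypothetical item names:

    import pandas as pd

    # Hypothetical five-point Likert responses: 12 participants, 4 sub-items.
    responses = pd.DataFrame({
        "q1_1": [4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4],
        "q1_2": [4, 4, 5, 3, 4, 5, 4, 3, 3, 5, 4, 4],
        "q1_3": [3, 5, 4, 2, 4, 4, 4, 4, 3, 5, 3, 4],
        "q1_4": [4, 4, 4, 3, 5, 5, 4, 4, 2, 5, 4, 3],
    })

    # Descriptive statistics per sub-item (count, mean, std, quartiles).
    print(responses.describe())

    # Frequency table for one sub-item: how many chose each scale point.
    print(responses["q1_1"].value_counts().sort_index())

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Reliability check against the 0.700 threshold adopted in this study.
    print(f"Cronbach's Alpha: {cronbach_alpha(responses):.3f}")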
1.8 LIMITATIONS OF THE RESEARCH
The limitations of the study are that it is conducted in a single organisation; it will be within the state owned company (SOC). The micro-level view could be used to make changes within its systems, especially its systematic training evaluation. The findings could be used as a point of reference for similar sized organisations. It is also limited to the practitioners in Human Capital Development (HCM). The study does not have any control groups for the survey.
There are a number of different types of evaluation metrics, models and
frameworks found in the literature search on training evaluation. There are
different approaches as well, to be further discussed in chapter two. It falls
beyond the scope of this research study to use all types of training evaluation
metrics and therefore only the Addie and the Kirkpatrick/Phillips models will form
part of the questionnaire.
1.9 ETHICAL CONSIDERATIONS
Permission will be requested by the researcher to engage the research population of the organisation, as the study will involve its employees (see Appendix A). The researcher was requested to keep the organisation anonymous. Great care will be taken to give participants the option of whether or not to partake in the survey questionnaire, and permission will also be requested to use the information provided in the questionnaire. In other words, participation will be voluntary and all participants will be given a report on request after the findings are concluded (see the questionnaire in Appendix B).
1.10 CHAPTER LAYOUT
1.10.1 Outline of the Research Report
The research will consist of the following sections:
Chapter: 1 Introduction, Contextualisation of the Study
Chapter: 2 Literature Review in Training, Learning and Development, Addie
cycle, Evaluation models and framework Systems.
Chapter: 3 Research Design and Methodology
Chapter: 4 Data Findings and Analysis
Chapter: 5 Recommendations and Conclusions
Chapter: 6 Appendices of Questionnaires and Memorandums
1.11 CONCLUSIONS
The proposal introduced the mandatory requirement to declare the value of learning. The training evaluations practice is a tool that collects data on the value that was created; therefore, if it measures effectively, efficiently and systematically, it has the potential to declare the value of learning.
The support of the Addie cycle and the Kirkpatrick/Phillips evaluation models will be utilised to collect information on the current practice, to determine whether it is an effective and systematic practice. The Kirkpatrick and Phillips (K/P) summative framework will be utilised, once it is established that it is known and used, to assess the current practice's effectiveness in collecting post-learning data. The challenges faced, and how they can be resolved, will also be considered. The following chapter will be a review of pertinent literature on training, learning and development, the Addie and other training cycles, and evaluation models and frameworks.
CHAPTER 2
LITERATURE REVIEW: TRAINING SYSTEMS, TRAINING
EVALUATION AND LEARNING VALUE
2.1 INTRODUCTION
The previous chapter proposed that an effective systematic training evaluation practice plays a critical role in exposing learning value. A preliminary literature review was also attempted in the previous chapter. A literature review provides researchers with mental and theoretical frameworks and paradigms. Leedy and Ormrod (2013: 51) state that a literature review empowers researchers with a depth of knowledge concerning the research problem. The research problem falls within the applied research domain and therefore requires theory, models and similar information from previous research to work towards a conclusion on the problem statement.
When reviewing the literature, sources will provide information on learning value chains or learning metrics that inform training evaluation for measuring training impact and value. These sources will include the major international and local contributing bodies of knowledge in the field of training, learning and development (L&D): the American Society for Training and Development (ASTD), the Chartered Institute for Personnel Development (CIPD), Chief Learning Officer magazine, Training Magazine and topics on local training evaluation practices.
2.2 TRAINING, LEARNING AND DEVELOPMENT SYSTEMS
Kearns (2005: 135-145) states that the training evaluations process provides the link between value and learning, and further states that the profit motive needs to be replaced by a value motive. Without deliberate evaluation efforts the value of learning is unknown. Training evaluations are a critical component of training, learning and development systems; the following definitions are central to this study.
2.2.1 Definitions of Terminology
The following definitions are short overviews of some of the terms in this
research project, and are not at all exhaustive.
2.2.1.1 Addie Cycle
Addie is an acronym for the stages of the systematic approach to training (SAT) cycle, which begins with assessing training needs, then develops objectives, designs the material, implements the training, and ends with the summative evaluation stage (Biech, 2008: 196). The stages can vary from four to twenty, depending on the organisation's needs.
2.2.1.2 Training
Noe (2010: 5) defines training as the organisation's plan to provide the skills, knowledge and attitudes, that is the competencies, needed to perform the current jobs that are part of the organisation's reason for existence. Training is specific to each organisation's technical or functional requirements for productivity and high performance. However, strategically aligned training provides value to the bottom line (Wick et al., 2010: 256).
2.2.1.3 Learning
Wilson (2012: 15) alludes to the fact that learning is a complex activity that causes permanent change in the cognitive, affective and reflective domains, and that it can be involuntary, planned and/or continuous.
2.2.1.4 Training Effectiveness
Noe (2010: 216) refers to training effectiveness as benefits that stakeholders
derive from the implementation of training whilst design effectiveness relates to
the systematic approach that impacts training effectiveness or quality, efficiency
and impact (Opperman & Meyer, 2008: 187).
2.2.1.5 Training Measurement and Evaluation
According to Beevers and Rea (2010: 162) evaluation is defined as "to ascertain or set the amount or measure of value; to judge the worth of", whilst in training it is about measuring and analysing a number of inputs and outputs of the L&D provision, with a view to establishing effectiveness and value. Evaluation also judges the data that was measured (Noe, 2010: 214; Opperman & Meyer, 2008: 187). The evaluation of training is a deliberate effort to collect data on training and to convert it into valuable information in order to make a judgement of the value, effectiveness and efficiency of training (Noe, 2010: 216; Erasmus et al., 2012: 321). In other words, training evaluation assesses process criteria and outcome criteria to determine the value of the intervention (Agarwala, 2012: 365).
2.2.1.6 Systems Approach to Training
The South African Pocket Oxford Dictionary (2000: 982) defines 'systematic' as a methodical, efficient and organised process of approaching a task. The basis of the systematic process is training theories, frameworks and models. These theoretical models, frameworks and previous research provide the foundation for comparison and for the investigation into systematic training evaluation for learning. The systematic approach to training (SAT), Addie or the training cycle is defined as training based on the systems design process, which has a start and a continuous end: the process begins with a needs assessment and terminates with a summative evaluation report, but is also recursive (Bozarth, 2008: 5).
2.2.2 Training Systems Practices
This study will focus on systematic training evaluations and their practice in a corporate university or learning institute, to search for the value added by training. The corporate university in this study is the strategic learning and development arm, or human resource development (HRD) centre, of the organisation (Lui-Abel, 2011: 413). Lui-Abel (2011: 415) states that one of the organisation's goals is to provide programme "evaluation and measurement" as a value proposition. This research project will necessitate the investigation of the current practice of training evaluation at a corporate university or learning institutes situated in seven regions, in order to report the value in qualitative and quantitative terms (Phillips & Phillips, 2007a: 3 and Beevers & Rea, 2010: 55).
According to Kirkpatrick and Kirkpatrick (2010: 32) the human capital function has to become a strategic partner to business. As a system, it improves and causes transformation in employee skills, knowledge and attitudes (SKAs) on a continuous and planned basis (Noe, 2010: 56 and Lui-Abel, 2011: 413). The evaluation of this transformation is critical to the learning value for stakeholders and for changing business environments. According to Lawson (2009: 286), to prove the value of training, the learning function must act as a strategic transformation agent that systematically provides value by means of skills development for individuals, teams and the organisation, based on its strategic direction (Noe, 2010: 58-59 and Lui-Abel, 2011: 413).
According to Biech (2008: 15), since the Second World War training and learning have been catalysts for the booming industrial era. This rapid growth required a skilled workforce, which eventually produced a prolific "knowledge era". According to Noe (2010: 15) knowledge workers are skilled employees who contribute through what they know. They are technology savvy and are constantly challenged to change and to learn new ways and skills to remain current and relevant to the evolving needs of business.
Russ-Eft (2009: 465) states that the systems approach to training (SAT) and the instructional systems design (ISD) structure will help determine whether the training evaluation practice is systematic. Evaluations relating to the traditional approach to training (TAT) reached only levels one and two of the Kirkpatrick/Phillips (K/P) model, while the SAT and the performance consulting approach to training (PCAT) implement higher levels of evaluation and support business alignment, results and impact (see Table 2.1) (Barbazette, 2008: 88).
Table 2.1 Approaches to training and the practised evaluation levels

Approach: Traditional approach to training (TAT)
Goals/View: The training function knows what is required and is able to meet legislative demands.
Training evaluation levels: Level 1 (reaction); Level 2 (learning transfer, during or after training).

Approach: Performance consulting approach to training (PCAT)
Goals/View: Business needs are prioritised and the organisation decides on the training needs.
Training evaluation levels: Includes Levels 1 and 2; Level 3 (behaviour change, transfer to the job); Level 4 (results/impact).

Approach: Systems approach to training (SAT), ISD or the Addie training cycle, in which training is seen from a systems point of view
Goals/View: Collaboration and joint endeavours from the first stages, following through to integrative reporting.
Training evaluation levels: Addresses all levels, including evaluation at Level 5 (ROE/ROI).

Source: Barbazette (2008: 88)
Barbazette (2008: 2-46), Trolley (2009: 1) and Lawson (2009: 286) agree that the L&D function must be managed to provide the bottom-line results required by stakeholders. Bottom-line results translate into the wellbeing of a brand and its associates, and are indicated on the balance sheet as the profit generated. The literature also proposes that a thorough systems audit of the function is necessary to assess the value added and the value chain in the organisation, by defining what the function brings to the organisation that adds to its prosperity. The systems audit will also improve roles and responsibilities in the following key areas:
The strategic roles of management.
Performance consultation (Wick et al., 2010: 24).
Supportive internal partnerships (Eraut, 2011: 195-196).
Using project management as a skill in POLC (management skills).
Evaluation and assessment of interventions.
Outsourcing and external partnerships for quality purposes.
Managing practitioners according to skill and for placement.
Marketing the L&D function.
Planning training events using internal media and open days.
Proper scheduling and administration.
According to Stone (2009: 19-20), training that is strategically aligned to business outcomes adds value, effectiveness and efficiency for individuals, teams and the organisation. Wick et al. (2010: 30) concur that learning objectives need to be aligned to business objectives. Training evaluations provide data to conclude a report to stakeholders on the value of training or the intervention. The investment in learning, or in the training function, is a decision that organisations are compelled to make. On the one hand, learning organisations see it as a strategic competitive advantage that can be used to create wealth and future sustainability; on the other hand, if training is only funded because of legislation, or is viewed as a necessary evil, training then lives up to that label.
Elkeles and Phillips (2007: 15) allude to the choice between a possible value of investment (VOI) and not investing in the training of employees. Erasmus et al. (2013: 239) allude to the fact that training is implemented to produce learning value and should also be accountable. There is an established connection between training and the evaluation of training (Agarwala, 2012: 363). Evaluation is, thus, the tool for determining the value created during a systematic approach to training or the training cycle. Consequently, training evaluation is the responsibility of all training professionals who, after completing the training cycle, need to declare learning value and be accountable for the expenditure incurred.

Critics of the learning function in organisations state that practitioners have lost touch with the realities of business and lack business knowledge or acumen (Elkeles & Phillips, 2007: 4). Learning is therefore incorrectly construed in terms of needs, rather than L&D and management working together, using the systems approach to training, to bring about alignment with business needs. The power and politics between business managers and the learning function are evident in the type of relationship building, which is crucial for success.
Bersin (2008: 31) and Beevers and Rea (2010: 52-72) agree that in the current dispensation the L&D organisation relies on technology. Systems data is deemed to be part of business intelligence (BI), the data that resides in the organisation's systems. Various business systems provide data that should be leveraged for systematic training evaluation efficacy and efficiency. The systems approach to training (SAT) includes the following organisational elements, divisions and systems that support the systematic approach (Opperman & Meyer, 2008: 190):
Strategic vision.
Mission and values.
Business models.
Current and future plans and operations.
Government mandates and licensing requirements of SOCs.
New government ventures.
Human and government relations.
The Public Finance Management Act 2008 (PFMA).
Technology models, business intelligence, plans and operations.
Legal and regulatory compliance plans.
Procurement and B-BBEE.
Financial resources.
Human resources systems (Thorne & Mackay, 2007: 23).
Standards, quality and productivity targets.
Bachetta (2012: 1) concurs that SAT could enhance the collection of valuable data. The Addie training cycle is a model that proposes the training activities to be undertaken in order to meet the training need. The L&D function, in turn, utilises SAT to meet the training need systematically. In other words, Addie is a tool or structure that can guide the search for the value of learning.
This research project will investigate the process of training evaluations based on
the Addie training cycle, against the diagnostic and formative types of evaluation
generally overlooked as a source of value and, finally, against the summative
evaluation methods used in the organisation. The next section will be a
discussion on the value of the systematic approach to training evaluations and
the support of the Addie training cycle.
2.3 THE ADDIE TRAINING CYCLE
The Addie training cycle is illustrated in figure 2.1 below. It indicates the activities that the training/learning function will need to complete in creating value for employees, the organisation's bottom line and the L&D bottom line (Bozarth, 2008: 5). Bottom-line measures are the financial aspects of operating a business, in other words the profit margin, while for L&D the bottom line is the return on investment. Investments must grow and have a positive effect on profit (Lawson, 2009: 286). This outcome should be borne in mind by all training stakeholders if the training budget is to be seen as an investment and not an expense (Wick et al., 2010: 165). Investors in training, and receivers and providers of learning interventions, must seek value in L&D and its proposition for skills development.
The country's decision-makers and regulators think that skills development is the bottom line, but unfortunately this is only the tip of the iceberg. The search for learning value must be tracked throughout the entire Addie training cycle, which must also be systematic in its endeavours, to provide an effective quality control tool. According to Noe (2010: 9) training provision that is unsystematic will reduce the benefits; that is, a reactive approach overlooks stages in the Addie cycle and implements only what training clients request.
When certain stages are skipped, evaluation becomes difficult and credibility is lost. Evaluation is thus avoided (Noe, 2010: 8). The ASTD and i4cp (2009) research states that training divisions that overlook the value of evaluations do so because the CEO or management does not request evaluation efforts or reports. It is therefore difficult to isolate value; only the levels that are easy to evaluate, or of little importance to business, are attempted, that is, levels one and two of the Kirkpatrick/Phillips (K/P) five-level framework.
2.3.1 The Different Views of Addie
Figure 2.1 The traditional linear view of Addie. Source: Bozarth (2008: 4)
The simple linear flow of Addie belies the value that it creates; in reality, each stage requires intense activity by workplace practitioners, both line and training staff, to create value. This basic view displays a dashboard of the main elements involved in meeting a training need.
Figure 2.2 The recursive or iterative stages of the Addie training cycle. Source: Chan (2010: 6 & 17)
[Figure content: the five Addie stages - Stage One: Training Needs Analysis (A); Stage Two: Develop Learning Objectives (D); Stage Three: Design Material (D); Stage Four: Implement Learning Intervention (I); Stage Five: Summative Training Evaluation (E) - shown both as a linear flow (Analysis, Develop, Design, Implement) and as a recursive cycle (Analysis, Develop, Design, Deliver, Evaluate).]
The recursive approach views SAT as a continuous cycle, starting with analysis, moving through the various stages and returning again to analysis.
The linear sequential or waterfall view of the training cycle stages is as follows:
(Analyse) - analyse training needs.
(Develop) - develop the training objectives.
(Design) - design the training material or experience.
(Implement or deliver) - implement the training.
(Evaluate) - evaluate the training (summative evaluation).
Erasmus et al. (2012: 249) state that the principles of evaluation indicate that the process is ongoing; that is, it is pre-planned and not an afterthought. Noe (2010: 7) agrees with this principle, as can be seen in the ISD he utilises, a seven-step process in which evaluation is planned in advance of delivery. Accordingly, evaluation is a control and checking method for quality; it covers more than just an event or activity and looks at the L&D system as a whole.
According to Noe (2010: 4) the training cycle depicted in his literature is a seven-step model, adapted to more than the five stages discussed above. This seven-step cycle supports systematic training evaluation because evaluation is planned at step five and is not an afterthought or an end point. According to Beevers and Rea (2010: 235), whether there are four or more stages in the training cycle, and whatever the form of the diagram that illustrates it, each stage contributes to creating learning value. Beevers and Rea (2010: 234-235) also based their model on the Addie cycle but depict a four-stage training cycle, while agreeing with the South African requirement for a learning assessment stage (Kiley, 2007: 250).
2.3.2 Critique for and against Addie or ISD
Allen and Sites (2013: 1), and critics generally, state that the current ISD Addie training cycle has undergone many changes since its development in the late 1930s. Critics also state that it is slow to respond to training needs, which has probably led to talk of the demise of the Addie training cycle. Noe (2010: 8) relates the view of professionals that ISD is flawed, because in organisations the neat stages are not followed, as the process is time consuming and is assumed to end at evaluation. Value-creating efforts should take thought and planning; if that is Addie's problem, then stakeholders should be taught to plan as well, because training should not operate continuously in a reactive or crisis mode.
Allen and Sites (2013: 1-14) criticise the Addie system as slow and inflexible, stating that it hinders innovation and is unable to take advantage of technology. Moreover, quality and timeliness in providing solutions are not optimised. Although this could be true to some extent, some type of system should be used to manage the HRD function. For new practitioners, focused systems and models provide the structure for L&D deliverables. Accordingly, a new approach, called the Successive Approximation Model (SAM), has been proposed by Allen and Sites (2013: 1).
Bersin (2008: 31) states that it is a mistaken assumption that, without systems, an organisation will be able to function effectively and efficiently. Moreover, the professional competencies required of workplace learning professionals (WLPs) would be difficult to identify without a guide to follow. Training systems such as the Addie cycle contributed to the ASTD competency models, both old and new (Biech, 2008: 695).
However, it is the researcher's belief, confirmed by the Chartered Institute of Personnel Development (CIPD) (2013: 34), that the Addie cycle does provide a structure for training, and since it has been tried and tested by many practitioners it does have value. The researcher believes that all types of models and frameworks in every field of study have been improvements on the original research. Noe (2010: 9) agrees that the Addie cycle needs to be systematic yet flexible to meet the training need. In this way, platforms have been set up for further development and review. Iterative models are said to waste time and to be slow to respond, especially during periods of high change such as the current economic crisis. However, not everything changes rapidly all the time; the majority of training should be proactive in order to add value to learning.
According to Meyer et al. (2012: 427-428) training systems are important tools for the success of the function. It still needs to be investigated whether the Addie cycle works with e-learning and m-learning approaches. In addition, technology and social media might require a new and different cycle (Biech, 2008: 198). However, Addie is still a very useful systems approach model; it gives training and learning a structure to follow in providing a complete service to organisations. The Addie cycle also supports the collection and documentation of achievements, or value, at each stage of the model.
2.3.3 The Evaluation Centric Model and Value
Phillips and Phillips (2007: 23), Bozarth (2008: 6) and Wilson (2012: 411) also propose a centralised format for Addie, which places evaluation in the centre of the training cycle. Figure 2.3 below illustrates this.
The format in figure 2.3 communicates the importance of evaluation in managing training. Evaluation should not only be done summatively but is central to the first four phases as well. The conformative, diagnostic and formative evaluations are conducted as process evaluations, and the implementation step is followed by the five levels of summative training evaluation. If the adapted model is to be used, the evaluation at each step will require different checks and controls in the form of tick sheets and feedback loops. The four-stage systematic evaluation model will
also help deal with some of the problems arising from time constraints and technological advances. In the current dispensation the Learning and Development (L&D) function should utilise the systems approach to training (SAT) to evaluate training systematically, given the collaborative nature of learning versus training.
Figure 2.3 Centralised view of Addie evaluation (the importance of evaluation) and evaluation types, adapted by the researcher. Source: adapted from Phillips and Phillips (2007: 23), Bozarth (2008: 6) and Wilson (2012: 411). [Figure content: Evaluation at the centre, surrounded by Analysis, Develop, Design and Deliver, with the conformative, diagnostic, formative and summative evaluation types indicated around the cycle.]
The model in figure 2.3 can be expressed descriptively in the following terms, adapted by the researcher from Phillips and Phillips (2007: 23) and informed by Wick et al. (2010: the six Ds):
Analysis and evaluation produce a proper analysis that confirms that the need is a training need; where wastage is avoided, this declares saving value for the organisation and for the L&D costs and budget, and baseline data is collected for comparison with post-training data.
Developing objectives with evaluation results in refined objectives and business alignment, producing value for the organisation and learners in meeting the training need, and adding value to the bottom line.
Designing material with evaluation highlights the areas of the training material to be reviewed; missing material gathered before training begins creates value for all training stakeholders through the enhanced quality of the training material. Finally, the design needs to produce material that meets the learning styles of each generation attending the training.
Delivering the training and evaluating leads to the five summative levels, which promote value for designers, developers, trainers, assessors and learners through competency and SKA transfer, followed later by the evaluation of behaviour change, results, impact, ROI and ROE.
Kiley (2007: 252) agrees that systematic training evaluations make diagnostic and formative evaluation important identifiers of learning value data. Simply stated, if baseline benchmarks are devised and dealt with when diagnosing the needs, value will be easy to identify and declare. The alignment of learning objectives to business objectives is thereby addressed and value is systematically created. When the design of the material, with all the tools for the assessment of competency and the pre- and post-tests, is comprehensively addressed, then learning value is created; this, however, requires a documentation process and procedure.
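To make the baseline comparison concrete, the following is a minimal illustrative sketch (with hypothetical scores, not drawn from the study's data) of how pre- and post-test results could be compared against a baseline benchmark set at the diagnostic stage:

    import statistics

    # Hypothetical pre- and post-test scores (percent) for one programme.
    pre_scores = [52, 48, 61, 55, 58, 50, 63, 57]
    post_scores = [71, 66, 78, 70, 74, 69, 80, 75]

    baseline = statistics.mean(pre_scores)   # benchmark devised at diagnosis
    post_mean = statistics.mean(post_scores)
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

    print(f"Baseline (pre-test mean): {baseline:.1f}%")
    print(f"Post-training mean:       {post_mean:.1f}%")
    print(f"Average gain:             {statistics.mean(gains):.1f} percentage points")

Documenting the baseline and the gain in this way supports the comparison of pre- and post-training data argued for above.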
The ISD framework provides a structure for gauging whether diagnostic, formative, confirmative and summative evaluations are addressed systematically. Opperman and Meyer (2008: 4) and Bozarth (2008: 5) agree that successful evaluation integrates all elements of the ISD or training cycle. This leads to a discussion of the much-practised (according to Kirkpatrick & Kirkpatrick, 2010: vi) training evaluation models and frameworks that are central to this study. Thus the discussion of the summative models of Kirkpatrick, Phillips and others follows.
2.4 TRAINING EVALUATION MODELS AND FRAMEWORKS
2.4.1 Kirkpatrick Foundational Model
The first attempt to make training evaluation a reality, the four-step model, was devised by Donald Kirkpatrick in the early 1960s (Kirkpatrick & Kirkpatrick, 2010: ix). Kirkpatrick's first model was a theory by the originator of training evaluations and, even though it was primarily summative, it did establish a platform for additional work. The challenge with the original four levels was that, although the steps required much effort, they were nevertheless simplistic. The model actually prompted training professionals to deliberate on its concepts and move forward to others.
The primary complaint against it was that it lacked the data collection tools to make the model practical to apply, and that it did not provide for data collection from the needs analysis through to the summative evaluation (Bersin, 2008: 66). Training professionals have subsequently used and added tools to create observational and descriptive methods for evaluating training. The Kirkpatrick model is thus currently still under review by its originator (Kirkpatrick & Kirkpatrick, 2010: 13).

According to Vasquez and Norwood (2013: 1) of the Kirkpatrick partners, as confirmed by Kirkpatrick and Kirkpatrick (2012: 324), data should create a strong chain of evidence that includes historical data, that is, the state before the training intervention; this is the baseline data that Kearns (2005) proposed for comparison. The data is both qualitative and quantitative.
Phillips and Phillips (2007a: vii) also cite Kirkpatrick's four levels as the original theoretical model, which was subsequently reviewed, improved and expanded in terms of the number of levels. In the 1980s, the Phillips included data collection templates and added a fifth level called return on investment (ROI). Hence Kirkpatrick is acknowledged as the originator and Phillips as a later contributor to what is now the Kirkpatrick/Phillips framework.
Kirkpatrick and Kirkpatrick (2010: 3) have cautioned training divisions in organisations to prove their value or face severance. The basic problem with the initial four levels of evaluation was that they were summative and lacked diagnostic and formative evaluation phases. Although the model appeared useful on paper, it lacked substance for application, especially with regard to data collection instruments. Nevertheless, 80 percent of organisations use it even though it lacks practical application (Kirkpatrick & Kirkpatrick, 2010: ix).
Figure 2.4 Kirkpatrick's four-level model and its sequential flow, by the researcher. [Figure content: 1. Reaction to training factors; 2. Learning objectives - skills and knowledge outcomes; 3. Behaviour change in the workplace; 4. Business impact and results.]
Table 2.2 Kirkpatrick levels and evaluation objectives (including return on expectation (ROE))

Level 1: To what degree participants react favourably to the learning event.
Level 2: To what degree participants acquire the intended knowledge, skills and attitudes based on their participation in the learning event.
Level 3: To what degree participants apply what they learned during training when they are back on the job.
Level 4: To what degree targeted outcomes occur as a result of the learning events and subsequent reinforcement.
Level 5: Return on expectation (ROE), the latest level, added in 2010.

Source: Kirkpatrick and Kirkpatrick (2010: ix), foreword by Donald L. Kirkpatrick
2.4.2 Criticism of the Kirkpatrick Four Levels and the Move to Return on Expectation (ROE), Level 5
The main problems and complaints about the model were resolved when Phillips
and Phillips (2007: 10) reviewed the four-level taxonomy. Donald Kirkpatrick also
reviewed the model in the 1990s. By then, the training field had evolved and by
2010 had begun to include return on expectations (ROE) and the Kirkpatrick
Business Partner Model (KBPM) (Kirkpatrick & Kirkpatrick 2010: 25). Kirkpatrick
and Kirkpatrick (2012: SU 105) emphasise that training evaluations need to build
a “strong chain of evidence” to make value judgements credible.
Kirkpatrick and Kirkpatrick (2010: x) allude to the fact that training is “on trial”
because the value created by learning for the organisation and its employees is
currently unclear and unknown to stakeholders. The new ideas and proposals on
training evaluation models are numerous and some differ greatly from the K/P
framework. Many of the criticisms of the Kirkpatrick four levels related to its
heavy reliance on theory, until the recent move was made to ROE and the
business partner model (KBPM) (Kirkpatrick & Kirkpatrick, 2012: SU 105).
Yeo (2011: 1) states that when training evaluation is in focus, evaluation discussions mostly seem to gravitate to the original Kirkpatrick taxonomy. Accordingly, the levels are understood to be of use only once the training has been implemented, and so the diagnostic and formative evaluation value is lacking in the evaluation reports. The lack of data collection instruments also makes Kirkpatrick's model daunting for users, although many others have provided tools for their own purposes (Bozarth, 2008: 275).
2.4.3 Phillips Five Level Framework
While the Phillips five-level framework seems to be well developed due to the
interest in ROI, without including the four Kirkpatrick levels ROI evaluation will be
difficult. Phillips has done many application case studies with the ASTD research
in L&D in international organisations over the years and has refined the theory.
Although the K/P framework is known, very few practitioners progress beyond
level two of the evaluations as many are intimidated by the process. The quality
controls and assurance systems do not provide data collection instruments or
direction for the evaluation process. Therefore, ROI and ROE should not be an
afterthought.
Table 2.3 The Phillips five levels and objectives

Level 0: Input.
Level 1: Reaction and planned actions - participant satisfaction and the activities being planned.
Level 2: Learning - change in knowledge, skills and attitudes.
Level 3: Behaviour change - transfer of learning from the classroom to the workplace.
Level 4: Impact on the organisation - results; changes are measured in the business environment.
Level 5: Return on investment - comparison of programme benefits and programme costs.

Source: (Phillips & Phillips, 2007a: 18)
Figure 2.5 The Phillips five-level summative evaluation model. Source: (Phillips & Phillips, 2007a: 18). Figure 2.5 depicts Phillips' five-level summative evaluation model; four of the five levels were part of the Kirkpatrick model. [Figure content: 1. Reaction to hygiene and training factors; 2. Transfer of learning objectives/outcomes in skills and knowledge; 3. Behaviour change at the workplace; 4. Business impact and results for the organisation and the learner (Kirkpatrick levels); 5. Return on investment (Phillips).]
Suggestions to investigate the use of evaluations have involved some of the
other methods, but Opperman and Meyer (2008: 224) state that the practice of
training evaluation is a planned and deliberate integrated systematic process
which reveals the value of learning in an organisation. Phillips and Phillips (2007:
3) have documented in various studies that the value created by learning has
been proven to exist by professionals using models and systems in research.
2.4.3.1 Return on investment (Phillips & Phillips, 2007a: 19-31)
Return on investment (ROI) addresses the final step of the evaluation process according to Phillips' five levels of training evaluation. According to the literature, practitioners hardly ever attempt this level; yet, internationally and locally, it is time for human resources divisions to prove to business that the budget invested yearly actually adds value, and this is the opportunity to grasp and prove that value amidst tough economic times worldwide. The evaluation of learning and development at all levels of the Phillips framework provides stakeholders with value that is added to their business units; experts in the field, especially the Phillips, suggest that, according to their criteria, only ten percent of programmes should require an ROI calculation.
In implementing the evaluation process, Phillips found that for the return on investment (ROI) methodology to be accepted as credible, the evaluation should be centralised across the whole training cycle and not be merely summative. In other words, the evaluation should include the diagnostic data value, so that the summative data is not merely a post-mortem of the delivered intervention (Phillips & Phillips, 2007b: 23; Stone, 2012: SU 324).
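For clarity, the two summary calculations conventionally used at this level in the Phillips ROI methodology are the benefit-cost ratio (BCR) and the ROI percentage:

\[
\text{BCR} = \frac{\text{programme benefits}}{\text{programme costs}}, \qquad
\text{ROI (\%)} = \frac{\text{net programme benefits}}{\text{programme costs}} \times 100
\]

For example, a programme with monetary benefits of R300 000 against fully loaded costs of R200 000 yields a BCR of 1.5 and an ROI of 50 percent; these figures are purely illustrative and are not drawn from the study's data.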
2.4.3.2 Phillips learning value chain
A training value chain (Table 2.4) therefore needs to be documented and used in an organisation as part of its knowledge management system (Wick et al., 2010: 306; Elkeles & Phillips, 2007: 15). The value chain of training, according to Elkeles and Phillips (2007: 15), is adapted in tabular format in table 2.4 below.
Table 2.4 The researcher's learning value chain: levels, measurement types and key questions to seek value

Confirmative and diagnostic levels (stages one and two of the Addie cycle)
Measurement: Confirming that the training intervention is necessary.
Key questions: Value of investment (VOI): what will the situation be should the training not be done? Is it a training need, or would other changes in systems and management help with the performance problem?

Formative level (stages three and four)
Measurement: Impact costs of the correct version of the training intervention.
Key questions: What are the costs of developing a programme compared with a self-training intervention?

Input and indicators (Level 0)
Measurement: Volume and efficiencies.
Key questions: Number of participants, hours of training and cost of the programme? Also the baseline data for comparison.

Reaction and planned action (Level 1)
Measurement: Participant satisfaction and action plans to use the learning.
Key questions: Was the learning relevant and useful to the learner and the occupational environment?

Learning (Level 2)
Measurement: Changes in knowledge, skills and attitudes (SKAs).
Key questions: Did skills, knowledge and attitudes increase confidence to perform the job, or productivity?

Application (Level 3)
Measurement: Transfer of learnt SKAs to the job, improving performance.
Key questions: What did the participant do differently on the job to enhance performance?

Business impact (Level 4)
Measurement: Change in business performance.
Key questions: What changes in output quality, costs, time and satisfaction have occurred?

ROI (Level 5)
Measurement: Cost and benefit of the investment.
Key questions: Have the benefits been worth the investment?

Source: adapted by the researcher from Elkeles and Phillips (2007: 15 & 195)
2.4.4 Challenges with the Phillips Level Five (ROI)
The first four levels are very similar to the Kirkpatrick levels, but the return on investment (ROI) overshadows the transfer, results and impact that matter to the business. Although the ROI assesses the cost benefit of the training investment by calculating the return on the investment in training, it does not mean much without improvement in productivity, customer satisfaction and re-orders, or new product development that ensures business growth. In other words, business needs behaviour change in SKAs. The conversion to monetary values concerns practitioners, and Stone (2012: SU 324) also questions the credibility and quality of ROI. ROI has taken precedence in South Africa, as there has been a huge increase in its use, as stated in the ASTDSA research report for learning and development (2010: 45).
Patel (2011: 1) states that where and when the Kirkpatrick and Phillips five-level model of learning evaluation is involved as a metric, training evaluation is deemed a success. Training specialists need to implement the evaluation process in finding value for the organisation according to selection criteria (Opperman & Meyer, 2008: 190). Learning value is produced in all the activities of the training cycle, and it is imperative that the value chain is documented for reporting on the value of learning.
2.4.5 Evaluation Implementation Process Map
According to Noe (2010: 219) understanding the evaluation process is crucial to planning it; the process is stated briefly as follows, and figure 2.6 depicts the flow plan for evaluation implementation.
Figure 2.6 The evaluation process flow. Source: (Noe, 2010: 219). [Figure content: conduct a needs analysis; develop measurable learning objectives and analyse transfer; develop outcome measures; choose an evaluation strategy; plan and execute the evaluation.]
Kearns (2005: 135-145) argues against post-training evaluation alone, as there is then no baseline data for comparison after the intervention has been implemented. This alludes to the fact that the Addie training cycle is crucial in setting and collecting baseline data for ROE, ROI or impact studies.

Summative evaluation is short-sighted on its own, but combined with various other models it could stand up to its critics. Terms such as 'death evaluation', 'afterthought evaluation' or 'post-training evaluation' are given to summative evaluation practices. Training evaluation requires an authentic and credible
database to house all value data. The collection of baseline data is crucial for the
purpose of comparison before and after an intervention in order to determine the
impact.
2.5 RESEARCHER'S VIEW: THE COMBINED ADDIE AND KIRKPATRICK/PHILLIPS MODELS FOR LEARNING VALUE
This value chain is based on work by Elkeles and Phillips (2007: 15) and Elsdon (2010: 98) that has been adapted by the researcher for the organisation. Beevers and Rea (2010: 235) allude to the fact that the stages of a training cycle must suit the organisation and its needs. Figure 2.7 depicts Addie's activities and also places the K/P summative evaluation levels at stage five, thereby indicating the limitation of summative-only evaluation.
Figure 2.7 Combined Addie and summative evaluation levels (adapted view by the researcher). [Figure content: 1. Training needs analysis; 2. Develop learning objectives/outcomes and plan measures; 3. Design material and assessment tools against the measures; 4. Implement the learning intervention; 5. Summative evaluation - Level 1 smile sheet, Level 2 learning change, Level 3 behaviour change, Level 4 business impact, Level 5 ROI.]
The researcher also proposes that summative evaluation alone falls short of the rich data available from the total training cycle, a view with which Bersin (2008: 59) agrees. Figure 2.8 therefore depicts the various types of evaluation, from the needs analysis through to the summative evaluation, which is supported by Bozarth (2008: 7) and Opperman and Meyer (2008: 188), who agree on an integrated approach to training evaluations, running from the needs analysis to the evaluation. In figure 2.8 the stars depict the stages of Addie, whilst the spokes depict the evaluation types as rich data collection points, so that an effective systematic approach is followed in the search for learning value.
Figure 2.8 Addie Expanded: star and spokes depicting a systematic training evaluation (adapted view by the researcher; linked to questions 1-10 in the questionnaire). Source: adapted from Bozarth (2008: 7) and Noe (2010: 7). [Figure content: the Addie stages as stars - Analysis; Learning & Development (development of objectives aligned to business strategy); Design; Implementation; and Summative Evaluation with a final integrated report - with spokes showing the evaluation types at each stage: confirmative evaluation (confirming that the need requires training) with baseline data collection at analysis; diagnostic/focus evaluation; formative evaluation; competency assessment; and summative evaluation.]
The four types of training evaluation are conformative, diagnostic, formative and summative. The Addie cycle provides the opportunity to practise the three types preceding summative training evaluation. The three types are included in the extended view of evaluations and the Addie factors that L&D should engage with to create value for stakeholders. This is discussed in section 2.5.1 below.
The stars and spokes are explained in some detail as follows:
2.5.1 Stage one
Analyse the need – confirmative and diagnostic evaluation of learning
value
Value produced: personal growth and development plans.
Value to the supervisor and manager: taking ownership of staff development by setting expectations and supporting the employee before, during and after training to transfer skills.
Value developed: partnerships, expectations set, savings in training costs and time, and investment calculation.
Value desired: optimal investment, planning and being proactive in needs analysis.
Value expected: measurement data, working relationships, support of training and transfer of learning, which is critical for the impact level and return on investment/expectation (ROI/ROE).
According to Kiley (2007: 254) and Wick et al. (2010: 317) all data and evidence should be documented for the integrated reporting of value. Beevers and Rea (2010: 174) state that the timing of evaluation efforts is critical to allow for the change in behaviour needed for impact to be realised, and for the results subsequently to be used within an ideal plan to provide a comprehensive report on the value of learning (Noe, 2010: 231 and Wick et al., 2010: 263).
2.5.2 Stage two
Developing measurable objectives and expectations to add value to learning
Value created is an evaluation strategy; the planning developed at stage one will indicate up to which K/P level the evaluation must be completed. Beyond level two, not every evaluation may be necessary for some programmes. Evaluation is costly and requires a budget as well.
Value for knowledge management lies in documenting and saving the data for inclusion in value reporting (Wick et al., 2010: 265).
2.5.3 Stage three
Design and development – diagnostic and formative evaluation of learning
value
Value is created by eliminating wasteful expenditure and producing training based on business objectives.
Value in sourcing vendors requires proper specifications to be developed and the selection of the best and most appropriate provider.
Value is created in managing external vendors, from selection to delivery, using service level agreements and standard contracts that save expenditure on overruns and prevent costs due to poor implementation.
Value is produced by including measurement tools in the form of pre- and post-training tests and a post-training assignment portfolio of evidence (POE).
Value lies in piloting programmes for refinement, which saves reputation and costs.
Value lies in documenting savings for inclusion in a value report after the training or project.
2.5.4 Stage four
Implementation of training – formative and summative evaluation of
learning value
A well-developed programme that is properly sequenced is valuable and provides value to all stakeholders. The measurement tools to assess the competencies are always included.
The pitch or difficulty of the programme must be correct, so that learners are challenged and fully engaged and there is no time wastage.
Development costs are crucial to facilitation success. All aspects of the learning methodology should accommodate all types of learners and their learning styles. Relevance to the individual and to the needs of the job environment leads to satisfaction.
The success of action plans for transferring new SKAs to the job increases if relationships have been built with managers and supervisors. Leaving managers out of the learning process creates problems for implementing the learnt skills and knowledge. Managers should be provided with observation and recording tools for each learner.
Success factors should be documented during facilitation for later reporting. Not only volume contributes to the value of learning; creating momentum to continue SKA development is crucial to success. Collect reviews for further enhancing programmes and for continuing with a programme.
2.5.5 Stage five
Summative and post-evaluation learning value
Complete level one and two evaluations and assessment of competency
for initial reports and determining the value of the learning intervention
after training in the classroom.
Well-developed programmes provide an evidence-gathering process and
procedure tools for successful collection at each level.
Measurement tools or metrics are known and standardised for all training
providers in organisations.
Evaluation does not stop at level one or two. A well-developed evaluation strategy provides the foundation for evaluating each programme beyond the basic two levels, according to proposed best practice.
The behaviour change, or application of knowledge, skills and attitudes, should be driven by the learner's/employee's managers and supervisors, with support from training.
The decision to move to the higher levels of training evaluation must be
determined well in advance, for example, at the analysis or design phase
of the programme.
Generally, levels one to three require full compliance, while levels four and
five of the K/P framework require less strict compliance due to the costs of
training evaluation. Therefore, funds must be put aside during the budget
planning phase for evaluation success.
2.5.6 Stage six
Longitudinal evaluation for post-learning value and predictive value (Kiley, 2007: 278; Beevers & Rea, 2010: 174; Kirkpatrick & Kirkpatrick, 2010: 8).
The evaluation phase will require a well-planned approach. Levels 3 to 5 require sufficient practice in implementing the learning and proper SKA transfer, and time for results, impact and return on investment to manifest, as well as for evaluation to collect meaningful data.
Levels three to five must be implemented after the SKAs have become embedded in, or second nature to, the learner. Six months to a year will be sufficient time to determine whether or not there is a difference in behaviour (the cycle of competence).
Apply Brinkerhoff's success case method. The survey found that 90 percent of practitioners know the methods and procedures for collecting data on successful employees, as these employees produce value for learning (Lawson, 2009: 259).
Collect data from all business systems to make assertions on the
probability of learning having an impact on business productivity and
revenue increase.
Document all data and evidence and provide a comprehensive report
within a year on the value of learning.
2.6 OVERVIEW OF ALTERNATIVE EVALUATION METHODS
There are a number of models that have their origins in the Kirkpatrick model; Yeo (2011: 1), however, proposes that evaluation should use the Pillars Model and move beyond Kirkpatrick. The Business Impact Model developed by Bersin (2008: 73) evaluates elements different from those of the K/P framework. However, evaluation received form and character in the Kirkpatrick model, which Phillips and others could continue improving and adding to, before later moving away from the previous paradigms (Kirkpatrick & Kirkpatrick, 2010: 1). Perhaps Kirkpatrick did create a paradigm that evaluators and evaluands find difficult to leave behind.
According to Opperman and Meyer (2008: 205) Nadler's critical events model was one of the earliest models to recognise the importance of evaluation and to give it a central position among all design elements. The model differs from the summative models of Kirkpatrick and Phillips, taking a systems-driven approach to multiple factors and to the division as well.
The ASTD and i4cp research report (2009: 9) and Kirkpatrick and Kirkpatrick (2010: 32) cite Brinkerhoff's (2006/2008) case study method, in terms of which successful learners tell their stories, which are combined with the Kirkpatrick/Phillips model. The combined approach produces or exposes data on the value produced by learning. Kirkpatrick and Kirkpatrick (2010: 32) and Bersin (2008: 68) both cite Brinkerhoff's Success Case Method (SCM), which assesses learners who have taken responsibility for the successful transfer of learning on the job.
According to Kirwan (2009: 115), when learning transfer dies on the job, for whatever reason, the results, impact or ROI mean nothing. Management support of learners before, during and after training creates on-the-job SKA transfer. This method used on its own is not effective unless added to the K/P model in order to enhance the data and application value.
Bersin (2008: 73) provides a Business Impact Model that values training impact using a framework of measurement standards. Bersin (2008: 71) has also developed the Impact Measurement Model, which incorporates the Kirkpatrick, Phillips, Brinkerhoff and Six Sigma designs to overcome some of the shortcomings of the models used in this research.
Kearns' (2005: 1) baseline method adds depth to evaluation by setting a baseline measurement for comparison after the intervention. This model, like the ROI method, requires baseline data to be established before the intervention is implemented. Similarly, the latest Kirkpatrick (2012: SU 124) method includes ROE, or creating a "strong chain of evidence".
Wick et al. (2010: 1) provide a model called the "Six Ds" for training evaluations. It is unlike the K/P framework and more like the Addie training cycle evaluation system. It is therefore more than a summative system; rather, it is a comprehensive system for training evaluation that was proposed in 2006 and refined in application in the 2010 version. The six Ds are as follows:
Define - outcomes.
Design - the whole experience.
Deliver - the expectations.
Drive - transfer of skills, knowledge and attitudes on the job.
Deploy - value created in training to enhance bottom-line impact.
Document - all processes, outcomes and results are documented as part of the knowledge management system.
Yeo (2011: 1) represents training evaluations as pillars and advises practitioners to go beyond the K/P framework. The five pillars model, in the article "Measuring Organizational Learning: Going Beyond Measuring Individual Training Programs", also questions the fact that discussions of evaluation always seem to gravitate to the Kirkpatrick/Phillips models, which evaluate single interventions or projects very well, whilst measuring the quality of learning in the whole organisation is generally not addressed. Sloan's Five Pillars model is aligned to quality principles and is also supported by Kirkpatrick's model.
Figure 2.9 The Sloan Consortium's Five Pillars model. Source: (Yeo, 2011: 1). [Figure content: the five pillars - learning effectiveness, cost effectiveness, access to learning, learner satisfaction and management satisfaction.]
Figure 2.9 shows that Sloan's five pillars evaluate the following: learning effectiveness, cost effectiveness and access to learning, followed by learner and management satisfaction, to determine learning value; this is in contrast to the Kirkpatrick-based evaluation models. Yeo (2011: 1) states that in most discussions on training evaluation people are always thinking of and/or using the Kirkpatrick/Phillips models, and therefore proposes the use of Sloan's five pillars as a different set of criteria for evaluating training. Sloan's model is little known, but learner and management satisfaction needs more detail on application, impact on productivity and return on investment.
Table 2.5 Models that are based on the Kirkpatrick/Phillips framework

Level | Hamblin | Warr, Bird & Rackham (CIRO) | Laird | Stufflebeam (CIPP) | Elsdon
0 | - | - | - | Context | Input
1 | Reaction | Context | Opinions | Input | Reaction
2 | Learning | Input | Learning | Product impact | Learning transfer
3 | Behaviour | Reaction | Use | Sustainability | Learning application
4 | Organisational impact | Outcome | Impact | Effectiveness | Learning impact
5 | Ultimate value | - | - | Portability | Learning results
6 | - | - | - | - | Learning predictive level

Source: (Opperman and Meyer, 2008: 204-207; Bozarth, 2008: 195 and Elsdon, 2010: 98)
The models summarised in table 2.5 were also developed from, or based on, the Kirkpatrick framework, although two (CIRO and CIPP) begin before training is provided and one, the Elsdon model, extends long after training is implemented.
Table 2.6 The researcher's value-view model for overcoming barriers to systematic training evaluations

Addie system: The Addie cycle from analysis to evaluation creates observable value for learning. Change from a reactive to a proactive practice in providing training. Create standardised templates/tools for confirmative, diagnostic and formative evaluation. Select evaluation criteria at the design phase. Collect all baseline data upfront. Each stage provides value for L&D; if training is not the solution, resources are saved.

Linking with line managers: Builds relationships, as training and evaluation are social activities. Line managers are a resource for informing L&D of the training required, and they support what they request. Provide observational recording tools for managers and supervisors.

Managing of budget and resources: Training and evaluation need a specific budget with the correct resources. Trainers have the skills to evaluate training using surveys and questionnaires.

Level 1: Learner response review. Learners complete course evaluation forms. Develop a standardised template with collation forms and a system to house the results in a collation report.

Level 2: Learner achievement. All assessors and moderators complete assessment reports as part of a quality report.

Level 3: Transfer of SKAs to work. Supervisors/learners complete questionnaires. Interview and publish high flyers' names.

Level 4: Productivity/service delivery improved. Examine quality, quantity, cost and time. Control wastage and reformulate the programme. Managers/learners complete a questionnaire. Production statistics are provided. Access systems information or business intelligence.

Level 5: Profitability impact on the organisation as a whole. Manager/Systems Design Committee reports on impact versus expenditure. Calculate ROE and ROI.

Level 6: Broad social and economic impact. Register unit standards covered in the learning session. Predict the growth of learners. The training manager reports progress according to National Skills Development Strategy and National Qualifications Framework (NSDS/NQF) impact indicators.

Source: Adapted by the researcher from Elsdon (2010: 98).
Table 2.6 outlines an approach that treats learning evaluation as a system, making use of both the Addie and K/P models.
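Level five of the value-view model calls for calculating ROE and ROI. For reference, the benefit-cost ratio and ROI formulas popularised by Phillips take the following form (a minimal sketch; the benefit and cost figures themselves would have to come from level four and five data):

\[
\mathrm{BCR} = \frac{\text{programme benefits}}{\text{programme costs}},
\qquad
\mathrm{ROI}\,(\%) = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
\]

For example, hypothetical benefits of R750 000 against costs of R500 000 would give a BCR of 1.5 and an ROI of 50 percent.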
Elkeles and Phillips (2007: 196), the ASTD, the South African Qualifications Authority (SAQA) and the National Qualifications Framework (NQF 2008) have average compliance standards, which are shown as percentages in table 2.7. Comparative best-practice evaluation targets for South Africa, ASTDSA and the United States of America (USA) are also given.
Table 2.7 Training industry best-practice guidelines for the practice of evaluation levels: recommended acceptable practice percentage per level per benchmark

Level | SAQA 2006 | ASTDSA 2010: 41 | ASTD & i4cp 2009 | USA | Phillips best practice | Kirkpatrick best practice
0 Input | - | - | - | - | 100 | -
1 Reaction | 100 | 80 | 92 | 100 | 100 | 100
2 Learning | 100 | 35 | 80 | 100 | 60–80 | 60
3 Job application | 100 | 30 | 50 | 30 | 25–30 | 30
4 Business results | 20 | 17 | 37 | 20 | 10–25 | 10
5 ROE/ROI | 10 | 38 | 18 | 10 | ROI 6–10 | ROE 5

Source: Elkeles and Phillips (2007: 196); ASTD & i4cp Survey (2009)
From the findings of international organisations' responses to their 2009 survey, the ASTD and i4cp research on the value of evaluation developed a tool called the Evaluation Success Index (ESI). The ESI gives an indication of what constitutes a successful and effective evaluation, and its nine items are tabulated below in table 2.8. The ESI will also form part of a comparative analysis against the research organisation's responses, to assess reactions to the items as an effective evaluation strategy, discussed further in chapter four.
Table 2.8 Evaluation Success Index (ESI)

Evaluation Success Index: nine items | Percentage
Our learning evaluation techniques should help us meet our organization's learning goals. | 41.39
Our learning evaluation techniques should help us meet our business goals. | 36.5
We get a solid “bang for our buck” (return) when it comes to using the Kirkpatrick/Phillips learning metrics. | 25.6
Follow-up focus groups are a great source of data. | 9.1
Performance-records monitoring is a great source of data. | 24.3
Follow-up surveys of participants provide data. | 31.09
Learner/employee perceptions of impact are important. | 36.3
Actual business outcomes (e.g. revenue, sales) are important data. | 22.4
Proficiency/competency levels are important data. | 33.0

Source: ASTD and i4cp Group Research (2009), The Value of Evaluation
The nine items above are those that organisations surveyed in the ASTD study on making training evaluation effective agreed are an important part of training evaluation. They will also be applied to the findings from this research questionnaire. Used with permission from ASTD.
2.7 CHAPTER SUMMARY
Training evaluations are a generic but valuable tool for meeting the requirements of the L&D value proposition and must be implemented according to a well-established systematic programme. The various systems and activities within the L&D
systematic programme. The various systems and activities within the L&D
function need to be leveraged for value. Organisations cannot make excuses for
non-compliance in training evaluation. There are many models available, as
discussed above, and each one has its advantages and disadvantages.
Combinations of models can enhance evaluation practices with enriched data for
making a judgement on the value of learning.
The Addie training cycle is currently an effective and efficient model to use in the search for learning value. The number of stages or steps can be unique to an organisation in meeting the training need. Kirkpatrick's model deals with summative evaluation only, so the diagnostic and process criteria for formative evaluation are lacking. The Phillips framework looks at training evaluations summatively, similarly to Kirkpatrick's model, but adds a fifth level. Phillips also proposes that training evaluations be integrated across all stages of the Addie cycle.
CHAPTER 3
THE RESEARCH DESIGN AND METHODOLOGY
3.1 INTRODUCTION
In this chapter, the research design and methodology will be explained in detail.
A quantitative descriptive study will be applied to resolve the stated problem and
reach the research objectives.
According to Hofstee (2006: 85), a research problem may be defined as something that is undesirable but not simple or trivial. Accordingly, the problem needs the application of an applied research process to reach a result. This means that the problem cannot be an anecdotal query; it requires information on the subject from other individuals and researchers in order to make inferences or to report findings descriptively. The information provided by the research population will help prove the assumptions made about the stated problem.
3.2 RESEARCH OBJECTIVES
3.2.1 Generic Objectives
To assess the current training evaluation practice and to ascertain empirically whether an effective systematic evaluation practice is in place to declare the learning value in the organisation, secondary questions will need to be answered first. The questions in section 3.2.3 will need to be investigated so that a comprehensive solution to the main question can be found. The responses and answers to the questions posed in the questionnaire will provide diverse views on the current state of training evaluations.
Further assumptions will be made on whether there is a valid connection
between the use of the Addie training cycle, the Kirkpatrick/Phillips framework
and finding the value of learning. The responses to specific questions will help
determine whether the current evaluation practice is systematic in its approach.
In other words, the question to be asked is whether it is possible to apply a
holistic view of evaluations and not summative practice only. Therefore, the three
training evaluation types, diagnostic, formative and summative, will be part of the
focus.
3.2.2 Specific Objectives
There are three specific objectives for the study.
Firstly, to conduct a literature review of training and development, training
systems, the Addie training cycle and training evaluation frameworks,
especially those of Kirkpatrick and Phillips.
Secondly, to conduct a literature review on training evaluation and its
impact on learning value in organisations.
Thirdly, to conduct a review of training evaluation studies and research
from ASTD and CIPD on effective international training evaluation
practices for improvement and review.
3.2.3 Research Questions
Leedy and Ormrod (2013: 3) and Maree et al. (2007: 25-26) agree that certain questions need to be explored so that the comprehensive total of the answers provides a complete, or more precise, view on whether there is a valid connection between learning value in the Addie training cycle (a systematic approach to training) and the training evaluation types (diagnostic, formative and summative).
An adapted survey questionnaire on effective evaluation practices will help
determine the answers to the following sub-questions:
i. How does the Addie training cycle support the systematic training evaluation practice, i.e. assess, develop, design, implement and evaluate? To be assessed in questions 1-10 of the survey.
ii. How does the Kirkpatrick/Phillips (K/P) model support the effective summative data collection process? To be assessed in questions 11-23 of the survey.
iii. What are the problem areas, barriers or challenges in effective training evaluations? To be assessed in question 24 of the survey.
iv. How can the training evaluation process be improved in the future to be more systematic, effective and efficient? To be assessed in questions 25-31 of the survey.
These sub-questions along with the research findings are to be discussed further
in chapter four.
The research study will use a quantitative cross-sectional survey because the
project follows a non-experimental and descriptive design. A descriptive design
is suitable because it has a high degree of representativeness in obtaining data
from the research population (Leedy & Ormrod, 2013: 184). The data obtained
from the survey population group will enable the researcher to describe their
perceptions in terms of their application of an effective systematic training
evaluation practice in search of the value of learning. The study will provide an
understanding of the Addie training cycle as a premise to the systematic
approach to training evaluations. The summative training evaluation practice will
apply the K/P framework.
3.3 RESEARCH DESIGN
Hofstee (2006: 85) states that research includes the actions occurring during the problem-solving process in order to conclude whether the assumptions that have been made are true or false, or have no link with the variables at all.
Hofstee (2006: 113) and Mouton (2001: 55) concur that the research design is a
holistic process plan for eliciting views, opinions and perceptions on individual
experiences. The unique tactics of the study are illustrated through the research
design.
According to Leedy and Ormrod (2013: 74), the research design is a
comprehensive plan explaining how the study will proceed in order to reach a
conclusion regarding the identified problem. The selection of the design is based
on the most appropriate way of collecting valid and useful data for solving the
problem.
A quantitative research design was selected to address the research objectives, as this type of design collects data for assessment and allows inferences to be made. Data is collected from the request for training, or the needs analysis stage, up to the summative evaluation stage. A quantitative
research design involves individuals from a group who have certain
characteristics in common and whose opinions will be used to prove or disprove
assumptions about the phenomena or variables under scrutiny (Leedy & Ormrod,
2013: 189).
3.4 RESEARCH METHOD
Hofstee (2006: 107) and Leedy and Ormrod (2013: 7) call the general route or technique the research takes its methodology. In this research, information was collected from the identified target population using a structured questionnaire from 20 November 2012 to 30 April 2013.
The survey method gives all practitioners an opportunity to be part of the research. Thus, all Human Capital Management (HCM) training practitioners will be given an opportunity to provide responses to the questions. The original target population was fewer than 100. Subsequently, the employee relations, organisational development and talent management practitioners were added to provide an approximate total of 120, which is still a fairly small group. The quantitative method requires a response rate of 30 percent or higher, otherwise reliability will be negatively affected.
3.4.1 Population and Sampling
Maree et al. (2007: 147, 172) and Hofstee (2006: 116) define the research population as the total group or unit of analysis. The researcher will invite responses from all of the selected population members, in order to make assumptions and to generalise the findings related to the problem. The total population was selected because of their higher level of education and experience in the training field. They were required to refer to studies on training systems, the Addie cycle and training evaluation practices.
Leedy and Ormrod (2013: 215) cite Gay, Mills and Airasian's (2009: 135) guidelines for population sampling, which state that for a small research population the whole population should be surveyed; this is also known as the census method. According to dictionary.com, a census is defined as the inclusion of all members of the population, similar to a government census. A census survey method was employed and no sampling took place, as the research population had been reduced over the past three years owing to high levels of attrition. All current trainers from head office and the six regions were given an opportunity to contribute to an online survey (Leedy & Ormrod, 2013: 159).
The corporate university of the organisation is the centralised strategic hub, but the regional spokes implement training interventions and evaluations. The regions' participation in this quantitative survey will also support the reliability of the survey. The main eligibility criteria for inclusion in the survey population required an individual to:
Be a member of HCM, including managers, practitioners and administrators.
Provide training to the organisation's employees.
Have a further education and training (FET) or an occupationally directed education training development practice (ODETDP) qualification.
Table 3.1 National HCM research population target group

Region | Geographic area | Method | Managers | Trainers
Head Office | Midrand head office | Survey questionnaire | 11 | 10
Gauteng South (Wits) | Wits region of LI | Survey questionnaire | 1 | 15
Northern Region | Pretoria | Survey questionnaire | 1 | 15
Central Free State | Bloemfontein | Survey questionnaire | 1 | 15
Western Cape | Cape Town | Survey questionnaire | 1 | 15
Eastern Cape | Port Elizabeth | Survey questionnaire | 1 | 15
KZN | Durban | Survey questionnaire | 1 | 15
Subtotals | | | 17 | 100
Total research population: N = 117

Source: HCD Employee Resource System (ERS) (September 2012)
Accordingly, the employees above were included because they have had exposure to training systems and training evaluation models and know the value they add through training. They are also aware of the challenges, problems and barriers in training evaluation and may be able to provide input on possible ways to improve it. Table 3.1 depicts the targeted population.
3.4.2 Data Collection
Mouton (2001: 104), Hofstee (2006: 117) and Leedy and Ormrod (2013: 95)
state that the data collection process involves the approach, method, process
and instruments used. The process is conducted in order to gather input and
resolve a research problem. Accordingly, a structured quantitative data collection questionnaire was employed. The permission of the
head of Human Capital Management was requested to survey the employees
and this was granted. It was requested that the organisation’s name not be used
in the research and that company time not be taken up by the survey (see
Appendix A).
An online survey company was utilised to reach the target population. Technology provides support for quantitative surveys because of the amount of data collected (Leedy & Ormrod, 2013: 159). The survey questionnaire was sent to the participants via e-mail, and they were requested to access the online format by following a special link. This method allowed respondents to start and stop as time permitted, and it provided a basis for data collection that would not intimidate the target audience. The respondents remained anonymous, as only a response code was supplied to them for their protection.
3.5 MEASURING INSTRUMENT
Maree et al. (2007: 156) state that the survey questionnaire is a data collection method. The instrument was partially based on the ASTD (2009: 1-65) and i4cp instrument, but was adapted by the researcher in the light of the literature on the systematic approach. The 2009 survey reached a large number of international organisations, evaluating their current evaluation practices, the value of evaluations and how they might be made more effective. The researcher selected items from the ASTD questionnaire to assess metrics, timing and approaches. A comparison between the ASTD ESI survey and the findings from this research will be attempted, in order to compare attitudes in the international study about what makes for successful evaluation with the attitudes of the research organisation towards the same issue. However, Leedy and Ormrod (2013: 293) state that correlations do not indicate causation, and this must be borne in mind.
The questionnaire is divided into two sections, A and B. These sections are discussed briefly here, and table 3.2 below gives an overview of the questionnaire; for a fuller description see Appendix B.
Section A included an introduction, a confidentiality statement and a request for permission to use the data supplied. The demographic details of participants were also part of section A, so as to determine a few non-contributory variables. Mainly nominal data will be gathered in this section.
Section B posed a large number of fixed and well-defined questions. The five-point Likert scale was used for the majority of the questions; participants could choose one of the following options: not sure, strongly disagree, disagree, agree and strongly agree. Aspects of evaluations were also rated as not in use or of small value, moderate value, high value or very high value.
Table 3.2 illustrates the sections, item numbers, measurement types (nominal and ordinal) and the aspects covered in each question (Maree et al., 2007: 148).
Table 3.2 Survey questionnaire overview

Section | Number of items | Measurement type | Aspects covered
Introduction | 0 | None | Information about the survey; researcher's information, including contact numbers for queries; permission to use data supplied; confidentiality clause.
Section A | 7 | Nominal scale | Biographical data.
Section B: research sub-question 1 | 10 | Ordinal scale | Covers the Addie training cycle's support of systematic training evaluations.
Section B: research sub-question 2 | 11-23 = 13 | Ordinal scale | Covers the training evaluation models, including the five Kirkpatrick/Phillips levels; tests the frequency of the data collection instruments used; asks when the evaluations take place; rates the value of levels 1-5.
Section B: research sub-question 3 | 24 | Ordinal scale | The frequency of use of, and problems with, training evaluations; manager and supervisor involvement in training and evaluation.
Section B: research sub-question 4 | 25-31 | Ordinal scale | The benefits and value of evaluations and possible future improvement.
3.5.1 Piloting the Instrument
The instrument was first piloted with the target population over a two-week period. Seven trainers from the head office site completed the questionnaire and a few queries were addressed. Two statisticians and the research supervisor were sent the survey questionnaire before the launch and were given an opportunity to provide feedback on sentence construction and language use. The questionnaire was then reviewed and refined on the basis of the feedback to improve its readability and validity.
3.5.2 The Reliability of the Instrument
The reliability of an instrument, according to Leedy and Ormrod (2013: 91), indicates its ability to elicit the same response to a question from a large group, or across more than 30 questionnaires (Maree et al., 2007: 216). This means that a Cronbach's alpha score above 0.700 must be realised for the instrument to be considered reliable. The reliability of the ASTD and i4cp survey was established at 0.800 (ASTD, 2009: 65).
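For reference, Cronbach's alpha for an instrument with \(k\) items is the standard internal-consistency coefficient (a general formula, not specific to this study):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item \(i\) and \(\sigma^{2}_{X}\) is the variance of the total score. Values closer to 1 indicate that the items vary together, which is why a threshold such as 0.700 is used as a convention for acceptable reliability.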
3.5.3 The Validity of the Instrument
Leedy and Ormrod (2013: 89) define the validity of an instrument as the extent to which it measures what it intends to measure, and cite four types: face, content, criterion and construct validity (Maree et al., 2007: 217). Face validity of the instrument was established because it makes use of the work of the ASTD expert research team, conducted in conjunction with i4cp statisticians (2009: 65). The ASTD and i4cp survey reached international organisations and looked at the value of effective evaluations. By customising the questionnaire for this research with the specific organisation in mind, validity was further supported.
3.6 CHAPTER SUMMARY
The preceding chapter discussed the research design and method in detail as
the tools that direct the researcher towards solutions for the stated problem. The
survey questionnaire was briefly summarised. The following chapter will unpack the findings and results of the survey responses.
CHAPTER 4
RESEARCH FINDINGS AND RESULTS
4.1 INTRODUCTION
In the preceding three chapters the proposal of the research problem was addressed, followed by a literature study in the field of training, learning and development. The field was studied with a view to obtaining relevant information on theories, models and frameworks designed by experts and practitioners that would be of use in reaching the research objectives.
The third chapter detailed the research design and methodology for the research
project. This chapter will include the descriptive analysis of data, its
representation and an interpretation of results. The results were obtained from an
empirical analysis performed to test the research questions using the data
obtained from the responses to the research survey questionnaire.
The statistics generated are analysed, discussed and presented descriptively.
The statistical analysis was generated by the SAS JMP version 10.1 software
package, for calculating percentages, probability and averages (Leedy &
Ormrod, 2013: 302). The design, which deployed a survey questionnaire, was a
descriptive quantitative design and required a statistical software package to
analyse the data so that inferences could be made.
According to Maree et al. (2007: 167), the Likert scale is the most widely used scale and is convenient for measuring constructs. The five-point Likert scale was applied in 28 of the questions. Questions 12 and 14 required a yes or no response, whilst questions 30 and 31 were qualitative inputs. The data collected measured the current training evaluation practices to assess whether they are effective and systematic in collecting evidence of the value of learning in the organisation.
The questionnaire was an adapted, self-developed and self-administered survey targeting training practitioners in the research organisation. It was adapted from the ASTD and i4cp international survey on effective training evaluations (2009) (see Appendix C).
4.2 RESPONSE RATE OF THE SURVEY
An online survey organisation was engaged to launch the questionnaire online.
The survey company converted it into an electronic online format and distributed
the questionnaire to 117 employees on 20 November 2012. The participants
were requested to complete the survey by 30 November 2012.
By 30 November 2012 the response rate had reached only five percent. This could have been the result of the original period (ten days) being too short and the fact that it fell in the busy year-end period. More time was thus required and the deadline was extended to 14 December 2012. However, by this date the response rate was still relatively low at 18 percent, and an additional two weeks was given for partially completed questionnaires. Reminders were sent out by the online consultant between 19 and 31 January 2013. The problem here could have been that this coincided with the festive season, when practitioners were on leave.
The final feedback from the online provider was received on 26 March 2013. Because of the various disadvantages of survey methods (Hofstee, 2006: 133), the researcher established that, for the data to be reliable and valid, a response rate of 30 percent is sufficient for quantitative methods. The response rate was at only 25 percent. In a further attempt to improve the response rate, it was decided to distribute hardcopies or e-mail copies to the regional managers for distribution to staff who indicated that they had not taken the electronic survey.
Finally, the questionnaire was handed out or e-mailed between 15 and 30 April 2013, with a request to return it by 30 May 2013. An additional 15 completed questionnaires were then received. Subsequently, the total number of questionnaires returned was 41, which includes the 26 from the online survey. All information received in hardcopy was captured on the online spreadsheet by the researcher.
4.3 QUANTITATIVE ANALYSIS
The empirical objective of the research study was to examine the current training
evaluation practice to determine whether it is effective and systematic. The
results were analysed and described for each section of the questionnaire
(Appendix B) (Hofstee, 2006: 148). The questionnaire produced quantitative data
that was analysed with the help of the SAS JMP version 10.1 analysis package
(Leedy & Ormrod, 2013: 302).
A quantitative cross-sectional survey design was employed. According to Leedy
and Ormrod (2013: 188), this is a cost-effective and widely-used data collection
tool. The survey method can be used to sample a large number of respondents
who answer the same questions. The research survey results can measure
many variables, some as simple as age, but simpler questions ease respondents
into taking a survey. Surveys that request information about the individual have
been found to be appealing, because respondents are asked questions about
themselves. This indicates to them that they are being regarded as individuals
and not just as a means to an end.
The statistics from the survey questionnaire will be represented and discussed
further. This study tested multiple hypotheses on participants’ characteristics,
opinions, behaviour and experiences. The relationship between the variables
was assessed using the data from the responding population.
Maree et al; (2007: 186) state that the analysis of quantitative data examines the
number of times a variable was selected to reach a numeric value. The
frequency distribution was calculated in the data analysis and a description of the
variables was then given.
4.4 RESULTS AND INTERPRETATION OF FINDINGS
4.4.1 Response Rate of the Target Population
The response rate for the questionnaire is given below in table 4.1.
Table 4.1 Comparison of distribution and response rate per region

Region | Managers targeted | Training practitioners targeted | Responses: managers | Responses: practitioners | Total received | Percent per region
Head Office | 11 | 10 | 8 | 10 | 18 | 44
Northern Region | 1 | 15 | 0 | 8 | 8 | 20
Southern Gauteng (Wits) | 1 | 15 | 0 | 7 | 7 | 17
Central Bloemfontein (FS) | 1 | 15 | 1 | 2 | 3 | 7
Western Cape | 1 | 15 | 1 | 1 | 2 | 5
Eastern Cape | 1 | 15 | 0 | 1 | 1 | 2
KZN | 1 | 15 | 0 | 2 | 2 | 5
Total | 17 | 100 | 10 | 31 | 41 | 100

Distributed: N = 117; responses: N = 41
The total number of responses was 41 out of N = 117, a response rate of 35 percent. However, the employee relations (ER) staff did not respond, which reduces the effective target population to 90 and puts the response rate at 41/90, or roughly 45 percent. This rate can be judged acceptable for reliability.
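As a simple check of the arithmetic:

\[
\frac{41}{117} \approx 0.350 = 35\%, \qquad \frac{41}{90} \approx 0.456 \approx 45\text{--}46\%
\]

both of which clear the 30 percent threshold set for quantitative reliability in section 3.4.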
4.4.2 Description of the Target Population
The demographic information of the participants, obtained from Section A of the questionnaire, is presented in table 4.2 below. This information describes the target audience, who are trainers and training evaluators. Of the 117 questionnaires distributed, 41 were returned, all from training practitioners and training managers. These will be used to make all the statistical deductions for this study.
Leedy and Ormrod (2013: 91) state that internal consistency reliability is the extent to which a single instrument provides the same results. The internal consistency of the responses was assessed using Cronbach's coefficient alpha values. The reliability test was done on questions 20, 21 and 28, and the estimated alpha scores were 0.94, 0.92 and 0.92 respectively for the responses on levels three and four of the Kirkpatrick/Phillips (K/P) model. For question 29, the Evaluation Success Index (ESI), the reliability score was likewise well above 0.8, which is deemed acceptable (Maree et al., 2007: 215). These scores therefore indicate a sufficient degree of reliability.
Table 4.2 Frequency and demographic characteristics of survey respondents

Item | Category | Percentage
Age | 25–30 | 3
Age | 31–35 | 2
Age | 36–40 | 10
Age | 41–50 | 68
Age | 50+ | 17
Gender | Female | 46
Gender | Male | 54
Race | African | 59
Race | Coloured | 7
Race | Indian | 15
Race | White | 19
Rank | Manager | 24
Rank | ETDP | 76
Region | Central Gauteng | 7
Region | Eastern Cape | 2
Region | Head Office (Midrand) | 44
Region | KZN | 5
Region | Gauteng North | 20
Region | Western Cape | 5
Region | Gauteng South (Wits) | 17
Employment period in years | 0–5 | 7
Employment period in years | 6–10 | 22
Employment period in years | 11–15 | 37
Employment period in years | 16–20 | 17
Employment period in years | 21–30 | 17
Qualification | Bachelor's degree | 24
Qualification | BHRD (Bachelor of Human Resource Development) | 10
Qualification | BTD (Bachelor in Training) | 12
Qualification | Grade 12 | 5
Qualification | Master's degree | 3
Qualification | ODETDP Level 5 (Occupationally Directed Education Training Development Practice qualification) | 39
Qualification | ODETDP Level 6 | 7
Qualification | Doctorate | 0
The graphic illustrations of the demographic data of the respondents are as follows.
Figure 4.1 depicts the age of the respondents, which carries a pertinent message to the management of LI to hire more graduate interns: 68 percent of the national staff are in the 41-50 age group. Once the over-50s are included, the proportion of ageing practitioners jumps to 85 percent, a decidedly ageing group of practitioners.
Figure 4.1 Age of respondents
Figure 4.2 Gender of respondents
Figure 4.2 shows that male respondents comprised 54 percent and female respondents 46 percent, which indicates a reasonable balance in responses and does not suggest any gender bias towards training evaluation practices.
[Figure 4.1 pie chart data: 25-30: 1 (3%); 31-35: 1 (2%); 36-40: 4 (10%); 41-50: 28 (68%); over 50: 7 (17%)]
[Figure 4.2 pie chart data: female: 19 (46%); male: 22 (54%)]
Figure 4.3 Race groups of respondents
All races are represented and therefore the responses will not be skewed towards one racial group's views, although the African group is the largest, which reflects the employment equity (EE) targets in the country. HCM has done a great job on representation.
Figure 4.4 Ranks of the target population: managers and practitioners
Managers comprised 24 percent of respondents and education, training and development practitioners (ETDPs) 76 percent. The implementers thus form the largest group, which is good for the data gathering in this research: the managers control the training interventions, resources, budgets and reports, while the ETDPs conduct and evaluate training in the organisation. It is, however, the managers who use the evaluation findings in reports.
[Figure 4.3 pie chart data: African: 24 (59%); Coloured: 3 (7%); Indian: 6 (15%); White: 8 (19%)]
[Figure 4.4 pie chart data: ETDPs: 31 (76%); managers: 10 (24%)]
Figure 4.5 Regional response rate in numbers and percentages
All targeted regions were represented, but the Eastern Cape had the lowest response at two percent, while Head Office had the most responses at 44 percent. The response rate was highest from the Midrand team, which is encouraging, as this group comprises the strategic members who develop strategy, policies and the training systems and compile the Annual Training Report (ATR) and Workplace Skills Plan (WSP).
Figure 4.6 Tenure of Respondents
[Figure 4.5 pie chart data: Central: 3 (7%); EC: 1 (2%); HO: 18 (44%); KZN: 2 (5%); NR: 8 (20%); WC: 2 (5%); Wits: 7 (17%)]
[Figure 4.6 pie chart data: 0-5 years: 3 (7%); 6-10 years: 9 (22%); 11-15 years: 15 (37%); 16-20 years: 7 (17%); 21-30 years: 7 (17%)]
Tenure is indicative of experience in the field of training and in the application of training evaluations. The largest group, at 37 percent, has a tenure of 11-15 years, which is a fairly long period for practices to become embedded. Those with 16-30 years' tenure make up 34 percent, while those with 6-10 years are next at 22 percent of the total.
This suggests that employees have stayed loyal for a long time, which could indicate a good organisation that satisfies their needs, or that they were not employable outside the organisation for one or more of the following reasons: tertiary education qualifications, proximity to pensionable age, or the economy and the current job market.
Figure 4.7 Educational qualifications of respondents
The qualifications of the respondents are crucial, as they confirm the strategy the learning organisation implemented in 2002 to upskill the then functional trainers to at least the occupationally directed education and training development practitioner qualification at level five (ODETDP L5), the basic requirement stipulated by SAQA for a trainer. Many of the trainers have moved beyond the occupational levels to bachelor's degrees in training. This means that the survey audience would not find the survey questions difficult, because training evaluation practices form part of the ODETDP syllabus.
[Figure 4.7 pie chart data: bachelor's degree: 10 (24%); BHRD: 4 (10%); BTD: 5 (12%); Grade 12: 2 (5%); master's: 1 (3%); ODETDP 5: 16 (39%); ODETDP 6: 3 (7%); doctorate: 0 (0%)]
4.4.3 Survey Questionnaire Responses
This part of the study looks at the responses to the questions in Section B of the questionnaire. The main research question posed at the beginning of the study was: “Are the current training evaluation practices effective, efficient and systematic in providing evidence to determine the value of learning?” The analysis of the data below will aim to answer this research question. Specific parts of the survey were aimed at answering one of the four secondary questions stated below, and answers to each secondary question will be reached here with reference to the participants' responses.
4.4.3.1 Research sub-question I
How does the Addie training cycle support the systematic training evaluation process to assess, develop, design, implement and evaluate? (Question 1, statements 1 to 10.)
Question 1 comprises ten statements which asked the participants to rate the value of the Addie cycle in systematic training evaluations. The Likert scale ranged from strongly disagree to strongly agree.
Table 4.3 Importance of the Addie cycle in training evaluations: the top five statements with which respondents strongly agreed

Statement | Rank | Strongly agree responses | Mean
2. Training needs should be diagnosed to confirm that the need requires a training intervention. | 1 | 28 | 4.46
5. In order for training evaluation to be credible it should be well planned from the needs analysis stage to the evaluation stage. | 2 | 26 | 4.37
6. Training evaluation practices which include diagnostic, formative and summative techniques assist in declaring the value of learning at all stages of the training cycle. | 3 | 24 | 4.37
8. Training evaluations should have a learning metric (measuring tool) that works within the organisation. | 4 | 25 | 4.34
10. Training effectiveness impacts all performances (e.g. organisation, team and individuals). | 5 | 24 | 4.24
Table 4.3 shows the five statements with the top responses. In general, all ten statements on the Addie training cycle indicated that the practitioners strongly agree that the Addie cycle supports the systematic evaluation of training towards a more effective practice. Addie, as discussed in chapter two, is a systematic approach to training (SAT) model that deals with a training request systematically, and SAT also promotes systematic training evaluation according to Barbazette (2008: 88). The five statements with the most responses all have means above 4.00. According to Maree et al. (2007: 187), the mean is the most commonly used measure of location or central tendency, which means that there was general agreement and that the perceived value of Addie's support for systematic training evaluation is fairly high.
4.4.3.2 Research Sub-question II
How does the Kirkpatrick/Phillips model support the effective summative data collection process? (Questions 11 to 23 and sub-questions.)
Table 4.4 below relates to question 11 of the survey. It indicates the percentage of respondents who know and use each of the training evaluation models. The research first had to establish that the Kirkpatrick and Phillips models are known and used, because their five-level framework for summative evaluation was selected to examine the current practice. If the Kirkpatrick/Phillips model were not known to, or used by, the respondents, it would have been difficult for them to respond to the questions that followed.
Table 4.4 Models known and currently used (question 11)

Model/framework | Known | Used to some extent | Known and used
Kirkpatrick | 32% | 54% | 15%
Phillips | 80% | 20% | -
Nadler | 85% | 15% | -
Brinkerhoff | 92% | 8% | -
Stufflebeam's CIPP | 0% | 0% | -
Warr, Bird and Rackham's CIRO | 0% | 0% | -
The research did establish that, of the six models given, Kirkpatrick and Phillips were the best known and most used in the organisation, so the survey could continue with reasonable confidence. In other words, the survey had to confirm which of the models are not only known but also used as a metric. The Kirkpatrick model is used to some extent by 54 percent and known by a further 32 percent, while Phillips is known by 80 percent of respondents and used by 20 percent. Notably, 92 percent know of Brinkerhoff's model, but it is the least used. The CIRO and CIPP models did not garner any response. The Kirkpatrick and Phillips models are central to the survey questionnaire in this study for the summative evaluation of training.
Question 12 asked whether the participants make use of any evaluation methods besides the six mentioned in the previous question. Most of the respondents, i.e. 98 percent, replied “no”. The probability score for this question is thus 0.975, which means a high level of confidence that the response is reliable.
Questions 12 and 13 were linked: if the response to question 12 was “yes”, question 13 asked which other metrics or tools are used. No responses were noted.
The training industry best-practice standards, both international (ASTD, CIPD) and local (SAQA), provide guidelines to practitioners for applying the K/P levels in effective evaluation, as was discussed in chapter two (table 2.7). The research organisation compares well at all levels with the benchmarked best practices. Compliance with the SAQA requirements is particularly important, as they are a South African regulatory requirement for training providers. SAQA requires the first three K/P levels to be evaluated at 100 percent (Meyer et al., 2012: 467). The first three K/P levels are part of competency certification, because learning then adds value to employees who are deemed competent, thereby increasing their employability and career mobility (NQF Act 67 of 2008, as amended).
Table 4.5 Best-practice evaluation guidelines compared with the research organisation's evaluation practice per level

Level | Kirkpatrick & Phillips best-practice guidelines | ASTD i4cp survey (2009) | SAQA requirements | Survey responses for this research | Value of the level | Mean for level value
1 | 100% | 92% | 100% | 95% | High | 3.10
2 | 60% | 80% | 100% | 80% | High | 3.00
3 | 30% | 50% | 100% | 38% | - | 1.51
4 | 10% | 37% | 20% | 50% | - | 2.08
5 | 5% | 18% | 10% | 23% | - | 1.26
Table 4.5 summarises the responses to questions 14 and 15, which relate to the application of current summative level practices, and gives percentage comparisons between the research organisation and best-practice standards. As can be seen, the practice levels in the organisation decrease at K/P levels three and four. K/P level three is transfer to the job; if transfer is not driven by management and the learner, there will be poor impact (Meyer & Orpen, 2012: 183; Kirwan, 2009: 18).
Moreover, the responses for levels four (50 percent) and five (23 percent) seem disproportionate to the organisation's level three (38 percent), whereas the benchmark bodies decrease gradually towards level five. The impact reported may, however, be merely anecdotal and cannot be attributed to training (Phillips & Phillips, 2007a: 239). The value for levels one and two is high, but there is no value for levels three to five, and this corresponds with Phillips' findings (2009: 1) on measuring what matters to the C-suite.
Also noteworthy is the fact that the responses show that levels three to five are not in use when their value is questioned. Levels four and five require only ten percent and five percent compliance respectively, according to Elkeles and Phillips (2007: 196). According to Phillips and Phillips (2009: 1), chief executive officers (CEOs) believe that learning adds value but do not require feedback on performance for all interventions or all levels; what is important is to measure what matters. Perhaps the L&D strategy for training evaluations should specifically indicate which programmes need to be evaluated at all five levels, or the evaluation strategy should be planned at the analysis stage of Addie.
Table 4.6 Kirkpatrick/Phillips evaluation practice and probability scores in the research organisation, confirming application per level

K/P level | “Yes” replies | “Yes” percentage | Probability score | “No” percentage
1 | 39 | 95% | 0.95122 | 5%
2 | 32 | 80% | 0.8000 | 20%
3 | 15 | 38% | 0.38462 | 62%
4 | 19 | 50% | 0.5000 | 50%
5 | 9 | 23% | 0.23077 | 77%
Table 4.6 looks specifically at question 14. The K/P level evaluation practices indicate that levels one and two are mostly in use. This means that the traditional approach to training (TAT) is practised, rather than the systematic approach (SAT) (Barbazette, 2008: 88). What is striking is that level three, which evaluates the transfer of learning to the job, is evaluated at only 38 percent. Transfer data is crucial to calculating ROI. The SAQA requirement at level three is 100 percent, but only 38 percent is reached; perhaps the assessment of learning application is lacking.
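The probability scores in table 4.6 appear to be simple proportions of “yes” replies among those answering each item (an inference from the figures, not something stated in the survey documentation):

\[
p = \frac{n_{\text{yes}}}{n_{\text{answered}}}, \qquad
\text{e.g.}\ \frac{39}{41} \approx 0.95122\ \text{(level 1)}, \quad \frac{15}{39} \approx 0.38462\ \text{(level 3)}
\]

The slightly different denominators implied at each level (41, 40, 39, 38 and 39) would suggest that not every respondent answered every item.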
Question 15 requested the respondents to indicate the value of each K/P level. Levels one and two again indicate some value and high value, but what is concerning is that this is value mainly for the training division. Bingham and Jeary (2007: 30) state that communicating value to chief executive officers (CEOs) should be top of mind, and Phillips and Phillips (2010: 1) concur that CEOs want the value of results and impact, or measurement of what matters to the business. A further concern is that the responses for levels three to five indicate to a high degree that these levels are not in use. The level three finding stands at 38 percent, which is far lower than required to assess behaviour change and the application of training. It should be noted that in South Africa competency assessments take place at level three; the finding also indicates collaboration problems and a lack of relationship between training and management, and it is an indicator that management has not taken responsibility for the transfer of learning on the job. This is confirmed in the responses to question 24.
Levels four and five seem better than the SAQA and ASTD figures, but is this a true reflection of the responses? If budget and resources are a barrier (question 24), how are levels four and five implemented better than level three? This means that, for effective systematic training evaluations to be conducted, evaluation has to be planned in the design of the training. Perhaps guidelines should be set upfront for each intervention, as Noe (2010: 7) concurs; otherwise practitioners will attempt levels according to what is deemed acceptable. Phillips and Phillips (2007a: 29-30) provide guiding principles for selecting levels of evaluation.
Table 4.7 Location and timing of Kirkpatrick/Phillips evaluations currently practised for levels one and two

Location | Timing | Level 1 “yes” (number) | Level 1 “yes” (%) | Level 2 “yes” (number) | Level 2 “yes” (%)
Classroom | After training | 38 | 93 | 34 | 85
On the job | After training | 12 | 32 | 18 | 50
Classroom | Before training | 10 | 26 | 20 | 51
Classroom | During training | 7 | 18 | 25 | 65
On the job | Before training | 5 | 14 | - | -
Questions 16 and 17 asked where and when level one and two evaluations are practised; options are ranked from highest to lowest. Level one (93 percent) and level two (85 percent) evaluations are most popular in the classroom after training is concluded. Level one evaluations generally use an institutional questionnaire and, according to the respondents, level one is the most commonly evaluated; perhaps because a tool is provided for this evaluation, it is easy to implement.
Level one is the most useful level for the L&D division, but decision-makers and senior management would prefer that level three to five evaluations be done in order to judge the value of learning for the business (Phillips, 2009: 1).
Question 18 queried what evaluation assessment tools are used for level two of
the K/P model and provided options for the participant to choose from. Table 4.8
shows the results of this question.
Table 4.8 The currently most popular assessment tools

Assessment tool | Number of responses | Percentage of responses
Tests | 37 | 93%
Presentations | 32 | 80%
Simulations | 30 | 77%
Role-plays | 30 | 75%
Others | 0 | 0%
Tests, presentations, simulations and role-plays are the preferred tools, in that order, for assessing learning in the classroom.
The results of questions 16, 17 and 18 show that evaluations mainly take place in the classroom after training and make use of all the tools mentioned in table 4.8. From the empirical results it was evident that tests were mentioned more often than the others; however, some tests employ only recall, whereas practical application is important as well.
Question 19 relates to when level three evaluations take place after training, with the intention of determining the timing of evaluations of behaviour change or the transfer of SKAs (Kirkpatrick & Kirkpatrick, 2010: ix; Elkeles & Phillips, 2007: 196). The two tables below, 4.9a and 4.9b, show the responses to this question in comparison with the ASTD 2009 findings.
Table 4.9a Timing of level three evaluations: positive responses

Evaluation timing for level 3: “yes” responses | Number of responses | Percentage | ASTD ESI (2009) percentage
Short-term: two months after training | 17 | 45% | 56%
Any time after training | 15 | 39% | -
Long-term: after training | 15 | 36% | 52%
Immediately after training | 13 | 35% | 38%
The percentage of respondents who indicated that level three evaluation takes place two months after the training is 45 percent, against a 56 percent agreement for this option in the ASTD and i4cp (2009) findings. The long-term option, being six to twelve months after training, is fairly low at 36 percent in the organisation, which suggests that value created in the long term is ignored, and yet this is where individuals are using SKAs at an unconsciously competent level (Nielsen, 2009: 1).
Table 4.9b Timing of level three evaluations: negative responses (not taking place)

Evaluation timing for level 3: “no” responses | Number of responses | Percentage: does not take place
Short-term: two weeks to two months after training | 21 | 55%
Long-term: after training | 23 | 63%
Any time after training | 23 | 60%
Immediately after training | 24 | 65%
The “no” and “not applicable” responses are shown in table 4.9b above. This table indicates that over 50 percent of participants do not conduct level three evaluations at any time after the training. The ASTD ESI findings are fairly similar to the findings here for the timing of level three evaluations. What is concerning, though, is the large number who indicated that longitudinal evaluation does not take place. The “no” responses for each question are very large and, once the “not applicable” responses are added, it can be deduced that level three evaluation is, as a general rule, not taking place.
The next question, question 20, considers the approaches used to collect post-learning data at level three. The responses indicate that none of these methods is used very often, which confirms the responses to question 19.
Table 4.10 Approaches for level three evaluation

Evaluation approach (level 3) | Not at all | Small extent | Moderate extent | Large extent | Very large extent
Follow-up survey for learners | 35% | 33% | 23% | 10% | 0
Follow-up survey for managers/supervisors | 33% | 44% | 13% | 10% | 0
Interview with participant | 32% | 32% | 20% | 12% | 5%
On-the-job observation | 24% | 27% | 17% | 27% | 5%
Interview with manager/supervisor | 40% | 30% | 15% | 10% | 5%
Follow-up focus groups | 55% | 18% | 18% | 10% | 0
Action planning | 49% | 24% | 17% | 10% | 0
Programme follow-up session | 68% | 15% | 13% | 5% | 0
Performance record monitoring | 34% | 39% | 15% | 12% | 0
Success story collection | 51% | 32% | 12% | 5% | 0
Table 4.10 indicates to what degree the listed methods or approaches at level three are used to gather evidence. Only three items are used to a very large extent (VLX), at five percent each, which is a very small proportion, while the “not at all” percentages are indicative of a development need in the learning organisation.
The question 20 responses show that the level three approaches are little used, except for on-the-job observation, which is used to a large extent in 27 percent of cases. For evaluation to be effective, however, the approaches should be on a par with the ASTD findings, which show that all these methods are well used. The responses confirm the lack of relationship with line management on the part of training, and the lack of accountability for training on the part of line management; training evaluation is therefore hampered and does not get the attention it deserves. Table 4.11 below shows the queried level four evaluation approaches.
Table 4.11 Approaches to level four evaluations

Evaluation approach (level 4) | Not at all | Small extent | Moderate extent | Large extent
Business unit supervisor perceptions | 51% | 24% | 12% | 12%
Employee relations records | 49% | 29% | 17% | 5%
Promotion/resignation records | 46% | 34% | 17% | 2%
Employee satisfaction surveys | 37% | 29% | 29% | 5%
Pre- and post-performance ratings | 37% | 17% | 32% | 15%
Learner/employee perceptions of impact | 37% | 39% | 10% | 15%
Productivity indicators: time, cost, quality and quantity | 33% | 28% | 23% | 18%
Proficiency/competency levels | 32% | 32% | 27% | 10%
Customer satisfaction logs | 22% | 29% | 29% | 20%
Actual business outcomes | 32% | 32% | 24% | 12%
Table 4.11 shows the extent to which level four approaches are used. The options “not at all” and to a “small extent” were the main responses, which is indicative of the lack of systematic data collection at this level. The options “moderate” and “large extent” were selected in only a few instances.
The findings for questions 22 and 23 also indicate the infrequency of programme evaluations across business units and portfolios. This means that evaluations are not in line with best-practice guidelines and are therefore neither effective nor systematic, which means that data on value is not gathered sufficiently (see table 4.5).
4.4.3.3 Research sub-question III
What are the problem areas, barriers or challenges to effective training evaluations? (Question 24.)
A general concern regarding the implementation of training evaluation is that 60 percent of the respondents, who are categorised as training practitioners and managers, indicated that they are faced to some degree with each of the 17 problem areas and barriers mentioned in the study (Phillips & Phillips, 2007a: 3). The table below gives the most common barriers to evaluation faced by practitioners. The findings from this research are compared with certain results from the 2009 ASTD and i4cp study (ASTD, 2009: 1-65) (see Appendix C). The table states the seven problems and barriers that are most often described as challenging to a “very large extent”, placed in order from the most to the least commonly selected of the seven options.
Table 4.12 Most common problems and barriers to training evaluations

Problem or barrier | Responses: very large extent (VLX) | Percentage of responses | ASTD ESI responses
Stakeholder involvement: no management participation | 24 | 59% | -
The learning management system does not have a useful function to support training evaluations | 22 | 54% | 41%
No standardised collection instruments are available | 17 | 41% | -
Evaluation data is not standardised for use in function and system comparisons | 17 | 41% | 38%
Management does not request evaluations for impact or results | 17 | 41% | 24%
Evaluation levels are not determined at the needs analysis stage and it is difficult to foresee this past levels 1 and 2 | 15 | 37% | -
Consultants do not go beyond level 2 evaluations | 13 | 32% | -
The findings thus show that these specific problems and barriers require attention: all of these responses are above 30 percent, which is a concern, as they could hamper effective evaluation practices. It indicates that effective and systematic training evaluations are at risk owing to the non-involvement of pertinent stakeholders, especially line management and the L&D team; Noe (2010: 196) states that it is management that drives the success of training and therefore the learning value. The learning management system is not supportive of evaluation efforts, and yet, according to Biech (2008: 795) and Bersin (2008: 31), technology is an enabler in today's economy and needs to make training operations easy.
According to Meyer and Orpen (2012: 278), evaluation success depends on standardised collection instruments and manager involvement. According to Phillips and Phillips (2007a: 5, 270), the data to be collected, as well as the levels of evaluation, must be agreed upon and selected at the inception of the need; whether or not management requests the evaluation of training, a report should be provided. In the absence of the aforementioned, effective and systematic evaluation suffers and value is lost. All these common problems create an environment of chaos and non-delivery by practitioners. Learning organisations are compelled to remove barriers (Senge, 1990, as cited by Phillips and Phillips, 2007a: 134). Finally, external consultants and providers generally evaluate only with classroom tests or portfolios of evidence and rarely do workplace follow-ups or calculate the impact of their provision; perhaps it is the contracts or service level agreements that are not comprehensively drawn up.
4.4.3.4 Research sub-question IV
How can the training evaluation process be improved in the future to be more
systematic, effective and efficient? Questions 25-31 and sub-questions.
Questions 25 and 26 consider how the results of evaluation efforts are utilised to improve the organisation's programmes and processes. The results for these two questions are compared in Table 4.13.
Table 4.13 Current and future use of training evaluation results (i)

Use of evaluation results, current vs future practice | Current practice % (Q25) | Future practice % (Q26) | ASTD ESI comparison %
To gather performance data about trainers | 39% | 51% | 53%
To improve overall business results | 34% | 78% | 48%
To review external providers' performance | 30% | 61% | -
To help meet performance goals of employees | 27% | 66% | 36%
To make decisions on continuing with effective programmes | 27% | 56% | -
To make decisions on discontinuing programmes of little value | 27% | 56% | -
The current situation indicates that evaluation results are used less than 40 percent of the time, first for business improvement and second for employee development; in future, however, use will need to rise above 50 percent so that evaluation efforts become more focused on the value of learning. Noteworthy is that the intended future use of training evaluation results to improve business results stands at 78 percent.

Generally, evaluation findings in all future endeavours need to be used at the levels indicated by all seven items (above 50 percent) for effective and systematic value. There is also a need to use evaluation findings to create transformation and change, and systematic reporting needs to be utilised both for the business and for its employees. The ASTD international study (2009) on future improvements reported similar findings to this study. This means that if evaluation results are used more than 50 percent of the time, training evaluations become in demand; if they are not used, or are used less than 40 percent of the time, the value of training evaluation could likewise decrease.
Table 4.14 Current and future use of training evaluation results (ii)

Use of evaluation results, current vs future practice | “Does not happen in current practice” % (Q27) | “Needs to happen in future practice” % (Q28)
For setting goals with employees prior to training | 32% | 54%
For setting goals with employees after training | 29% | 54%
For giving employees opportunities to apply new skills, knowledge and attitudes | 32% | 63%
For developing personal development plans | 29% | 61%
For determining pre- and post-training performance | 29% | 48%
The opinions on future improvement in questions 27 and 28 in Table 4.14, regarding manager and supervisor accountability for evaluation, indicate that this accountability should be increased in order to add value to the learning evaluations. The fact that improvement is warranted may in some instances be because the Addie training cycle is not being used. The reactive nature of training, the use of the traditional approach to training (TAT), or managers who do not have enough staff to assign to certain training duties may also be the problem. Concerning this, Eraut (2011: 195) expresses the importance of the role of line managers in the training value chain for transfer, results, impact and return.
4.4.4 Correlation Analysis
The comparison of international practices with the results of this study regarding the value of evaluations is outlined below in Table 4.15 and Figure 4.8. These refer to question 29 of the questionnaire, which requested responses to the nine items on what is deemed an effective training evaluation practice.
Table 4.15 Evaluation Success Index (ESI) comparison between the ASTD and the research organisation

Evaluation Success Index items | % of international participants, ASTD Research (2009: 1-65) | % “very large extent” (VLX) in research organisation
1. Our learning evaluation techniques should help us meet our organization's learning goals. | 41.39% | 32%
2. Our learning evaluation techniques should help us meet our business goals. | 36.5% | 34%
3. We get a solid “bang for our buck” (return) when it comes to using the Kirkpatrick/Phillips learning metrics. | 25.6% | 20%
4. Follow-up focus groups are a great source of data. | 9.1% | 17%
5. Performance records monitoring is a great source of data. | 24.3% | 24%
6. Follow-up surveys of participants provide data. | 31.09% | 29%
7. Learner/employee perceptions of impact are important. | 36.3% | 34%
8. Actual business outcomes (e.g. revenue, sales) are important data. | 22.4% | 34%
9. Proficiency/competency levels are important data. | 33.0% | 31%

Source: The ASTD and i4cp Group Research (2009), The Value of Evaluation Success Index (ESI): Making Evaluations More Effective. Used with permission from ASTD.
The findings from the research organisation and the international research (ASTD, 2009) are very similar, meaning that training practitioners generally want a training evaluation practice that is effective.
Figure 4.8 depicts, in blue, the response percentages of the international organisations in the ASTD survey for the nine Evaluation Success Index (ESI) items and, in red, the research organisation's responses to the same nine items. The correlation between the international findings of the ASTD and i4cp research and those of the research organisation on what makes evaluations effective is significant across the nine items selected by the ASTD as crucial to a successful evaluation.

Figure 4.8 Comparison of the ASTD international research with the current findings for the research organisation on the value of the Evaluation Success Index (ESI)
A Pearson correlation analysis was conducted to examine whether there is a relationship between the responses to the nine Evaluation Success Index (ESI) items of the international organisations participating in the ASTD (2009) study and those of the local research organisation. The results revealed a significantly positive relationship (r = 0.7393, N = 9, p = 0.028) (Leedy & Ormrod, 2013: 291). There is thus a strong correlation between the two. The findings suggest that the research organisation's participants also agree strongly with the same nine items of the ASTD (2009). The ASTD assessed the international respondents on what would constitute an effective evaluation practice that makes value tangible and credible. The correlation test was to ascertain whether the research respondents also value the nine items that indicate an effective evaluation practice. Thus, certain factors should be considered when implementing systematic training evaluations. These factors will now be discussed.
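As an illustrative cross-check, the correlation can be recomputed from the rounded percentages in Table 4.15. The following is a minimal Python sketch (assuming the SciPy library is available); because the published table rounds the underlying data, the result differs slightly from the reported r = 0.7393.

    from scipy.stats import pearsonr

    # Nine ESI item percentages from Table 4.15: international ASTD (2009)
    # respondents versus the research organisation ("very large extent").
    astd_international = [41.39, 36.5, 25.6, 9.1, 24.3, 31.09, 36.3, 22.4, 33.0]
    research_org = [32, 34, 20, 17, 24, 29, 34, 34, 31]

    r, p = pearsonr(astd_international, research_org)
    print(f"r = {r:.4f}, p = {p:.4f}")
    # Roughly r = 0.75 with p < 0.05 from these rounded values, consistent
    # with the significantly positive relationship reported in the study.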
The ASTD and i4cp research findings indicated that if the Kirkpatrick/Phillips models are used, the evaluation practice should be able to declare the value of the learning (Appendix C). The population that is part of this research generally agrees that if the training evaluation strategy follows the ESI, it will provide proof of learning value. According to researchers, using the Brinkerhoff Success Case Method in combination with K/P enhances evaluation value. There are a number of models that evaluate training, but the organisation needs to choose what suits it best to get value from evaluations. The barriers must be dealt with effectively for evaluations to be effective. Line managers and L&D managers need to drive success and make evaluations effective through collaborative strategies.
4.4.5 Open ended questions responses
Question 30, requested the survey participants for their general views on the
value of training evaluations, meaning whether they value it or not.
Table 4.16 requested respondents for their views on training evaluation and
following is the views of managers of training divisions.
Table 4.16 Managers' views on training evaluations

- Evaluation is an important and necessary tool to collect data for decision-making.
- Evaluation is an important activity to declare the value of learning. Reach the agreed levels for all interventions.
- An important tool that must be used to value training interventions, declaring the value of learning for the individual, team and organisation.
- If done correctly, it will serve as a valuable source of data that can be used for justification and reporting.
- Evaluation does not take place as it should. My opinion is that practitioners are not aware of how to do the evaluation and the analysis thereof.
- Evaluation effectiveness must have a ripple effect on the total business, starting with the individual and touching finally on the overall business goals set. Hence, such evaluation must include indicators for all levels: individual, section, unit/business and company effectiveness and impact after learning has taken place.
- If done according to an evaluation strategy, it helps harvest value; L&D must have a plan per intervention, because not all training needs to be evaluated at all levels.
- In the organisation, where reactive learning is the order of the day, proper evaluation has fallen through.
- If applied correctly, it will be effective in that improvements to learning content and the facilitator may be made in order to improve training effectiveness tools.
- If performed, monitored and aligned properly, it can be a source of valuable information for any company.
The management group responded that they strongly see value in implementing training evaluations systematically; in other words, they want training evaluations to take place. Table 4.17 indicates the views of education, training and development practitioners (ETDPs) on the value of training evaluations.
Table 4.17 ETDP views on training evaluations, as stated

- Very effective.
- This will assist all the stakeholders to have a better understanding of their contribution to the organisation.
- Training evaluations are crucial to the organisation's strategy and optimisation.
- In the Post Office, evaluation is not as effective as expected and it does not serve any purpose. Every region has its own evaluation template, which is why our training department is not standardised. As stated earlier, I believe that if we as a department (L&D) can use a standardised evaluation template, training and development will eventually produce productive employees as expected and the organisational ROI will definitely increase.
- Conducting evaluation using techniques and an instrument that are sub-standard or non-existent means that evaluating effectiveness can give a skewed indication of results! Implement a standardised (well-developed) effective tool for systematic evaluation purposes.
- For evaluation to be effective, I think there should be an after-care service, since after evaluations we trainers do not see whether the employees apply the learning on the job.
- It helps to have a competent and knowledgeable workforce.
- It is imperative that I understand what the purpose of conducting an evaluation is. An evaluation answers whether the training programme met a fundamental requirement. Run a diagnostic checkpoint for problems before moving ahead to the next level.
- No evaluation process is in place.
- Implement diagnostic, formative and summative evaluations.
- Evaluation is critical for T&D and for the learner. Interventions, the trainer, the learner and the organisation need to know how well the intervention was 'accepted'.
- When evaluation is used properly, we will not have a repetition of interventions with no culture change and business improvement.
- Our organisation needs to improve on evaluation methods to meet the organisational goal, increase productivity and improve employee satisfaction.
- Formative or ongoing evaluation/assessment is integral to the progress of any training programme, and summative evaluation/assessment is required to gauge the programme's ultimate efficiency and effectiveness. In order to achieve evaluation effectiveness in SAPO, practitioners will require standardised, detailed, updated, well-maintained evaluation/assessment tools and training matrices for all training programmes.
- The evaluation process needs to be implemented in its entirety, systematically, to have a notable impact.
- Evaluations should indicate the training impact. Evaluations need quantified outcomes for line and stakeholders to understand.
- The evaluation process needs planning at the beginning of the training cycle.
- Regions should develop training materials based on regional needs analysis gaps.
- If the P.O. can implement effective evaluations on all levels, it will benefit the organisation and surely improve performance if it is linked to an appraisal system.
- A good tool to utilise, but monitoring it at all times is important for an impact.
- Evaluation is crucial, linked to business goals and the performance management of employees and management.
- It is an excellent tool for learning and development.
- It is a good tool to measure training effectiveness and impact.
- It is a critical component of the training cycle, but expensive depending on delivery; it helps the trainer, managers and learners see whether training has been met and applied.
- I see the process of evaluation as a critical part of the training cycle which needs proper management in order to provide the desired information to establish the effectiveness of the trainer and programme. This will assist in improving the training and ensuring that the workforce is properly equipped to perform their job functions to the best of their ability, which in turn will lead to a happy workforce and a profitable company.
- In SAPO, evaluation is not as effective as it should be. In the first place, we do not have a tool to measure correctly whether training has taken place and whether employees are applying what they have learnt back at their workplace. Our LMS should make provision for an evaluation option, because currently we are training for the sake of numbers; hence evaluation is not working.
- If managers and supervisors learn about the evaluation system, I am sure they will be able to use it. The instruments are not in use because they are not known.
The practitioners also value training evaluations but want an effective, systematic practice that is managed, strategised, planned and monitored by managers. The need for standardised tools is also apparent and could be a reason for training evaluation not being implemented effectively. Question 31, depicted in Table 4.18, asked for respondents' training and development needs in systematic evaluation practices.
Table 4.18 Development needs (responses to survey fields D31.1-D31.4, consolidated; several respondents named the same needs, notably ROI)

- How to calculate ROI/ROE
- Learn about the SAM method
- Draw up evaluation strategies
- Design data collection instruments to implement evaluation
- Calculate return on investment (ROI)
- How to design data collection instruments
- Using new methods and strategies
- Writing analytic reports on training
- Evaluation at level 5 of the Kirkpatrick/Phillips model
- Analysing data for better evaluation
- The current ODETD model
- An honours qualification
- Leadership style
- Managerial skills
- Interpersonal skills
- Communication skills
- Needs analysis
- Developing questionnaires and observation tools
- Management planning
- Course design
- Relevant standards
- An approach to evaluating the impact of technology-based professional development
- Re-training on measuring ROI and implementation
- Gathering of evidence
- Designing short and precise questions
- Fair time measurement for evaluations
- Value-adding needs analysis
- Pre-learning assessment
- Formative evaluation (on the go)
- Post-training evaluation
- Building an evaluation strategy
- Selecting an appropriate model to use in L&D
- New models such as SAM
- Doing an ROI calculation that is credible
- Reaction and learning evaluation
- A training qualification and ETDP roles
- If Kirkpatrick's model for evaluation is followed: none
- Attend refresher training on evaluation
- Implement evaluation as part of the learning development process
- Get more practice in evaluation
- Design more training and evaluation tools
- HR management
- Experience at levels 3 to 6
- Behaviour evaluation
- Putting practices/processes in place in my business unit for evaluation
- Evaluation should be put into practice and done after each learning event
- Design and development of training material
- Mentoring and coaching
- Project management
- Development programmes
- Knowledge and use of other training evaluation models besides Kirkpatrick
- Designing digital evaluations and drawing findings and interpretations from responses electronically
- More exposure to moderation processes
- More exposure to skills development facilitation processes
- Leadership skills
- Labour law
- Management development
- Computing skills
- Results and ROI evaluation
The final question in the survey asked participants to state their training needs. The training needs indicated by 95 percent of responses were in the skills, knowledge or attitude domains pertaining to training evaluation. Apart from a few participants, the majority indicated that training evaluation models and ROI assessments are requirements. Generally, most respondents wanted to be trained in various types of training evaluation models, frameworks and processes.
4.4.6 Overall Findings
The main question posed was: “Is the current training evaluation practice
effective, efficient and systematic in providing evidence to declare the value of
learning?”
The research found that the practitioners see value in the systematic approach based on the Addie cycle, but the reactive approach in the organisation causes practitioners to adopt shortcuts or ignore the system in order to meet the training need. The current training evaluation practice in the organisation is implemented mainly at the summative levels, as assessed using the Kirkpatrick/Phillips framework. Levels one and two are implemented in the classroom after the training, with tests mostly used to assess level two. This is indicative of the traditional approach to training and not the systems approach. Level three is performed 38 percent of the time; the problem is that managers should take responsibility on the job, but training has not supported the effort, nor formed the meaningful relationships and collaborations needed for the value of learning to be tangible.
Data is not gathered systematically, especially using set approaches for levels three and four. However, this does not prove that the training is not adding learning value; rather, it means that the measurement and evaluation strategy, tools and instruments need to be improved to collect evidence of that value. There is a definite relationship between training evaluation and learning value, because the types and levels of evaluation conducted did provide proof of the value added by L&D. Evaluation tools that collect data are a great need indicated by the practitioners, and the use of the results is far below the required guidelines.

The assessment did not address the utilisation of diagnostic, confirmative and formative evaluations, but respondents strongly agree that the Addie training cycle is a great system on which to base a holistic, systematic approach to training evaluations. What is concerning, however, is that the current practice lacks comprehensiveness, as summative levels three to five are not fully attempted owing to the barriers and challenges faced in the practice. Because of the reactive practice of training, the relationships, collaboration and ownership that are key to successful evaluations are lacking.
4.5 CHAPTER SUMMARY
This chapter presented the results mainly in tabular format. Statistical analysis
was used to describe the data in the tables. In conclusion a comparison was
made between current ineffective practices and more effective practices that
should be used in future to collect evidence for declaring the value of learning.
CHAPTER 5
CONCLUSIONS AND RECOMMENDATIONS
5.1 INTRODUCTION
The L&D function has an important contribution to make to the bottom-line of the
organisation, because employees enter the workplace with high expectations of
success; however, measuring this contribution remains a challenge (Bersin,
2008: 1). The workplace has become a very competitive environment which is
impacted by technology-driven systems that operate in a global context.
Therefore, the speed and lack of longevity of information are resulting in the need for more training, more often (Beevers & Rea, 2012: 29).
According to Lawson (2009: 268), the learning function is at a disadvantage
because the value of investments in training has not been properly proven to
stakeholders. Accordingly, evidence is mostly anecdotal or incidental and not
generally obtained according to a scientific method, despite the fact that there
are models and frameworks that promote and assist in the training evaluation
practice. These are used in proving effectiveness, improving efficiency and
reducing costs of training.
According to Guerra-Lopez (2008: 27), training evaluations support organisations
in determining which interventions work and which do not produce results. Wick et al. (2010: 1) agree that systematic approaches aid organisations in rooting
out ineffective learning programs. The need for managing an effective systematic
training evaluation practice in search of learning value is ignored or overlooked in
the fast-paced learning world. However, value is sought from all investments and
this applies to training as well (Thorne & Mackey, 2007: 138). Accountability is a
requirement from executives because of the various demands of the organisation
(PFMA, 2008; Kirkpatrick & Kirkpatrick, 2010: 12-13).
Barbazette (2008: 3) and Thorne and Mackey (2007: 138) agree that both the audit and evaluation of the L&D function as a system, which includes the trainers, should be periodically undertaken. The evaluation of interventions should take place according to a well-developed method that forms part of managing the function and its value proposition (Coetzee et al., 2007: 387).
The Addie training cycle, an instructional systems design (ISD), was proposed as the structure for evaluating training systematically. The evidence for the value
judgement must be collected throughout the training cycle and not just
summatively (Bozarth, 2008: 4). Systematic training evaluations are proposed as
an evidence gathering practice to prove, improve and review the value of
learning. The Addie cycle is the structure to gauge or measure the value-adding
activities of the training function in meeting a training need.
This research followed a quantitative descriptive survey design and attempted to
deal with the main research question: “Is the current training evaluation practice
effective, efficient and systematic in providing evidence to declare the value of
learning?”
Four sub-questions stemming from the main research question were analysed individually
along with the survey results in the previous chapter (Leedy & Ormrod, 2013:
The following areas were investigated and appraised in the analysis:
- The support provided by the ISD or Addie training cycle in the systematic approach to training assessments for diagnostic, formative and summative evaluations.
- Use of the Kirkpatrick/Phillips (K/P) framework for summative training evaluations.
- The problems experienced with the process, procedures and data collection methods.
- What is deemed effective and systematic training evaluation practice according to the evaluation success index (ESI) and the ASTD and i4cp evaluation research (2009: 1-65).
- The probable future state of the practice in providing data to prove learning value.
5.2 OVERVIEW OF THE STUDY
The problem identified was the need to declare the value of learning when taking into consideration the annual training funding and expenditure. The main focus was whether the current training evaluation is an effective, systematic practice that is being leveraged to harvest the value of learning. This meta-analysis of the current training evaluation practice was intended to determine whether the practice is sufficiently effective, efficient and systematic in gathering comprehensive evidence of value (Guerra-Lopez, 2008: 118).
5.3 FINDINGS OF THE STUDY
The four sub-questions had to be assessed first before the main question could
be answered. These four sub-questions are answered below.
5.3.1 Research Sub-question I
How does the Addie training cycle support the systematic training evaluation
process to assess, develop, design, implement and evaluate?
The proposed use of the Addie training cycle was well received as the structure on which to base the evaluation process and outcome. The Addie cycle provides confirmative, diagnostic, formative and summative opportunities for collecting data, instead of only summative tools, which lack credibility (Bozarth, 2008: 190; Phillips & Phillips, 2007a: 22).

The Addie cycle reinforces the need to build relationships with all stakeholders who can contribute to the establishment of the value of learning (Eraut, 2011: 195; Noe, 2010: 197). Addie starts at the genesis of the need and ends with a value report. Training analysts, designers and trainers must provide services that add value to learning. For example, learners must be able to transfer SKAs to the job and improve productivity for individual and group performance (Elkeles & Phillips, 2007: 22).
5.3.2 Research Sub-question II
How does the Kirkpatrick/Phillips model support an effective summative data
collection process?
The K/P framework was confirmed as the model in use in the organisation; the application, approaches and processes derived from its levels were therefore easy to assess in practice. The findings indicate that levels one and two are on par with general research benchmark best practice, but the practice goes neither beyond level two nor to the levels that chief executive officers (CEOs) demand. CEOs want training evaluation also to provide proof of value at levels three to five, which gives evidence of individual, team and organisational results and the bottom-line impact (Phillips & Phillips, 2009: 1). Practitioners have indicated that they calculate return on investment at an acceptable percentage, but this is doubtful owing to the lack of comprehensiveness at the preceding levels.
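Because credible level-five figures depend on the preceding levels feeding them, the standard Phillips calculation itself is worth making explicit. The following is a minimal Python sketch of the benefit-cost ratio and ROI formulas; the monetary figures are hypothetical and are not drawn from the study.

    def benefit_cost_ratio(benefits: float, costs: float) -> float:
        """BCR = programme benefits / programme costs."""
        return benefits / costs

    def roi_percent(benefits: float, costs: float) -> float:
        """ROI (%) = (net programme benefits / programme costs) x 100."""
        return (benefits - costs) / costs * 100

    # Hypothetical intervention: R240 000 in isolated monetary benefits
    # against R150 000 in fully loaded programme costs.
    print(benefit_cost_ratio(240_000, 150_000))  # 1.6
    print(roi_percent(240_000, 150_000))         # 60.0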
5.3.3 Research Sub-question III
What are the problem areas, barriers or challenges to effective training evaluations?
The seventeen general challenges cited by Phillips and Phillips (2007a: 3) and others exist in the research organisation as well. These challenges prevent evaluations beyond level two.

The general practice barriers impair effective, systematic training evaluations. The respondents' opinions indicate that the known and diverse evaluation models should be applied in practice to enhance the collection, or harvesting, of the value of learning.
To improve training evaluations, supervisor and manager involvement needs to increase before, during and after the training to create learning value for all stakeholders. The views of Brinkerhoff (2006: 1) and Wick et al. (2010: 174) indicate that learning fails to transfer to the job environment when management does not reinforce the use of new skills (Kirwan, 2009: 17). Training specialists and line managers therefore need to build relationships for the good of learning value and the organisation's bottom line (Eraut, 2011: 195; Noe, 2010: 197). The Addie steps reinforce interaction between critical stakeholders (see the researcher's view in chapter two, section 2.4).
5.3.4 Research Sub-question IV
How can the training evaluation process be improved in the future to be
systematic, effective and efficient?
The results of evaluations are not acted upon, most probably because the practice has no credibility or because trainers lack evaluation skills beyond level two.
Although line management is not involved in L&D processes, the respondents from L&D stated that the manager should be involved as a critical stakeholder or value judge. The question arises as to what L&D has provided to business management to garner support regarding level three to five evaluations.
The ESI items were determined by the ASTD and i4cp (2009) from a survey of a number of international organisations. The correlation between the ASTD study and the organisation researched for this report is high and statistically significant at the 95 percent confidence level. This positive correlation further indicates that the success of effective evaluations is important for the research organisation in declaring the value of learning.
The views of respondents, both managers and practitioners of training, were very positive towards training evaluations, but the challenges need to be dealt with. The LMS and other systems need to support the process; otherwise it becomes time-consuming. The lack of use of the ISD/Addie cycle to provide training also makes training reactive instead of proactive. Respondents believe that an effective, systematic training evaluation practice will help declare the value of learning.
5.4 VALUE OF THE STUDY
This study was of great value to the researcher, firstly in providing insight into training evaluations and the value of learning for both the organisation and learning stakeholders. The study will inform changes and the transformation of the current training evaluation practices towards a more effective state. The study makes recommendations on an effective systematic training evaluation practice for the organisation.
The results of the study indicated that the value of learning is not effectively established. The current training evaluation practice does not systematically collect the data needed to make a judgement on the value of learning (Guerra-Lopez, 2008: 27). Although the lower-level evaluations are implemented, the higher levels that executives will trust are not attempted by the L&D function. The current practice lacks a full evaluation implementation plan and strategy. There is a need to implement training evaluations consistently as part of L&D deliverables.
5.5 LIMITATIONS OF THE STUDY
This study was based on a single research organisation, and the target population comprised only training practitioners and managers. The survey included neither external providers of learning nor line managers. The findings are unique to the research organisation but could be used in the South African ASTD State of the Industry Report to compare the results for K/P levels one to five. The findings can also be compared with those of other similar-sized organisations in South Africa.
The diagnostic, confirmative and formative evaluation levels were not examined in depth but formed part of the Addie question, as no tools were used. There are a number of other evaluation models which were not part of the survey but which could add much to the process in future. There are many well-known theorists and models in the field, some of which were discussed in chapter two; the following are a few that can support future training evaluation towards an effective and systematic practice:
- The “Six Ds” design by Wick et al. (2010).
- Bersin's Impact Measurement Framework (Bersin, 2008: 72).
- Kirkpatrick's return on expectations or Business Partner Model (Kirkpatrick & Kirkpatrick, 2010: 32).
- Yeo's five learning pillars (2009).
- Brinkerhoff's Success Case Method (2006).
- Allen's Successive Approximation Model (SAM) (Allen & Sites, 2013).
5.6 FUTURE RESEARCH
The training provided in an organisation should produce value and, thus, value
creation should always be borne in mind (Wick, et al; 2010: 8). L&D needs to be
proactive and not reactive, as reactive training is a barrier to both training
evaluation and the declaration of learning value. When reporting on training
value, managers often produce anecdotal value.
However, if a needs analysis is conducted proactively at the inception of training, it will promote systematic training evaluations and value demonstration, as the baseline data is collected upfront for comparisons to be made about changes in training quality, quantity, time and costs (Phillips & Phillips, 2007a: 13).
The practice of systematic training evaluations will require a strategy to be
established which must be known to all providers of training (Opperman &
Meyer, 2008: 224). This strategy will also communicate to trainers and others
what is required of them. Phillips and Phillips (2007a: 22) agree that the strategy
should be planned from the inception of the process. It should include selection criteria for programmes of evaluation up to an agreed level, as well as all the data collection tools: surveys, questionnaires, tests, observations and checklists.
The research could be enhanced if the tools currently being used to collect evidence were investigated further, because an evaluation is only as good as the data collected. The unit standards on training evaluation state that, to conduct a training evaluation, the tools are given to evaluators (Meyer & Orpen, 2012: 278-279). Further investigation is required to follow up with implementers at the higher levels, that is, levels three to five, to provide data-harvesting tools and findings/reports for drawing up a standardised template. Wick et al. (2010: 25) state that the whole experience must be designed in advance of the training launch.
The value of learning could also have been ascertained by surveying the business managers and the learners over the last three years to determine their views on the value of learning. The following possible areas should be addressed in future studies:
- The alignment of learning to business strategy and its outcomes.
- Problem diagnosis at the needs analysis stage.
- Formative evaluation of interventions in meeting the learning/training need.
- The relationship between management and L&D that supports learning transfer and ownership.
- Training as the catalyst for change, productivity and growth.
- Collecting and publishing successful learner stories after training.
- How the transfer of learning to the job should be driven by management and not by the learning function itself.
- How learning and behavioural tools are used to assess changes and application.
- When evaluations take place, for example, longitudinal evaluations that allow SKAs to become behaviour before determining the value of learning.
- The change in skills, knowledge and attitudes.
- The impact and results on individuals, teams and the organisation.
- Systems that provide data on productivity, as well as cost-benefit analysis.
- Combining different evaluation models to collect quantitative and qualitative data.
- Using control groups and determining the value of learning from the learners' and supervisors' perceptions.
- The increase in revenue due to training interventions (Thorne & Mackey, 2007: 133; Lawson, 2009: 254).
5.7 RECOMMENDATIONS
The findings indicate that although training evaluations have been implemented, they are not managed effectively as a systematic practice. Simply stated, the practitioners perceive the current practice as haphazard and not managed to produce value: it does not collect pertinent data from managers and learners before and after learning for comparison, and it does not make use of valuable data obtained during the training needs analysis stage (process criteria to outcome criteria) (Agarwala, 2012: 365). Most of the respondents concurred that a fully-fledged systematic training evaluation practice is lacking owing to partial application. However, if such systematic training evaluation were to take place, it would be of great value to all stakeholders.
5.7.1 Recommendation i
The L&D policy needs a training evaluation strategy for the organisation that has a purpose, a plan and selection criteria for each type and level of the K/P model of evaluation for each intervention. The current practice is summative only, and a systematic approach to training evaluation must therefore be undertaken to garner holistic value for learning. This means that evaluation is part of meeting the need and is therefore part of the planning (Noe, 2010: 7).

Systematic training evaluations should be implemented by using the Addie training cycle to reveal the value added to the organisation. An alignment of training to business outcomes during the Addie cycle would consequently prevent reactive training provision (Wick et al., 2010: 5). Hence, the function should be managed according to a systems approach to training (SAT), and the instructional systems design (ISD) or Addie training cycle should be used to meet the training needs in the organisation. The implementation of training evaluations will require a set of evidence-gathering tools, e.g. survey questionnaires for individuals, teams and groups, and the use of the learning management system (LMS).
The investigation into the current training evaluation practice indicates that the practice is known but not fully applied. The implementation of evaluation utilises the K/P model, which is goal-based and constitutes a systematic approach to evaluation. However, the model is not leveraged fully and stops at the traditional approach to training (TAT) levels, namely levels one and two. Although K/P levels one and two are commonly assessed owing to their simplicity, data on the higher levels of the K/P model, three to five, also needs to be gathered to calculate value for the C-suite.

Although all responses on Addie adding value are above a mean of 3.0, the systems approach still needs to be applied. There is therefore a strong indication that Addie can support training evaluation in becoming more effective, efficient and systematic. It can thus be concluded that an integrated, holistic approach to evaluation will serve the purpose of systematic evaluations (Opperman & Meyer, 2007: 185). Tables 2.4 and 2.6 can be used as a guide to an effective systematic practice.
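To illustrate how such selection criteria might be recorded (the intervention names, levels and tools below are hypothetical, not the organisation's actual criteria), an evaluation strategy could capture, per intervention, the agreed K/P level and data collection tools at the inception of the need. A minimal Python sketch follows.

    from dataclasses import dataclass

    @dataclass
    class EvaluationPlan:
        """Per-intervention evaluation plan agreed at the inception of the need."""
        intervention: str
        target_kp_level: int  # Kirkpatrick/Phillips level: 1 (reaction) to 5 (ROI)
        tools: list[str]      # agreed data collection instruments

    # Hypothetical selection criteria: only business-critical programmes
    # are evaluated to the higher levels.
    plans = [
        EvaluationPlan("Customer service induction", 2,
                       ["reaction survey", "classroom test"]),
        EvaluationPlan("Supervisory development", 3,
                       ["follow-up survey", "manager interview"]),
        EvaluationPlan("Sales academy", 5,
                       ["performance records", "cost-benefit analysis"]),
    ]

    for plan in plans:
        print(f"{plan.intervention}: evaluate to level {plan.target_kp_level} "
              f"using {', '.join(plan.tools)}")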
5.7.2 Recommendation ii
It was also found that the evaluation results are not used by the organisation to improve the activities of L&D, nor for business sustainability. A high percentage of respondents indicated that there is a lack of involvement by line managers in L&D activities; the needs analysis stage, and the stage when learning is transferred to the job, especially lack support. This indicates that the Addie cycle is not being used thoroughly, which is confirmed by research responses stating that training in the organisation is reactive rather than proactive. This has the potential to create a one-sided or traditional approach, or a "them and us" situation between training and business. The vacuum in relationships and accountability is apparent because, to a large extent, line managers do not take responsibility for developing their staff. Moreover, it could prevent longitudinal training evaluations at levels three to five from occurring, and could be the reason why CEOs do not receive reports on the value of learning, because these levels are ignored.
5.7.3 Recommendation iii
The age demographics indicate an ageing group of practitioners, which has become a concern for HRD staffing. This could have an impact on attracting Generation Y, or millennials, and on adapting to their technology-savvy learning requirements. There is a need to hire younger practitioners, as an ageing group could create a vacuum in the L&D division very soon. Moreover, new L&D trainers do not provide value from day one of appointment. For this reason, continuous investment in the development of practitioners is necessary to create value for learning (ASTD New Competency Model, 2013).
5.7.4 Recommendation iv
Data collection should be housed on the learning management system (LMS) the organisation has at its disposal (Bersin, 2008: 31). The LMS must support training evaluations by collecting online data from all stakeholders, i.e. 360-degree data. In the organisation under study, line managers are not involved, possibly because L&D has not been inclusive of stakeholders. It can then be assumed that L&D has not provided them with training to support training evaluations, or with templates for needs analysis and data collection. Follow-up surveys and interviews with learners and managers are important information-gathering methods and should be leveraged. Training is reactive, so some Addie stages are skipped in favour of procurement processes. Furthermore, requesting ROI at the death of an intervention is wishful thinking: ROI is not an afterthought methodology, as it depends on a robust, effective and efficient data-harvesting training evaluation practice.
5.7.5 Recommendation v
According to Bersin (2008: 34), training evaluations should be undertaken because they are part of what trainers have signed up for, not a separate requirement. Training evaluations should simply be done as a priority. Training should be provided to all stakeholders, with training practitioners at the top of the list, as their stated needs indicate that they would like to be trained in a number of aspects of training evaluation.

An effective systematic training evaluation practice will require L&D to formulate a plan and a value proposition, to further refine levels one to three, and to meet the demands of SAQA/NQF before moving to the higher levels four and five. The method should start with the training needs analysis and continue through all of the Addie stages for selected interventions. Addie may be adjusted to shorter or longer stages to meet L&D's need for speed of delivery. Metrics that should be included are the value to L&D and the learner, on-the-job productivity and the impact of learning on profit, but a business intelligence (BI) system will be necessary for gathering value data for analysis from all systems.

Marketing and education on evaluation methods, processes and procedures should be provided to all stakeholders. Credibility will be enhanced if evaluations are carried out by different regional trainers. All findings should be documented and templates standardised for evaluators, whether they are practitioners or line managers. The ESI correlations showed a very high degree of agreement on what is deemed an effective and successful training evaluation practice. Simply stated, the respondents in this research organisation agree strongly that an effective practice of training evaluations produces value for the organisation.
5.8 CHAPTER SUMMARY AND CONCLUSION
The respondents strongly agree that the Addie training cycle is a great system on which to base a holistic, systematic approach to training evaluations. The respondents find it difficult to move beyond K/P level two evaluations owing to a number of known challenges faced by training stakeholders. It can therefore be concluded that the current practice somewhat lacks comprehensiveness in collecting data on learning value, as the whole system is not interrogated.

Training evaluations must be implemented so as to conclude and submit a final report to business, no matter the level at which they are conducted, as this is an L&D deliverable. Evaluations beyond level two must become the practice in future. Reports about value must be corroborated by line managers, learners/employees and evaluators. Training cannot depend on anecdotal or incidental evidence to promote its value: it needs to use scientific methods to arrive at conclusions. Biech, cited in Kirkpatrick and Kirkpatrick (2010: viii), states that training evaluation is both science and art. The existence of L&D could be in question if learning professionals do not practise their profession. L&D must partner with business as a strategic partner or as a learning partner (Kirkpatrick & Kirkpatrick, 2010: 32).

The qualitative feedback from respondents insists that training evaluations be done strategically to add value for all stakeholders and not only for the training function. This move towards systematic training evaluation will require a change management intervention to implement an effective systematic practice that can, in future, declare the value of learning.
BIBLIOGRAPHY
Allen, M. and Sites, R. 2013. Leaving Addie for Sam. Available from:
http://www.learningsolutionsmag.com/articles/1012/book-review-leaving-addie-
for-sam-by-michael-allen-with-richard-sites [Accessed 22 May 2013].
Agarwala, T. 2012. 'Assessment and Evaluation'. In: Wilson, J. ed. International
human resource development: Learning, education and training for individuals
and organisations. London: Kogan Page. 364-375.
American Society for Training and Development (ASTD). 2009. Value of evaluations: Making training evaluations more effective. Available from: http://www.ASTD.org/articles [Accessed 12 September 2012].
American Society for Training and Development (ASTD). 2010. 'The best of measuring and evaluating learning'. Training & Development Magazine, December: 1-67. Available from: http://www.ASTD.org/articles [Accessed April 2012].
American Society for Training and Development (ASTDSA). 2008. The 5th Annual (ASTD) State of the South African Training Industry Report 2007. Randburg: Knowledge Resources.
American Society for Training and Development (ASTDSA). 2010. The 8th South African Learning and Development ASTD State of Industry Report. Cape Town: LexisNexis.
Bachetta, S. 2012. Increase your training ROI with instructional design. Available
from http://www.trainingmag.com/articles/2011-training-industry-report [Accessed
6 July 2012].
Barbazette, J. 2008. Managing the training function for bottom-line results. San
Francisco, CA: Pfeiffer.
Beevers, K. and Rea, A. 2010. Learning and Development Practice. London: CIPD.
Bersin, J. 2008. The training measurement book: Best practices, proven
methodologies and practical approaches. San Francisco, CA: Pfeiffer.
Biech, E. ed. 2008. ASTD handbook for workplace learning professionals.
Baltimore, ML: United Book Press.
Bingham, T. and Jeary, T. 2007. Presenting learning. Alexandria VA: ASTD
Press.
Bozarth, J. 2008. From analysis to evaluation. San Francisco, CA: Pfeiffer.
Brinkerhoff, R. O. 2006. 'Increasing impact of training investments: An evaluation
strategy for building organizational learning capability.' Journal, Industrial and
Commercial Training. 38(6):302-307.
Chan, J.F. 2010. Designing and developing training programs. San Francisco,
CA: Pfeiffer.
Chartered Institute of Personnel and Development (CIPD). 2007. CIPD research on the value of learning. [online]. [Accessed 20 July 2012].
Chartered Institute of Personnel and Development (CIPD). 2011. Learning and talent development survey. London: CIPD.
Chartered Institute of Personnel and Development (CIPD). 2013. Learning and talent development survey. London: CIPD.
Coetzee, M., Botha, J. Kiley, J. & Truman, K. 2009. Practising education, training
and development in South African organisations. Cape Town: Juta & Co.
Elkeles, T. and Phillips, J.J. 2007. The chief learning officer. Oxford, UK:
Elsevier.
Elsdon, R. ed. 2010. Building workforce strength. Santa Barbara, CA: Praeger.
Ensor, L. 2012. 'Nzimande says Seta billions must pay for FET colleges'. Business Day, 02 August. Available from: http://www.bdlive.co.za/articles/2012/08/02/seta-billions-must-pay-for-fet-colleges-nzimande;jsessionid [Accessed 13 August 2012].
Erasmus, B.J., Loedolff, P.v.Z., Mda, T.V. & Nel, P.S. 2012. Managing training
and development in South Africa. Cape Town: Oxford University Press.
Eraut, M. 2011. 'Identifying and classifying corporate universities in the United
States'. In: Malloch, Cairns, Evans & O'Connor eds. The Sage handbook of
workplace learning. London: Sage Publications.
Griffin, R.P. 2010. 'Means and ends: Effective training evaluations'. Industrial and
Commercial Training Journal. 42(4):220-225.
Guerra-Lopez, I.J. 2008. Performance evaluation: Proven approaches for
improving program and organisational performance. San Francisco, CA: Jossey-
Bass.
Hofstee, E. 2006. Constructing a good dissertation. Sandton: EPE.
Investors in People (IIPSA). 2012. IIPSA standards. Available from: http://www.investorsinpeople.co.za [Accessed 07 September 2012].
Institute for Corporate Productivity (i4cp) Research. 2009. The value of evaluations: Making evaluations more effective. Available from: http://www.ASTD.org/articles [Accessed 12 September 2012].
Institute for Advanced Analytics. Statistical Analysis System (SAS) JMP, version 10.1.
Jefferson, A.M., Pollock, R.V.H. & Wick, C.W. 2009. Getting your money's worth
from training & development. San Francisco, CA: Pfeiffer.
Kearns, P. 2005. 'From Return on investment to added value evaluation: The
foundation for organisational learning'. Advances in Developing Human
Resources. 7(1):135-145.
Ketter, P. 2012. Learning's new analytics. Available from:
http://www.trainingmag.com/articles/2011-training-industry-report [Accessed 6
July 2012].
Kiley, J. 2007. 'Evaluating ETD effectiveness' In: Coetzee, M. ed. Practising
education, training and development in South African organisations. Cape Town:
Juta & Co. 248-286.
Kirkpatrick, D.L. and Kirkpatrick, J.D. 2006. Evaluating training programs. San
Francisco, CA: Berrett-Koehler.
Kirkpatrick, J.D. and Kirkpatrick, W.K. 2010. Training on trial. New York:
Amacom.
Kirkpatrick, D.L. and Kirkpatrick, J.D. 2012. The return on expectation for training (ROE), proceedings SU 105 of the ASTD international conference & exposition, 6-9 May 2012. Denver: Berrett-Koehler.
Kirkpatrick, D.L. and Kirkpatrick, J.D. 2012. Ultimate demonstration of training
value proceedings SU-103-1. of ASTD international conference & exposition 6-9
May 2012. Denver: Berrett-Koehler.
Kirkpatrick, W. and Kirkpatrick, J. 2013. Let's get real about training evaluations. Available from: http://www.trainingmagnetwork.com/welcome/thekirkpatricks [Accessed 13 June 2013].
Kirwan, C. 2009. Improving learning transfer: A guide to getting more out of what
you put into your training. Surrey: Gower.
Klynveld Peat Marwick Goerdeler (KPMG). 2012. Global Economic Outlook 2012. Johannesburg: KPMG. Available from: http://www.fasset.org.za/ [Accessed August 2012].
Lawson, K. 2009. The trainer’s handbook. Updated edition. San Francisco, CA:
Pfeiffer
Lee, B. 2011. 'Proving training's value: Linking training to business outcomes'.
Training Industry Quarterly, Winter. [Accessed 4 November 2011].
Leedy, P.D. and Ormrod, J.E. 2013. Practical research planning and design.
Boston, MA: Pearson.
Lui-Abel, A. 2011. 'Identifying and classifying corporate universities in the United
States'. In: Malloch, Cairns, Evans & O'Connor eds. The Sage handbook of
workplace learning. London: Sage Publications.
Malloch, M. Cairns, L., Evans, K. & O'Connor, B.N. eds. 2011 The Sage
handbook of workplace learning. London: Sage Publications.
Maree, K. (ed.), Creswell, J.W., Ebersöhn, L., Eloff, I., Ferreira, R., Ivankova, N.V., Jansen, J.D., Nieuwenhuis, J., Pietersen, J., Plano Clark, V.L. & Van der Westhuizen, C. 2007. First Steps in Research. Pretoria: Van Schaik.
Marshall, J. 2012. Evaluation practices, promises and barriers: How do YOU
measure up? Available from: http://www.trainingmag.com/articles/2011-training-
industry-report [Accessed 08 Nov 2012].
Meyer, M. 2007. Managing Human resource development: An outcome-based
approach. Durban: LexisNexis.
Meyer, M. 2007. The skills revolution: South African companies meet the
international training standards. Management Today, 23(1):57-58.
Meyer, M. ed. 2012. Managing human resource development: A strategic
learning approach. 4th Edition Durban: LexisNexis.
Meyer, M. and Orpen, M. 2012. Occupationally-directed ETDP practices.
Durban: LexisNexis.
Mouton. J. 2001. How to succeed in your Master's & Doctoral Studies: A South
African guide and resource book. Pretoria: Van Schaik.
National Qualifications Framework Act 67 of 2008. Department of Higher Education and Training. 2008. NQF Act 67 of 2008. [online]. Available from: http://www.dhet.gov.za [Accessed November 2012].
Ndlovu, J.B., Training Manager, LI. 2013. SDF roles in effective training evaluations: personal interview, 22 October 2013, Midrand.
Nielson, B. 2009. Training analytics. Available from: http://ezinearticles.com/?expert=Bryant_Nielson [Accessed 6 July 2012].
Noe, R.A. 2010. Employee Training and Development. Fifth edition. McGraw-Hill International Edition.
The South African Pocket Oxford Dictionary of Current English. 2000. Ed. Thompson, D. Cape Town: Oxford University Press.
Opperman, C. and Meyer, M. 2008. Integrating training needs analysis,
assessment and evaluation. Randburg: Knowledge Resources
Post Office Strategy (anon.). 2012. P.O. strategic roadmap booklet, 2012-2015. Pretoria.
Pangarkar, A.M. and Kirkwood, T. 2009. The trainers balanced scorecard. San
Francisco, CA: Pfeiffer.
Patel, L. 2011. Overcoming barriers and valuing evaluations. Available from: http://www.ASTD.org/Publications/Newsletter/Learning-Circuits-Archives/2010/2009 [Accessed 08 July 2013].
Pervaiz, H. 2012. Engagement matters: The impact of training on employee
turnover. Available from: http://humanresourcesiq.com/corporate-learning-
running-training-like-a-busines/columns/engagement-matters-the-impact-of-
training-on-emplo/ 29 Feb 2012: 1-4 [Accessed 6 October 2013].
Phillips, P.P., Phillips, J.J., Stone, R.D. & Burkett, H. 2007. The ROI field book. Oxford, UK: Butterworth-Heinemann.
Phillips, J.J. 2005. Investing in your company's human capital. New York:
Amacom.
Phillips, P.P., Phillips, J.J. Stone, R.D. and Burkett, H. 2007. ROI in action
casebook. San Francisco, CA: Pfeiffer.
Phillips, P.P. and Phillips, J.J. 2007a. The value of learning. San Francisco, CA: Pfeiffer.
Phillips, J.J. and Phillips, P.P. 2007b. Show me the money: The use of ROI in
performance improvement. San Francisco, CA: Berrett-Koehler.
Phillips, J.J. and Phillips, P.P. 2010. Measuring for success: What CEOs really think about learning investments. Available from: http://www.ASTD.org/articles [Accessed 11 October 2012]. 1-10.
Phillips, J.J. and Phillips, P.P. 2009. 'Measuring what matters: How CEOs view
learning success'. T+D, 01 August:44-49. Available from:
http://www.ASTD.org/articles/
Phillips, J.J., Phillips, P.P. & Aaron, B. 2013. Survey basics: Measurement, evaluation and ROI. Available from: http://www.ASTD.org [Accessed 03 July 2013].
Phillips, P.P. and Phillips, J.J. 2008. ROI in action casebook. San Francisco,
CA: Pfeiffer.
Phillips, J.J. and Phillips, P.P. 2012. The return on investment for training (ROI), proceedings SU 324 of the ASTD international conference and exposition 2012. Denver, CO: ASTD.
Powers, B. and Rothwell, W.J. 2007. Instructor excellence. San Francisco, CA:
Pfeiffer.
Robinson, R. 2003. 'Calculating ROI in training: International training ROI measurements not good enough for SA'. 6(3): 52.
Rossi, P.H., Lipsey, M.W. & Freeman, H.E. 2004. Evaluation: A systematic approach. Available from: http://www.amazon.com/EvaluationSystematic-Peter-Henry [Accessed 15 July 2013].
Russ-Eft, D. and Preskill, H. 2005. 'In search of the Holy Grail: Return on
investment evaluation in human resource development'. Advances in Developing
Human Resources. 7(1):71-85.
Stolovitch, H. 2012. Stop wasting money on training. Available from: http://www.ASTD.org [Accessed 6 October 2013].
Stone, R. D. 2009. Aligning training for results. San Francisco, CA: Pfeiffer.
Stone, R.D. 2012. Analyzing your training's value using the ROI quality analysis tool, proceedings of the international ASTD conference and exposition 2012. Denver: presentation SU 324.
Thorne, K. and Mackey, D. 2007. Everything you need to know about training.
London: Kogan Page.
Trolley, E. 2008. Running training like a business: Building learning capabilities through outsourcing. Available from: http://humanresourcesiq.com [Accessed 5 July 2012].
Trolley, E. 2009. Leveraging the value of training: An interview. Human
Resources IQ, 01 December: 1-2. Available from: http://humanresourcesiq.com [Accessed 5 July 2012].
Vasquez, L. and Norwood, A. 2013. Kirkpatrick Partners blog article: Do you leverage historical data in your chain of evidence? [Accessed 27 November 2013].
Werner, J.M. and Desimone, R.L. 2009. Human resource development. Mason,
OH: Cengage Learning.
Wick, C., Pollock, R. & Jefferson, A. 2010. The six disciplines of breakthrough
Learning. San Francisco, CA: Pfeiffer.
Wilson, J.P. ed. 2012. International human resource development: Learning, education and training for individuals and organisations. London: Kogan Page.
Yeo, K.M. 2009. Measuring organisational learning: Going beyond measuring
individual training programs. Available from: http://humanresourcesiq.com
[Accessed 04 November 2011].