
CHAPTER 15

Evaluation in Healthcare Education
Priscilla Sandford Worral

CHAPTER HIGHLIGHTS
Evaluation Versus Assessment
Determining the Focus of Evaluation
Evaluation Models
  Process (Formative) Evaluation
  Content Evaluation
  Outcome (Summative) Evaluation
  Impact Evaluation
  Program Evaluation
Designing the Evaluation
  Designing Structure
  Evaluation Methods
  Evaluation Instruments
  Barriers to Evaluation
Conducting the Evaluation
Analyzing and Interpreting Data Collected
Reporting Evaluation Results
  Be Audience Focused
  Stick to the Evaluation Purpose
  Stick to the Data

KEY TERMS
assessment
evaluation
process (formative) evaluation
content evaluation
outcome (summative) evaluation
impact evaluation
program evaluation
evaluation research

OBJECTIVES
After completing this chapter, the reader will be able to

1. Define the term evaluation.
2. Compare and contrast evaluation and assessment.
3. Identify purposes of evaluation.
4. Distinguish between five basic types of evaluation: process, content, outcome, impact, and program.
5. Discuss characteristics of various models of evaluation.
6. Describe similarities and differences between evaluation and research.
7. Assess barriers to evaluation.
8. Examine methods for conducting an evaluation.
9. Select appropriate instruments for various types of evaluative data.
10. Identify guidelines for reporting results of evaluation.


Evaluation is the process that can justify that what we do as nurses and as nurse educators makes a value-added difference in the care we provide. Evaluation is defined as a systematic process by which the worth or value of something—in this case, teaching and learning—is judged. Early consideration of evaluation has never been more critical than in today's healthcare environment. Crucial decisions regarding learners rest on the outcomes of learning. Can the patient go home? Can the nurse provide competent care? If education is to be justified as a value-added activity, the process of education must be measurably efficient and must be measurably linked to education outcomes. The outcomes of education, for the learner and for the organization, must be measurably effective.

Evaluation is a process within a process—a critical component of the nursing process, the decision-making process, and the education process. Evaluation is the final component of each of these processes. Because these processes are cyclical, evaluation serves as the critical bridge at the end of one cycle that guides direction of the next cycle.

The sections of this chapter follow the steps in conducting an evaluation. These steps include (1) determining the focus of the evaluation, including use of evaluation models; (2) designing the evaluation; (3) conducting the evaluation; (4) determining methods of analysis and interpretation of data collected; (5) reporting results of data collected; and (6) using evaluation results. Each aspect of the evaluation process is important, but all of them are meaningless if the results of evaluation are not used to guide future action in planning and carrying out educational interventions.

EVALUATION VERSUS ASSESSMENT

While assessment and evaluation are highly interrelated and are often used interchangeably as terms, they are not synonymous. The process of assessment is to gather, summarize, interpret, and use data to decide a direction for action. The process of evaluation is to gather, summarize, interpret, and use data to determine the extent to which an action was successful. The primary differences between the two terms are those of timing and purpose. For example, an education program begins with an assessment of learners' needs. From the perspective of systems theory, assessment data might be called the "input." While the program is being conducted, periodic evaluation lets the educator know whether the program and learners are proceeding as planned. After program completion, evaluation identifies whether and to what extent identified needs were met. Again, from a systems theory perspective, these evaluative data might be called "intermediate output" and "output," respectively.

An important note of caution: Because you may conduct an evaluation at the end of your program, do not assume that you should plan it at this point in time. Evaluation as an afterthought is, at best, a poor idea and, at worst, a dangerous one. Data may be impossible to collect, be incomplete, or even be misleading. Assessment and evaluation planning should ideally be concurrent activities. Where feasible, use the same data collection methods and instruments. This approach is especially appropriate for outcome and impact evaluations, as will be discussed later in this chapter. "If only. . ." is an all too frequent lament, which can be minimized by planning ahead.

DETERMINING THE FOCUS OF EVALUATION

In planning any evaluation, the first and most crucial step is to determine the focus of the evaluation. The focus then will guide evaluation design, conduct, data analysis, and reporting of results. The importance of a clear, specific, and realistic evaluation focus cannot be overemphasized. Usefulness and accuracy of the results of an evaluation depend heavily on how well the evaluation is initially focused.

Evaluation focus includes five basic components: audience, purpose, questions, scope, and resources (Ruzicki, 1987). To determine these components, ask the following questions:

1. For whom is the evaluation being conducted?
2. Why is the evaluation being conducted?
3. What questions will be asked in the evaluation?
4. What is the scope of the evaluation?
5. What resources are available to conduct the evaluation?

The audience comprises the persons or groups for whom the evaluation is being conducted (Ruzicki, 1987). These individuals or groups include the primary audience, or the individual or group who requested the evaluation, and the general audience, or all those who will use evaluation results or who might benefit from the evaluation. Thus the audience for an evaluation might include your patients, your peers, your supervisor, the nursing director, the staff development director, the chief executive officer of your institution, or a group of community leaders. When you report results of the evaluation, you will provide feedback to all members of the audience. In focusing the evaluation, however, first consider the primary audience. Giving priority to the individual or group who requested the evaluation will make focusing the evaluation easier, especially if results of an evaluation will be used by several groups representing diverse interests.

The purpose of the evaluation is the answer to the question, "Why is the evaluation being conducted?" The purpose of an evaluation might be to decide whether to continue a particular education program or to determine the effectiveness of the teaching process. If a particular individual or group has a primary interest in results of the evaluation, use input from that group to clarify the purpose.

An important note of caution: Why you are conducting an evaluation is not synonymous with who or what you are evaluating. For example, nursing literature on patient education commonly distinguishes among three types of evaluations: learner, teacher, and program. This distinction answers the question of who or what will be evaluated and is extremely useful for evaluation design and conduct. Why learner evaluation might be undertaken, for example, is answered by the reason or purpose for evaluating learner performance. Determining teaching or program effectiveness is another example of the purpose for undertaking evaluation.

An excellent rule of thumb in stating the purpose of an evaluation is: Keep it singular. In other words, state, "The purpose is. . . ," not "The purposes are. . . ." Keeping the purpose audience focused and singular will help avoid the all too frequent tendency to attempt too much in one evaluation.

Questions to be asked in the evaluation are directly related to the purpose for conducting the evaluation, are specific, and are measurable. Examples of questions are "To what extent are patients satisfied with the cardiac discharge teaching program?" and "How frequently do staff nurses use the diabetes teaching reference materials?" Asking the right questions is crucial if the evaluation is to fulfill the intended purpose. As will be discussed later in this chapter, delineation of evaluation questions is both the first step in selection of evaluation design and the basis for eventual data analysis.

The scope of an evaluation can be considered an answer to the question, "How much will be evaluated?" "How much" includes "How many aspects of education will be evaluated?", "How many individuals or representative groups will be evaluated?", and "What time period is to be evaluated?" For example, will the evaluation focus on one class or on an entire program; on the learning experience for one patient or for all patients being taught a particular skill? Evaluation could be limited to the teaching process during a particular patient education class or it could be expanded to encompass both the teaching process and related patient outcomes of learning. The scope of an evaluation is determined in part by the purpose for conducting the evaluation and in part by available resources. For example, an evaluation addressing learner satisfaction with faculty for all programs conducted by a staff development department in a given year is necessarily broad and long-term in scope and will require expertise in data collection and analysis. An evaluation to determine whether a patient understands each step in a learning session on how to self-administer insulin injections is narrow in scope and focused on a particular point in time and will require expertise in clinical practice and observation.

Resources needed to conduct an evaluation include time, expertise, personnel, materials, equipment, and facilities. A realistic appraisal of what resources are accessible and available relative to what resources are required is crucial in focusing any evaluation. Remember to include the time and expertise required to collate, analyze, and interpret data and to prepare the report of evaluation results.

Evaluation can be classified into different types, or categories, based on one or more of the five components described above. The most common types of evaluation identified include process, content, outcome, impact, and program evaluation. A number of evaluation models have been developed that help to clarify differences among these evaluation types as well as how they relate to one another (Abruzzese, 1978; Haggard, 1989; Koch, 2000; Puetz, 1992; Rankin & Stallings, 2001; Walker & Dewar, 2000).

EVALUATION MODELS

Abruzzese (1978) developed the Roberta Straessle Abruzzese (RSA) Evaluation Model for conceptualizing, or classifying, educational evaluation into different categories or levels. Although developed more than 20 years ago and derived from the perspective of staff development education, the RSA Model remains useful for conceptualizing types of evaluation from both staff development and patient education perspectives. A recent example of use of the RSA model is given by Dilorio, Price, and Becker (2001) in their discussion of the evaluation of the Neuroscience Nurse Internship Program at the National Institutes of Health Clinical Center.

The RSA Model pictorially places five basic types of evaluation in relation to one another based on purpose and related questions, scope, and resource components of evaluation focus (Figure 15–1). The five types of evaluation include process, content, outcome, impact, and program. Abruzzese describes the first four types as levels of evaluation leading from the simple (process evaluation) to the complex (impact evaluation). Total program evaluation encompasses and summarizes all four levels.

Process (Formative) Evaluation

The purpose of process or formative evaluation is to make adjustments in an educational activity as soon as they are needed, whether those adjustments be in personnel, materials, facilities, learning objectives, or even one's own attitude. Adjustments may need to be made after one class or session before the next is taught or even in the middle of a single learning experience. Consider, for example, evaluation of the process of teaching a newly diagnosed juvenile insulin-dependent diabetic and her parents how to administer insulin. Would you facilitate learning better by first injecting yourself with normal saline so they can see you maintain a calm expression? If you had planned to have the parent give the first injection, but the child seems less fearful, might you consider revising your teaching plan to let the child first perform self-injection?


[FIGURE 15–1 RSA Evaluation Model. The model arranges process, content, outcome, and impact evaluation within total program evaluation; frequency of use is highest for process evaluation and lowest for impact evaluation, while time and cost run in the opposite direction. Reprinted by permission of Roberta S. Abruzzese.]

Process evaluation is integral to the education process itself. It "forms" an educational activity because evaluation is an ongoing component of assessment, planning, and implementation. As part of the education process, this ongoing evaluation helps the nurse anticipate and prevent problems before they occur or identify problems as they arise.

Consistent with the purpose of process evaluation, the primary question is, "How can teaching be improved to facilitate learning?" The nurse's teaching effectiveness, the teaching process, and the learner's responses are monitored on an ongoing basis. Abruzzese (1978) describes process evaluation as a "happiness index." While teaching and learning are ongoing, learners are asked their opinions about faculty, learning/course objectives, content, teaching and learning methods, physical facilities, and administration of the learning experience. Specific questions could include:

• Am I giving the patient time to ask questions?
• Is the information I am giving in class consistent with information included in the handouts?
• Does the patient look bored? Is the room too warm?
• Should I include more opportunities for return demonstration?

The scope of process evaluation generally is limited in breadth and time period to a specific learning experience such as a class or workshop, yet is sufficiently detailed to include as many aspects of the specific learning experience as possible while they occur. Learner behavior, teacher behavior, learner–teacher interaction, learner response to teaching materials and methods, and characteristics of the environment are all aspects of the learning experience within the scope of process evaluation. All learners and all teachers participating in a learning experience should be included in process evaluation. If resources are limited and participants include a number of different groups, a representative sample of individuals from each group rather than everyone from each group may be included in the evaluation.

Resources usually are less costly and more readily available for process evaluation than for other types such as impact or total program evaluation. Although process evaluation occurs more frequently—during and throughout every learning experience—than any other type, it occurs concurrently with teaching. The need for additional time, facilities, and dollars to conduct process evaluation is consequently decreased.

Content Evaluation

The purpose of content evaluation is to determine whether learners have acquired the knowledge or skills taught during the learning experience. Abruzzese (1978) describes content evaluation as taking place immediately after the learning experience to answer the guiding question, "To what degree did the learners learn what was imparted?" or "To what degree did learners achieve specified objectives?" Asking a patient to give a return demonstration or asking participants to complete a cognitive test at the completion of a one-day seminar are common examples of content evaluation.

Content evaluation is depicted in the RSA Model as the level "in between" process and outcome evaluation levels. In other words, content evaluation can be considered as focusing on how the teaching–learning process affected immediate, short-term outcomes. To answer the question, "Were specified objectives met as a result of teaching?", requires that the evaluation be designed differently from an evaluation to answer the question, "Did learners achieve specified objectives?" Evaluation designs will be discussed in some detail later in this chapter. An important point to be made here, however, is that evaluation questions must be carefully considered and clearly stated because they dictate the basic framework for design and conduct.

The scope of content evaluation is limited to a specific learning experience and to specifically stated objectives for that experience. Content evaluation occurs at a circumscribed point in time, immediately after completion of teaching, but encompasses all teaching–learning activities included in that specific learning experience. Data are obtained from all learners targeted in a specific class or group. For example, if both parents and the juvenile diabetic are taught insulin administration, all three are asked to complete a return demonstration. Similarly, all nurses attending a workshop are asked to complete the cognitive post-test at the end of the workshop.

Resources used to teach content can also be used to carry out evaluation of how well that content was learned. For example, equipment included in teaching a patient how to change a dressing can be used by the patient to perform a return demonstration. In the same manner, a pretest used at the beginning of a continuing education seminar can be readministered as a post-test at seminar completion to measure change.

Outcome (Summative) Evaluation

The purpose of outcome evaluation is to determine the effects or outcomes of teaching efforts. Outcome evaluation is also referred to as summative evaluation because its intent is to "sum" what happened as a result of education. Guiding questions in outcome evaluation include the following:

• Was teaching appropriate?
• Did the individual(s) learn?
• Were behavioral objectives met?
• Did the patient who learned a skill before discharge use that skill correctly once home?


Just as process evaluation occurs concurrently with the teaching–learning experience, outcome evaluation occurs after teaching has been completed or after a program has been carried out.

Outcome evaluation measures changes occurring as a result of teaching and learning. Abruzzese (1978) differentiates outcome evaluation from content evaluation by focusing outcome evaluation on measuring more long-term change that "persists after the learning experience" (p. 243). Changes can include institution of a new process, habitual use of a new technique or behavior, or integration of a new value or attitude. Which changes you will measure usually will be dictated by the objectives established as a result of the initial needs assessment.

The scope of outcome evaluation depends in part on the changes being measured, which, in turn, depend on the objectives established for the educational activity. As mentioned earlier, outcome evaluation focuses on a longer time period than does content evaluation. Whereas evaluating accuracy of a patient's return demonstration of a skill prior to discharge may be appropriate for content evaluation, outcome evaluation should include measuring a patient's competency with a skill in the home setting after discharge. Similarly, nurses' responses on a workshop post-test may be sufficient for content evaluation, but if the workshop objective states that nurses will be able to incorporate their knowledge into practice on the unit, outcome evaluation should include measuring nurses' knowledge or behavior some time after they have returned to the unit. Abruzzese (1978) suggests that outcome data be collected six months after baseline data to determine whether a change has really taken place.

Resources required for outcome evaluation are more costly and sophisticated than those for process or content evaluation. Compared to the resources required for the first two types of evaluation in the RSA Model, outcome evaluation requires greater expertise to develop measurement and data collection strategies, more time to conduct the evaluation, knowledge of baseline data establishment, and the ability to collect reliable and valid comparative data after the learning experience. Postage to mail surveys and time and personnel to carry out observation of nurses on the clinical unit or to complete patient/family telephone interviews are specific examples of resources that may be necessary to conduct an outcome evaluation.

Impact Evaluation

The purpose of impact evaluation is to determine the relative effects of education on the institution or the community. Put another way, the purpose of impact evaluation is to obtain information that will help decide whether continuing an educational activity is worth its cost. Examples of questions appropriate for impact evaluation include "What is the effect of the education program on subsequent nursing staff turnover?" and "What is the effect of the cardiac discharge teaching program on long-term frequency of rehospitalization among patients who have completed the program?"

The scope of impact evaluation is broader, more complex, and usually more long-term than that of process, content, or outcome evaluation. Whereas outcome evaluation would focus on whether specific teaching resulted in achievement of specified outcomes, for example, impact evaluation would go beyond that to measure the effect or worth of those outcomes. In other words, outcome evaluation would focus on a course objective, whereas impact evaluation would focus on a course goal. Consider, for example, a class on the use of body mechanics. The outcome objective is that staff members will demonstrate proper use of body mechanics in providing patient care. The goal is to decrease back injuries among the hospital's direct-care providers. This distinction between outcome and impact evaluation may seem subtle, but it is important to the appropriate design and conduct of an impact evaluation.

Resource requirements for conducting an impact evaluation are extensive and may be beyond the scope of an individual nurse educator. Literature on evaluation describes impact evaluation as being most like evaluation research (Abruzzese, 1978; Hamilton, 1993; Waddell, 1992). (The distinction between evaluation and evaluation research will be addressed later in this chapter.) "Good" science is rarely inexpensive and never quick; good impact evaluation shares the same characteristics. The resources needed to design and conduct an impact evaluation generally include reliable and valid instruments, trained data collectors, personnel with research and statistical expertise, equipment and materials necessary for data collection and analysis, and access to populations who may be culturally or geographically diverse. Because impact evaluation is so expensive and time-intensive, this type of evaluation should be targeted toward courses and programs where learning is critical to patient well-being or to safe, high-quality, cost-effective healthcare delivery (Puetz, 1992).

Conducting an impact evaluation may seem a monumental task, but do not let that stop you from undertaking the effort. Rather, plan ahead, proceed carefully, and obtain the support and assistance of colleagues. Keeping in mind the purpose for conducting an impact evaluation should be helpful in maintaining the level of commitment needed throughout the process. The current managed care environment requires justification for every health dollar spent. The value of patient and staff education may be intuitively evident, but the positive impact of education must be demonstrated if it is to be funded.

Program Evaluation

The purpose of program evaluation can be generically described as "designed and conducted to assist an audience to judge and improve the worth of some object" (Johnson & Olesinski, 1995, p. 53). The "object" in this case is an educational program. Using the framework of the RSA Model (Abruzzese, 1978), the purpose of total program evaluation is to determine the extent to which all activities for an entire department or program over a specified period of time meet or exceed goals originally established. Guiding questions appropriate for a total program evaluation from this perspective might be "To what extent did programs undertaken by members of the nursing staff development department during the year accomplish annual goals established by the department?" or "How well did patient education activities implemented throughout the year meet annual goals established for the institution's patient education program?"

The scope of program evaluation is broad, generally focusing on overall goals rather than on specific objectives. While the term program could be defined as an individual educational offering (Albanese and Gjerde, 1987), the resource requirements for conducting a program evaluation generally are too extensive to justify the effort on less than a broad scale. Abruzzese (1978) describes the scope of program evaluation as encompassing all aspects of educational activity (e.g., process, content, outcome, impact) with input from all the participants (e.g., learners, teachers, institutional representatives, community representatives). The time period over which data are collected may extend from several months to one or more years, depending on the time frame established for meeting the goals to be evaluated.

Resources required for program evaluation may include the sum of resources necessary to conduct process, content, outcome, and impact evaluations. A program evaluation may require significant expenditures for personnel if the evaluation is conducted by an individual or team external to the organization. Additional resources required may include time, materials, equipment, and personnel necessary for data entry, analysis, and report generation.

TABLE 15–1 Comparison of levels/types of evaluation across staff/patient education evaluation models

Abruzzese (1978)   Haggard (1989)                                         Rankin & Stallings (2001)
Process            Patient assimilation of information during teaching    Patient-education intervention
Content            Patient information retention after teaching           Patient/family performance following learning
Outcome            Patient use of information in day-to-day life          Patient/family performance at home
Impact             N/A                                                    Overall self-care and health maintenance
Program            N/A                                                    N/A

As stated earlier, the RSA Model remains useful as a general framework for categorizing basic types of evaluation: process, content, outcome, impact, and program. As depicted in the model, differences between these types are, in large part, a matter of degree. For example, process evaluation occurs most frequently; impact evaluation occurs least frequently. Content evaluation focuses on immediate effects of teaching; outcome evaluation concentrates on more long-term effects of teaching. Conduct of process evaluation requires fewer resources compared with impact evaluation, which requires extensive resources for implementation. The RSA Model further illustrates one way that process, content, outcome, and impact evaluations can be considered together as components of total program evaluation.

Clinical examples of how different types of evaluation relate to one another can be found in Haggard's (1989) description of three dimensions in evaluating teaching effectiveness for the patient and in Rankin and Stallings's (2001) four levels of evaluation of patient learning. The three dimensions described by Haggard and the four levels identified by Rankin and Stallings are consistent with the basic types of evaluation included in Abruzzese's RSA Model, as shown in Table 15–1. As can be seen from Table 15–1, models developed from an education theory base, such as the RSA Model, have much in common with models developed from a patient care theory base, exemplified by the other two models.

At least one important point about the difference between the RSA and other models needs to be mentioned, however. That difference is depicted in the learner evaluation model shown in Figure 15–2. This learner-focused model emphasizes the continuum of patient health/learner performance from needs assessment to patient health/learner performance once an adequate level of health status/performance has been regained or achieved. Both models have value in focusing and planning any type of evaluation but are especially important for impact and program evaluations.


[FIGURE 15–2 Five Levels of Learner Evaluation. Level 0: learner's dissatisfaction and readiness to learn (needs assessment). Level I: learner's participation and satisfaction during intervention (initial; process). Level II: learner's performance and satisfaction after intervention (initial; process). Level III: learner's performance and attitude in daily setting (long-term; outcome). Level IV: learner's maintained performance and attitude (ongoing; impact). SOURCE: Based on S. H. Rankin & K. D. Stallings (2001). Patient Education: Principles and Practices, 4th ed. Philadelphia: Lippincott.]

DESIGNING THE EVALUATION

The design of an evaluation is created within the framework, or boundaries, already established by focusing the evaluation. In other words, the design must be consistent with the purpose, questions, and scope and must be realistic given available resources. Evaluation design includes at least three interrelated components: structure, methods, and instruments.

Designing Structure

An important question to be answered in designing an evaluation is "How rigorous should the evaluation be?" The obvious answer is that all evaluations should have some level of rigor. In other words, all evaluations should be systematic and carefully and thoroughly planned or structured before they are conducted. How rigor is translated into design structure depends on the questions to be answered by the evaluation, the complexity of the scope of the evaluation, and the expected use of evaluation results. The more the questions address cause and effect, the more complex the scope. Likewise, the more critical and broad-reaching the expected use of results, the more the evaluation design should be structured from a research perspective.

Evaluation Versus Research

Evaluation and research are neither synonymous nor mutually exclusive activities. The extent to which they are either very different or indistinguishable from each other depends on the type of evaluation and type of research considered. Ruzicki (1987) makes the following distinction between the two:

While both research and evaluation involve objective, systematic collection of data, evaluation is conducted to make decisions in a given setting. Research is designed so that it can be generalized to other settings and replicated in other settings. Furthermore, research seeks new knowledge, examines cause and effect relationships, tests hypotheses, whereas evaluation determines mission achievement, examines means-end processes, and assesses attainment of objectives (p. 234).

This argument holds true when comparing "basic" research to process evaluation, for example. Basic research is defined as tightly controlled experimental studies of cause and effect conducted for the purpose of generating new knowledge. This new knowledge may or may not eventually influence practice. Process evaluation occurs concurrently with an educational intervention and is conducted in an uncontrolled or real-world setting for the purpose of making change as soon as the need for change is identified.

Differences between research and evaluation have become less distinct over the past several years with the advent of "applied" research, with the acceptance of qualitative measures and methods as legitimate research, and with the increasing importance given to results of outcome, impact, and program evaluations. The purpose for conducting applied research is to positively affect change in practice. There is little difference between this purpose and the purpose for conducting evaluation. Evaluation research, which is one type of applied research, can be defined as "a process of applying scientific procedures to accumulate reliable and valid data on the manner and extent to which specified activities produce outcomes or effects" (Hamilton, 1993, p. 148). Using this definition, Hamilton identifies program accreditation, program cost analysis, and outcome of treatment as appropriate for use of evaluation research designs and methods. Hamilton further describes impact evaluation as appropriate for use of quasi-experimental and experimental research design structures. A number of other authors support some use of research designs and data collection methods for outcome, impact, and program evaluations (Albanese & Gjerde, 1987; Berk & Rossi, 1990; Holzemer, 1992; Puetz, 1992; Waddell, 1992).

Of course, not all outcome, impact, and program evaluations should be conducted as research studies. Some important differences do exist between evaluation and evaluation research. One of the most significant relates to the influence of a primary audience. As discussed earlier in this chapter, the primary audience, or the individual or group requesting the evaluation, is a major component in focusing an evaluation. The evaluator must design and conduct the evaluation consistent with the purpose and related questions identified by the primary audience. Evaluation research, by contrast, does not have an identified primary audience. The researcher has autonomy to develop a protocol to answer a question posed by the researcher.

A second difference between evaluation and evaluation research is one of timing. The necessary timeline for usability of evaluation results may not be sufficient to prospectively develop a research proposal and obtain institutional review board approval prior to beginning data collection.

Given the discussion of evaluation versus evaluation research, how are decisions about level of rigor of an evaluation actually translated into an evaluation structure? The structure of an evaluation design depicts the number of groups to be included in the evaluation, the number of evaluations or periods of evaluation, and the time sequence between an educational intervention and evaluation of that intervention. A "group" can comprise one individual, as in the case of one-to-one nurse–patient teaching, or several individuals, as in the case of a nursing in-service program or workshop.

A process evaluation might be conducted during a single patient education activity where the educator observes patient behavior during instruction/demonstration and engages the patient in questions and answers upon completion of each new instruction. Because the purpose of process evaluation is to facilitate better learning while that learning is going on, education and evaluation occur concurrently in this case.

Evaluation also may be conducted after an educational intervention. This structure is probably the most commonly employed in conducting educational evaluations, although it is not necessarily the most appropriate. On completion of an educational activity, participants may be asked to fill out a satisfaction survey to provide data for a process evaluation, or they may be given a cognitive test to provide data for a content evaluation. If the purpose of conducting a content evaluation is to determine whether learners know the content just taught, a cognitive post-test or immediate return demonstration is adequate. If the purpose of conducting the evaluation is to determine whether after a class learners know specific content that they did not know before attending that class, then a structure that begins with collection of baseline data is more appropriate. Collection of baseline data, which can be compared with data collected at one or more points in time after learners have completed the educational activity, provides an opportunity to measure whether change has occurred. The ability to measure change in a particular skill or level of knowledge, for example, also requires that the same instruments be used for data collection at both points in time. Data collection will be discussed in more detail later in this chapter.
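As a minimal sketch of this pretest/posttest structure, assume hypothetical baseline and follow-up scores from the same knowledge test, paired by learner; change is then measured learner by learner rather than inferred from the post-test alone:

```python
# Hypothetical percentage scores from the same knowledge test given before
# and after a class to the same five learners, listed in the same order.
pretest = [55, 60, 48, 72, 65]
posttest = [80, 78, 70, 85, 88]

# Change score for each learner: follow-up score minus baseline score.
changes = [post - pre for pre, post in zip(pretest, posttest)]

mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Individual change scores: {changes}")
print(f"Mean change: {mean_change:.1f} points")
print(f"Learners who improved: {improved} of {len(changes)}")
```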

If the purpose of conducting an evaluation is to determine whether learners know content or can perform a skill as a result of an educational intervention, the most appropriate structure will include at least two groups: one receiving education and one not receiving education. Both groups are evaluated at the same time, even though only one group receives education. The group receiving the new education program is called the treatment or experimental group, and the group receiving standard care or the traditional education program is called the comparison or control group. The two groups may or may not be "equivalent." Equivalent groups are those with no known differences between them prior to some intervention, whereas nonequivalent groups may be different from one another in several ways. For example, patients on Nursing Unit A may receive an educational pamphlet to read prior to attending a class, while patients on Nursing Unit B attend the class without first reading the pamphlet. Because patients on the two units probably are different in many ways—age and diagnosis, for example—besides which educational intervention they received, they would be considered nonequivalent groups.

Use of the term nonequivalent is common to discussions of traditional research designs. Quasi-experimental designs, such as "nonequivalent control group" designs, should be among those considered in planning an outcome, impact, or program evaluation. Especially if the purpose of an evaluation is to demonstrate that an education program "caused" fewer patient returns to the clinic or fewer nurses to leave the institution, for example, the evaluation structure must have the rigor of evaluation research.

Another type of quasi-experimental design, called a time-series design, might include only one group of learners from which evaluative data are collected at several points in time both before and after receiving an educational intervention. If data collected before the education consistently demonstrate lack of learner ability to comply with a treatment regimen, whereas data collected after the education consistently demonstrate a significant improvement in patient compliance with that regimen, the evaluator could argue that the education was the reason for the improvement in this case.

In more recent years, pluralistic designs have appeared in the literature as approaches especially suited for evaluation of projects that have a community base, that include participants from diverse settings or perspectives, or that require both program processes and outcomes to be included in the evaluation (Billings, 2000; Gerrish, 2001; Hart, 1999). As the term implies, a pluralistic design uses a variety of sources and methods for obtaining evaluative data, often including both qualitative and quantitative evidence. Because these designs are comprehensive, resource-intensive, and long-term in nature, they are most appropriate for program evaluation.

This chapter does not provide an exhaustive description of evaluation designs. Rather, it is intended to increase awareness of the value and usefulness of these designs, especially when the results of an evaluation will be used to make major financial or programmatic decisions. The literature on evaluation of nursing staff education and patient education has become an increasingly rich source of examples of how to conduct rigorous evaluation. A literature search that includes many or all of the following journals is a must for planning evaluation of healthcare education in a cost-conscious and outcome-focused healthcare environment: Evaluation and the Health Professions, Journal of Continuing Education in Nursing, Adult Education Quarterly, Health Education Quarterly, Nurse Educator, Journal of Nursing Staff Development, Health Education Research, Nursing Management, Nursing Research, Research in Nursing and Health, and Journal of Advanced Nursing.

Evaluation Methods

Evaluation focus provides the basis for determining the evaluation design structure. The design structure, in turn, provides the basis for determining evaluation methods. Evaluation methods include those actions that are undertaken to carry out the evaluation according to the design structure. All evaluation methods deal in some way with data and data collection. Answers to the following questions will assist in selection of the most appropriate, feasible methods for conducting a particular evaluation in a particular setting and for a specified purpose:

• What types of data will be collected?
• From whom or what will data be collected?
• How, when, and where will data be collected?
• By whom will data be collected?

Types of Data to Collect

Evaluation of healthcare education includes collection of data about people, about the educational program or activity, and about the environment in which the educational activity takes place. Process, outcome, impact, and program evaluations require data about all three: the people, the program, and the environment. Content evaluations may be limited to data about the people and the program, although this limitation is not necessary. Types of data that are collected about people can be classified as physical, cognitive, affective, or psychomotor. Data that are collected about educational activities or programs generally include such program characteristics as cost, length, number of educators required, amount and type of materials required, teaching–learning methods used, and so on. Data that are collected about the environment in which a program or activity is conducted generally include such environmental characteristics as temperature, lighting, location, layout, space, and noise level.

Given the possibility that an unlimited and overwhelming amount of data could be collected, how do you decide what data should be collected? The most straightforward answer to this question is that data should be collected that will answer evaluation questions posed in focusing the evaluation. The likelihood that you will collect the right amount of the right type of data to answer evaluation questions can be significantly improved by (1) remembering that any data you collect, you are obligated to use and (2) using operational definitions. An operational definition of a word or phrase is a definition that is written in measurement terms.

Functional health status, for example, can be theoretically defined as an individual's ability to independently carry out activities of daily living without self-perceived undue difficulty or discomfort. Functional health status can be operationally defined as an individual's composite score on the SF-36 survey instrument (Ware et al., 1978; Stewart et al., 1988). The SF-36, which has undergone years of extensive reliability and validity testing with a wide variety of patient populations and in several languages, is generally considered the "gold standard" for measuring functional health status from the individual's perspective.

Similarly, patient compliance can be theoretically defined as the patient's regular and consistent adherence to a prescribed treatment regimen. For use in an outcome evaluation of a particular educational activity, patient compliance might be operationally defined as the patient's demonstration of unassisted and error-free completion of all steps in the sterile dressing change as observed in the patient's home on three separate occasions at two-week time intervals.

As you can see from these examples, an operational definition states exactly what data will be collected. In the first example, measurement of functional health status will require collection of patient survey data using a specific self-administered questionnaire. The second example provides even more information about data collection than does the first, by including where and how many times the patient's performance of the dressing change is to be observed, as well as stating that criteria for compliance include both unassisted and error-free performance on each occasion.
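Because it is written in measurement terms, an operational definition like the one for compliance can be applied as a simple decision rule. The brief sketch below assumes hypothetical home observation records and a hypothetical function name; it checks the unassisted, error-free criterion across the required number of visits (the two-week interval requirement is noted but not checked here):

```python
from datetime import date

# Hypothetical observation records of one patient's sterile dressing change,
# collected during home visits about two weeks apart. Each record notes
# whether the demonstration was unassisted and error free.
observations = [
    {"date": date(2018, 3, 1),  "unassisted": True, "error_free": True},
    {"date": date(2018, 3, 15), "unassisted": True, "error_free": True},
    {"date": date(2018, 3, 29), "unassisted": True, "error_free": True},
]

def meets_operational_definition(records, required_visits=3):
    """Apply the operational definition of compliance described above:
    unassisted, error-free performance on the required number of separate
    home observations. (Checking the interval between visits is omitted.)"""
    qualifying = [r for r in records if r["unassisted"] and r["error_free"]]
    return len(qualifying) >= required_visits

print(meets_operational_definition(observations))  # True for this example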

In addition to being categorized as describing people, programs, or the environment, data can be categorized as quantitative or qualitative. Quantitative data are numeric and generally are expressed in statistics such as mean, median, ratio, F statistic, t statistic, or chi-square. Numbers can be used to answer questions of how much, how many, how often, and so on in terms that are commonly understood by the audience for the evaluation. Mathematical analysis can demonstrate with some level of precision and reliability whether a learner's knowledge or skill has changed since completing an educational program, for example, or how much improvement in a learner's knowledge or skill is the result of an educational program. Qualitative data include feelings, behaviors, words, and phrases and generally are expressed in themes or categories. Qualitative data can be described in quantitative terms, such as percentages or counts, but this transformation eliminates the richness and insight that the use of qualitative data can offer. Qualitative data can be used as background to better interpret quantitative data, especially if the evaluation is intended to measure such value-laden or conceptual terms as satisfaction or quality.

Any evaluation may be strengthened by collecting both quantitative and qualitative data. For example, an evaluation to determine whether a stress reduction class resulted in decreased work stress for participants could include participants' qualitative expressions of how stressed they feel plus quantitative pulse and blood pressure readings. Because collection of both quantitative and qualitative data, while intuitively appealing, is resource-intensive, be certain that the focus of the evaluation justifies such an undertaking.
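As a minimal sketch of how the two kinds of data for the stress reduction example might be summarized side by side, assume hypothetical before-and-after pulse readings and hypothetical theme labels assigned by the evaluator to participants' open-ended comments:

```python
from statistics import mean
from collections import Counter

# Hypothetical resting pulse readings (beats per minute) for the same five
# participants before and after the stress reduction class, in the same order.
pulse_before = [88, 92, 84, 90, 95]
pulse_after = [80, 85, 82, 84, 88]

mean_drop = mean(b - a for b, a in zip(pulse_before, pulse_after))
print(f"Mean drop in resting pulse: {mean_drop:.1f} beats per minute")

# Hypothetical theme labels the evaluator assigned to participants' comments.
themes = ["time pressure", "coping skills", "sleep", "time pressure", "coping skills"]
for theme, count in Counter(themes).most_common():
    print(f"{theme}: mentioned by {count} participant(s)")
```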

From Whom or What to Collect Data

Data can be collected directly from the individuals whose behavior or knowledge is being evaluated, from surrogates or representatives of these individuals, or from documentation or databases already created. Whenever possible, plan to collect at least some data directly from individuals being evaluated. In the case of process evaluation, data should be collected from all learners and all educators participating in the educational activity. Content and outcome evaluations should include data from all learners.

Because impact and program evaluations have a broader scope than do the first three types of evaluation, collecting data from all individuals who participated in an educational program over an extended period of time may be impossible due to the inability to locate participants or a lack of sufficient resources to gather data from such a large number of people. When all participants cannot be counted or located, data may be collected from a subset, or sample, of participants who are considered to represent the entire group. If an evaluation is planned to collect data from a sample of participants, be careful to include participants who are representative of the entire group. A random selection of participants from whom data will be collected will minimize bias in the sample but cannot guarantee representativeness.

Consider the example of an impact evaluation conducted to determine whether a five-year program supporting home-based health education actually improved the general health status of individuals in the community served by the program. Suppose all members of the community could be counted. A random sample of community members could be generated by first listing and numbering all members' names, then drawing numbers using a random numbers table until a 10% sample is obtained. Such a method for selecting the sample of community members would eliminate intentional selection of those individuals who were the most active program participants and who might therefore have a better health status than does the community as a whole. At the same time, the 10% random sample could unintentionally include only those individuals who did not participate in the health education program. Data collected from this sample of nonparticipants would be as misleading as data collected from the first sample. A more representative sample for this evaluation should include both participants and nonparticipants, ideally in the same proportions in the sample as in the community.
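A proportionate sample of this kind can be drawn by sampling participants and nonparticipants separately. In the sketch below, the roster sizes and placeholder names are hypothetical, and Python's random.sample stands in for the random numbers table described above:

```python
import random

# Hypothetical community roster split into program participants and
# nonparticipants; names are placeholders for illustration only.
participants = [f"participant_{i}" for i in range(1, 301)]        # 300 people
nonparticipants = [f"nonparticipant_{i}" for i in range(1, 701)]  # 700 people

def ten_percent_sample(group):
    """Draw a simple random 10% sample from one stratum of the roster."""
    k = round(len(group) * 0.10)
    return random.sample(group, k)

# Sampling each stratum separately keeps participants and nonparticipants in
# the same proportions in the sample as in the community (30% and 70% here).
sample = ten_percent_sample(participants) + ten_percent_sample(nonparticipants)
print(len(sample), "community members selected for data collection")
```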

Preexisting databases should never be used as the only source of evaluative data unless they were created for the purpose of that evaluation. Even though these data were collected for a different purpose, they may be helpful for providing additional information to the primary audience for the evaluation. Data already in existence generally are less expensive to obtain than are original data. The decision whether to use preexisting data depends on whether they were collected from people of interest in the current evaluation and whether they are consistent with operational definitions used in the current evaluation.

How, When, and Where to Collect Data

Methods for how data can be collected include the following:

• Observation
• Interview
• Questionnaire or written examination
• Record review
• Secondary analysis of existing databases

Which method is selected depends, first, on the type of data being collected and, second, on available resources. Whenever possible, data should be collected using more than one method. Using multiple methods will provide the evaluator, and consequently the primary audience, with more complete information about the program or performance being evaluated than could be accomplished using a single method.

Observations can be conducted by the evaluator in person or can be videotaped for viewing at some later time. In the combined role of educator-evaluator, the nurse educator who is conducting a process evaluation can directly observe a learner's physical, verbal, psychomotor, and affective behaviors so that they can be responded to in a timely manner. Use of videotape or a nonparticipant observer also can be beneficial for picking up the educator's own behaviors of which the educator is unaware, but which might be influencing the learner.

The timing of data collection, or when data collection takes place, has already been addressed both in discussion of different types of evaluation and in descriptions of evaluation design structures. Process evaluation, for example, generally occurs during and immediately after an educational activity. Content evaluation takes place immediately after completion of education. Outcome evaluation occurs some time after completion of education, after learners have returned to the setting where they are expected to use new knowledge or perform a new skill. Impact evaluation generally is conducted from weeks to years after the educational program being evaluated because the purpose of impact evaluation is to determine what change has occurred within the community or institution as a whole as a result of an educational program.

Timing of data collection for program evaluation is less obvious than for other types of evaluation, in part because a number of different descriptions of what constitutes a program evaluation can be found both in the literature and in practice. As discussed earlier, Abruzzese (1978) describes data collection for program evaluation as occurring over a prolonged period because program evaluation is itself a culmination of process, content, outcome, and impact evaluations already conducted.

Where an evaluation is conducted can have a major effect on evaluation results. Be careful not to make the decision about where to collect data on the basis of convenience for the data collector. An appropriate setting for conducting a content evaluation may be in the classroom or skills laboratory where learners have just completed class instruction or training. An outcome evaluation to determine whether training has improved the nurse's ability to perform a skill with patients on the nursing unit, however, requires that data collection—in this case, observation of the nurse's performance—be conducted on the nursing unit. Similarly, an outcome evaluation to determine whether discharge teaching in the hospital enabled the patient to provide self-care at home requires that data collection, or observation of the patient's performance, be conducted in the home. What if available resources are insufficient to allow for home visits by the evaluator? To answer this question, keep in mind that the focus of the evaluation is on performance by the patient, not performance by the evaluator. Training a family member, a visiting nurse, or even the patient to observe and record patient performance at home is preferable to bringing the patient to a place of convenience for the evaluator.

Who Collects Data
Evaluative data are most commonly collected by the educator who is conducting the class or activity being evaluated because that educator is already present and interacting with learners. Combining the role of evaluator with that of educator is one appropriate method for conducting a process evaluation because evaluative data are integral to the teaching–learning process. Inviting another educator or a patient representative to observe a class can provide additional data from the perspective of someone who does not have to divide his or her attention between teaching and evaluating. This second, and perhaps less biased, input can strengthen legitimacy and usefulness of evaluation results.

Data can also be collected by the learners themselves, by other colleagues within the department or institution, or by someone from outside the institution. Puetz's (1992) description of data collection using a participant evaluation team is an example of data collection that includes learners. The evaluation team is composed of a small number of randomly selected individuals who are scheduled to attend an educational program. Team members are introduced to other program participants at the beginning of the class, join other participants during the program, and collect data through self-report as well as through observation and interaction with others during breaks.

When selecting who will collect data, keep in mind that the individuals chosen to carry out this task become an extension of the evaluation instrument. If the data that are collected are to be reliable, unbiased, and accurate, the data collectors must be unbiased and sufficiently expert at the task. Use of unbiased, expert data collectors is especially important for collecting observation and interview data, because these data in part depend on the subjective interpretation of the data collector. Other data can also be affected by who collects those data. For example, if staff nurses are asked to complete a job satisfaction survey and their head nurse is asked to collect the surveys for return to the evaluator, what problems do you think might occur? Might some staff nurses be hesitant to provide negative scores on certain items, even though they hold a negative opinion? Likewise, physiological data can be altered, however unintentionally, by the data collector. Consider, for example, an outcome evaluation to determine whether a series of biofeedback classes given to young executives can reduce stress as measured by pulse and blood pressure. How might some executives' pulse and blood pressure results be affected by a data collector who is extremely physically attractive or outwardly angry?

Use of trained data collectors from an external agency is, in most cases, not a financially viable option. The potential for a data collector to bias data can be minimized using a number of less expensive alternatives, however. First, limit the number of data collectors as much as possible, as this step will automatically decrease person-based variation. Ask individuals assisting you with data collection to wear similar neutral colors, to avoid cologne, and to speak in a moderate tone. Because "moderate tone," for example, may not be interpreted the same way by everyone, hold at least one practice session or "dry run" with all data collectors prior to actually conducting the evaluation. Whenever possible, ask for help with data collection from someone who has no vested interest in results and who will be perceived as unbiased and nonthreatening by those providing the data. Interview scripts to be read verbatim by the interviewer can ensure that all patients or staff being interviewed will be asked the same questions.

With the advent of continuous quality improvement as an expectation of daily activity in healthcare organizations, healthcare professionals are obligated to become more knowledgeable about principles of measurement and ways to implement measurement techniques in their work setting (Joint Commission on Accreditation of Healthcare Organizations, 1995). One benefit that this change in practice has for the nurse educator is that more people within the organization have some expertise in data collection and are motivated to help with data collection activities. Another potential benefit is that data collection activities are likely to already be a part of practice. Not only might the nurse educator have readily available individuals to assist with data collection, but the educator might also have readily available and usable instruments and data.

Use of a portfolio as a method for evaluation of an individual's learning over time has been documented in the literature for more than 25 years, primarily from an academic perspective (Appling et al., 2001; Ball et al., 2000; Cayne, 1995; Roberts et al., 2001). Although formal education of nursing students is not the focus of this text, other uses of portfolios are relevant to the role of the practice-based nurse as educator. Individual completion of a professional portfolio is a current requirement for recertification in some nursing specialties in the United States and for periodic registration in the United Kingdom (Ball et al., 2000; Serembus, 2000). In light of the increasing demands on today's professionals to maintain currency in their competence to practice, Serembus (2000) suggests that a practice portfolio may soon be a requirement for relicensure in the United States.

Given the importance of a nurse's portfolio to his or her career status, the nurse educator may find several colleagues asking for assistance in creating and maintaining a portfolio that will provide a strong base of evaluative evidence demonstrating that nurse's continuing professional development and consequent impact on practice. Perhaps the best suggestion the nurse educator might offer—and heed—is to clarify the focus of the portfolio as determined by the requiring organization (in this case, the "primary audience") and as stated in that organization's criteria for portfolio completion. Is the focus more on process evaluation, outcome evaluation, or both? Specifically, is the nurse expected to demonstrate "reflective practice"? If so, what does the organization accept as evidence of "reflective practice"?

One reason why focus clarification is so challenging is that there is no consistent description of how portfolios are to be used or what they are to contain. In its simplest form, a practice portfolio comprises a collection of information and materials about one's practice that have been gathered over time. The issue of whether this collection is intended to demonstrate previous learning or whether the process of collecting is itself a learning experience continues to foster debate (Cayne, 1995; Roberts et al., 2001). Central to this issue is the notion of reflective practice. First coined by Schön (1987), the term reflective practice still does not have a commonly agreed-upon definition (Cotton, 2001; Hannigan, 2001; Teekman, 2000). Schön describes two key components of reflective practice as reflection-in-action and reflection-on-action. Reflection-in-action occurs when the nurse introspectively considers a practice activity while performing it so that change for improvement can be made at that moment. Reflection-on-action occurs when the nurse introspectively analyzes a practice activity after its completion so as to gain insights for the future (Cotton, 2001). From an evaluation perspective, these components are similar to formative and summative evaluation, indicating that reflective practice has more than one focus.

Given the complexity of attempting to address multiple foci with the use of a single evaluation method—the practice portfolio—it is not surprising that use of portfolios has inspired such controversy. To the extent that a portfolio is required to include physical documentation as evidence of reflective practice, an introspective activity, the argument by some that portfolios are not adequately reliable or valid for use in evaluation of learning (Ball et al., 2000; Cayne, 1995) is also not surprising.

Evaluation Instruments
This chapter is intended to present key points to consider in selection, modification, or construction of evaluation instruments. Whenever possible, an evaluation should be conducted using existing instruments, because instrument development requires considerable expertise, time, and expenditure of resources. Construction of an original evaluation instrument, whether it is in the form of a questionnaire or a type of equipment, also requires rigorous testing for reliability and validity. Timely provision of evaluative information for making decisions rarely allows the luxury of the several months to several years needed to develop a reliable, valid instrument.

The initial step in instrument selection is to conduct a literature search for evaluations similar to the evaluation being planned. A helpful place to begin is with the same journals listed earlier in this chapter. Instruments that have been used in more than one study should be given preference over an instrument developed for a single use, because instruments used multiple times generally have been more thoroughly tested for reliability and validity. Once a number of potential instruments have been identified, each instrument must be carefully critiqued to determine whether it is, in fact, appropriate for the evaluation planned.

First, the instrument must measure the performance being evaluated exactly as that performance has been operationally defined for the evaluation. For example, if satisfaction with a continuing education program is operationally defined to include a score of 80% or higher on five specific program components (such as faculty responsiveness to questions, relevance of content, and so on), then the instrument selected to measure participant satisfaction with the program must include exactly those five components and must be able to be scored in percentages.
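
As a concrete illustration of how such an operational definition constrains scoring, the brief Python sketch below converts hypothetical Likert-scale ratings for five illustrative components into percentage scores and checks each against the 80% criterion. The component names, rating scale, and scoring rule are assumptions made for the example, not items from any published instrument.

```python
# Hypothetical example: each component is rated on a 1-5 Likert scale;
# a component "passes" when its average rating, converted to a percentage
# of the maximum possible score, is 80% or higher.
components = {
    "faculty responsiveness to questions": [5, 4, 5, 4, 5],
    "relevance of content": [4, 4, 3, 4, 4],
    "usefulness of handouts": [3, 3, 4, 3, 3],
    "pacing of sessions": [5, 5, 4, 4, 5],
    "opportunity for practice": [4, 5, 4, 5, 4],
}

MAX_RATING = 5
THRESHOLD = 80.0  # operational definition: 80% or higher

for name, ratings in components.items():
    percent = 100 * sum(ratings) / (MAX_RATING * len(ratings))
    status = "meets" if percent >= THRESHOLD else "does not meet"
    print(f"{name}: {percent:.0f}% ({status} the 80% criterion)")
```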

Second, an appropriate instrument should have documented evidence of its reliability and validity with individuals who are as closely matched as possible with the people from whom you will be collecting data. If you will be evaluating the ability of older adult patients to complete activities of daily living, for example, you would not want to use an instrument developed for evaluating the ability of young orthopedic patients to complete routine activities. Similarities in reading level and visual acuity also should exist if the instrument being evaluated is a questionnaire or scale that participants will complete themselves.

Existing instruments being considered for selection also must be affordable, must be feasible for use in the location planned for conducting data collection, and should require minimal training on the part of data collectors.

The evaluation instrument most likely to require modification from an existing tool or development of an entirely new instrument is a cognitive test. The primary reason for constructing such a test is that it must be consistent with content actually covered during the educational program or activity. The intent of a cognitive test is to be comprehensive and relevant and to fairly test the learner's knowledge of content covered. Use of a test blueprint is one of the most useful methods for ensuring comprehensiveness and relevance of test questions because the blueprint enables the evaluator to be certain that each area of course content is included in the test and that content areas emphasized during instruction are similarly emphasized during testing.
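
A test blueprint can be kept as nothing more elaborate than a two-column worksheet. The sketch below, written in Python purely for illustration, compares the share of instructional time devoted to each content area with the number of questions written for that area and flags any mismatch; the content areas, weights, and question counts are invented for the example rather than drawn from any actual course.

```python
# Hypothetical test blueprint: share of instructional time spent on each
# content area (weights) versus the number of test questions written for it.
instruction_weights = {       # proportion of class time; should sum to 1.0
    "pathophysiology": 0.20,
    "medication administration": 0.40,
    "self-monitoring": 0.30,
    "when to call the provider": 0.10,
}
questions_written = {
    "pathophysiology": 4,
    "medication administration": 8,
    "self-monitoring": 6,
    "when to call the provider": 2,
}

total_questions = sum(questions_written.values())
for area, weight in instruction_weights.items():
    expected = round(weight * total_questions)
    actual = questions_written.get(area, 0)
    flag = "" if actual == expected else "  <-- revise blueprint or test"
    print(f"{area}: expected ~{expected} questions, wrote {actual}{flag}")
```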

Barriers to Evaluation
If evaluation is so crucial to healthcare education, why is evaluation often an afterthought or even overlooked entirely? The reasons given for not conducting evaluations are many and varied but rarely, if ever, insurmountable. To overcome barriers to evaluation, they first must be identified and understood; then the evaluation must be designed and conducted in a way that will minimize or eliminate as many identified barriers as possible.

Barriers to conducting an evaluation can be classified into three broad categories:

• Lack of clarity
• Lack of ability
• Fear of punishment or loss of self-esteem

Lack of Clarity
Lack of clarity most often results from an unclear, unstated, or ill-defined evaluation focus. Undertaking any action is difficult if the performer does not know the purpose for taking that action. Undertaking an evaluation certainly is no different. Often evaluations are attempted to determine the quality of an educational program or activity, yet quality is not defined beyond some vague sense of "goodness." What is goodness and from whose perspective will it be determined? Who or what has to demonstrate evidence of goodness? What will happen if goodness is or is not evident? Inability to answer these or similar questions creates a significant barrier to conducting an evaluation. Not knowing the purpose of an evaluation or what will be done with evaluation results, for example, can become a barrier for even the most seasoned evaluator.

Barriers in this category have the greatest potential for successful resolution because the best solution for lack of clarity is to provide clarity. Recall that evaluation focus includes five components: audience, purpose, questions, scope, and resources. To overcome a potential lack of clarity, all five components must be identified and made available to those conducting the evaluation. A clearly stated purpose must explain why the evaluation is being conducted. Part of the answer to this question consists of a statement detailing what decisions will be made on the basis of evaluation results. Clear identification of who constitutes the primary audience is as important as a clear statement of purpose. It is from the perspective of the primary audience that terms such as quality should be defined and operationalized. While the results of the evaluation will provide the information on which decisions will be made, the primary audience will actually make those decisions.

Lack of Ability
Lack of ability to conduct an evaluation most often results from insufficient knowledge of how to conduct the evaluation or insufficient or inaccessible resources needed to conduct the evaluation. Clarification of evaluation purpose, questions, and scope is often the responsibility of the primary audience. Clarification of resources, however, is the responsibility of both the primary audience and the individuals conducting the evaluation. The primary audience members are accountable for providing the necessary resources—personnel, equipment, time, facilities, and so on—to conduct the evaluation they are requesting. Unless these individuals have some expertise in evaluation, they may not know what resources are necessary. The persons conducting the evaluation, therefore, must accept responsibility for knowing what resources are necessary and for providing that information to the primary audience. The person asked or expected to conduct the evaluation may be as uncertain about necessary resources as is the primary audience, however.

Lack of knowledge of what resources are necessary or lack of actual resources may form a barrier to conducting an evaluation that can be difficult, although not impossible, to overcome. Lack of knowledge can be resolved or minimized by enlisting the assistance of individuals with needed expertise through consultation or contract (if funds are available), through collaboration, or indirectly through literature review. Lack of other resources—time, money, equipment, facilities, and so on—should be documented, justified, and presented to those requesting the evaluation. Alternative methods for conducting the evaluation, including the option of making decisions in the absence of any evaluation, also should be documented and presented.

Fear of Punishment or Loss of Self-Esteem
Evaluation may be perceived as a judgment of personal worth. Individuals being evaluated may fear that anything less than a perfect performance will result in punishment or that their mistakes will be seen as evidence that they are somehow unworthy or incompetent as human beings. These fears form one of the greatest barriers to conducting an evaluation. Unfortunately, the fear of punishment or of being seen as unworthy may not easily be overcome, especially if the individual has had past negative experiences. Consider, for example, traditional quality assurance monitoring, where results were used to correct deficiencies through punitive measures. To give another example, how many times has an educator interpreted learner dissatisfaction with a teaching style as learner dislike for the educator as a person? How many times have pediatric patients' parents said, "If you don't do it right, the doctor won't let you go home . . . and we will be very disappointed in you"? Every one of us probably has experienced "test anxiety" at some point in our own education.

The first step in overcoming this barrier is to realize that the potential for its existence may be close to 100%. Individuals whose performance or knowledge is being evaluated are not likely to say overtly that evaluation represents a threat to them. Rather, they are far more likely to demonstrate self-protective behaviors or attitudes that can range from failure to attend a class that has a post-test, to providing socially desirable answers on a questionnaire, to responding with hostility to evaluation questions. An individual may intentionally choose to "fail" an evaluation as a method for controlling the uncertainty of success.


The second step in overcoming the barrier of fear or threat is to remember that "the person is more important than the performance or the product" (Narrow, 1979, p. 185). If the purpose of an evaluation is to facilitate better learning, as in process evaluation, focus on the process. Consider the example of teaching a newly diagnosed diabetic how to administer insulin. The educator has carefully and thoroughly explained each step in the process of insulin administration, observing the patient's intent expression and frequent head nods during the explanation. When the patient tries to demonstrate the steps, however, he is unable to begin. Why? One answer may be that the use of an auditory teaching style does not match the patient's visual learning style. Another possibility might be that too many distractions are present in the immediate environment, making concentration on learning all but impossible.

A third step in overcoming the fear of punishment or threatened loss of self-esteem is to point out achievements, if they exist, or to continue to encourage effort if learning has not been achieved. Give praise honestly, focusing on the task at hand.

Finally, and perhaps most importantly, use communication of information to prevent or minimize fear. Lack of clarity exists as a barrier for those who are the subjects of an evaluation as much as for those who will conduct the evaluation. If learners or educators know and understand the focus of an evaluation, they may be less fearful than if such information is left to their imaginations. Remember that failure to provide certain information may be unethical or even illegal. For example, any evaluative data about an individual that can be identified with that specific person should be collected only with the individual's informed consent. The ethical and legal importance of informed consent as a protection of human rights is a central concern of institutional review boards. Indeed, institutional review board approval is a prerequisite to initiation of virtually every experimental medical intervention conducted on patients or families.

CONDUCTING THE EVALUATION

To conduct an evaluation means to implement the evaluation design by using the instruments chosen or developed according to the methods selected. How smoothly an evaluation is implemented depends primarily on how carefully and thoroughly the evaluation was planned. Planning is not a complete guarantee of success, however. Three methods to minimize the effects of unexpected events that occur when carrying out an evaluation are to (1) conduct a pilot test first, (2) include "extra" time, and (3) keep a sense of humor.

Conducting a pilot test of the evaluation entails trying out the data collection methods, instruments, and plan for data analysis with a few individuals who are the same as or very similar to those who will be included in the full evaluation. A pilot test must be conducted if any newly developed instruments are planned for the evaluation, so as to assess reliability, validity, interpretability, and feasibility of those new instruments. Also, a pilot test should be carried out prior to implementing a full evaluation that will be expensive or time-consuming to conduct or on which major decisions will be based. Process evaluation generally is not amenable to pilot testing unless a new instrument will be used for data collection. Pilot testing should be considered prior to conducting outcome, impact, or program evaluations, however.

Including "extra" time during the conduct of an evaluation means leaving room for the unexpected delays that almost invariably occur during evaluation planning, data collection, and translation of evaluation results into reports that will be meaningful and usable by the primary audience. Because those delays not only will occur but also are likely to occur at inconvenient times during the evaluation, keeping a sense of humor is vitally important. An evaluator with a sense of humor is more likely to maintain a realistic perspective in reporting results that include negative findings, too. An audience with a vested interest in positive evaluation results may blame the evaluator if results are lower than expected.

ANALYZING AND INTERPRETING DATA COLLECTED

The purposes for conducting data analysis are (1) to organize data so that they can provide meaningful information and (2) to provide answers to evaluation questions. Data and information are not synonymous terms. That is, a mass of numbers or a mass of comments does not become information until it has been organized into coherent tables, graphs, or categories that are relevant to the purpose for conducting the evaluation.

Basic decisions about how data will be analyzed are dictated by the nature of the data and by the questions used to focus the evaluation. As described earlier, data can be quantitative or qualitative. Data also can be described as continuous or discrete. Age and level of anxiety are examples of continuous data; gender and diagnosis are examples of discrete data. Finally, data can be differentiated by level of measurement. All qualitative data are at the nominal level of measurement, meaning they are described in terms of categories such as "health focused" versus "illness focused." Quantitative data can be at the nominal, ordinal, interval, or ratio level of measurement. The level of measurement of the data determines what statistics can be used to analyze those data. A useful suggestion for deciding how data will be analyzed is to enlist the assistance of someone with experience in data analysis.
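
The rule that level of measurement constrains the choice of statistics can be captured in a simple lookup, as in the Python sketch below. The variable names and their assigned levels are illustrative assumptions, and the lists are limited to common descriptive statistics rather than an exhaustive catalog.

```python
# Hypothetical lookup illustrating how the level of measurement limits
# the summary statistics that are appropriate for a variable.
APPROPRIATE_STATISTICS = {
    "nominal": ["frequency count", "percentage", "mode"],
    "ordinal": ["frequency count", "percentage", "mode", "median", "range"],
    "interval": ["mode", "median", "range", "mean", "standard deviation"],
    "ratio": ["mode", "median", "range", "mean", "standard deviation"],
}

variables = {                     # variable name -> assumed level of measurement
    "diagnosis": "nominal",
    "satisfaction rating (1-5)": "ordinal",
    "age in years": "ratio",
}

for name, level in variables.items():
    print(f"{name} ({level}): {', '.join(APPROPRIATE_STATISTICS[level])}")
```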

Analysis of data should be consistent with the type of data collected. In other words, all data analysis must be rigorous, but not all data analysis need include use of inferential statistics. For example, qualitative data, such as verbal comments obtained during interviews and written comments obtained from open-ended questionnaires, are summarized or "themed" into categories of similar comments. Each category or theme is qualitatively described by directly quoting one or more comments that are typical of that category. These categories then may be quantitatively described using descriptive statistics such as total counts and percentages.
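
Once comments have been grouped into themes, the quantitative summary is simple arithmetic. The Python sketch below tallies hypothetical themed comments into counts and percentages; the themes and comments are invented for illustration, and the thematic grouping itself remains the evaluator's qualitative judgment.

```python
from collections import Counter

# Hypothetical comments already assigned to themes by the evaluator;
# the themes and comments are invented for illustration.
themed_comments = [
    ("wants more practice time", "I needed longer in the skills lab."),
    ("wants more practice time", "One return demonstration wasn't enough."),
    ("found content relevant", "The examples matched my unit's patients."),
    ("found content relevant", "Very applicable to my daily work."),
    ("found content relevant", "I could use this the next day."),
    ("room too crowded", "Hard to see the demonstration from the back."),
]

counts = Counter(theme for theme, _ in themed_comments)
total = sum(counts.values())
for theme, count in counts.most_common():
    print(f"{theme}: {count} comments ({100 * count / total:.0f}%)")
```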

Different qualitative methods for analyzing data are emerging as they gain legitimacy in a scientific environment once ruled by traditional experimental quantitative methods. One use of qualitative methods in evaluation is called fourth-generation evaluation (Hamilton, 1993), or naturalistic or constructivist evaluation. Perhaps most beneficial in conducting a process evaluation, fourth-generation evaluation focuses on teacher–learner interaction and observation of that interaction by the teachers and learners present. As the term constructivist might imply, evaluation is an integral component of the education process; that is, evaluation helps construct the education. Data collection, analysis, and use of results occur concurrently. Teacher and learner questions and responses are observed and recorded during an education program. These observations are summarized at the time of occurrence and throughout the program, and then are used to provide immediate feedback to participants in the educational activity.

The first step in analysis of quantitative data consists of organization and summarization using statistics such as frequencies and percentages that describe the sample or population from which the data were collected. A description of a population of learners, for example, might include such information as response rate and frequency of learner demographic characteristics. Table 15–2 presents an example of how such information might be displayed.

TABLE 15–2 Demographic comparison of survey respondents to total course participants using group averages

Learner Demographics                      Survey Respondents (n = 50)    All Course Participants (N = 55)
Age                                       27.5 years                     25.5 years
Length of time employed                   3.5 years                      7.5 years
Years of post-high school education       2.0 years                      2.0 years
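
For an evaluator keeping demographic data in a spreadsheet export, the group averages in a comparison like Table 15–2 reduce to a few lines of code. The Python sketch below uses a handful of invented records simply to show the calculation; the field names and values are assumptions, not data from the table.

```python
# Hypothetical records for survey respondents and for all course participants;
# values are invented to mirror the kind of comparison shown in Table 15-2.
respondents = [
    {"age": 26, "years_employed": 2.0, "post_hs_education": 2.0},
    {"age": 29, "years_employed": 5.0, "post_hs_education": 2.0},
    # ... remaining respondent records would follow
]
all_participants = respondents + [
    {"age": 22, "years_employed": 12.0, "post_hs_education": 2.0},
    # ... remaining participant records would follow
]

def group_average(records, field):
    """Average of one demographic field across a group of records."""
    return sum(r[field] for r in records) / len(records)

for field in ("age", "years_employed", "post_hs_education"):
    print(f"{field}: respondents {group_average(respondents, field):.1f}, "
          f"all participants {group_average(all_participants, field):.1f}")
```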

The next step in analysis of quantitative data is to select the statistical procedures appropriate for the type of data collected that will answer questions posed in planning the evaluation. Again, a good suggestion is to enlist the assistance of an expert.

REPORTING EVALUATION RESULTS

Results of an evaluation must be reported if the evaluation is to be of any use. Such a statement seems obvious, but how many times have you heard that an evaluation was being conducted but never heard anything more about it? How many times have you participated in an evaluation but never seen the final report? How many times have you conducted an evaluation yourself but not provided anyone with a report on findings? Almost all of us, if we are honest, would have to answer even the last question with a number greater than zero.

Reasons for not reporting evaluation results are diverse and numerous. Ignorance of who should receive the results, belief that the results are not important or will not be used, inability to translate results into language useful for producing the report, and fear that results will be misused are four major reasons why evaluative data often never get from the spreadsheet to the customer.

Following a few guidelines when planning an evaluation will significantly increase the likelihood that results of the evaluation will be reported to the appropriate individuals or groups, in a timely manner, and in usable form:

1. Be audience focused.
2. Stick to the evaluation purpose.
3. Stick to the data.

Be Audience Focused
The purpose for conducting an evaluation is to provide information for decision making by the primary audience. The report of evaluation results must, therefore, be consistent with that purpose. One rule of thumb to use: Always begin an evaluation report with an executive summary that is no longer than one page. No matter who the audience members are, their time is important to them. A second important guideline is to present evaluation results in a format and language that the audience can use and understand without additional interpretation. This statement does not mean that technical information should be excluded from a report to a lay audience; rather, it means that such information should be written using nontechnical terms. Graphs and charts generally are easier to understand than are tables of numbers, for example. If a secondary audience of technical experts also will receive a report of evaluation results, include an appendix containing the more detailed or technically specific information in which they might be interested. Third, make every effort to present results in person as well as in writing. A direct presentation provides an opportunity for the evaluator to answer questions and to assess whether the report meets the needs of the audience. Finally, include specific recommendations or suggestions for how evaluation results might be used.

Stick to the Evaluation Purpose
Keep the main body of an evaluation report focused on information that fulfills the purpose for conducting the evaluation. Provide answers to the questions asked. Include the main aspects of how the evaluation was conducted, but avoid a diary-like chronology of the activities of the evaluators.

Stick to the Data
Maintain consistency with actual data when reporting and interpreting findings. Keep in mind that a question not asked cannot be answered and that data not collected cannot be interpreted. If you did not measure or observe a teacher's performance, for example, do not draw conclusions about the adequacy of that performance. Similarly, if the only measures of patient performance were those conducted in the hospital, do not interpret successful inpatient performance as successful performance by the patient at home or at work. These examples may seem obvious, but "conceptual leaps" from the data collected to the conclusions drawn from those data are an all-too-common occurrence. One suggestion that decreases the opportunity to overinterpret data is to include evaluation results and interpretation of those results in separate sections of the report.

A discussion of any limitations of the evaluation is an important part of the evaluation report. For example, if several patients were unable to complete a questionnaire because they could not understand it or because they were too fatigued, say so. Knowing that evaluation results do not include data from patients below a certain educational level or physical status will help the audience realize that they cannot make decisions about those patients based on the evaluation. Discussion of limitations also will provide useful information for what not to do the next time a similar evaluation is conducted.

SUMMARY
The process of evaluation in healthcare education involves gathering, summarizing, interpreting, and using data to determine the extent to which an educational activity is efficient, effective, and useful for those who participate in that activity as learners, teachers, or sponsors. Five types of evaluation were discussed in this chapter: process, content, outcome, impact, and program evaluations. Each of these types focuses on a specific purpose, scope, and questions to be asked of an educational activity or program to meet the needs of those who ask for the evaluation or who can benefit from its results. Each type of evaluation also requires some level of available resources for the evaluation to be conducted.

The number and variety of evaluation models, designs, methods, and instruments are experiencing exponential growth as the importance of evaluation becomes more evident in today's healthcare environment. A number of guidelines, rules of thumb, and suggestions have been included in this chapter's discussion of how a nurse educator might go about selecting the most appropriate model, design, methods, and instruments for a particular type of evaluation. Perhaps the most important point to remember was made at the beginning of this chapter: Each aspect of the evaluation process is important, but all of them are meaningless if the results of evaluation are not used to guide future action in planning and carrying out educational interventions.

REVIEW QUESTIONS

1. How is the term evaluation defined?
2. How does the process of evaluation differ from the process of assessment?
3. What is the first and most crucial step in planning any evaluation?
4. What are the five (5) basic components included in determining the focus of an evaluation?
5. What are the five (5) basic types (levels) of evaluation, in order from simple to complex, identified in Abruzzese's RSA Evaluation Model?
6. How does formative evaluation differ from summative evaluation, and what is another name for each of these two types of evaluation?
7. What is the purpose of each type (level) of evaluation as described by Abruzzese in her RSA Evaluation Model?
8. What data collection methods can be used in conducting an evaluation of educational interventions?
9. What are the three (3) major barriers to conducting an evaluation?
10. When and why should a pilot test be conducted prior to implementing a full evaluation?
11. What are the three (3) guidelines to follow in reporting the results of an evaluation?

REFERENCES

Abruzzese, R. S. (1978). Evaluation in nursing staff development. In Nursing Staff Development: Strategies for Success. St. Louis: Mosby–Year Book.

Albanese, M. A., & Gjerde, C. L. (1987). Evaluation. In H. VanHoozer et al. (Eds.), The Teaching Process: Theory and Practice in Nursing. Norwalk, CT: Appleton-Century-Crofts.

Appling, S. E., Naumann, P. L., & Berk, R. A. (2001). Using a faculty evaluation triad to achieve evidence-based teaching. Nursing and Health Care Perspectives, 22(5), 247–251.

Ball, E., Daly, W. M., & Carnwell, R. (2000). The use of portfolios in the assessment of learning and competence. Nursing Standard, 14(43), 35–37.

Berk, R. A., & Rossi, P. H. (1990). Thinking About Program Evaluation. Newbury Park, CA: Sage.

Billings, J. R. (2000). Community development: A critical review of approaches to evaluation. Journal of Advanced Nursing, 31(2), 472–480.

Cayne, J. (1995). Portfolios: A developmental influence? Journal of Advanced Nursing, 21(2), 395–405.

Cotton, A. H. (2001). Private thoughts in public spheres: Issues in reflection and reflective practices in nursing. Journal of Advanced Nursing, 36(4), 512–559.

Dilorio, C., Price, M. E., & Becker, J. K. (2001). Evaluation of the neuroscience nurse internship program: The first decade. Journal of Neuroscience Nursing, 33(1), 42–49.

Gerrish, K. (2001). A pluralistic evaluation to explore people's experiences of stroke services in the community. Health & Social Care in the Community, 7(4), 248–256.

Haggard, A. (1989). Evaluating patient education. In Handbook of Patient Education. Rockville, MD: Aspen.

Hamilton, G. A. (1993). An overview of evaluation research methods with implications for nursing staff development. Journal of Nursing Staff Development, 9(3), 148–154.

Hannigan, B. (2001). A discussion of the strengths and weaknesses of "reflection" in nursing practice and education. Journal of Clinical Nursing, 10(2), 278–283.

Hart, E. (1999). The use of pluralistic evaluation to explore people's experiences of stroke services in the community. Health & Social Care in the Community, 7(4), 248–256.

Holzemer, W. (1992). Evaluation methods in continuing education. Journal of Continuing Education in Nursing, 23(4), 174–181.

Johnson, J. H., & Olesinski, N. (1995). Program evaluation: Key to success. Journal of Nursing Administration, 25(1), 53–60.

Joint Commission on Accreditation of Healthcare Organizations. (1995). Accreditation Manual for Hospitals. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.

Koch, T. (2000). "Having a say": Negotiation in fourth-generation evaluation. Journal of Advanced Nursing, 31(1), 117–125.

Narrow, B. (1979). Patient Teaching in Nursing Practice: A Patient and Family-Centered Approach. New York: Wiley.

Puetz, B. E. (1992). Evaluation: Essential skill for the staff development specialist. In K. J. Kelly (Ed.), Nursing Staff Development: Current Competence, Future Focus. Philadelphia: Lippincott.

Rankin, S. H., & Stallings, K. D. (2001). Patient Education: Principles and Practices (4th ed.). Philadelphia: Lippincott.

Roberts, P., Priest, H., & Bromage, C. (2001). Selecting and utilizing data sources to evaluate health care education. Nurse Researcher, 8(3), 15–29.

Ruzicki, D. A. (1987). Evaluating patient education—a vital part of the process. In C. E. Smith (Ed.), Patient Education: Nurses in Partnership with Other Health Professionals. Orlando, FL: Grune & Stratton.

Schön, D. A. (1987). Educating the Reflective Practitioner: Towards a New Design for Teaching and Learning in the Professions. London: Jossey-Bass.

Serembus, J. F. (2000). Teaching the process of developing a professional portfolio. Nurse Educator, 25(6), 282–287.

Stewart, A. L., Hays, R. D., & Ware, J. E. (1988). The MOS Short-Form General Health Survey: Reliability and validity in a patient population. Medical Care, 26(7), 724.

Teekman, B. (2000). Exploring reflective thinking in nursing practice. Journal of Advanced Nursing, 31(5), 1125–1135.

Tolson, D. (1999). Practice innovation: A methodological maze. Journal of Advanced Nursing, 30(2), 381–390.

Waddell, D. (1992). The effects of continuing education on nursing practice: A meta-analysis. Journal of Continuing Education in Nursing, 23(4), 164–168.

Walker, E., & Dewar, B. J. (2000). Moving on from interpretivism: An argument for constructivist evaluation. Journal of Advanced Nursing, 32(3), 713–720.

Ware, J. E., Jr., Davies-Avery, A., & Donald, C. A. (1978). Conceptualization and Measurement of Health for Adults in the Health Insurance Study: Vol. V, General Health Perceptions. Santa Monica, CA: RAND.