
Comparing the cost-effectiveness of simulation modalities: a case study of peripheral intravenous catheterization training


Wanrudee Isaranuwatchai · Ryan Brydges · Heather Carnahan · David Backstein · Adam Dubrowski

Received: 9 October 2012 / Accepted: 20 May 2013
© Springer Science+Business Media Dordrecht 2013

Adv in Health Sci Educ. DOI 10.1007/s10459-013-9464-6

W. Isaranuwatchai
Centre for Excellence in Economic Analysis Research, Keenan Research Centre, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada

R. Brydges
Department of Medicine, University of Toronto, Toronto, ON, Canada

R. Brydges · H. Carnahan · A. Dubrowski
Wilson Centre for Research in Education, University Health Network, Toronto, ON, Canada

H. Carnahan
Centre for Ambulatory Care Education, Women's College Hospital, Toronto, ON, Canada

H. Carnahan
Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, ON, Canada

D. Backstein
Department of Surgery, University of Toronto, Toronto, ON, Canada

D. Backstein
Division of Orthopaedic Surgery, The Musculoskeletal Centre of Excellence, Mount Sinai Hospital, Toronto, ON, Canada

A. Dubrowski
Department of Paediatrics, University of Toronto, Toronto, ON, Canada

A. Dubrowski (corresponding author)
SickKids Learning Institute, Hospital for Sick Children, 525 University Ave, Room 6021, Unit 600, Toronto, ON M5G 2L3, Canada
e-mail: [email protected]

Abstract  While the ultimate goal of simulation training is to enhance learning, cost-effectiveness is a critical factor. Research that compares simulation training in terms of educational- and cost-effectiveness will lead to better-informed curricular decisions. Using previously published data, we conducted a cost-effectiveness analysis of three simulation-based programs. Medical students (n = 15 per group) practiced in one of three 2-h intravenous catheterization skills training programs: low-fidelity (virtual reality), high-fidelity (mannequin), or progressive (consisting of virtual reality, task trainer, and mannequin simulator). One week later, all performed a transfer test on a hybrid simulation (standardized patient with a task trainer). We used a net benefit regression model to identify the most cost-effective training program via paired comparisons. We also created a cost-effectiveness acceptability curve to visually represent the probability that one program is more cost-effective than its comparator at various 'willingness-to-pay' values. We conducted separate analyses for implementation and total costs. The results showed that the progressive program had the highest total cost (p < 0.001), whereas the high-fidelity program had the highest implementation cost (p < 0.001). While the most cost-effective program depended on the decision makers' willingness-to-pay value, the progressive training program was generally the most educationally- and cost-effective. Our analyses suggest that a progressive program that strategically combines simulation modalities provides a cost-effective solution. More generally, we have introduced how a cost-effectiveness analysis may be applied to simulation training, a method that medical educators may use to inform investment decisions (e.g., purchasing cost-effective and educationally sound simulators).

Keywords  Cost-effective · Cost-effectiveness analysis · Knowledge translation · Medical education · Simulation · Technical skills · Net benefit regression · Cost-effectiveness acceptability curve

Introduction

Recently, researchers have been highlighting the need for studies that clarify the mechanisms and quality assurance of simulation-based training programs (Cook 2010; Cook et al. 2011; Teteris et al. 2012). Such studies will enable the broader simulation community to learn which educational techniques are effective in particular contexts and when using various simulation modalities. As this evidence base grows, the goal is to enable simulation educators, stakeholders, and decision makers to pinpoint which instructional design features maximize learning in their unique setting. While the ultimate goal of simulation-based training is to enhance learning, cost-effectiveness is also a critical factor (Ker and Hogg 2010). That is, a training program may be educationally sound and effective, but if the cost associated with implementation is prohibitively high, it may not be a viable option. Additionally, as in the clinical setting, decisions to invest in a new treatment should be supported by evidence showing that the new treatment is more effective and less costly (i.e., more cost-effective) than the current treatment (Teteris et al. 2012); a similar rigor should be used when making educational decisions. Research that compares simulation training in terms of educational- and cost-effectiveness will lead to better-informed curricular decisions.

Given the consistent focus on the high costs of simulation, research that analyzes the cost-effectiveness of simulation is noticeably sparse in the literature (Cohen et al. 2010; Cook et al. 2011; Zendejas et al. 2012). Previous studies have assessed the cost-effectiveness of training programs on a per student basis (Hoffman and Abrahamson 1975; Scott et al. 2007), compared the cost-savings of a 'pre-training' intervention versus a control condition (Stefanidis et al. 2010), compared costs of laparoscopic simulation training with training in the operating room (Scott et al. 2000), and most recently, systematically investigated the cost savings of a simulation program on hospital-wide infection rates (Cohen et al. 2010). Absent in the literature to this point are cost-effectiveness studies that compare different simulation-training programs (Zendejas et al. 2012). In particular, few studies evaluate cost-effectiveness using methods that concurrently associate learning outcomes with cost data.

A cost-effectiveness analysis is a type of economic evaluation that can be used to assist decision-makers in allocating scarce resources effectively (Drummond et al. 2005; Gold et al. 1996). Here, using previously published data as an example scenario (Brydges et al. 2010), we applied a cost-effectiveness analysis to study three simulation-based training programs. Specifically, our earlier research demonstrated that arranging multiple simulators in a training program that progressively increased the challenge for learners yielded superior learning outcomes compared to programs that used only the most complex (most realistic) simulator, or only the least complex simulator. Although theoretically and empirically supported as educationally effective, the progressive training program is likely the most costly. One question in the present study, then, is whether the increased cost of the progressive training program is offset by the observed gains in learning.

Although outcome-based evaluations help determine the success of a program, they provide limited information to decision makers about resource allocation (Alkin and Christie 2004). Process-based measures (e.g., costs and time) are typically more informative to these populations. Therefore, as a supplement to the original outcome-based analysis, we conducted a cost-effectiveness analysis of the three training programs. A crucial feature of the current analyses was that we examined cost data together with learning outcome data using methods from health economics. Specifically, we used estimates of the actual costs required to run the three training programs (i.e., the low, high, and progressive programs) as inputs in our analyses.

In this proof-of-concept study, we performed two types of cost-effectiveness analyses to represent two types of institutions. First, we analyzed the 'implementation cost', which included only consumables and operational costs associated with the studied training programs and reflects the scenario where an institution has already purchased the simulator equipment. Second, we analyzed the 'total cost', which included the implementation costs plus the funds necessary to purchase all required simulators and reflects the scenario where an institution must purchase the requisite equipment.

Methods

The original study methods were published previously (Brydges et al. 2010) and we present an abbreviated methods section below.

Sampling procedure

The data were obtained from a randomized controlled trial involving 45 medical students at the University of Toronto, Canada. To be included, participants must have previously attempted fewer than 10 peripheral intravenous (IV) catheter 'starts' (mean IV starts = 0.46). Following enrollment, participants were randomly assigned to one of three programs: (1) the progressive (PROGRESS) program; (2) the low-fidelity (LOW) program; and (3) the high-fidelity (HIGH) program.


Training programs

Three different simulators were utilized in the study: a low-fidelity simulator, which was a computer-based system (Virtual Intravenous Simulator, Laerdal Medical); a mid-fidelity simulator, which was an inanimate plastic arm (Nasco Health Care, Model LF01121U); and a high-fidelity simulator, which was the human patient simulator, SimMan (Laerdal Medical, Model 211-00050). Crucially, the process of inserting an IV catheter into the simulated vein was identical for the arm simulator and SimMan. Further details on the study apparatus can be found elsewhere (Brydges et al. 2010).

Participants in the progressive program progressed from the low- to mid- to high-fidelity simulators in a self-regulated manner. After switching, participants could not return to a previous simulator. Participants in the low- and high-fidelity programs practiced solely on their respective simulators until they chose to end practice.

During practice, participants watched an instructional video of an expert performing IV catheterization, after which they engaged in self-regulated, unsupervised learning in their assigned group. The participants' learning objectives were framed via the instructional video and a standardized script; both emphasized learning all skills (e.g., communication, technical, and decision-making) related to performing peripheral intravenous catheterization on real patients.

One week after practice, participants returned to complete a transfer test on a hybrid simulation that combined a standardized patient with the mid-fidelity simulator (Brydges et al. 2010; Kneebone et al. 2006).

Outcome measure

To evaluate learning effectiveness on the transfer test, two blinded expert raters used the Direct Observation of Procedural Skills (DOPS) tool (Kneebone et al. 2006). The DOPS score represents a comprehensive assessment of an authentic performance context (i.e., hybrid simulation) and relates directly to the broad learning objectives we set for each participant (Kneebone et al. 2006). For our cost-effectiveness analysis, the DOPS score represented the clinically relevant outcome measure (hereafter called the 'effect' variable).

Cost measures and covariates

Based on the actual resource utilization from our previous study of the three simulator training programs, we developed two definitions of costs in our cost-effectiveness analyses: implementation cost and total cost (Table 1). Implementation cost included the maintenance, staffing, and consumable costs associated with each simulator. Total cost included both the cost of the study apparatus (i.e., the simulator and necessary equipment) and the implementation cost. Table 1 shows a full report on the calculation of the implementation and total cost for each program. As an example, we used a wage of $50 per hour for technician support. We consulted with two simulation centre managers, who suggested that the setup, re-stocking/software management, and takedown time for each simulator was 18 min (virtual reality), 30 min (arm simulator), and 75 min (SimMan); we used those times to calculate the technician costs in Table 1.
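
For readers who wish to check the arithmetic, the sketch below reproduces the 'Technician(s)' row of Table 1 from the $50/h wage and the per-simulator times quoted above; it is purely illustrative and not part of the original analysis.

    # Illustrative check of the per-trial technician cost in Table 1:
    # a $50/h wage applied to the setup, re-stocking/software management,
    # and takedown times suggested by the simulation centre managers.
    TECH_WAGE_PER_HOUR = 50.0
    TECH_MINUTES = {"virtual reality": 18, "arm simulator": 30, "SimMan": 75}

    for simulator, minutes in TECH_MINUTES.items():
        cost = TECH_WAGE_PER_HOUR * minutes / 60.0
        print(f"{simulator}: ${cost:.2f} per trial")
    # virtual reality: $15.00, arm simulator: $25.00, SimMan: $62.50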

Other potential covariates that we collected included participants' sex (female, male), year of undergraduate medical education (1–4 years), total training time, and total number of practice IV starts (referred to as 'number of trials' below). These covariates were included in the analyses because they were considered to be significant correlates of the outcome (theoretical justification) or were significantly different between the training programs (statistical validation) (Moreno-Briseno et al. 2010).

Cost-effectiveness analyses

The cost-effectiveness analysis was conducted from the viewpoint of an institution interested in deciding between the three training programs based on the outcome and cost measures derived from our previous study. Note that we conducted all analyses in duplicate, once for the implementation costs and once for the total costs associated with the programs, due to the very different nature of the two types of cost.

We used two approaches for economic evaluations, the net benefit regression (NBR) model and the cost-effectiveness acceptability curve (CEAC), to compare the cost-effectiveness of the programs at a specific willingness-to-pay (WTP) value (Fenwick et al. 2004, 2006; Gold et al. 1996; Hoch et al. 2002; Hoch et al. 2006). In economics, the WTP is a 'theoretical' and not an actual monetary value that represents the maximum amount that a decision maker would be willing to pay in order to receive a good (Gold et al. 1996). As applied to simulation, the WTP was defined as the monetary value a decision maker (e.g., a simulation program director) would be willing to pay for a one-unit increase in the average effect score (i.e., DOPS score).

Table 1  Information on the total and implementation cost for each training program

Type of cost and items                             Low-fidelity   Mid-fidelity   High-fidelity

Study apparatus cost
  Virtual IV anatomical viewer                          550.0             0              0
  Virtual IV pre-hospital module software             2,625.0            0              0
  Desktop computer                                    2,100.0            0              0
  Haptics device and virtual IV                      11,000.0            0              0
  Male multi-venous IV training arm kit                     0        600.0              0
  212-01001 SimMan 3G complete with 12" monitor             0            0       67,500.0
  Sub-total                                         $16,275.0       $600.0      $67,500.0

Implementation cost (per trial)
  Technician(s)                                          15.0         25.0           62.5
  Medical doctor or clinician                               0            0          150.0
  Maintenance                                            10.0         10.0           10.0
  Consumables                                               0          5.0            5.0
  Sub-total                                             $25.0        $40.0         $227.5

Type of cost                                       Low-fidelity   Progressive   High-fidelity

Implementation cost (per trial)                         $25.0       $97.5^a         $227.5
Total cost                                          $16,300.0     $84,472.5      $67,727.5

IV = intravenous. All apparatus costs were taken from company websites, and implementation costs were estimated by a local simulation centre manager and a simulation program director. Total cost refers to the sum of the study apparatus cost and the implementation cost. Implementation cost refers to the personnel and consumable costs per IV catheterization attempt
^a This value of implementation cost is an average of the cost per trial for all three simulators (i.e., equal weighting of per-trial cost for the progressive training program). The exact implementation cost of the progressive program per trial is a weighted function of the number of trials each participant spent on each respective simulator (see Table 2 for exact values)

Net benefit regression (NBR)

The NBR model involves comparing pairs of training programs to identify the most cost-effective program at various WTP values. The following comparisons were examined: (1) the progressive training program versus the high-fidelity program; (2) the progressive program versus the low-fidelity program; and (3) the high-fidelity program versus the low-fidelity program. Note that when we use the term 'tested program' below, we refer to the program noted first in the comparison (e.g., the progressive program in the first comparison). The first step of the NBR model is to generate a net benefit (NB) value for each study participant using the following equation:

NB_i = WTP × E_i − C_i    (1)

where the subscript i refers to participant i, and E_i and C_i represent the observed effect and cost, respectively. The effect was the DOPS score, and the cost could be either the implementation or the total cost. We calculated an NB value for each participant in all study groups. Based on this equation, each participant has a different NB value at each WTP value.
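
As a concrete illustration, the following sketch computes Eq. (1) over a grid of WTP values; it is not the authors' SAS code, and the numbers are made-up placeholders rather than study data.

    import numpy as np

    # Eq. (1): each participant's net benefit NB_i = WTP * E_i - C_i,
    # evaluated over a grid of willingness-to-pay values.
    effect = np.array([39.8, 33.5, 24.8])    # hypothetical DOPS scores (E_i)
    cost = np.array([783.0, 1380.0, 207.0])  # hypothetical per-participant costs (C_i)

    wtp_grid = np.array([0, 100, 200, 300, 400, 500, 1000, 5000, 10000])
    # nb[j, i] is participant i's net benefit at the j-th WTP value
    nb = wtp_grid[:, None] * effect[None, :] - cost[None, :]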

To set those WTP values, we consulted with a simulation program director (DB), who suggested a range of WTP values of $0–$10,000 to represent 'typical' implementation costs for an educational program similar in scope and trainee population to the programs employed in this study. Using a similar process, we determined that the range of WTP for total cost was from $0 to $100,000. Importantly, the value of WTP is often unknown, and thus the WTP should be viewed as a form of sensitivity analysis for comparing the differences in the average cost and average effect (i.e., the cost-effectiveness) of different training programs. The WTP should not be interpreted as the actual investment cost or capital required for the training programs.

With an NB value computed for each participant, we then applied the NBR model. The NBR approach uses a general linear regression framework to facilitate the cost-effectiveness analysis. As each participant has a specific NB value for each WTP, we created a separate regression model for each WTP. With the NB value as the dependent variable, a simple net benefit regression model can be presented as:

NB_i = α + β₁(TX)_i + ε_i    (2)

where α is an intercept term, TX is an intervention dummy (1 = tested program, 0 = comparator program), and ε is an error term. From the regression model, the coefficient estimate (β₁) of the program variable (TX) represents the 'incremental net benefit' (INB). Previous research using this method shows that a negative INB indicates that the tested program is not cost-effective, whereas a positive INB indicates that it is cost-effective (Hoch et al. 2002). We examined the INB value for each regression model (at a specified WTP) to determine when the tested program was cost-effective (i.e., at which WTP was the INB value positive?).

We also adjusted for covariates using the NBR model. The main purpose of randomization in RCTs is to balance baseline characteristics between the study groups in order to estimate the true treatment effect; however, in any particular RCT, the characteristics may not be balanced (Senn 1994). Consequently, variables strongly associated with an outcome should be adjusted for (Pocock et al. 2002). We tested several regression models to find the model that best captured the cost-effectiveness of the three training programs. Interactions between the covariates were not considered because of the lack of evidence for any interactions among these variables. The final model included the program variable and the four covariates (i.e., sex, year of education, total training time, and total number of practice trials), and can be presented as:

NB_i = α + β₁(TX)_i + β₂(sex)_i + β₃(education)_i + β₄(training)_i + β₅(practice)_i + ε_i    (3)

where the coefficient estimate of the TX variable is the incremental net benefit, and therefore the cost-effectiveness of implementing the new program after adjusting for covariates.
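
A minimal sketch of Eq. (3) in Python (the original analyses used SAS) is shown below; the data frame and column names are hypothetical, and the coefficient on TX estimates the INB at a given WTP.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Covariate-adjusted net benefit regression (Eq. 3): one ordinary
    # least-squares fit per WTP value, with the TX coefficient as the INB.
    def incremental_net_benefit(df: pd.DataFrame, wtp: float):
        df = df.assign(nb=wtp * df["effect"] - df["cost"])
        model = smf.ols("nb ~ TX + sex + education + training_time + n_trials",
                        data=df).fit()
        return model.params["TX"], model.pvalues["TX"]  # INB and its p value

    # One regression per WTP value, as in the paper:
    # results = {w: incremental_net_benefit(df, w) for w in (0, 100, 1000, 10000)}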

Cost-effectiveness acceptability curve (CEAC)

Using an established method, we used the results from the NBR model (i.e., the coefficient estimates of the TX variable and their p values) to create a CEAC (Fenwick et al. 2004, 2006; Hoch et al. 2006). In summary, a CEAC indicates the probability that the tested program is cost-effective versus its comparator at given WTP values (Hoch et al. 2006). In other words, such a curve allows decision-makers to examine the chance (i.e., statistical probability) that their efforts to increase the learning outcomes of a training program by one unit of effect would still yield a cost-effective program. In a CEAC, the y-axis shows the probability that the intervention is cost-effective, and the x-axis shows the range of WTP values. For a more comprehensive description of how to construct a CEAC from an NBR model, readers are referred to Hoch et al. (2006).
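
To make the link between the NBR output and the curve concrete, the sketch below computes one CEAC point as P(INB > 0) under a normal approximation to the sampling distribution of the TX coefficient; this follows the general logic described by Hoch et al. (2006), though the exact computation in the paper may differ.

    from scipy import stats

    # One CEAC point: the probability that the tested program is
    # cost-effective at a given WTP, from the estimated INB and its
    # standard error (normal approximation).
    def ceac_point(inb: float, se: float) -> float:
        return 1.0 - stats.norm.cdf(0.0, loc=inb, scale=se)

    # Evaluating ceac_point(...) across the WTP grid traces out the CEAC:
    # y-axis = probability cost-effective, x-axis = WTP.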

Statistical analysis

Analyses were conducted using the SAS 9.3 statistical software package (SAS Institute, NC, USA). The reported probability value (p value) was two-sided, with a significance level of 0.05. We conducted analyses (i.e., Analysis of Variance tests for continuous variables and Pearson's Chi square tests for categorical variables) to examine the distribution and proportion of variables and to examine the unadjusted differences between the two programs in each paired comparison.
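
The sketch below shows SciPy stand-ins for these unadjusted comparisons (the paper used SAS procedures); the arrays and counts are hypothetical.

    import numpy as np
    from scipy import stats

    # Unadjusted two-group comparisons, as described above.
    a = np.array([39.0, 41.5, 38.2, 44.0])  # hypothetical DOPS scores, program A
    b = np.array([33.0, 30.9, 36.1, 31.5])  # hypothetical DOPS scores, program B
    f_stat, p_anova = stats.f_oneway(a, b)  # ANOVA for a continuous variable

    table = np.array([[9, 6], [10, 5]])     # hypothetical sex-by-program counts
    chi2, p_chi, dof, expected = stats.chi2_contingency(table)  # Pearson chi-square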

We repeated each paired comparison, once with the implementation cost and a second time with the total cost. The analysis with implementation cost best represents the scenario where an institution already has the simulator and necessary equipment in place, whereas the analysis with total cost best represents the scenario where an institution needs to purchase all of the requisite equipment.

Results

Table 2 summarizes the descriptive data and statistical analyses for the raw performance and cost data, as well as the covariates for each training program. Among the three programs, the high-fidelity program had the highest implementation cost (p < 0.001), whereas the progressive program had the highest total cost (p < 0.001).


Table 2  Descriptive and statistical analysis of the study variables

                             Overall (N = 45)        PROGRESS (N = 15)    HIGH (N = 15)       LOW (N = 15)      p value             p value            p value
                                                                                                                (PROGRESS vs HIGH)  (PROGRESS vs LOW)  (HIGH vs LOW)

Implementation cost ± SD^a   790.06 ± 563.21         783.33 ± 226.40      1,380.17 ± 450.65   206.67 ± 69.09    <0.001*             <0.001*            <0.001*
Total cost ± SD              56,840.06 ± 29,633.73   85,158.33 ± 226.40   68,880.17 ± 450.65  16,481.67 ± 69.09 <0.001*             <0.001*            <0.001*
DOPS score ± SD              32.76 ± 10.26           39.8 ± 8.89          33.5 ± 6.78         24.8 ± 9.17       0.107               <0.001*            0.019*
Female (%)                   28 (62.22 %)            9 (60 %)             9 (60 %)            10 (66.67 %)      1.000               0.705              0.705
Total trials ± SD            7.7 ± 2.64              8.93 ± 2.34          6.01 ± 1.98         8.27 ± 2.76       0.006*              0.726              0.040*
Total time (min) ± SD        87.91 ± 16.82           95.8 ± 15.81         86.13 ± 13.32       81.8 ± 18.76      0.239               0.056              0.744
Year of education                                                                                               0.025*              0.003*             0.439
  First/second               23 (51.11 %)            3 (20 %)             9 (60 %)            11 (73.33 %)
  Third/fourth               22 (48.89 %)            12 (80 %)            6 (40 %)            4 (26.67 %)

N = sample size; SD = standard deviation; IV = intravenous; PROGRESS = the progressive program; HIGH = the high-fidelity program; LOW = the low-fidelity program; DOPS = Direct Observation of Procedural Skills tool. Data are given as number (percentage) or mean ± SD
* Indicates statistically different at a p < 0.05 level in proportions or means between the two training programs (PROGRESS versus HIGH, PROGRESS versus LOW, and HIGH versus LOW). The last three columns report the results of Analysis of Variance or Chi Square tests comparing the two specified training programs
^a Participants completed more than one IV attempt in each session; therefore, the exact implementation cost used in our analyses was the implementation cost (per IV attempt) multiplied by the total number of IV attempts (i.e., trials) each participant completed on each simulator in the respective training program

Below we present two sets of findings from the cost-effectiveness analysis: one that considers the implementation cost and a second that considers the total cost. In each set, we report the cost-effectiveness analysis at various WTP values for each paired comparison: (1) the progressive and high-fidelity; (2) the progressive and low-fidelity; and (3) the low- and high-fidelity programs.

Analyses with implementation cost

From the net benefit regression models, Table 3 reports the INB at different WTP values for each paired comparison. We used the NBR data to construct a CEAC for each comparison (see Fig. 1).

Table 3  Results from net benefit regression models reporting incremental net benefit (with p value) for various willingness-to-pay values

Willingness-to-pay value      PROGRESS versus HIGH   PROGRESS versus LOW   HIGH versus LOW
                              INB (p value)          INB (p value)         INB (p value)

With cost being the implementation cost^a
  WTP = $0                    1,001 (<0.0001)        -614 (<0.0001)        -1,441 (<0.0001)
  WTP = $100                  1,736 (<0.01)          149 (0.74)            -866 (0.12)
  WTP = $200                  2,471 (0.01)           911 (0.31)            -291 (0.77)
  WTP = $300                  3,206 (0.01)           1,674 (0.21)          284 (0.85)
  WTP = $400                  3,941 (0.02)           2,437 (0.18)          859 (0.65)
  WTP = $500                  4,675 (0.02)           3,200 (0.16)          1,434 (0.55)
  WTP = $1,000                8,350 (0.04)           7,014 (0.12)          4,309 (0.36)
  WTP = $5,000                37,745 (0.06)          37,525 (0.10)         27,311 (0.24)
  WTP = $10,000               74,489 (0.06)          75,664 (0.10)         56,063 (0.23)

With cost being the total cost^b
  WTP = $0                    -15,874 (<0.0001)      -68,714 (<0.0001)     -52,666 (<0.0001)
  WTP = $1,000                -8,525 (0.03)          -61,086 (<0.0001)     -46,916 (<0.0001)
  WTP = $2,500                2,498 (0.79)           -49,645 (<0.0001)     -38,290 (<0.01)
  WTP = $5,000                20,870 (0.28)          -30,575 (0.18)        -23,914 (0.30)
  WTP = $9,500                53,939 (0.15)          3,750 (0.93)          1,963 (0.96)
  WTP = $10,000               57,614 (0.14)          7,564 (0.86)          4,838 (0.92)
  WTP = $50,000               351,564 (0.07)         312,678 (0.17)        234,854 (0.31)
  WTP = $100,000              719,002 (0.07)         694,070 (0.13)        522,374 (0.26)

PROGRESS = the progressive program; HIGH = the high-fidelity program; LOW = the low-fidelity program; INB = incremental net benefit; WTP = willingness-to-pay
^a These scenarios represent institutions with all simulators in place, but that may not be offering these simulators together in one training program
^b These scenarios represent institutions that do not have all study apparatus

Progressive program versus high-fidelity program

The final regression model showed that the progressive program was more cost-effective than the high-fidelity program at any theoretical WTP up to at least $10,000 (because the INB was always positive). Figure 1 shows that the probability that the progressive program was cost-effective was above 97 % at all WTP values; hence, it was always more cost-effective than the high-fidelity program.

Progressive program versus low-fidelity program

Figure 1 shows that, when compared to the low-fidelity program, the progressive program was more cost-effective when decision-makers were willing to pay $100 or more for one unit of effect (i.e., there was a 63 % chance that the progressive program would be cost-effective at WTP = $100). With lower WTP values, the chances were lower.

High-fidelity program versus low-fidelity program

Figure 1 shows that at a WTP of $100, there was only a 6 % chance that the high-fidelity program would be cost-effective compared to the low-fidelity program. In order for the high-fidelity program to be considered more cost-effective, the WTP value had to be increased to $300.

Fig. 1  Cost-effectiveness acceptability curve for each paired comparison using implementation cost. The statistical uncertainty about the cost-effectiveness of the program is reflected on the y-axis (i.e., the probability that the program was cost-effective), and the range of WTP along the x-axis shows a type of sensitivity analysis. PROG represents the progressive training program. HIGH and LOW represent the high-fidelity and low-fidelity training programs, respectively

Analyses with total cost

We used the NBR data (Table 3) to construct a CEAC for each comparison (see Fig. 2).

Progressive program versus high-fidelity program

Figure 2 shows that the WTP value had to be $2,500 to produce a 60 % chance that the progressive program would be cost-effective compared to the high-fidelity program.

Progressive program versus low-fidelity program

The regression results show that at a WTP value of $9,500, the progressive program would be the cost-effective option, with almost a 60 % chance that the progressive program would be cost-effective. With higher WTP values, the chances were higher.

High-fidelity program versus low-fidelity program

The regression results show that at a WTP value of $9,500, the high-fidelity program would be the cost-effective option compared to the low-fidelity program.

Fig. 2  Cost-effectiveness acceptability curve for each paired comparison using total cost. The statistical uncertainty about the cost-effectiveness of the program is reflected on the y-axis (i.e., the probability that the program was cost-effective), and the range of WTP along the x-axis shows a type of sensitivity analysis. PROG represents the progressive training program. HIGH and LOW represent the high-fidelity and low-fidelity training programs, respectively

Discussion

In this study, we aimed to illustrate how a cost-effectiveness analysis could be used to provide decision-makers with objective data for their investment decisions (e.g., which program was cost-effective). Such an analysis supplements our previously reported outcome-based comparison of the three different simulation-based training programs. Importantly, however, our findings should not be used as a basis for decision-makers to determine how much it would cost to invest in these programs, as the actual costs will be affected by local economics, program content, and context. Instead, our data provide suggestions about which program may be the most effective investment. Overall, this report represents a proof-of-concept application of a specific, economics-based methodology to the study of cost-effectiveness in simulation-based education. Such work addresses the call for greater study of costs associated with simulation (Zendejas et al. 2012) and represents a key, novel information source for decision makers and program directors when building simulation programs. Based on our analyses, Table 4 lists our recommendations for several possible scenarios at contemporary simulation training centres.

Table 4  Possible scenarios and recommendations based on the study findings

With cost being the implementation cost^a

Scenario: Institutions that own all three simulators, but use only the low-fidelity simulator for IV training
Recommendation: The progressive program had a higher probability of being cost-effective at all WTP values above $100 than the high-fidelity program. Hence, the recommendation would be to combine the high-fidelity and mid-fidelity simulators with the low-fidelity simulator in a progressive training program, rather than using any simulator in a stand-alone program. Conversely, if the decision-makers' WTP was less than $100, the low-fidelity program would be the cost-effective option

Scenario: Institutions that own all three simulators, but use only the high-fidelity simulator for IV training
Recommendation: An upgrade to the progressive program is recommended. When compared to the high-fidelity program, the probability that the progressive program is cost-effective is higher than 97 %

With cost being the total cost^b

Scenario: Institutions that own only the low-fidelity simulator
Recommendation: If an institution wished to build upon the low-fidelity program they have in place, it would be more beneficial (though not substantively) to add the components of the progressive program rather than only the high-fidelity program. When compared to the low-fidelity program, the progressive program had a higher probability of being cost-effective at the same WTP than the high-fidelity program. Conversely, if the decision-makers' WTP was less than $9,500, the current low-fidelity program would be the cost-effective option

Scenario: Institutions that own only the high-fidelity simulator
Recommendation: The progressive program was the cost-effective option if the decision-makers' WTP was greater than $2,500. Here, the investment decision was less straightforward and depended on the institution's WTP and whether they wanted to invest in the resources needed for the progressive program

IV = intravenous; DOPS = Direct Observation of Procedural Skills tool; WTP = willingness-to-pay for one unit of the effect score on the DOPS rating tool
^a These scenarios represent institutions with all simulators in place, but that may or may not be offering these simulators together in one training program
^b These scenarios represent institutions that do not have all study apparatus


This study has a number of strengths and limitations. With respect to study strengths, this is one of the first studies to examine the relative cost-effectiveness of different simulation-based training programs (Zendejas et al. 2012). In particular, our approach is novel in that it combines learning outcome data with cost data in a way that may be useful for the simulation research and education communities.

Our use of the net benefit regression model provided information for a range of theoretical WTP values to assist decision-makers, an approach that has a number of advantages. First, we could include and adjust for covariates, an element that was not possible in the conventional approach to cost-effectiveness analysis (Hoch et al. 2002). While we used the NBR model to control for some identified covariates, other researchers may wish to consider and include other covariates when using this powerful approach. Second, the final NBR model can easily be used to construct the CEAC, which provides decision-makers with a more comprehensive analysis. The information provided by the CEAC may assist decision-makers faced with the choice of whether or not to adopt a new program because it provides a measure of the uncertainty surrounding the choice. Specifically, if decision-makers are considering increasing the effectiveness of a training program (by one unit of effect), the CEAC provides information on the chance that the tested program would be cost-effective and, consequently, provides rich data to inform decisions on resource allocation.

Our analyses are limited in generalizability, as we focused on one institution, one clinical skill, and one cohort of students. Further, not all institutions will have access to or interest in the simulation resources used. Due to the lack of a control group (i.e., no simulation training), we could not compare these three simulation programs with a traditional training program. Thus, the analyses are likely most valuable to institutions with one of the three simulation programs in place. However, this is in keeping with the most recent reviews of simulation-based education, which advocate for more research that compares different simulation approaches rather than research that compares simulation to no simulation.

In conclusion, we have demonstrated that, for peripheral intravenous catheterization skills, the progressive training program was the most cost-effective option with cost defined as the implementation cost. With cost defined as the total cost, the most cost-effective training program depended on the decision makers' willingness-to-pay value. We used economic evaluation techniques that are novel in health professions education research to assess the relative cost-effectiveness of different simulation-based training programs. Our approach combines learning outcome data with cost data and includes several covariates, a robust technique that researchers and educators can use to meet local needs. In line with recommendations for more comparative evaluations of simulation training (Cook 2010; Teteris et al. 2012), this study is one of the first to provide evidence that may help decision makers to make cost-effective investments in simulation-related equipment.

Acknowledgments  Funding for this work was provided by the BMO Chair in Health Professions Education Research and the Natural Sciences and Engineering Research Council of Canada.

References

Alkin, M., & Christie, C. (2004). An evaluation theory tree. In Evaluation roots (pp. 12–65). Thousand Oaks, CA: Sage Publications.

Brydges, R., Carnahan, H., Rose, D., Rose, L., & Dubrowski, A. (2010). Coordinating progressive levels of simulation fidelity to maximize educational benefit. Academic Medicine, 85(5), 806–812.

Cohen, E., Feinglass, J., Barsuk, J., Barnard, C., O'Donnell, A., McGaghie, W., et al. (2010). Cost savings from reduced catheter-related bloodstream infection after simulation-based education for residents in a medical intensive care unit. Simulation in Healthcare, 5(2), 98–102.

Cook, D. (2010). One drop at a time: Research to advance the science of simulation. Simulation in Healthcare, 5(1), 1–4.

Cook, D., Hatala, R., Brydges, R., Zendejas, B., Szostek, J., Wang, A., et al. (2011). Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. The Journal of the American Medical Association, 306(9), 978–988.

Drummond, M., Sculpher, M., Torrance, G., O'Brien, B., & Stoddart, G. (2005). Methods for the economic evaluation of health care programmes (3rd ed.). New York: Oxford University Press.

Fenwick, E., O'Brien, B., & Briggs, A. (2004). Cost-effectiveness acceptability curves: Facts, fallacies, and frequently asked questions. Health Economics, 13, 405–415.

Fenwick, E., Marshall, D., Levy, A., & Nichol, G. (2006). Using and interpreting cost-effectiveness acceptability curves: An example using data from a trial of management strategies for atrial fibrillation. BMC Health Services Research, 6(52). doi:10.1186/1472-6963-6-52

Gold, M., Siegel, J., Russell, L., & Weinstein, M. (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.

Hoch, J., Briggs, A., & Willan, A. (2002). Something old, something new, something borrowed, something blue: A framework for the marriage of health econometrics and cost-effectiveness analysis. Health Economics, 11, 415–430.

Hoch, J., Rockx, M., & Krahn, A. (2006). Using the net benefit regression framework to construct cost-effectiveness acceptability curves: An example using data from a trial of external loop recorders versus Holter monitoring for ambulatory monitoring of "community acquired" syncope. BMC Health Services Research, 6(68). doi:10.1186/1472-6963-6-68

Hoffman, K., & Abrahamson, S. (1975). The 'cost-effectiveness' of Sim One. Journal of Medical Education, 50, 1127–1128.

Ker, J., & Hogg, G. (2010). Cost-effective simulation. In K. Walsh (Ed.), Cost effectiveness in medical education (pp. 61–71). Oxon: Radcliffe Publishing.

Kneebone, R., Nestel, D., Yadollahi, F., Brown, R., Nolan, C., Durack, J., et al. (2006). Assessing procedural skills in context: Exploring the feasibility of an integrated procedural performance instrument (IPPI). Medical Education, 40(11), 1105–1114.

Moreno-Briseno, P., Diaz, R., Campos-Romo, A., & Fernandez-Ruiz, J. (2010). Sex-related differences in motor learning and performance. Behavioral and Brain Functions, 6, 74. doi:10.1186/1744-9081-6-74

Pocock, S. J., Assmann, S. E., Enos, L. E., & Kasten, L. E. (2002). Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: Current practice and problems. Statistics in Medicine, 21, 2917–2930.

Scott, D., Bergen, P., Rege, R., Laycock, R., Tesfay, S., Valentine, R., et al. (2000). Laparoscopic training on bench models: Better and more cost effective than operating room experience? Journal of the American College of Surgeons, 191(3), 272–283.

Scott, D., Goova, M., & Tesfay, S. (2007). A cost-effective proficiency-based knot-tying and suturing curriculum for residency programs. Journal of Surgical Research, 141(1), 7–15.

Senn, S. (1994). Testing for baseline balance in clinical trials. Statistics in Medicine, 13, 1715–1726.

Stefanidis, D., Hope, W., Korndorffer, J., Markley, S., & Scott, D. (2010). Initial laparoscopic basic skills training shortens the learning curve of laparoscopic suturing and is cost-effective. Journal of the American College of Surgeons, 210(4), 436–440.

Teteris, E., Fraser, K., Wright, B., & McLaughlin, K. (2012). Does training learners on simulators benefit real patients? Advances in Health Sciences Education, 17(1), 137–144.

Zendejas, B., Wang, A. T., Brydges, R., Hamstra, S. J., & Cook, D. A. (2012). Cost: The missing outcome in simulation-based medical education research. A systematic review. Surgery, 153, 160–176.
