

DISASTER MEDICINE/ORIGINAL RESEARCH

Assessment of the Reliability of the Johns Hopkins/Agency for Healthcare Research and Quality Hospital Disaster Drill Evaluation Tool

Amy H. Kaji, MD, MPH
Roger J. Lewis, MD, PhD

From the Department of Emergency Medicine, Harbor–UCLA Medical Center, Torrance, CA (Kaji, Lewis); David Geffen School of Medicine at UCLA, Los Angeles, CA (Kaji, Lewis); Los Angeles Biomedical Research Institute, Torrance, CA (Kaji, Lewis); and The South Bay Disaster Resource Center at Harbor–UCLA Medical Center, Los Angeles, CA (Kaji).

Study objective: The Joint Commission requires hospitals to implement 2 disaster drills per year to test the response phase of their emergency management plans. Despite this requirement, there is no direct evidence that such drills improve disaster response. Furthermore, there is no generally accepted, validated tool to evaluate hospital performance during disaster drills. We characterize the internal and interrater reliability of a hospital disaster drill performance evaluation tool developed by the Johns Hopkins University Evidence-based Practice Center, under contract from the Agency for Healthcare Research and Quality (AHRQ).

Methods: We evaluated the reliability of the Johns Hopkins/AHRQ drill performance evaluation tool by applying it to multiple hospitals in Los Angeles County, CA, participating in the November 2005 California statewide disaster drill. Thirty-two fourth-year medical student observers were deployed to specific zones (incident command, triage, treatment, and decontamination) in participating hospitals. Each observer completed common tool items, as well as tool items specific to their hospital zone. Two hundred items from the tool were dichotomously coded as indicating better versus poorer preparedness. An unweighted “raw performance” score was calculated by summing these dichotomous indicators. To quantify internal reliability, we calculated the Kuder-Richardson interitem consistency coefficient, and to assess interrater reliability, we computed the κ coefficient for each of the 11 pairs of observers who were deployed within the same hospital and zone.

Results: Of 17 invited hospitals, 6 agreed to participate. The raw performance scores for the 94 common items ranged from 18 (19%) to 63 (67%) across hospitals and zones. The raw performance scores of zone-specific items ranged from 14 of 45 (31%) to 30 of 45 (67%) in the incident command zone, from 2 of 17 (12%) to 15 of 17 (88%) in the triage zone, from 19 of 26 (73%) to 22 of 26 (85%) in the treatment zone, and from 2 of 18 (11%) to 10 of 18 (56%) in the decontamination zone. The Kuder-Richardson internal reliability, by zone, ranged from 0.72 (95% confidence interval [CI] 0.58 to 0.87) in the treatment zone to 0.97 (95% CI 0.95 to 0.99) in the incident command zone. The interrater reliability ranged, across hospital zones, from 0.24 (95% CI 0.09 to 0.38) to 0.72 (95% CI 0.63 to 0.81) for the 11 pairs of observers.

Conclusion: We found a high degree of internal reliability in the AHRQ instrument’s items, suggesting the underlying construct of hospital preparedness is valid. Conversely, we found substantial variability in interrater reliability, suggesting that the instrument needs revision or substantial user training, as well as verification of interrater reliability in a particular setting before use. [Ann Emerg Med. 2008;52:204-210.]

0196-0644/$-see front matter
Copyright © 2008 by the American College of Emergency Physicians.
doi:10.1016/j.annemergmed.2007.07.025



SEE EDITORIAL, P. 230.

INTRODUCTION

Hospitals are viewed by the public as a haven of safety and therefore may be called on to provide care to large numbers of ill, injured, exposed, or concerned individuals during a disaster. One of the primary components of hospital disaster planning and preparedness has been the use of drills to train employees and to test the hospital’s disaster response capability.

The Joint Commission requires each hospital to implement 2 disaster drills per year to test the response phase of its emergency management plan (Environment of Care Disaster Drill Standard 4.20).1 Additionally, any accredited organization that provides emergency services or is designated as a disaster receiving station is required to conduct at least 1 drill per year that includes an influx of simulated patients, and the drill must involve enough victims to test the organization’s performance under stress.1

Several types of hospital disaster drills are commonly used. Some hospitals use moulaged patients to operationalize disaster drill scenarios; others perform tabletop drills, consisting of roundtable discussions without any simulated victims; and still others practice the disaster response through computer simulations.2-4 Drills can be difficult to organize and expensive because overtime pay may be necessary to compensate participants, and on-duty staff may have to defer the performance of their regular daily duties to participate in the disaster drill.

Editor’s Capsule Summary

What is already known on this topic
Disaster drills are hoped to prepare hospitals to handle actual events, but their effectiveness is not established and methods for evaluating the quality of such drills have not been validated.

What question this study addressed
The internal consistency and interrater reliability of the Johns Hopkins/Agency for Healthcare Research and Quality (AHRQ) disaster drill performance evaluation tool developed for the AHRQ.

What this study adds to our knowledge
The tool was internally consistent—when a particular criterion was met related criteria were often met—but interrater reliability of individual items was low, indicating that multiple observers were unable to come to the same conclusions about preparedness.

How this might change clinical practice
This study suggests that the tool needs further refinement or improved training of users if assessments performed with the tool are not to be observer dependent.



Although it is customary for hospitals to conduct an after-drill debriefing session during which participants discuss deficiencies warranting improvement, to our knowledge there is no commonly used, standardized, and validated means to evaluate hospital performance during disaster drills. To address this gap, the Johns Hopkins University Evidence-based Practice Center, with support from the Agency for Healthcare Research and Quality (AHRQ), has developed a performance evaluation tool for hospital disaster drills.5 Published in April 2004, the AHRQ evaluation tool is module based, in which separate evaluations take place in 4 areas or zones of the hospital (incident command, triage, treatment, and decontamination).

The AHRQ performance evaluation modules were developed by a multidisciplinary team of experts on public health events relevant to bioterrorism preparedness, using a systematic review of published reports on hospital disaster drills.6 Using both The Joint Commission regulations standard1 and the Hospital Emergency Incident Command System,7 the Johns Hopkins University Evidence-based Practice Center team identified action zones within the hospital that would be evaluated and defined the elements to be recorded during the drill.

The drill evaluation tool is composed of elements that may be categorized as being of 3 general types: evaluative (reflecting better versus poorer preparedness); illustrative (eg, draw how the hospital zone was configured), without implying better versus poorer preparedness; and observational (eg, was the incident command zone located within the hospital), again without implying better versus poorer preparedness. Some of the elements are common to all modules, regardless of zone (eg, was the boundary for this zone defined?), whereas others are zone specific (eg, the question, “Was the decontamination zone set up before arrival of first victim?” is included only in the decontamination module).

To our knowledge, the reliability and validity of the AHRQ performance evaluation tool have not been formally assessed. Establishing the instrument’s reliability is a prerequisite for establishing its validity.8 If a tool is not reliable, then even if it generally measures the desired characteristic on average, its imprecision and lack of reproducibility will render it useless for practical purposes. Conversely, validity with respect to a criterion standard is not a prerequisite for establishing a tool’s reliability, composed of both interrater and interitem consistency. Consider a multi-item questionnaire intended to assess a characteristic that is real, yet has a subjective component. For example, to assess an individual patient’s level of pain, questions might include, Has the pain affected the individual’s ability to perform daily activities? Has the pain affected the patient’s mood or outlook? Has the pain affected the patient’s ability to sleep? Has the pain affected the patient’s vital signs? Although no criterion standard for pain exists, the correlation within each subject of responses to these items provides a measure of the reliability of the tool to measure pain. If individuals demonstrate highly correlated responses on the multiple items that make up the questionnaire, for example, tending to score these all positively or negatively, then the questionnaire demonstrates high interitem consistency or internal reliability. As a distinct measure of reliability, if 2 raters score individuals’ pain levels similarly, the questionnaire would demonstrate high interrater reliability.

Thus, the objective of our study was to assess the internal reliability and the interrater reliability of the AHRQ hospital drill performance evaluation tool by using a cohort of hospitals in Los Angeles County, participating in a statewide disaster drill in November 2005.

MATERIALS AND METHODS

The state of California plans an annual disaster drill during November, and all hospitals and public health and emergency medical service providers are required to participate simultaneously.9 We evaluated the AHRQ hospital disaster drill performance evaluation tool during the November 2005 drill, which involved a scenario with multiple potentially contaminated casualties from an explosion at a public venue.

As part of the National Hospital Bioterrorism Preparedness Plan (NHBPP),10 the Los Angeles County Emergency Medical Services Agency has designated 11 hospitals as regional disaster resource centers. Each of the 11 regional disaster resource centers is responsible for enhancing the disaster preparedness capabilities of a specific geographic area within the county. Toward that end, each regional disaster resource center hospital oversees a number of other service area hospitals, located in the assigned geographic locale. Serving as one of the 11 regional disaster resource centers, Harbor–UCLA Medical Center oversees 8 service area hospitals, all of whom were invited to participate in the study. To obtain a larger and more diverse sample, 8 other hospitals were also asked to participate. Eleven hospitals refused to participate, most commonly because they did not feel prepared to permit external personnel to observe their drill performance, despite assurances that individual hospitals would not be identified publicly. Thus, the final cohort included 6 hospitals (Harbor–UCLA Medical Center, 4 of its service area hospitals, and 1 additional hospital, which is a service area hospital for another regional disaster resource center).

Before the drill, UCLA fourth-year medical students whose chosen future specialties were emergency medicine, critical care, or anesthesiology received a 4-hour training session, during which they were provided an introduction to disaster medicine, a review of the pathophysiology of blast injuries, and an instructional session about the use of the AHRQ evaluation tool. A copy of those instructional materials is available from the authors.

On the day of the drill, 6 or 7 medical student scorers were deployed to each of the participating hospitals, except for Harbor–UCLA, where there was only 1 observer. Harbor–UCLA had experienced a bomb scare the previous day, requiring the activation of its disaster plan, and hospital administrators elected to perform a tabletop drill, limiting the data that could be collected from that site. In addition to being sent to the incident command center, observers were sent to designated areas for triage, treatment, and decontamination. Duplicate observers, specifically instructed not to collaborate with one another when completing the evaluation tool, were sent to 2 or 3 of these stations at each hospital (except for Harbor–UCLA Medical Center). A priori, we decided that it would be more important to evaluate the instrument’s interrater reliability in the incident command, triage, and treatment zones than in the decontamination zone. The likelihood of a mass casualty incident involving an exposure to a chemical or nerve agent was thought to be low compared to natural hazards such as earthquakes in Los Angeles County. Thus, given the limited number of trained personnel, duplicate observers were not sent to decontamination zones.

The AHRQ tool was observed to have no questions that specifically addressed pediatric issues in disaster preparedness. Because there is a high likelihood that any disaster will involve pediatric victims, we modified the tool slightly by adding the following 5 evaluative questions: (1) Was there a mechanism in place to repatriate pediatric patients with their guardians? (2) Were there any pediatric victims planned in the disaster drill? (3) Did the hospital disaster plan address how to manage pediatric disaster victims? (4) Was pediatric-specific equipment available? and (5) Was there a method in place to estimate pediatric weights?

We allocated a point for each positive response to a series of 200 selected evaluative questions from the tool whose content reflects better preparedness (Appendix E1, available online at http://www.annemergmed.com), and we calculated a raw performance score for preparedness for each zone (incident command, treatment, triage, and decontamination). All modules include 109 common questions, which were completed by all observers, regardless of their assigned zone. Common elements included how the boundary of their hospital zone was defined, as well as the location of the zone; personnel characteristics, such as who took charge of the zone and whether the drill participants were easily identifiable; and a description of zone operations, including whether the hospital disaster plan was available and whether there was adequate space allocated for the zone. Communications was assessed in each zone by noting what types of devices were available and whether they were used during the drill. Information flow was evaluated through the documentation of problems created by delays in receiving information. Drill observers also assessed the adequacy of security: whether and what type of security was present and whether any security issues arose.
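Stated compactly, this scoring rule amounts to the following (the notation is ours, introduced only for clarity):

\[
S_z = \sum_{i=1}^{n_z} x_i,
\qquad
x_i =
\begin{cases}
1, & \text{if item } i \text{ was scored in the direction indicating better preparedness,} \\
0, & \text{otherwise,}
\end{cases}
\]

where \(n_z\) is the number of zone-specific evaluative items for zone \(z\) (45 for incident command, 17 for triage, 26 for treatment, and 18 for decontamination), with the 94 common evaluative items tallied as a separate score. For example, a triage observer endorsing 14 of the 17 zone-specific items yields \(S_z = 14\), reported as 14 (82%).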

In the incident command zone, observers assessed whether critical roles were fulfilled (eg, incident commander and safety and security officer), how the incident command post was configured, the approximate number of people in the zone, and whether the incident command received relevant surge capacity information (eg, availability of operating rooms, staffed floor beds). In the hospital triage area, observers assessed and recorded the proportion of victims who were screened for biological, chemical, or radiation exposure before entry into the triage area and whether the hospital had a method for expedited registration and medical record documentation. Treatment zone evaluators recorded reasons for delays in victim treatments and whether or not there were adequate medical supplies. Observations made by evaluators in the hospital decontamination zone included how nonambulatory victims were decontaminated, whether chain of custody was initiated, whether water runoff was contained, and whether unexposed staff or victims mixed with contaminated staff or victims.

Data Collection and Processing and Primary Data Analysis

Data were obtained from the disaster drill evaluation form from each of the observers and recorded on a closed response data collection form. All data were entered into a Microsoft Access database. Statistical analysis was performed using SAS version 8.1 (SAS Institute, Inc., Cary, NC) and Stata SE 8 (StataCorp, College Station, TX).

The Wilcoxon rank sum test was used to compare numeric variables, whereas the Fisher’s exact test was used to compare proportions. Although it is possible to assess interrater reliability with data elements that are evaluative or observational, it is possible to assess the interitem consistency only by using evaluative elements (because one response must signify better preparedness). The terms “internal reliability” and “interitem consistency” are equivalent for the purposes of this study. Specifically, there were 55 observational questions (23 from the Incident Command module, 6 from the Triage module, 12 from the Treatment module, and 14 from the Common questions) that could be used in determining interrater reliability but, because they did not reflect overall disaster preparedness, could not be used to assess interitem consistency. An example of a question that can be used to assess interrater reliability is, “Was the incident command center configured in 2 or more contiguous rooms?” In contrast, the question, “Was there a plan in place to maintain order in this zone if security issues about access in and out arose?” is an example of a question that can be used to assess both interitem consistency and interrater reliability for hospital preparedness.

The interitem consistency of the AHRQ tool was evaluated with the Kuder-Richardson formula, which is an extension of Cronbach’s coefficient α score for reliability testing.11,12 Cronbach’s α is an appropriate measurement of internal reliability for an instrument that consists of multiple items that each yield a numerical response (eg, Likert scale). In contrast, the Kuder-Richardson formula yields an appropriate measure of internal reliability when each item yields a binary response.13-15 Both Cronbach’s α and the Kuder-Richardson coefficient are functions of the number of test items and the average intercorrelation among their responses and range from 0 to 1, with higher scores indicating better internal reliability. Although there is some variability, most sources indicate that 0.7 or above is an acceptable reliability coefficient.13
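For reference, the 2 coefficients have the following standard forms (notation ours): for an instrument with \(k\) items, item variances \(\sigma_i^2\), total-score variance \(\sigma_X^2\), and, for binary items, endorsement proportions \(p_i\) with \(q_i = 1 - p_i\),

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right),
\qquad
\mathrm{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right).
\]

Because a binary item’s variance is \(p_i q_i\), the Kuder-Richardson formula (KR-20) is simply Cronbach’s α specialized to dichotomous responses.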

We calculated the Kuder-Richardson coefficient separately for zone-specific evaluative questions in each module of the evaluation tool, as well as for the common evaluative items. We used SAS code supplied by Iacobucci and Duhacek16 to calculate each Kuder-Richardson coefficient and associated confidence intervals (CIs). Although the point estimate of the consistency coefficient is a function of the number of items in the tool and the correlations among the items, it is not dependent upon the sample size (ie, number of observers). However, the sample size affects the width of the CIs.17,18
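As an illustration of this computation, here is a minimal sketch in Python (ours; the study itself used the SAS code of Iacobucci and Duhacek16). The function name and the randomly generated example matrix are hypothetical:

import numpy as np

def kr20(responses: np.ndarray) -> float:
    # Kuder-Richardson formula 20 for a binary observers-by-items matrix.
    # responses[r, i] is 1 if observer r scored item i in the direction
    # indicating better preparedness, and 0 otherwise.
    k = responses.shape[1]                     # number of items
    p = responses.mean(axis=0)                 # endorsement proportion per item
    sum_item_var = float((p * (1 - p)).sum())  # sum of item variances p*q
    total_var = float(responses.sum(axis=1).var())  # variance of raw scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Illustrative call on random data shaped like the 32 observers x 94 common items:
rng = np.random.default_rng(0)
print(kr20(rng.integers(0, 2, size=(32, 94))))

Consistent with the point above, the estimate depends on the number of items and their intercorrelations, not on how many observers there are; the observer count enters only through the precision (CI width) of the estimate.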

Interrater reliability of the tool was assessed by comparing the responses of the duplicate observers sent to the same zone in the same hospital. Although there were 32 total observers, there were only 11 pairs of duplicate observers, 4 of whom were in the incident command zone, 5 in the triage zone, and 2 in the treatment area. There were no pairs deployed to any decontamination zone, so we could not assess the interrater reliability of that tool module. To measure the interrater reliability between pairs of observers in the same hospital and zone, we computed the κ coefficient and percentage of agreement, using both zone-specific evaluative and observational items in each module, as well as common items. Finally, an overall percent agreement, as well as an overall κ for the instrument with a CI, was determined by using all data from paired observers.
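For a given observer pair, the κ coefficient has the standard form (notation ours):

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of items on which the 2 observers agreed and \(p_e\) is the proportion of agreement expected by chance, computed from each observer’s marginal response frequencies. As an illustration with hypothetical numbers on the scale of Table 4: if a pair agreed on 87% of items and chance agreement was 54%, then \(\kappa = (0.87 - 0.54)/(1 - 0.54) \approx 0.72\).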

The institutional review board at the Los Angeles Biomedical Research Institute at Harbor–UCLA Medical Center approved the project and deemed it exempt.

RESULTS

Six of the 17 invited hospitals agreed to participate. Table 1 lists characteristics of each of the invited and participating hospitals. Compared with participating hospitals, the nonparticipating hospitals were similar with respect to licensed bed capacity, public versus private status, trauma designation, availability of pediatric capability, affiliation with an academic emergency medicine residency program, and level of participation with their disaster resource center.

Table 2 lists the hospital scores for each zone by each observer, demonstrating wide variability both between hospitals and, at times, between observers within a single hospital zone. The scores ranged from 14 of 45 (31%) to 30 of 45 (67%) in the incident command zone, from 2 of 17 (12%) to 15 of 17 (88%) in the triage zone, from 19 of 26 (73%) to 22 of 26 (85%) in the treatment zone, and from 2 of 18 (11%) to 10 of 18 (56%) in the decontamination zone. For the 94 common evaluative items for which observers in all zones recorded responses, the scores for each of the 6 hospitals ranged from 18 of 94 (19%) to 63 of 94 (67%). There was also substantial variability in the common scores tabulated in different zones, even within single hospitals, though scores within specific zones within the same hospital tended to be closer. For example, at hospital F, observer scores ranged from 23 of 94 (24%) in incident command to 63 of 94 (67%) in the treatment zone.

Table 3 delineates the Kuder-Richardson internal reliability coefficients for the set of evaluative and common items and, separately, for each set of zone-specific evaluative items. The internal reliability coefficient ranged from 0.72 in the treatment zone to 0.97 in the incident command zone. Of the 32 total observers, 22 students were paired up with another individual in the same hospital and zone (Table 4). There were 4 pairs in the incident command zone, 2 in the treatment area, and 5 in the triage zone. Because responses to common items reflect the specific zone being evaluated, κ and percentage-of-agreement statistics were calculated with both common and zone-specific items in each module. The values of the κ statistic ranged from 0.24 (observer pair 8 in triage) to 0.72 (observer pair 10 in treatment), whereas the percentage-of-agreement statistic ranged from 70% (observer pair 8 in triage) to 87% (observer pair 10 in treatment).

LIMITATIONS

This study was limited by the small number of hospitals and the modest number of observer pairs. Because only 6 of 17 hospitals agreed to participate, there is a possibility the hospitals were not representative. It would seem likely that the participating institutions represent those with better overall preparedness, however, because hospitals refusing to participate generally stated they were inadequately prepared. Thus, the evaluation tool may not have been tested across its full range, which may have reduced the observed interrater reliability. Moreover, because of the design of our study, we are not able to distinguish whether the tool’s variable interrater reliability is due to limitations in the tool versus our choice of medical student observers.

Table 1. Participating and nonparticipating hospital characteristics.

Hospital Name | Public | Trauma Designation | Pediatric Medical Center | Licensed Bed No. | Affiliated With an Academic Emergency Medicine Program | Level of Disaster Resource Center Participation
Participating hospitals
Hospital A* | Yes | Level I | Yes | 553 | Yes | Leader
Hospital B | No | None | No | 579 | No | Service area
Hospital C | No | Level II | Yes | 541 | Yes | Service area
Hospital D | Yes | None | No | 537 | Yes | Service area
Hospital E | No | None | No | 251 | No | Service area
Hospital F | No | None | No | 377 | No | Service area
Nonparticipating hospitals
Hospital G | No | None | No | 337 | Yes | Service area
Hospital H | No | None | No | 111 | No | Service area
Hospital I | No | None | No | 166 | No | Service area
Hospital J | Yes | Level I | Yes | 1,395 | Yes | Leader
Hospital K | No | Level I | Yes | 670 | Yes | Leader
Hospital L | No | None | No | 308 | No | Leader
Hospital M | No | None | No | 427 | No | Leader
Hospital N | No | Level II | No | 290 | No | Leader
Hospital O | No | Level II | No | 539 | No | Leader
Hospital P | No | None | No | 438 | No | Service area
Hospital Q | No | None | No | 74 | No | Service area

*Harbor–UCLA Medical Center.

Table 2. Hospital scores by zones and observer.*

Hospital | Observer in Each Zone* | Incident Command: 45 Items, No. (%) | Incident Command: 94 Common Items, No. (%) | Triage: 17 Items, No. (%) | Triage: 94 Common Items, No. (%) | Treatment: 26 Items, No. (%) | Treatment: 94 Common Items, No. (%) | Decontamination: 18 Items, No. (%) | Decontamination: 94 Common Items, No. (%)
A† | First observer | 30 (67) | 31 (33) | | | | | |
B | First observer | 25 (56) | 32 (34) | 14 (82) | 25 (27) | 19 (73) | 43 (46) | 2 (11) | 18 (19)
B | Second observer | | | 15 (88) | 30 (32) | 22 (85) | 34 (36) | |
C | First observer | 28 (62) | 29 (31) | 2 (12) | 48 (51) | 20 (77) | 50 (53) | 10 (56) | 45 (48)
C | Second observer | 27 (60) | 24 (26) | 15 (88) | 54 (57) | | | |
D | First observer | 15 (33) | 20 (21) | 13 (76) | 38 (40) | 19 (73) | 34 (36) | 9 (53) | 38 (40)
D | Second observer | 14 (31) | 18 (19) | 13 (76) | 47 (50) | 20 (77) | 39 (41) | |
E | First observer | 23 (51) | 24 (26) | 9 (53) | 32 (34) | 22 (85) | 36 (38) | 8 (47) | 45 (48)
E | Second observer | 18 (40) | 28 (30) | 3 (18) | 29 (31) | | | |
F | First observer | 22 (49) | 24 (26) | 2 (12) | 58 (62) | 21 (81) | 63 (67) | 4 (24) | 40 (43)
F | Second observer | 23 (51) | 23 (24) | 10 (59) | 53 (56) | | | |

*Each observer was deployed to a single hospital zone, so the “first observer” refers to distinct individuals in different zones of the same hospital.
†Harbor–UCLA Medical Center.


With 11 pairs of observers available to determine interrater reliability, the CIs obtained for κ coefficients were quite wide. Additionally, we were unable to assess the interrater reliability of the decontamination module, because there were no observer pairs available to send to that zone. Further, some have criticized the use of the κ coefficient as a measure of interrater reliability.19-21



DISCUSSION

Establishing internal and interrater reliability is generally a first step in establishing the validity of an instrument. Having a valid, uniform, standardized tool to evaluate hospital disaster drills would facilitate comparisons of hospital drill performance both temporally and across multiple institutions. We found, however, that the interrater reliability of the disaster drill evaluation tool varied substantially among observer pairs and was occasionally quite low for some zone-specific modules (eg, the triage zone).

In contrast, the interitem correlation or internal reliability of the AHRQ tool is quite high, indicating that the instrument has the potential to be highly reliable and that the underlying construct of a global hospital characteristic reflecting disaster preparedness is valid. However, the high internal reliability may also imply that some of the items are redundant and possibly unnecessary. It may be possible to decrease the number of items in the tool and still achieve the objective of assessing overall hospital preparedness. Conversely, if a key objective of the tool is to identify a hospital’s specific deficiencies to guide remediation efforts, then diminishing the scope and specificity of the tool by decreasing the number of items is likely unwise.

The difference between the tool’s high interitem consistency and its more variable interrater reliability may be the result of several factors. The interrater reliability may depend on an observer’s background knowledge in disaster response or simply his or her attention to detail. The tool’s lower interrater reliability in some modules may also reflect ambiguity in the items and resulting differences in how individuals interpret the questions. The design of our study, however, does not permit us to determine whether the tool’s variable interrater reliability is due to limitations in the tool, our use of medical student observers, or both. The fourth-year medical students used as observers did not undergo extensive training, attending only 4 hours of didactics. It is unclear how the interrater reliability of the tool would vary if other personnel (eg, nurses, public safety personnel) were used as observers, perhaps with more extensive training in the use of the tool.

The AHRQ instrument is the only published tool available to evaluate the performance of hospital disaster drills. Yet, among our cohort of hospitals and observers, we found great variability in the tool’s interrater reliability, which ranged from poor to good, whereas the tool appeared to consistently reflect a valid underlying construct of hospital preparedness. This suggests that the instrument needs revision or substantial user training, as well as verification of interrater reliability in a particular setting, before use.

Table 3. Internal reliability of tool modules.

Tool Component | Number of Evaluative Items* | Number of Observers | Internal Reliability (95% CI)
Common items | 94 | 32 | 0.90 (0.85-0.94)
Incident command zone | 45 | 10 | 0.97 (0.95-0.99)
Triage zone | 17 | 10 | 0.97 (0.95-0.99)
Treatment zone | 26 | 7 | 0.72 (0.58-0.87)
Decontamination zone | 18 | 5 | 0.92 (0.85-0.99)

*Total number of evaluative items in the entire tool is 200.


We wish to thank the following personnel and entities for support of this work: Agency for Healthcare Research and Quality (AHRQ) grant 1 F32 HS013985 to Dr. Kaji, Emergency Medical Foundation (EMF) Research Fellowship Grant to Dr. Kaji, the hospital disaster coordinators from each participating hospital, and the members of Dr. Kaji’s epidemiology doctoral dissertation committee for their guidance: Robert Kim-Farley, MD, MPH; Jorn Olsen, MD, PhD; and Scott Layne, MD.

Supervising editor: Jonathan L. Burstein, MD

Author contributions: AHK and RJL conceived and designed the study, obtained research funding, supervised the conduct of the data collection, had full access to the data, and take full responsibility for the integrity of the data and the accuracy of the data analysis. AHK undertook recruitment of participating centers and managed the data. AHK and RJL analyzed the data. AHK drafted the article, and both authors contributed substantially to its revision. AHK takes responsibility for the paper as a whole.

Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article, that might create any potential conflict of interest. The authors have stated that no such relationships exist. See the Manuscript Submission Agreement in this issue for examples of specific conflicts covered by this statement.

Table 4. κ statistic for interrater reliability of observers at each zone.*

Zone | Observer Pair Number | Kappa (95% CI) | Percent Agreement (95% CI)
Incident command | Observer pair 1 | 0.69 (0.59-0.79) | 85 (79-89)
Incident command | Observer pair 2 | 0.63 (0.52-0.72) | 84 (77-88)
Incident command | Observer pair 3 | 0.53 (0.40-0.66) | 81 (75-86)
Incident command | Observer pair 4 | 0.61 (0.49-0.72) | 84 (78-88)
Triage | Observer pair 5 | 0.63 (0.51-0.74) | 84 (78-89)
Triage | Observer pair 6 | 0.58 (0.45-0.71) | 81 (74-87)
Triage | Observer pair 7 | 0.59 (0.47-0.70) | 82 (77-87)
Triage | Observer pair 8 | 0.24 (0.09-0.38) | 70 (63-76)
Triage | Observer pair 9 | 0.40 (0.28-0.52) | 73 (66-78)
Treatment | Observer pair 10 | 0.72 (0.63-0.81) | 87 (82-91)
Treatment | Observer pair 11 | 0.40 (0.28-0.52) | 74 (68-79)
Overall | | 0.57 (0.53-0.60) | 81 (79-83)

*Both common questions and the zone-specific questions are included in the calculation of each pair’s κ statistic and percentage of agreement.



Publication dates: Received for publication April 13, 2007. Revision received July 24, 2007. Accepted for publication July 31, 2007. Available online October 15, 2007.

Reprints not available from the authors.

Address for correspondence: Amy H. Kaji, MD, MPH, Department of Emergency Medicine, Harbor–UCLA Medical Center, 1000 West Carson Street, Box 21, Torrance, CA 90509; 310-222-3500, fax 310-782-1763; E-mail [email protected].

REFERENCES
1. The Joint Commission. Available at: http://www.JCAHO.org. Accessed April 2, 2007.
2. Levi L, Bregman D, Geva H, et al. Hospital disaster management simulation system. Prehosp Disaster Med. 1998;13:29-34.
3. Gofrit ON, Leibovici D, Shemer J, et al. The efficacy of integrating “smart simulated casualties” in hospital disaster drills. Prehosp Disaster Med. 1997;12:97-101.
4. Levy K, Aghababian RV, Hirsche EF, et al. An Internet-based exercise as a component of an overall training program addressing medical aspects of radiation emergency management. Prehosp Disaster Med. 2000;15:18-25.
5. Evaluation of Hospital Disaster Drills: A Module-Based Approach. Prepared for the Agency for Healthcare Research and Quality. Contract No. 290-02-0018. Prepared by the Johns Hopkins University Evidence-based Practice Center, the Johns Hopkins University Bloomberg School of Public Health, and the Johns Hopkins University Applied Physics Laboratory; April 2004.
6. Training of hospital staff to respond to a mass casualty incident. Available at: http://www.ahcpr.gov/clinic/epcix.htm. Accessed April 2, 2007.
7. Available at: http://www.emsa.cahwnet.gov/dms2/download.htm. Accessed April 2, 2007.
8. Available at: http://www.ambpeds.org/ReliabilityandValidity.pdf. Accessed April 2, 2007.
9. Available at: http://www.emsa.cahwnet.gov/dms2/hospambex.asp. Accessed April 2, 2007.
10. Available at: http://www.nhs.gov/aspr/opeo/hpp. Accessed April 2, 2007.
11. Hatcher L. A Step-By-Step Approach Using the SAS(R) System for Factor Analysis and Structural Equation Modeling. Cary, NC: SAS Institute.
12. Yu CH. An introduction to computing and interpreting Cronbach coefficient alpha in SAS. Statistics, data analysis, and data mining. Available at: http://www.creative-wisdom.com/pub/cronbach.html. Accessed April 2, 2007.
13. Nunnaly J. Psychometric Theory. New York, NY: McGraw-Hill Publishing; 1978.
14. Santos JR. Cronbach’s alpha: a tool for assessing the reliability of scales. Available at: http://www.joe.org/joe/1999april/tt3.html. Accessed April 2, 2007.
15. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297-333.
16. Iacobucci D, Duhacek A. Advancing alpha: measuring reliability with confidence. J Consumer Psychol. 2003;13:478-487.
17. Ebel RL. Estimation of the reliability of ratings. Psychometrika. 1951;16:407-424.
18. Duhachek A, Coughlan AT, Iacobucci D. Results on the standard error of the coefficient alpha index of reliability. Market Sci. 2005;24:294-301.
19. Thompson WD, Walter SD. Variance and dissent. A reappraisal of the kappa coefficient. J Clin Epidemiol. 1988;41:949-958.
20. Thompson WD. Kappa and attenuation of the odds ratio. Epidemiology. 1990;1:357-369.
21. Gwet K. Kappa statistic is not satisfactory for assessing the extent of agreement between raters. Available at: http://www.msu.edu/course/psy/818/deshon/Projects/Project 2/generalized kappa.doc. Accessed September 6, 2007.


Volum

Appendix E1.

Zone QuesNum

Ques Ques(Eva /Illustratiiona

BetteEval

Common Illustrative (1)

C.6 Draw a picture of the zone set-up Illus

CommoObse(14)

C.7a Was this zone located on the amramp

Observational

C.7b Was this zone located inside the hospital? Observational C.7c Was Observational C.7d Was this zone loca Observational C.9a Was the boundary to this zone defined by a

barricade/vehicle/sign? Observational

C.9b Was by secur el?

Obse

C.9f/ Was perm

Observational

C.16 Was there a physician assigned to this zone? Observational C.17 Was Obse C.18 Were there

security, cleaning, etc.) assigned to this zone? Observational

C.19 Weredurin

Observational

C.31 Was durin

Obse

C.62 If se was there morehosp

Observational

C.104 Did this zone close at any tidrill?

Observational

Common Evaluative (94)

C.8 Was the boundary to this zone defined? Eval Yes

C.10 Werethis zone?

Eval Yes

C.11 Did s ? Eval Yes

C.12 If somtake less than 30 lactiv

Evaluat Yes

C.13 If som of this zone, was it the offic

Eval e Yes

C.14 Was ident

Eval Yes

C.15 Were Eval Yes C.15b Were Eval Yes C.15c Were Evaluative Yes C.15d Were nel identifiable? Evaluative Yes C.15f Were the drill Evaluat Yes C.15e Were Eval Yes C.15g Were personnel identifiable? Eval Yes C.23 Was

beforEval Yes

C.24 Was the hospital disaster plan available? Evaluative Yes C.25a Was the hospital disaster plan accessible as a

compform

Evaluative Yes

C.25b Weredisas

Evaluat Yes

C.25c Were b action sheets available as part of the disaster plan?

Evaluative Yes

C.26 Wasthe in

Eval Yes

C.27 Was the hospital disaster plan?

Evaluative Yes

C.28 Was therthe h

Evaluat

tion ber

tion Text tion Type luativeve/Observatl)trative

r Answer if uative

n rvational

bulance?

this zone located in the parking lot? ted on the street/road?

the boundary to this zone defined ity personn

rvational

g the boundary to this zone defined by a anent or temporary wall?

there a nurse assigned to this zone? ancillary personnel (registrars,

rvational

additional drill participants added g the drill? this zone used for the same functions g non-drill operations? curity personnel were present,

rvational

than one type of security (local police, ital security, state police, FBI, etc)?

me during this

uative

providers able to move easily through uative

omeone take charge of this zone

eone took charge of this zone, did it minutes after the dril

uative

ive

ities began for this person to take charge? eone took charge

ially designated person? the person in charge of the zone ified?

uativ

uative

a the drill evaluators identifiable? the drill organizers identifiable? the media identifiable? the medical person

uative uative

observers identifiable? the mock victims identifiable? the security

ive uative uative

the location of this zone determined e the drill?

uative

lete manual, in its hard-copy, paper ? flow diagrams available as part of the ter plan?

ive

jo

a the hospital disaster plan accessible via ternet? there a biological incident component to

uative

e a radiation incident component to ospital disaster plan?

ive Yes

e , . : September Annals of Emergency Medicine 210.e1

C.29 Wasadequate

Evaluative Yes

C.30 If thewas t ated overflow zone?

Eval Yes/

C.31 Did clinical staff interact direcfamil

Evaluat

C.33 Did clinical staff interact direc h famil

Evaluat

C.34 Were specially desig

Eval Yes

C.35 Was the p victims ensured (with curta

Evaluative Yes

C.36 Were r comm

Eval Yes

the space allocated for the zone ?

re was not enough space for the zone, here a design

uative NA

tly with ies of victims?

ive Yes

tly wities of victims?

ive Yes

families of victims referred tonated staff?

rivacy of

uative

ins, privacy screens, etc)? 2-way radio phones made available founication?

uative

C.37 Were rcomm

Eval Yes

C.38 Were landline phones comm

Evaluat Yes

C.39 Were es made available for comm

Evaluative Yes

C.40 Werecomm

Eval Yes

C.41 Wereof in

Evaluative Yes

C.42 Was nucomm

Evaluat Yes

C.43 Was ailable for comm

Evaluative Yes

C.44 Was comm

Eval Yes

C.45 Wasinfor

Eval Yes

C.46 Werecomm

Evaluat Yes

direct lines made available founication?

uative

made available for unication?

wireless/cellphon

ive

unication? AM/FM radios made available for unication or as a source of information?

Televisions made available as a source formation?

uative

meric paging available for unication?

overhead paging av

ive

unication? text paging available for unication?

uative

the internet made available as a source of mation?

uative

fax machines available for unication?

ive

C.47 Wereuse?

Evaluative Yes

C.48 Were comm

Eval Yes

C.49 Were runners availcomm

Evaluat Yes

C.50 Was formation recorded (comboard

Evaluative Yes

C.53 Was the incorecorded?

Evaluative Yes

C.54 Wassurro fax, runner, telephone,

Eval Yes

C.55 Did tzone egarding the detai

Eval Yes

intercoms available for communication

megaphones available for physical unication?

uative

able to facilitate unication?

incoming in

ive

puter, posted paper, notepaper, white , etc)?

ming information to the zone

this zone notified of the details unding the event (via

etc)? he incident command center notify this and update this zone r

uative

ls surrounding the event?

uative

C.56 Did your zone receive updates regarding the situation outside the hospital?

Evaluative Yes

C.58 Was gene

Eval Yes

C.59 Were lays in recei

Eval No

C.60 Were e zone? Evaluative Yes C.63 Did a ns

of coEval Yes

C.61 If security wer as there a readi

Evaluative Yes/NA

C.64 Were ts strictly controlled in this a

Evaluative Yes

C.65 Did a eregar

Eval No

C.65 If secand o

Eval Yes/

this zone kept aware of the ongoing ral situation within the hospital?

uative

there any problems created by deving information? security personnel present in th

uative

ll security present have a portable meammunication?

e not present, w

uative

ly usable means to contact them? entrances and exirea? ny security issues arise in this zonding access in and out?

a uative

b urity issues arose regarding access in ut, did security respond?

uative NA

210.e2 Annals of Emergency Medicine Volume , . : September

C.65c If sec ess in and out,

Evaluative Yes/NA

C.66 Did aregar y members?

Eval

urity issues arose regarding acc was order maintained?

ny security issues arise in this zone ding assistance for famil

a uative

C.66b If sec ceurity issues arose regarding assistanfor fa ?mily members, did security respond

Evaluative Yes/NA

C.66c If sec egarding assistance urity issues arose rfor fam ined? ily members, was order mainta

Evaluative Yes/NA

C.67 Did aregarvictim

Eval Noa ny security issues arise in this zone ding assistance lifting supplies or

s?

uative

C.67b If sec nceurity issues arose regarding assistaliftin s arose, did security g supplies or victimrespond?

Evaluative Yes/NA

C.67c If sec istance urity issues arose regarding assliftin ose, was order g supplies or victims armaintained?

Evaluative Yes/

C.68 Did aregar l?

Eval No

NA

a ny security issues arise in this zone ding crowd contro

uative

C.68b If security issues arose regarding crowd contr pond? ol, did security res

Evaluative Yes/NA

C.68c If security issues arose regarding crowd control, was order maintained?

Evaluative Yes/NA

C.69a Did aregar

Evaluative ny security issues arise in this zone ding media control?

C.69b If security issues arose regarding media control, did security respond?

Evaluative Yes/NA

C.69c If secur egarding media contr

Evaluative Yes/NA

C.70a Did aregar /traffic control?

Evaluative No

ity issues arose rol, was order maintained? ny security issues arise in this zone ding transportation

C.70b If security issues arose regarding transportation/traffic control, did security respond?

Evaluative Yes/NA

C.71a Did aregar

Evaluative No ny security issues arise in this zone ding unruly victims?

C.71b If security issues arose regarding unruly victims, did security respond?

Evaluative Yes/NA

C.71c If security issues arose regarding unruly victims, was order maintained?

Evaluative Yes/

C.74 Weregivenrecord nu

Eval Yes

C.75 Was therrecor

Evaluat Yes

C.76 Was a l list of victims generated for this zone

Evaluative Yes

C.77 Werevisible

Evaluat Yes

C.78 Did tvictim

Eval Yes

C.79 Was clinical inforacces

Evaluat Yes

C.80 Was riving in this zoneone-h

Evaluative Yes

C.81 Did a bottleneck develop in this zone? Evaluative No C.83 If a b

bottleEval Yes/

C.84 Weremark

Evaluative Yes

C.85 If thdirections given by

Eval Yes/

C.86 Werestaff to an area separate from higher acuity

Evaluative Yes

NA

all incoming victims registered and a unique identification or medical

mber?

uative

e a method of documenting victim ds in this zone? centra

ive

? the triage markers on the victims clearly?

ive

he triage markers stay affixed to the s while in this zone?

uative

mation about victimssible to caregivers? the proportion of victims ar

ive

labeled with a triage level greater than alf?

ottleneck developed in this zone, was the neck resolved? the paths leading to the next zone

uative NA

ed? e paths were not marked, were verbal

zone staff? uative NA

the lowest acuity victims directed by

Volume , . : September Annals of Emergency Medicine 210.e3

victims? C.87 Were e of there any treatment delays becaus

staffing shortages in this zone? Evaluative No

C.88 Was place victims?

Evaluative Yes

C.89 Was ovedecea

Eval Yes

C.91 Was healt

Evaluative Yes

C.92 Werehealt

Eval Yes

C.93 Were isolation gowns adequate forhealthcare workers?

Evaluative Yes

C.94 Were carework

Eval Yes

C.98 Was n/shift change

Evaluative Yes

C.10 If thearise?

Eval Yes/

C.102 Were dual brief

Evaluative Yes

C.103 Wasif necessary?

Evaluat Yes

Decontamination Evaluative (18)

DE.2 Waslocat

Eval Yes

there a designated, quiet and separate for mock expiring there a plan in place to rapidly remsed victims from this zone?

eye protection adequate for the hcare workers?

uative

waterproof gowns adequate for the hcare workers?

the

uative

the gloves adequate for the healther? there a scheduled staff rotatio

uative

? re was a staff rotation, did problems

incoming staff updated (indivi

0 uative NA

ing, group briefing, written notes, etc)? there a plan in place to relocate the zone ive

the decontamination area in a covered ion?

uative

DE.4 Was the decontaarriv

Evaluat

DE.5 Weredeco

Evaluative Yes

DE.6 Were nonamdeco

Evaluat

DE.8b Whedecontatransthen was the victimdeco

Evaluative Yes

DE.9 Were nonaensur

Evaluative Yes

DE.10 Were and fema

Evaluative Yes

DE.14 Were any comf

Evaluat

DE.17 Were this zone and the n

Evaluative Yes

DE.2 Was Eval Yes DE.21 Did t

normEvaluative Yes

DE.22 Was MS traffic

Evaluative Yes

DE.2 Was chem

Eval Yes

DE.28 Did t ve to wait for staff

Evaluative Yes

DE.3 Did u thconta

Eval Yes

DE.3 Were Eval Yes DE.32 Did s PE have an effective

means of coEvaluative Yes

TriageObservational(6)

TG.2 Was the e ent?

Observational

TG.3 If theemergencyaway

Observational

TG.6b Were ysical examradia

Observational

mination zone set up prior to al of the first victim? all victims sent immediately through

ive Yes

ntamination on arrival in this zone? more than 2 ambulatory and 1 bulatory victims able to undergo

ntamination simultaneously? n nonambulatory victims were

ive Yes

minated, was the victim either ferred to another means of transport and put through decontamination, or

and the means of transport put through ntamination together?

mbulatory victims repositioned to e decontamination of all surfaces? separate provisions made for malele victims?

measures taken to improve victims’ ort? there any barriers between

ive Yes

ext? contaminated water run-off contained? he decontamination zone affect the

0 uative

al flow of EMS traffic? a plan in place for re-routing the E?

your zone made aware of the actual ical, radiological, or biological agent?he first arriving victims ha

6 uative

to don PPE? ncontaminated staff or victims mix wiminated staff or victims?

0 uative

1 there any problems with the PPE? taff dressed in P

uative

mmunicating with the victims? the triage zone contiguous or located in mergency departm

triage zone was not contiguous to the department, estimate distance

in feet. the victims screened by phination for biological, chemical, or tion exposure?

TG.6c Were Observational the victims screened by a screening

210.e4 Annals of Emergency Medicine Volume , . : September

devic TG.6a Were

intervObservational

TG.7 Whareceiand contr

Observational

TriageEvaluative (17)

TG.4 Did awitho y?

Eval

TG.5 Werecheminto t

Eval Yes

TG.8 Did tregisdocu

Eval Yes

TG.9 Did a victims enter this zone? Evaluative No TG.1 Were Eval Yes TG.1 Was Evaluat TG.12 Was blood pr Evaluative Yes TG.1 Were Eval Yes TG.1 Were Eval Yes TG.15 Were Evaluative Yes TG.16 Were stethoscopes available? Evaluative Yes TG.1 Were Eval Yes TG.1 Was Evaluat TG.19 Were Evaluative Yes TG.20 Were access supplies (catheters,

fluidEvaluative Yes

TG.21 Were ? Evaluative Yes TG. Was

pediaEval Yes

IncidentCommand Observational(23)

IC.1 Who is zone? Observational

IC.26 Wasin tw rooms?

Obse

IC.2 Wasin tw

Observational

IC.26b Was the incident command center configured in on

Observational

IC.27 Was incident co

Observational

IC.2 Was incid

Observational

IC.27c Was roximate number of people in the incid

Observational

IC.2 Was incid more than 20?

Observational

IC.4 Did t ayradio m within the hospital

Observational

IC.45b Did the incident command center use cellphones to receive data from within the hosp

Observational

IC.4 Did tcomphosp

Observational

IC.45 Did trecei within the hospital?

Observational

IC.45e Did the incident comphonhosp

Observational

IC.45f Did t gers to rec

Observational

IC.4 Did tto rec

Observational

IC.46 Did tunderagen ?

Observational

IC.57a Did t Observational

e? the victims screened by personal iew?

t proportion of victims in the triage zone ved care beyond basic airway maneuvers

ol of active bleeding? nyone perform triage independently and ut authorit

uative

all patients screened for biological, ical, or radiation exposure before entry he triage area?

uative

he hospital have a method for expedited tration and/or medical record mentation? ny contaminated

uative

01

bandages available? basic airway equipment available?

essure equipment available?

uative ive Yes

34

oxygen tanks available? oxygen masks available? splints available?

uative uative

78

stretchers available? suction equipment available? surgical masks available? vascular

uative ive Yes

s, etc) available? wheelchairs availablethere a mechanism in place to repatriate tric patients with their guardians? first took charge of th

uative

Incident Command Observational (23)
IC.1 Who first took charge of this zone? Observational
IC.26 Was the incident command center configured in two or more contiguous rooms? Observational
IC.26a Was the incident command center configured in two or more non-contiguous rooms? Observational
IC.26b Was the incident command center configured in one room? Observational
IC.27a Was the approximate number of people in the incident command center 11-20? Observational
IC.27b Was the approximate number of people in the incident command center 6-10? Observational
IC.27c Was the approximate number of people in the incident command center less than 5? Observational
IC.27d Was the approximate number of people in the incident command center more than 20? Observational
IC.45a Did the incident command center use 2-way radios to receive data from within the hospital? Observational
IC.45b Did the incident command center use cellphones to receive data from within the hospital? Observational
IC.45c Did the incident command center use computers to receive data from within the hospital? Observational
IC.45d Did the incident command center use fax to receive data from within the hospital? Observational
IC.45e Did the incident command center use landline phones to receive data from within the hospital? Observational
IC.45f Did the incident command center use pagers to receive data from within the hospital? Observational
IC.45g Did the incident command center use runners to receive data from within the hospital? Observational
IC.46 Did the hospital activate a memorandum of understanding (MOU) with any external agency regarding use of services or resources? Observational
IC.57a Did the incident command center use 2-way radios to receive information from outside the hospital? Observational
IC.57b Did the incident command center use cellphones to receive data from outside the hospital? Observational
IC.57c Did the incident command center use computers to receive data from outside the hospital? Observational
IC.57d Did the incident command center use fax to receive information from outside the hospital? Observational
IC.57e Did the incident command center use landline phones to receive data from outside the hospital? Observational
IC.57f Did the incident command center use pagers to receive information from outside the hospital? Observational
IC.57g Did the incident command center use runners to receive information from outside the hospital? Observational

Incident Command Evaluative (45)
IC.2 Did the officially designated incident commander arrive within ten minutes after the drill activities in this zone began? Evaluative Yes
IC.3 Were other members of the incident command zone easily identifiable? Evaluative Yes
IC.4 Was someone fulfilling the role of the incident commander? Evaluative Yes
IC.5 Was someone fulfilling the role of the public information officer? Evaluative Yes
IC.6 Was someone fulfilling the role of the liaison officer? Evaluative Yes
IC.7 Was someone fulfilling the role of the safety and security officer? Evaluative Yes
IC.8 Was someone fulfilling the role of the logistics chief officer? Evaluative Yes
IC.9 Was someone fulfilling the role of the facilities management unit leader? Evaluative Yes
IC.10 Was someone fulfilling the role of the communications unit leader? Evaluative Yes
IC.11 Was someone fulfilling the role of the patient transportation unit leader? Evaluative Yes
IC.12 Was someone fulfilling the role of the materials/supply unit leader? Evaluative Yes
IC.13 Was someone fulfilling the role of the nutritional supply unit leader? Evaluative Yes
IC.14 Was someone fulfilling the role of the planning chief? Evaluative Yes
IC.15 Was someone fulfilling the role of the labor pool unit leader? Evaluative Yes
IC.16 Was someone fulfilling the role of the medical staff unit leader? Evaluative Yes
IC.17 Was someone fulfilling the role of the nursing unit leader? Evaluative Yes
IC.18 Was someone fulfilling the role of the finance chief? Evaluative Yes
IC.19 Was someone fulfilling the role of the operations chief? Evaluative Yes
IC.20 Was someone fulfilling the role of the medical care director? Evaluative Yes
IC.21 Was someone fulfilling the role of the ancillary services director? Evaluative Yes
IC.22 Was someone fulfilling the role of the human services director? Evaluative Yes
IC.28 Did the noise level in the incident command center interfere with effective communication? Evaluative No
IC.29 If the noise level interfered with communications, were steps taken to correct the problem? Evaluative Yes
IC.30 Was information regarding the availability of the number of operating rooms received by the incident command center at least once? Evaluative Yes
IC.31 Was information regarding the availability of the number of staffed floor beds received by the incident command center at least once? Evaluative Yes
IC.32 Was information regarding the availability of the number of staffed intensive care unit beds received by the incident command center at least once? Evaluative Yes
IC.33 Was information regarding the availability of the number of staffed isolation beds received by the incident command center at least once? Evaluative Yes
IC.34 Was information regarding the number of arriving victims received by the incident command center at least once? Evaluative Yes
IC.35 Was information regarding the estimated time window of victims' arrival received by the incident command center at least once? Evaluative Yes
IC.36 Was information regarding the expected triage level of the victims received by the incident command center at least once? Evaluative Yes
IC.37 Was information regarding the number of victims the ED can accept received by the incident command center at least once? Evaluative Yes
IC.38 Was information regarding the number of clinical staff available received by the incident command center at least once? Evaluative Yes
IC.39 Was information regarding the total number of expected victims received by the incident command center at least once? Evaluative Yes
IC.40 Was information regarding the potential discharges of 'actual' patients received by the incident command center at least once? Evaluative Yes
IC.41 Was information regarding the available number of support staff received by the incident command center at least once? Evaluative Yes
IC.45 Were there mechanisms in place for the incident command center to receive data from within the hospital (via 2-way radios, cell phones, computers, fax, landlines, pagers, and runners)? Evaluative Yes
IC.48 Was the incident command center in communication with the fire department? Evaluative Yes
IC.49 Was the incident command center in communication with the fire department? Evaluative Yes
IC.51 Was the incident command center in communication with the media? Evaluative Yes
IC.53 Was the incident command center in communication with other hospitals? Evaluative Yes
IC.54 Was the incident command center in communication with the police department? Evaluative Yes
IC.57 Were there mechanisms in place for the incident command center to receive data from outside the hospital (via 2-way radios, cell phones, computers, fax, landlines, pagers, and runners)? Evaluative Yes
IC. Were there any pediatric victims planned in the disaster drill? Evaluative Yes
IC. Did the hospital disaster plan address how to manage pediatric disaster victims? Evaluative Yes
IC. Was there a separate area designated for pediatric victims that was separate from adult victims? Evaluative Yes

Treatment Observational (12)
TX.2a During regular functioning, does this area normally function as an emergency department? Observational
TX.2b During regular functioning, does this area normally function as an intensive care unit? Observational
TX.2c During regular functioning, does this area normally function as a medical inpatient unit? Observational
TX.2d During regular functioning, does this area normally serve medical outpatients? Observational
TX.2e During regular functioning, does this area normally serve surgical inpatients? Observational
TX.2f During regular functioning, does this area normally serve surgical outpatients? Observational
TX.3 Location of unit. Observational
TX.4 Did actual patients remain in the drill treatment area (along with the mock victims)? Observational
TX.13 Did this zone have an assigned transport staff? Observational
TX.16 Were medications requested from an outside source? Observational
TX.34 Were stretchers available? Observational
TX.39 Were wheelchairs available? Observational

Treatment Evaluative (26)
TX.5 Were all patients reassessed in the treatment zone? Evaluative Yes
TX.6 If victims were not previously triaged, were they sent back to the triage zone? Evaluative Yes/NA
TX.7 Were there any delays in victim treatment because of problems with radiology services? Evaluative No
TX.8 Were there any delays in victim treatment because of problems with laboratory services? Evaluative No
TX.9 Were there any delays in victim treatment because of problems with pharmacy services? Evaluative No
TX.10 Were there any delays in victim treatment because of problems with transport services? Evaluative No
TX.11 Were there any delays in victim treatment because of problems with supplies? Evaluative No
TX.12 Did at least half of the patients have disposition decisions made at drill termination? Evaluative Yes
TX.14 Did any contaminated victims enter this zone? Evaluative No
TX.15 Were medications needed for the treatment of victims available within the hospital? Evaluative Yes
TX.19 Were bandages available? Evaluative Yes
TX.20 Was basic airway equipment available? Evaluative Yes
TX.21 Were blood drawing supplies available? Evaluative Yes
TX.22 Was blood pressure equipment available? Evaluative Yes
TX.24 Were cleaning supplies for contaminated equipment available? Evaluative Yes
TX.25 Were crash carts available? Evaluative Yes
TX.26 Were intravenous fluids available? Evaluative Yes
TX.27 Was intubation equipment available? Evaluative Yes
TX.28 Were medications available? Evaluative Yes
TX.30 Were oxygen masks available? Evaluative Yes
TX.31 Were oxygen tanks available? Evaluative Yes
TX.32 Were splints available? Evaluative Yes
TX.37 Were vascular access supplies available? Evaluative Yes
TX.38 Were ventilators available? Evaluative Yes
TX. Was pediatric-specific airway equipment available? Evaluative Yes
TX. Was there a method in place to estimate pediatric weights? Evaluative Yes
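To illustrate how evaluative items like those above can be reduced to dichotomous preparedness indicators, a zone-level raw performance score, and a Kuder-Richardson internal consistency coefficient, the following Python sketch codes each item as 1 when the recorded response matches the item's desired response and 0 otherwise, then sums the indicators. This is a minimal illustration under simplifying assumptions, not the authors' analysis code: the item subset, the observer responses, and the function names (dichotomize, raw_score, kr20) are hypothetical, and items with Yes/NA responses (such as TX.6) or missing responses would need additional handling.

# Minimal sketch: dichotomous coding of evaluative items, raw score, KR-20.
# Desired responses for a few Treatment Evaluative items, taken from the table above.
DESIRED = {
    "TX.5": "Yes",   # all patients reassessed
    "TX.7": "No",    # delays due to radiology problems
    "TX.12": "Yes",  # disposition decisions for at least half of patients
    "TX.14": "No",   # contaminated victims entered this zone
    "TX.19": "Yes",  # bandages available
}

def dichotomize(responses):
    """Code each item 1 if the response matches the desired response
    (indicating better preparedness), else 0."""
    return {item: int(responses.get(item) == desired)
            for item, desired in DESIRED.items()}

def raw_score(responses):
    """Unweighted raw performance score: the sum of the indicators."""
    return sum(dichotomize(responses).values())

def kr20(indicator_rows):
    """Kuder-Richardson 20 coefficient over dichotomous items.
    indicator_rows: one dict of 0/1 indicators per observer, same items in each."""
    items = list(indicator_rows[0])
    k = len(items)                     # number of items
    n = len(indicator_rows)            # number of observers
    totals = [sum(row.values()) for row in indicator_rows]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    # Sum of p*q, where p is the proportion of observers scoring 1 on an item.
    pq = sum((p := sum(row[item] for row in indicator_rows) / n) * (1 - p)
             for item in items)
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical responses from two observers in the same treatment zone.
obs1 = {"TX.5": "Yes", "TX.7": "No", "TX.12": "Yes", "TX.14": "Yes", "TX.19": "Yes"}
obs2 = {"TX.5": "Yes", "TX.7": "Yes", "TX.12": "No", "TX.14": "No", "TX.19": "Yes"}
print(raw_score(obs1))  # 4 of 5 indicators favorable
print(raw_score(obs2))  # 3 of 5 indicators favorable

# KR-20 is only meaningful with responses from several observers; four
# illustrative indicator rows:
rows = [
    {"TX.5": 1, "TX.7": 1, "TX.12": 1, "TX.14": 1, "TX.19": 1},
    {"TX.5": 1, "TX.7": 1, "TX.12": 1, "TX.14": 1, "TX.19": 0},
    {"TX.5": 1, "TX.7": 0, "TX.12": 0, "TX.14": 0, "TX.19": 0},
    {"TX.5": 0, "TX.7": 0, "TX.12": 0, "TX.14": 0, "TX.19": 0},
]
print(round(kr20(rows), 2))  # 0.92 for these illustrative data

Matching against the desired response, rather than treating "Yes" as always favorable, matters for reverse-keyed items such as TX.7 through TX.11 and TX.14, where "No" is the response indicating better preparedness.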