The Journal of Emergency Medicine, Vol. 37, No. 3, pp. 313–318, 2009
Copyright © 2009 Elsevier Inc. Printed in the USA. All rights reserved
0736-4679/09 $–see front matter
doi:10.1016/j.jemermed.2007.07.021
Education
EMERGENCY MEDICINE RESIDENTS AND STATISTICS: WHAT IS THE CONFIDENCE?
Jason B. Hack, MD,* Poopak Bakhtiari, MD,* and Kevin O'Brien, PhD†

*Department of Emergency Medicine and †Department of Biostatistics, School of Allied Sciences, East Carolina University, Greenville, North Carolina

Reprint Address: Jason B. Hack, MD, Department of Emergency Medicine, East Carolina University, 600 Moye Blvd., Emergency Medicine Towers, 3rd Floor, Greenville, NC 27834
Abstract—The objective of this study was to assess whether residents have the essential tools and a sense of competency when evaluating published studies, especially the statistics. Questionnaires were mailed to emergency medicine (EM) residency programs in the United States querying residents' demographics and training in statistics as well as their impressions and use of statistics in the current literature; a five-question statistical quiz was also included. Possible responses of—almost always, more than ½ time, ½ time, less than ½ time, almost never—were tallied individually as well as compared in groups of polarized answers: over ½ time (almost always + more than ½ time) vs. under ½ time (less than ½ time + almost never). There were 495 questionnaires returned from 42 centers. No significant difference was found when comparing quiz performance with participants' self-reported statistical knowledge. There were considerable differences in the polarized answers (Over vs. Under), whether statistics: were used appropriately (40% vs. 15%, respectively); were used to enhance weak data (54% vs. 13%, respectively); enhanced their understanding of information (38% vs. 24%, respectively); simplified complex data (26% vs. 41%, respectively); were understood by them (23% vs. 38%, respectively); confused them (37% vs. 24%, respectively); were skipped (52% vs. 23%, respectively). Participants felt there should be more statistical training (49% vs. 22%, Over vs. Under, respectively). There was no difference in respondents who did or did not read the statistics (39% vs. 34%, Over vs. Under, respectively). Many EM residents surveyed do not trust, read, or understand statistics presented in current journal articles. Residency programs may want to consider enhanced training in statistics. © 2009 Elsevier Inc.

RECEIVED: 24 January 2007; FINAL SUBMISSION RECEIVED: 26 May 2007; ACCEPTED: 2 July 2007
Keywords—statistics; emergency physicians; residents
INTRODUCTION
Physicians in training are especially vulnerable to the suggestions of published literature. It is during this time that they are creating a personal database to justify their treatment paradigms from literature supplied from both dependable and questionable sources, including the residency reading lists, articles suggested by other physicians, lifelong learning and self-assessment (in the United States), various internet sources, and pharmaceutical companies' promotional brochures. Therefore, it is imperative that residents have a firm understanding of the many aspects of research to appropriately accept or reject the information within these articles. The attainment of statistical competence may be particularly challenging.

A perusal of 100 articles published in the last year in four emergency medicine journals (Annals of Emergency Medicine, Academic Emergency Medicine, Journal of Emergency Medicine, and American Journal of Emergency Medicine) revealed that 96% contained some element of statistical analysis. The potential errors within the statistical analysis are multiple and include use of the wrong test for the information being evaluated, examining the wrong data sets, not describing which test was used, and choosing the wrong p value. Ideally, errors in published studies should be infrequent. Unfortunately, the rate of errors within statistics in published articles has been found consistently and repeatedly in excess of 50% (1–6). In other series, these statistical errors were so profound that between 8% and 22% of the articles had conclusions that were not supported by the data (1,7). Many articles have explored and described the statistical errors found in published textbooks, whole journals, individual studies, abstracts, booklets, and literature from other countries (2–6,8–18). Other articles have explored how to fix these problems (19–23).

Whereas it is clear that statistics have become ubiquitous in current literature and the amount of published errors within these statistics is well documented, it is not clear how emergency medicine residents (EMRs) use statistics, or if they have enough knowledge and comfort with statistics to recognize when errors are present. A search of Medline, PubMed, and Ovid revealed no previous data evaluating EMRs' comfort, knowledge, or use of statistics. In this study, we sought to determine how a sample of EMRs from across the United States estimated their own understanding of statistics, whether they actually did have a strong foundation of statistical understanding, and how they report using the statistical analysis of the data within articles.
METHODS
Study Design
After an initial set of interviews was conducted, we piloted and developed a short, closed-ended, anonymous questionnaire (24–27). It was designed to elicit from EMRs an evaluation of their own statistical knowledge and their impression, use, and trust in statistics as they are used in today's journals. The questionnaire was divided into three sections. The first inquired about the physicians' current level of training, their teaching responsibilities, and their formal training in statistics. The second section inquired about the number of medical articles residents read each month. This was followed by 10 questions about the frequency with which residents used, or felt a specific way about, published statistics (almost always, more than ½ time, ½ time, less than ½ time, almost never). The questions included whether the residents thought they had a good understanding of statistics; whether they thought statistics were used appropriately; how often they read the statistics; how often they understood the statistics presented; how often they were confused by the statistics; how often statistics enhanced their understanding of the information; how often they thought the statistical analysis simplified complex data; how often statistical analysis was used to enhance weak data; how often they skipped the statistics and read the conclusion to understand a paper; and finally, how often they thought more statistical training in Emergency Medicine (EM) education was needed.

The third section of the study contained a five-question quiz with validated basic statistical questions culled from an Introduction to Statistics course at East Carolina University. This study was approved by the Institutional Review Board of East Carolina University, Brody School of Medicine. The postage, photocopying, and materials were funded by the research division of the Department of Emergency Medicine, East Carolina University, Greenville, North Carolina.
Study Setting and Population

Survey packets were sent by mail to 120 academic centers with EM residency programs across the United States. Each packet contained a cover letter addressed to the program director that explained the study, and 20 questionnaires. The letter requested that each program director ask all EMRs present during a normally scheduled educational conference to voluntarily complete and return the questionnaire during the conference. The numbers of doctors present and the number of questionnaires returned were recorded. The forms were returned by mail to the author. A reminder letter and e-mail were sent to centers not returning the questionnaires after 3 months.
Study Protocol

This study was a mailed, voluntary, questionnaire-based study that queried EMRs at all levels of training at academic centers across the United States over a 4-month period.
Primary Data Analysis

All data from the questionnaires were entered into an Access database (Microsoft Office Access, 2002; Microsoft Corporation, Redmond, WA). The responses were categorized by each of the five possible answers to the questions in part 2 of the survey instrument. The data were then examined with SPSS (SPSS Statistical Software, Version 13.0, 2004; SPSS Inc., Chicago, IL).

The Kruskal-Wallis test was used to compare the response variable "average percent correct" on the five-item statistical test between various groups of subjects (28). The Kruskal-Wallis test was selected as it does not require an assumption of normality for the response variable, and it is suitable for an ordinal measurement (28).
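The comparison described above can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' SPSS analysis: the function is a plain-Python version of the tie-corrected Kruskal-Wallis H statistic, and the quiz scores and groupings in the example are invented.

```python
# Sketch of a Kruskal-Wallis comparison of an ordinal response
# (percent correct on a quiz) across independent groups, without
# assuming normality -- the rationale the Methods give for the test.
from itertools import chain

def kruskal_wallis_h(groups):
    """Return the Kruskal-Wallis H statistic, corrected for ties."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    # Mid-ranks: tied values share the average of the ranks they span.
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    # H = 12 / (N(N+1)) * sum_i (R_i^2 / n_i) - 3(N+1),
    # where R_i is the rank sum of group i.
    h = sum(sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups)
    h = 12.0 * h / (n * (n + 1)) - 3 * (n + 1)
    # Tie correction: divide by 1 - sum(t^3 - t) / (N^3 - N).
    ties = [pooled.count(v) for v in set(pooled)]
    return h / (1 - sum(t ** 3 - t for t in ties) / (n ** 3 - n))

# Invented example: percent-correct scores grouped by how often
# respondents reported a "good understanding of statistics".
scores_by_group = [
    [80, 60, 100, 40],  # "almost always"
    [60, 80, 40, 60],   # "more than half the time"
    [40, 60, 20, 60],   # "half the time or less"
]
print(f"H = {kruskal_wallis_h(scores_by_group):.3f}")
```

A large p-value for the resulting H (the paper reports p = 0.118) would indicate no detectable difference in quiz performance between the self-assessment groups.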
RESULTS
There were 495 questionnaires returned from a broad geographical base: 42 centers (35% of the centers contacted) in 25 states (Alaska, Arizona, California [3], Connecticut, Colorado, Delaware, Florida, Georgia [2], Illinois, Indiana, Iowa, Kansas, Kentucky [2], Louisiana, Massachusetts, Michigan [3], Nebraska, North Carolina, New York [8], Ohio, Oregon, Pennsylvania [2], Texas [3], Virginia [2], and West Virginia) and Puerto Rico, representing 67% of states with EM residencies. The respondents were from types of programs similar in distribution to the types of programs found across the United States: 80% of the respondents were from 1,2,3 programs (national percent 75%), 14% from 1,2,3,4 programs (national percent 14%), and 5% from 2,3,4 programs (national percent 10%). The average percent of physicians attending the conferences at the survey sites and participating centers was 57.5% (range 11–96%). The distribution of physicians was evenly split between 1st, 2nd, and 3rd year residents (30%, 31%, and 27%, respectively), with many fewer 4th year residents (9%), as expected with the low number of 4-year programs; 3% did not list a year of residency.
Five questionnaires were unusable as there were fewer than seven questions answered in the second section. Additionally, 41 questionnaires had fewer than three quiz questions answered and were excluded from the evaluation of the self-reported statistical knowledge question, but were able to be used for the rest of the study.
Of 429 respondents, 191 (44.5%) reported they had resident teaching responsibility; and of 452 respondents, 90 (19.9%) reported that they had formal training in statistics.
Of 472 respondents, the greatest numbers reported reading between 0 and 5 medical articles a month (59%), followed by 6–10 articles (27%), 11–15 articles (6%), 16–20 articles (2%), and the least in the >20 articles group (1%).
Statistical comparison of the "score" (percent of correct responses) on the five-question quiz among groups (defined by physicians' self-assessment of how often they thought they had a "good understanding of statistics") revealed that there was no difference (p = 0.118) between how well participants thought they knew statistics and how many answers they got correct. This finding indicates that self-reported understanding of statistics was not significantly related to higher or lower scores (Figure 1).
To gauge participants' impressions of statistical use in current literature, they were asked, "How often do you think statistics are used appropriately?"—192 (40%) answered over ½ time, 74 (15%) answered under ½ time. However, when asked "how often do you think that statistical analysis is used to enhance weak data?"—259 (54%) felt this occurred over ½ time and 63 (13%) answered this occurred under ½ time. Of physicians answering the question, "how often do you think the statistics enhance your understanding of the information presented?"—119 (24%) reported this occurred over ½ time and 186 (38%) under ½ time. Additionally, when asked "how often do you think that statistical analysis simplifies complex data?"—122 (26%) answered they felt this over ½ time and 199 (41%) felt this occurred under ½ time (Figure 2).
Physicians' comprehension of statistics was assessed by inquiring "how often do you understand the statistics presented in these articles?"—112 (23%) reported this occurred over ½ time and 186 (38%) under ½ time. For the question "how often are you confused by the statistics presented in these articles?"—182 (37%) participants reported feeling confused over ½ time and 116 (24%) under ½ time (Figure 3).
Physicians were almost evenly split when asked "how often do you read the statistics presented in these articles?"—191 (39%) responded over ½ time and 166 (34%) reported doing so under ½ time. However, when asked "how often do you skip the statistics and read the conclusion to understand a paper?"—254 (52%) responded they did this over ½ time and 113 (23%) under ½ time (Figure 4).

Respondents replied affirmatively to the question "how often do you think that there should be more statistical training in EM education?"—241 (49%) replied over ½ time and 106 (22%) replied under ½ time (Figure 5).

Figure 1. Self-reported familiarity with statistics compared with performance on quiz. There was no statistical difference in the percent of correct answers between groups.
DISCUSSION
This study attempts to ascertain Emergency Medicine residents' impression of and relationship with statistics as they are used in currently published journal articles. Although familiarity with the many elements forming a study and the knowledge of how each has the potential to influence the validity and reliability of clinical research results is paramount (e.g., selection of hypotheses, characterization and definition of patient populations, blinding and randomization, and similar issues), statistical competence may be particularly challenging. This topic is important, as statistical analysis has become both ubiquitous in research and increasingly complex when compared to studies published 20 years ago. As errors in the literature are frequent, resident understanding of and familiarity with all aspects of the literature is especially important, as they are developing skills that will serve them throughout their careers.

Figure 2. Respondents' answers for each category and comparison of Over vs. Under.

Figure 3. Respondents' estimation of their comprehension of statistics.

Figure 4. How respondents use the statistics in published articles.
We found that many EMRs do not trust statistics as they are used in current articles, with some feeling that they are not used appropriately and most feeling that the statistics are used to enhance weak data. Many of these physicians feel that statistics do not simplify complex data, and often do not enhance their understanding of the information.

Although most would assume that physicians in training read research papers thoroughly to be able to decide for themselves the validity of a study, there was an even split of EMRs who read vs. did not read the statistics, and a majority reported skipping the statistics to read the conclusions to understand the articles.
Additionally, regardless of the EMRs' estimation of their own statistical knowledge, their performance on a five-question statistical quiz was essentially identical—possibly indicating that the residents might be unsure of what they actually understand and what they do not.

Statistical skill development while in training has been made even more important for EMRs in the United States by the recent institution by the American Board of Emergency Medicine of a requirement to pass an annual single-information-source test for the maintenance of their certification: the Lifelong Learning and Self-Assessment component of Emergency Medicine continuous certification. Physicians' specialty certification has become dependent upon their ability to study specifically designated articles and pass a test based upon them. Although the American Board of Emergency Medicine does not specifically endorse these articles or what they purportedly show, testing physicians on data contained within a unique source of information may inherently imply support. It is therefore important that EMRs develop their understanding of the foibles of research, not only in construction and execution but also in the statistics used to interpret the results, to help them decide whether to accept or reject the information presented in these emphasized articles and others they will read throughout their careers.

Figure 5. Respondents' estimation of need for additional statistical training.
Currently, journal clubs and critical review of articles, including statistical competence, are required and essential parts of the training of emergency physicians. The 2005 Model of the Clinical Practice of Emergency Medicine states in broad terms that residents should be competent in "interpretation of the medical literature" (29,30). This is repeated in the Accreditation Council for Graduate Medical Education's program requirements for residency education in emergency medicine: "Residents must be taught an understanding of basic research methodologies, statistical analysis, and critical analysis of current medical literature" (31). However, if our findings that residents are not comfortable with, do not trust, or have difficulty interpreting research statistics are confirmed by larger studies, more must be done to emphasize and formalize the teaching of statistics during residency to ensure that these physicians develop the tools to evaluate literature that may affect their future practice. Additionally, what may be true of residents might also be true of emergency physicians in practice.
Steps to resolve these statistical difficulties during residency training might include grand rounds taught by statisticians and an abbreviated statistical course given during the research rotation; for EPs in general, explanations of statistical methods might be included in published studies, and statistical courses might be useful at national and local educational meetings.
LIMITATIONS
The limitations of this study include those intrinsic to a mailed survey. Although, on average, more than half the residents at the enrolled centers participated, this was a voluntary study performed at remote locations, so there was no way to ensure higher rates of participation.

As there are no previous studies examining physician understanding of or facility with statistics, it is impossible to evaluate whether our findings are typical.
CONCLUSION
Many Emergency Medicine residents surveyed do not trust, use, read, or understand statistics as they are presented in current journal articles. If these findings are confirmed by a larger study with more participants, residency programs may want to consider enhanced training in statistical and research methods.
REFERENCES
1. Gore SM, Jones IG, Rytter EC. Misuse of statistical methods: critical assessment of articles in British Medical Journal from January to March 1976. Br Med J 1977;1:85–7.
2. Hoffmann O. Application of statistics and frequency of statistical errors in articles in Acta Neurochirurgica. Acta Neurochir (Wien) 1984;71:307–15.
3. Cruess DF. Review of use of statistics in the American Journal of Tropical Medicine and Hygiene for January–December 1988. Am J Trop Med Hyg 1989;41:619–26.
4. McGuigan SM. The use of statistics in the British Journal of Psychiatry. Br J Psychiatry 1995;167:683–8.
5. White SJ. Statistical errors in papers in the British Journal of Psychiatry. Br J Psychiatry 1979;336–42.
6. Macarthur RD, Jackson GG. An evaluation of the use of statistical methodology in the Journal of Infectious Diseases. J Infect Dis 1984;149:349–54.
7. Kanter MH, Taylor JR. Accuracy of statistical methods in Transfusion: a review of articles from July/August 1992 through June 1993. Transfusion 1994;34:607–701.
8. Bland JM, Altman DG. Misleading statistics: errors in textbooks, software and manuals. Int J Epidemiol 1988;17:245–7.
9. Olsen CH. Review of the use of statistics in infection and immunology. Infect Immunol 2003;71:6689–92.
10. Avram MJ, Shanks CA, Dydus MHM, Ronai AK, Stiers WM. Statistical methods in anesthesia articles: an evaluation of two American journals during two six-month periods. Anesth Analg 1985;64:607–11.
11. Davies J. A critical survey of scientific methods in two psychiatry journals. Aust N Z J Psychiatry 1987;21:367–73.
12. Anderson B. Methodological errors in medical research. Oxford, UK: Blackwell; 1993.
13. Anthony D. A review of statistical methods in the Journal of Advanced Nursing. J Adv Nurs 1996;24:1089–94.
14. Plummer W. Screening for depression in primary care. Scientific and statistical errors should have been picked up in peer review. BMJ 2003;326:982.
15. Mcnamara DA, Grannell M, Watson RGK, Bouchler-Hayes DJ. The research abstract: worth getting it right. Ir J Med Sci 2001;170:38–40.
16. Goodman NW, Hughes AO. Statistical awareness of research workers in British anaesthesia. Br J Anaesth 1992;68:321–4.
17. Wang Q, Zhang B. Research design and statistical methods in Chinese medical journals. JAMA 1998;280:283–5.
18. Ambrosano G, Reis A, Giannini M, Pereira A. Use of statistical procedures in Brazilian and international dental journals. Braz Dent J 2004;15:231–7.
19. Hart A. Towards better research: a discussion of some common mistakes in statistical analysis. Complement Ther Med 2000;8:37–42.
20. Murphy J. Statistical errors in immunologic research. J Allergy Clin Immunol 2004;114:1259–63.
21. Buderer NMF, Plewa MC. Collaboration among emergency medicine physician researchers and statisticians: resources and attitudes. Am J Emerg Med 1999;17:692–4.
22. Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals: caveat lector? J Gen Intern Med 1998;13:753–6.
23. Altman DG. Statistical reviewing for medical journals. Stat Med 1998;17:2661–74.
24. Berdie DR, Anderson JF, Niebuhr MA. Questionnaires: design and use, 2nd edn. Metuchen, NJ: Scarecrow Press; 1986.
25. Freed M. In quest of better questionnaires. Pers Guid J 1964;43:187–8.
26. Oppenheim A. Questionnaire design and attitude measurement. New York: Basic Books Inc.; 1966.
27. Hubbard R, Little E. Promised contributions to charity and mail survey responses. Public Opin Q 1988;52:223–30.
28. Sprent P, Smeeton NC. Applied nonparametric statistical methods, 3rd edn. Boca Raton, FL: Chapman & Hall/CRC; 2001.
29. Chapman DM, Hayden S, Sanders AB, et al. Integrating the Accreditation Council for Graduate Medical Education core competencies into the model of the clinical practice of emergency medicine. Acad Emerg Med 2004;11:674–85.
30. Thomas HA, Binder LS, Chapman DM, et al. The 2003 model of the clinical practice of emergency medicine: the 2005 update. Acad Emerg Med 2006;13:1070–3.
31. Accreditation Council for Graduate Medical Education. Emergency medicine menu. Available at: http://www.acgme.org/acWebsite/navPages/nav_110.asp. Accessed May 20, 2007.