The Checkbox Is Not the Patient
Jonathan A. Handler, MD, FACEP


About the Author

Dr. Jonathan A. Handler, MD, FACEP joined M*Modal in June 2012 as Chief Medical Information Officer. Dr. Handler is responsible for enhancing the company’s clinical content strategy and medical informatics expertise, and for providing physician input and perspective on the design and implementation of M*Modal’s technology solutions for the healthcare industry.

Dr. Handler is a board-certified emergency physician with twenty years of experience in Medical Informatics. In addition to more than a decade of clinical practice at academic medical centers, he previously served as the Health Solutions Group’s Chief Deployment Architect at Microsoft, Director of Azyxxi Development at the National Institute for Medical Informatics, and Director of Emergency Medicine Research and Informatics at the Northwestern University Feinberg School of Medicine, where he remains an Adjunct Associate Professor of Emergency Medicine.

Dr. Handler’s professional activities include serving as Director of Development for the National Center for Emergency Medicine Informatics, past President and past Secretary/Treasurer of the American College of Emergency Physicians Informatics Section, and a founding member of the Academic Informatics Group of the Society for Academic Emergency Medicine.

Table of Contents

Searching for the Truth

The Map Is Not the Territory

Lost in Translation: The Checkbox Is Not the Patient

Do What Comes Naturally

It's About the Outcomes

A Prescription for HIT

Time to Care


Searching for the Truth

A commonly stated mission of the EMR is to be the single source of truth. The EMR might be a single location for clinicians to access lots of patient information, but this collection of maps should not be confused with “the truth.” It is guaranteed to be incomplete, and in many places an inaccurate picture of the patient’s health – the very opposite of “the truth.” Confusing an EMR with “the truth” can create serious problems, including bad patient outcomes.

In attempting to build a “single source of truth” that supports computer-assisted reporting and decision-making, EMRs have tried to make clinicians enter data in formats that computers understand. The primary EMR tool to achieve these goals has been the checkbox and its variants (e.g. radio buttons and list boxes). It takes only a second to check a box, and computers can understand the structured data that results. The simplicity of the checkbox is its power.

Like most great tools, the checkbox and its cousins are overused. Some EMRs have nearly eliminated free text, filling the record with checkboxes instead. Unfortunately, the checkbox is an extreme abstraction, and thus more likely to be incomplete and inaccurate. For example, consider a patient who presents with dizziness. Despite your best efforts, you cannot characterize his symptoms any further. “I’m just dizzy!” he exclaims. The available checkboxes are limited to “vertigo” and “lightheadedness” because they were designed to help the computer guess the diagnosis. Which checkbox do you choose? Either one is an untruth. You could leave the checkboxes blank (if the EMR allows it), but then you have lost important data and, worse, implied that the patient has no dizziness – yet another untruth. The simple solution: add another checkbox.

This simple solution leads to the massive proliferation of checkboxes, increasing complexity and reducing usability for both people3 and computers. Why is checkbox proliferation a problem for computers? Imagine a stroke diagnosis algorithm that only understands “vertigo” and “lightheadedness.” If you add a checkbox for the “fuzzier” (more general, less granular) concept of “dizziness” then how will the algorithm use it? The need to resolve the fuzziness has simply moved downstream.

We encounter this problem of “fuzziness matching” frequently. Mapping the patient history into standard terminology codes, mapping from one terminology to another, using EMR data in decision support algorithms – all of these require fuzziness matching.
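To make the downstream problem concrete, here is a deliberately simplified sketch in Python. The symptom codes and weights are invented for illustration and are not drawn from any real terminology or decision support algorithm; the point is only that an algorithm written against "vertigo" and "lightheadedness" must still guess what to do when handed the fuzzier "dizziness."

```python
# Toy illustration (not any real EMR or terminology API): a downstream
# algorithm written against granular symptom codes must still decide what
# to do when it receives the fuzzier parent concept.

def stroke_risk_contribution(symptom_code: str) -> float:
    """Return this symptom's (made-up) weight in a toy stroke screening score."""
    weights = {"vertigo": 0.30, "lightheadedness": 0.10}
    if symptom_code in weights:
        return weights[symptom_code]
    if symptom_code == "dizziness":
        # The fuzziness was never resolved at the bedside, so the algorithm
        # must guess: average the children? take the max? treat it as missing?
        # Each choice changes the output.
        return sum(weights.values()) / len(weights)
    return 0.0

print(stroke_risk_contribution("vertigo"))    # 0.3
print(stroke_risk_contribution("dizziness"))  # 0.2 -- an arbitrary compromise
```

However the guess is made, the resolution of the fuzziness has simply moved from the clinician to the code, where it is invisible to both.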

The most widely utilized strategy to eliminate these mapping problems has been to select an approved terminology for each data type (e.g. LOINC for lab results, SNOMED-CT for clinical findings, etc.). The government even provides incentives to use approved terminologies. In the ultimate incarnation of this strategy, all documentation would be structured data entry, with each checkbox directly mapped to a government-sanctioned terminology code. As long as the clinician does all the work of translating the patient story into government-approved codes by selecting the right checkboxes, everything else “just works.”

Or does it?


This paper will discuss the limitations of a solely templated approach and how a more complete strategy that incorporates narrative is vital for addressing today's more collaborative, coordinated care models and achieving the most health from every second of care.

The Map Is Not the Territory

In 1931, Alfred Korzybski famously wrote, “The map is not the territory.” Even compared to highly detailed maps, the actual land is much bigger and its borders are far more complex. Of course, Korzybski was using maps as a metaphor for language. Our words are a representation of reality, not reality itself. This seems obvious, but people often confuse the two.1

For example, I might say, “My car is green.” More correctly, my car also has chrome mirrors, black tires, a white roof, and red taillights. If I am talking with friends about the colors of our cars, then simply saying, “my car is green” is much better than “my car is mostly green, but with some red, white, black, silver, and other colors scattered throughout.” However, if I am trying to describe my car so an artist can make a detailed drawing, then “my car is green” is not nearly good enough. As Paul Valéry wrote in 1937, “Everything simple is false. Everything which is complex is unusable.”2 We strive for the right balance in our communications, and different tasks require different maps.

Similarly, a medical history is a map of the patient’s health, not the health itself. When the patient says, “I am dizzy,” that could mean vertigo, lightheadedness, confusion, or a host of other things. Even if we pin the patient down to “vertigo,” the patient might be using the word differently than the textbook medical meaning. Even if the meanings match, we still haven’t fully characterized the vertigo. For example, we don’t yet know if the vertigo is horizontal or vertical. That’s important because the probable diagnosis differs based on the direction of the vertigo. No matter how far we dig, the history is an incomplete description of the patient’s health.

Lost in Translation: The Checkbox Is Not the Patient

The distortion of data required to force the square peg of the patient story to fit in the round hole of the radio button puts the results of any downstream process in question. As they say, “garbage in, garbage out.” Decision support based on incomplete and distorted data can be dangerous, and downstream reports can be misleading.

The most problematic part of the strategy is that real-world algorithms and reports often require data elements not found in today’s terminologies. Imagine a pneumothorax (collapsed lung) algorithm that saves lives, but it requires specification of pneumothorax size, location, number and timing of any prior pneumothoraces, and response to any prior treatment. Which terminology standard can provide all that data today in a widely accepted and/or government-sanctioned form?


Even though terminologies will improve over time, there are still two problems:

• The closer they approximate the richness of language, the more complex and unusable they become. Think how many checkboxes would be needed just to capture the pneumothorax characteristics above.

• Until the terminologies mature, the data lost in the translation into structured format can never be recovered. Consider a patient whose complaint is “a slight pain on the left side of my head just above my ear covering a dime-sized area of my scalp.” If that gets translated into the EMR as “headache,” then a tremendous amount of data has been permanently lost. It cannot be recovered later when terminologies mature enough to handle the original complaint. To paraphrase the rock group Foreigner, we’re blowing away a fortune in data and someday we’ll pay.

What can be done?

Human language (spoken and written) has evolved over millennia to become the most effective form of human communication. Major sections of the brain are specifically dedicated to this task. As long as natural language is the most commonly used and most highly effective form of communication, it will have to play a continuing role in the medical record. This role may evolve, but three trends make it likely that the role of natural language in the EMR will grow quickly:

1) Increasing recognition that EMRs filled with structured data are failing to deliver on many of their essential promises, despite major advances in computers, software, and standards.

2) Major improvements in speech recognition software (SR) and hardware.

3) Major improvements in natural language processing (NLP) that converts free text into a format that can be used by computers similar to structured data in a database.

Do What Comes Naturally

Making natural language a “first-class EMR citizen” rather than a “necessary evil” is the most important step to making the EMR a more productive citizen in our medical community. Despite some inherent weaknesses, natural language is our closest approximation to “the truth.” It is more content rich, more understandable, and more closely hewn to the mind of the clinician who authored it than any other form of communication. Natural language documentation is quickly and easily created through automated speech recognition, transcription services, clinician typing, handwriting recognition, and other mechanisms.


Some might argue that computers cannot easily understand natural language. While enabling computer systems to talk to one another may be a laudable goal of documentation, the primary purpose of medical documentation is to enable communication between clinicians, not computers. The bland, “cookie-cutter” chart produced by highly structured documentation too often fails to transmit enough information to be clinically useful or distinguish between patients. It’s costly for humans to create and cumbersome for humans to read.3-11 As long as humans remain the primary consumers of clinical documentation, documentation optimized for computer consumption at the expense of human readability is detrimental to clinical care.

Fortunately, NLP has come a long way. Computers can now readily transform the important natural language content into structured and encoded data for use by machines in much the same way as they would use data in a database field. This approach also retains the full richness of the original free-text for human consumption or computer reprocessing at any time in the future.
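As a toy illustration only (real clinical NLP goes far beyond keyword matching, and the concept labels below are invented rather than SNOMED-CT or LOINC codes), the following sketch shows the basic idea: derive structured, machine-usable facts from a narrative sentence while keeping the verbatim free text alongside them for human readers and for future reprocessing.

```python
# Deliberately simple sketch (real clinical NLP is far more sophisticated than
# keyword matching): pull structured, codable facts out of a narrative sentence
# while keeping the verbatim free text alongside them.
import re

NOTE = ("Slight pain on the left side of the head just above the ear, "
        "covering a dime-sized area of the scalp.")

# Hypothetical local concept labels -- not SNOMED-CT, LOINC, or any real terminology.
PATTERNS = {
    "finding:headache": r"\bpain\b.*\bhead\b|\bheadache\b",
    "laterality:left":  r"\bleft\b",
    "severity:mild":    r"\bslight\b|\bmild\b",
}

codes = sorted(code for code, pattern in PATTERNS.items()
               if re.search(pattern, NOTE, re.IGNORECASE))

# Both representations live side by side: machines get the codes, while humans
# (and future, better NLP) keep the full narrative.
record = {"narrative": NOTE, "codes": codes}
print(record)
```

Because the narrative is stored unchanged, nothing is thrown away when the extraction is imperfect; better extraction can always be run again later.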

The biggest concerns with NLP relate to its accuracy. An oft-heard refrain is, “If it can’t accurately and reliably populate the EMR’s structured fields for reporting and decision support, then we can’t use it.” This demonstrates a significant misunderstanding of the most important use of an EMR. The primary purpose of the EMR is to record the encounter so others can read it later – reporting and decision support are only secondary.12

Proper use of natural language speeds up documentation,13 which improves patient access to care. It more accurately and completely captures clinician intent, and is faster and better for clinicians to read, all of which improves patient care. For these reasons, data that naturally lives as free text (e.g. history, assessment, and plan) should be captured, stored, and made available as free text.

We have overestimated the value and underestimated the costs of structured data, while simultaneously underestimating the value and overestimating the costs of natural language. We are starting to see data demonstrating the limitations of structured data technologies even as data increasingly shows that natural language technologies have blossomed. Modern NLP has proven to be very accurate for many common uses.14-22 Studies have shown that NLP can extract data into structured format approaching the accuracy and reliability of humans in some cases.23 With the right design, we can combine dictation, speech recognition, free text, NLP, and checkboxes to achieve the best of all worlds.

It's About the Outcomes

“No compromise” solutions are critical because the momentum in informatics today has allowed time at the computer to steal time away from direct patient care.3 Proponents assume that migrating clinician time from the bedside to computers will be net positive for patients – but the data does not support this assumption.12, 24-27 While many studies show that clinical decision support systems (CDSS) can create small-to-moderate improvements in adherence to guidelines, most studies show zero-to-small impact on actual patient outcomes.

For example, a review in JAMA of 100 studies found no improvement in outcomes when using CDSS for diabetic care, drug selection and dosing (excluding anticoagulants), and assisted diagnosis. Small-to-moderate improvements were found in four other categories.24 Of the 52 CDSS trials that assessed patient outcomes, only 13% reported improvements. A more recent JAMIA article from July 2012 looked extensively at medication-related CDSS and concluded, “Most studies did not measure or demonstrate the impact of CCDS on patient outcomes, despite aiming to improve them.”25

Similarly, medication reconciliation is time-consuming and cumbersome, yet studies of outcomes are rare, and those that exist have not shown a benefit.26, 27

Why is there so little evidence of improvement in outcomes despite greater adherence to guidelines and best practices? Several likely reasons include:

• Human Resiliency: Even when doctors make mistakes, patients usually survive them.

• Alert Fatigue: When there are many false or minor alerts, people ignore good advice when it does come (“the boy who cried wolf”).

• Low Impact CDSS Targets: The errors that most commonly cause harm are diagnostic errors.28 These errors are not effectively covered by today’s quality efforts.24

We are eroding patient care with unproven and even disproven processes and technologies. Patients get sicker and even die while waiting for care.29 This issue is so important that the Institute of Medicine listed timeliness of care as one of the “Six Aims for Improvement” in its landmark report on healthcare quality.30 With a serious and worsening doctor shortage,31 the situation is deteriorating. Every wasted second of doctor time harms patients by reducing access to care for those waiting to be seen.

Future studies will help us better identify which best practices improve outcomes. Until then, we are forced to choose between unproven, time-consuming quality efforts and tried-and-true direct patient care – and direct patient care is losing the battle.3 The best way around the conundrum is to design quality efforts that save clinician time and improve adherence to best practice guidelines. We need to help clinicians create the most health from every second of care.


A Prescription for HIT

Our top three recommendations to maximize the impact of patient care:

1. Enable clinicians to input data faster and with higher quality by supplementing the EMR’s “point-and-click” with natural language: Instead of forcing humans to speak the computer's language of point and click, the computer should understand human language. Published data suggests that doctors who dictate are more satisfied and see many more patients than those who type or point and click in the EMR.13 NLP can convert free-text to structured data so that we can get closer to the best of all worlds. Natural language services and technologies are not a cure-all, but they are severely underutilized today.

2. Reduce clinician effort and errors through greater use of computerized automation: Make the computer do more of the work, or at least more of the preparatory work. For example, consider medication reconciliation: why make doctors enter structured data into an EMR that can't even use that information to guess that Crestor has replaced Lipitor for treating the patient’s high cholesterol? It might be easier and faster for clinicians to edit the computer’s first pass at medication reconciliation than to start from a blank slate and do all the work themselves (a toy sketch of such a first pass appears after this list). There are other examples of success using computers to semi-automate clinician work.32 Supplementing the structured dataset using NLP on natural language should make it easier to develop and apply these algorithms.

3. Improve clinician speed and performance through better processes and user interfaces (UIs): We (informaticists) are our own worst enemies on the process/UI front. In a well-intentioned effort to do the right thing, we are forcing vendors to build terrible UIs. We can start to do better with these two suggestions:

• Improve our approach to alerts: We have largely ignored the published data on alerting. The literature has identified only a few alerts that actually improve patient outcomes,36 and it tells us that most alerts are ignored.33-35 We know that clinicians who are overwhelmed with false or trivial alerts develop “alert fatigue,” which causes them to ignore important alerts. There is also published data on how to design effective alert UIs.35, 37 Well-intentioned efforts to be very liberal with alerts “just to be safe” are commonly implemented, but they are unsupported by the data and ineffective; they cost money and delay care. We need to limit interruptive alerts solely to the few things proven to enhance patient outcomes, and create non-interruptive decision support for everything else (a minimal sketch of this triage also appears after this list). Doctors will suffer fewer interruptions, and those few alerts will actually improve outcomes. Everyone wins.

• Improve our approach to time savers: Many HIT implementations eschew time savers because clinicians sometimes abuse them. During transitions of care, a single button to easily continue all prior orders is often disabled. Instead, the doctor is forced to reconsider each order through manual re-entry. Similarly, many try to limit the use of “copy/paste” in documentation (saving time by copying yesterday’s text into the new note) because clinicians sometimes fail to update the copy/pasted text with new information. We know the harm caused by making clinicians do extra work, and we have zero data to suggest that manual re-entry improves patient outcomes. “Re-do fatigue” might paradoxically make it even more unlikely that clinicians will catch an error. We have seen that the squiggly line under misspellings helps reduce the number of errors in a Microsoft Word document. Similar approaches to UIs may help us increase safety without decreasing productivity. We can enable clinicians to use common time savers with enhanced quality of care by utilizing speech-based input, and by creating UIs that highlight important material and seamlessly integrate supplemental context that enhances decision-making.
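As promised in recommendation 2, here is a minimal sketch of a computer-proposed first pass at medication reconciliation. The drug-class table is a tiny, hand-made stand-in for a real formulary or terminology service, and same-class matching is only one of many signals a production system would use; the point is that the clinician reviews a draft instead of rebuilding the list by hand.

```python
# A minimal sketch, assuming a hypothetical hand-made drug-class table (not a
# real formulary or terminology service): propose a first pass at medication
# reconciliation by pairing prior and current drugs from the same class.

DRUG_CLASS = {
    "lipitor": "statin",
    "crestor": "statin",
    "lisinopril": "ace inhibitor",
}

def first_pass_reconciliation(prior_meds, current_meds):
    """Guess which prior medications were replaced by a same-class drug."""
    proposals = []
    for old in prior_meds:
        for new in current_meds:
            old_class, new_class = DRUG_CLASS.get(old), DRUG_CLASS.get(new)
            if old != new and old_class is not None and old_class == new_class:
                proposals.append(f"{old} appears to have been replaced by {new}")
    return proposals  # shown to the clinician for confirmation, never auto-applied

print(first_pass_reconciliation(["lipitor", "lisinopril"], ["crestor", "lisinopril"]))
```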
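And here is the corresponding sketch for alert triage. The list of "outcome-proven" alert types is hypothetical and purely illustrative; the idea is that only alerts backed by outcome evidence interrupt the clinician, while everything else surfaces as non-interruptive decision support, much like the squiggly line under a misspelling.

```python
# Sketch of the alerting policy above, assuming a hypothetical, locally curated
# list of alert types with published evidence of improving outcomes: only those
# interrupt the clinician; everything else is non-interruptive decision support.

OUTCOME_PROVEN = {"anticoagulant-overdose", "severe-drug-allergy"}  # illustrative only

def route_alert(alert_type: str) -> str:
    """Decide how a decision-support finding is presented to the clinician."""
    if alert_type in OUTCOME_PROVEN:
        return "interruptive dialog"         # rare, and worth the interruption
    return "passive indicator in the chart"  # visible, but does not block work

for alert in ["severe-drug-allergy", "duplicate-vitamin-order"]:
    print(alert, "->", route_alert(alert))
```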

Time to Care

At the end of the day, it’s not about technologies, data formats, paradigms, or philosophies. It’s about doing whatever it takes to achieve the most health from every second of care. Everything we do must be oriented around this goal. To achieve this goal, we believe that the issue is not “paper vs. computer” or “structured data vs. natural language.” Rather, the issue is “direct patient care vs. everything else.” We believe that every assertion in this document naturally follows from this goal and the published data available today.

New clinical products, services, and processes should either increase clinician time at the bedside, demonstrably improve patient outcomes, or both. The data has shown that adherence to best practice guidelines is not an adequate proxy for improved patient outcomes.12, 24-27 Efforts to improve compliance with best practices that have not yet been proven to enhance patient outcomes are fine as long as they are at least net neutral on clinician time. If an unproven quality effort takes time away from direct patient care, then at least one of the following steps should be taken before deployment:

• Redesign it to be net neutral on clinician time.

• Study it to prove it enhances patient outcomes.

• Kill it.

Achieving the goal of the most health from every second of care may take work, but at least that work will be borne on our backs rather than the backs of overworked clinicians and the patients they serve. Fortunately, we have already done much of the research and technology development needed to achieve the goal – now it’s time to apply it.

M*Modal delivers innovative solutions that capture the complete patient story by facilitating clinical workflows, enabling collaboration and providing insight for improved delivery of care. M*Modal is the leading provider of interactive clinical documentation and Speech and Natural Language Understanding technology, as well as medical transcription, narrative capture and support services.

Our flexible, cloud-based technology and services convert the physician narrative into a high quality and customized electronic record to enable hospitals, clinics and physician practices to improve the quality of clinical data, as well as accelerate and automate the documentation process. Our solutions address the critical issues for the future of the healthcare industry — from EHR adoption to accurate ICD-10 coding to enhanced business analytics.


Bibliography

1. Map–territory relation. Wikipedia, The Free Encyclopedia. 2012/08/21 [cited 2012/09/09].
2. Bonini's paradox. Wikipedia, The Free Encyclopedia. 2011/10/05 [cited 2012/09/09]. Available from: http://en.wikipedia.org/w/index.php?title=Bonini%27s_paradox&oldid=454052358
3. Park SY, Lee SY, Chen Y. The effects of EMR deployment on doctors' work practices: a qualitative study in the emergency department of a teaching hospital. International Journal of Medical Informatics. 2012 Mar; 81 (3): 204-17.
4. Mellick LB. It's time to give up costly template charts. ED Management: the monthly update on emergency department management. 2002 Jul; 14 (7): 79-80, suppl 1-2.
5. Ventres W, Kooienga S, Vuckovic N, Marlin R, Nygren P, Stewart V. Physicians, patients, and the electronic health record: an ethnographic analysis. Annals of Family Medicine. 2006 Mar-Apr; 4 (2): 124-31.
6. Fritsch J. Speech Technology to Drive EHR Adoption. ExecutiveInsight. 2012/08/15 [cited 2012/09/09]. Available from: http://healthcare-executive-insight.advanceweb.com/Features/Articles/Speech-Technology-to-Drive-EHR-Adoption.aspx
7. Ramesh D. We must get past the complaints doctors have about EMRs. KevinMD.com. 2012/06/01 [cited 2012/09/09]. Available from: http://www.kevinmd.com/blog/2012/06/complaints-doctors-emrs.html
8. Sibert KS. Electronic records don't tell us stories that make cognitive sense. KevinMD.com. 2012 [cited 2012/09/09]. Available from: http://www.kevinmd.com/blog/2012/05/electronic-records-dont-stories-cognitive-sense.html
9. Goldstein J. Can Technology Cure Health Care? The Wall Street Journal. 2010/04/13.
10. Nuance Communications, Inc. Results of EHR Meaningful Use Physician Study. [cited 2012/09/09]. Available from: http://www.nuance.com/healthcare/ehr-meaningful-use-study/
11. Hartzband P, Groopman J. Off the record--avoiding the pitfalls of going electronic. The New England Journal of Medicine. 2008 Apr 17; 358 (16): 1656-8.
12. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? The New England Journal of Medicine. 2010 Mar 25; 362 (12): 1066-9.
13. Handler JA, Adams JG. In response to: Method of electronic health record documentation and quality of primary care. Journal of the American Medical Informatics Association: JAMIA. 2012 Aug 9.
14. Xu Y, Hong K, Tsujii J, Chang EI. Feature engineering combined with machine learning and rule-based methods for structured information extraction from narrative clinical discharge summaries. Journal of the American Medical Informatics Association: JAMIA. 2012 Sep 1; 19 (5): 824-32.
15. Doan S, Collier N, Xu H, Duy PH, Phuong TM. Recognition of medication information from discharge summaries using ensembles of classifiers. BMC Medical Informatics and Decision Making. 2012 May 7; 12 (1): 36.
16. Uzuner O, South BR, Shen S, DuVall SL. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association: JAMIA. 2011 Sep-Oct; 18 (5): 552-6.
17. de Bruijn B, Cherry C, Kiritchenko S, Martin J, Zhu X. Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010. Journal of the American Medical Informatics Association: JAMIA. 2011 Sep-Oct; 18 (5): 557-62.


18. Friedlin J, Overhage M, Al-Haddad MA, et al. Comparing methods for identifying pancreatic cancer patients using electronic data sources. AMIA Annual Symposium Proceedings. 2010; 2010: 237-41.
19. Savova GK, Masanz JJ, Ogren PV, et al. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association: JAMIA. 2010 Sep-Oct; 17 (5): 507-13.
20. Uzuner O, Solti I, Cadag E. Extracting medication information from clinical text. Journal of the American Medical Informatics Association: JAMIA. 2010 Sep-Oct; 17 (5): 514-8.
21. Jagannathan V, Mullett CJ, Arbogast JG, et al. Assessment of commercial NLP engines for medication information extraction from dictated clinical notes. International Journal of Medical Informatics. 2009 Apr; 78 (4): 284-91.
22. Uzuner O, Goldstein I, Luo Y, Kohane I. Identifying patient smoking status from medical discharge records. Journal of the American Medical Informatics Association: JAMIA. 2008 Jan-Feb; 15 (1): 14-24.
23. Roberts A, Gaizauskas R, Hepple M, Guo Y. Mining clinical relationships from patient narratives. BMC Bioinformatics. 2008; 9 Suppl 11: S3.
24. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA: the journal of the American Medical Association. 2005 Mar 9; 293 (10): 1223-38.
25. Stultz JS, Nahata MC. Computerized clinical decision support for medication prescribing and utilization in pediatrics. Journal of the American Medical Informatics Association: JAMIA. 2012 Jul 19.
26. Mueller SK, Sponsler KC, Kripalani S, Schnipper JL. Hospital-based medication reconciliation practices: a systematic review. Archives of Internal Medicine. 2012 Jul 23; 172 (14): 1057-69.
27. Hellstrom LM, Hoglund P, Bondesson A, Petersson G, Eriksson T. Clinical implementation of systematic medication reconciliation and review as part of the Lund Integrated Medicines Management model - impact on all-cause emergency department revisits. Journal of Clinical Pharmacy and Therapeutics. 2012 Aug 28.
28. CRICO. High Risk Areas. 2012/01/01 [cited 2012/09/09]. Available from: http://www.rmf.harvard.edu/Clinician-Resources/Article/2011/High-Risk-Areas
29. Cox L. ER Death Points to Growing Wait-Time Problem. ABC News. 2008/09/25.
30. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century (Free Executive Summary). Institute of Medicine; 2001.
31. Lowrey A, Pear R. Doctor Shortage Likely to Worsen With Health Law. The New York Times. 2012.
32. Kargul GJ, Wright SM, Knight AM, McNichol MT, Riggio JM. The Hybrid Progress Note: Semiautomating Daily Progress Notes to Achieve High-Quality Documentation and Improve Provider Efficiency. American Journal of Medical Quality: the official journal of the American College of Medical Quality. 2012 Jun 8.
33. Hsieh TC, Kuperman GJ, Jaggi T, et al. Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system. Journal of the American Medical Informatics Association: JAMIA. 2004 Nov-Dec; 11 (6): 482-91.
34. Jani YH, Barber N, Wong IC. Characteristics of clinical decision support alert overrides in an electronic prescribing system at a tertiary care paediatric hospital. The International Journal of Pharmacy Practice. 2011 Oct; 19 (5): 363-6.


35. Langemeijer MM, Peute LW, Jaspers MW. Impact of alert specifications on clinicians' adherence. Studies in Health Technology and Informatics. 2011; 169: 930-4.
36. Schedlbauer A, Prasad V, Mulvaney C, et al. What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? Journal of the American Medical Informatics Association: JAMIA. 2009 Jul-Aug; 16 (4): 531-8.
37. Seidling HM, Phansalkar S, Seger DL, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. Journal of the American Medical Informatics Association: JAMIA. 2011 Jul-Aug; 18 (4): 479-84.

World Headquarters: 5000 Meridian Boulevard, Suite 200 • Franklin, TN 37067
www.mmodal.com • [email protected] • 866-542-7253

© 2012 MModal IP LLC. All rights reserved.