Comparing Risk Management Information: An Invitation to Disaster?

By John C. West, JD, MHA, DFASHRM, Cincinnati, OH

Abstract: While many healthcare entities want to compare their risk management data with other entities, most such projects fail. This is usually due to weaknesses in terminology and differences in work practices. In addition, virtually all data in healthcare risk management are subjective and may readily be manipulated. To benchmark their data, facilities should be careful to understand what their data represent, and how the data compare with the data generated by other facilities. This usually requires an in-depth review of work practices and procedures, coverages, and culture. Risk management data can be shared, but it must be done carefully.

Introduction

Risk management, as a discipline, is engaged in a quest to discover what is to many the Holy Grail of healthcare in the late 1990s: data that will allow an objective and quantifiable determination of the relative effectiveness of programs and interventions. The individual risk manager wants to know how well his or her program is working. Administrators and boards want to know if risk management is effective (and cost-effective). In an age where turbulence is the only stable commodity in healthcare, risk management is not immune from the plaintive cry of administration: "Is there a better way to do this?" Unfortunately, there are people, in governance, administrative, and management positions, who consider themselves "data driven," and for whom any data, regardless of reliability, are better than no data. Risk management will need to develop reliable data to satisfy these demands, or be content with the data requirements imposed on it by persons outside the discipline. The era of the traditional assessment of risk management's effectiveness by subjective and anecdotal evidence may be past.

The purpose of this article is not to attempt to dissuade risk managers from sharing data. Risk management, like any other process, can only improve if it knows what works well or what does not work at all. This recognition, however, may be elusive. Quantitative and reliable data will yield the answers that risk management needs, but the keys to such data are not as readily available as one might think, and relying on incomplete or unreliable data is often worse than having none at all. There is precious little “hard” data in risk management. By demonstrating the weaknesses in what could be considered reliable data, it is hoped that a consensus on truly reliable risk management data may be achieved. Although there are a number of aspects of risk management on which data could be gathered, this article will focus on professional liability (“malpractice”) claims data, which are, admittedly, only a small part of risk management.

Chronology of Claims Data

Risk management information can be gleaned from various points in the continuum between the incident or occurrence (collectively "incidents") and the ultimate resolution of the claim. Like all processes, it has a beginning, a middle, and an end. Since the relative reliability of the data depends on the point of capture in the continuum, it is necessary to address the chronology of the claim process and define its terms (at least for the purposes of this article) in order to standardize the discussion.

An incident is typically the beginning of the process. An incident is widely defined as "any happening that is not consistent with the routine care of a particular patient" (ref. 1), or any event that causes injury to a person or damage to property. It may be expanded to include any event that has the potential to cause injury to a person or damage to property. If a staff member becomes aware of an incident, facility policy normally requires that it be reported to risk management. This is not to say, however, that all claims begin, or even should begin, with an incident. Many claims are brought because of a poor outcome or the perception of substandard care, and there will be no discrete incident to which one can point as the beginning of the process. These are generally referred to as "first notice lawsuits." However, there may be a "background" rate of such claims that can be measured and for which a correction could be available.

At some point, an incident will make the transition to a claim. This may be, and often is, months after the incident. If it appears that the facility, or its insurer, will pay money to someone as a result of an incident, the matter may be considered a claim. For the purposes of this article, a claim will be considered a matter in which a request for compensation has been made, or in which the facility reasonably believes that such a request may be made, but in which there has been no formal filing of a Petition or a Complaint with an administrative or judicial body. A claim may be recognized through a variety of methods. Patient complaints, informal demands for damages (including the writing off of a bill or certain charges), requests for records by attorneys, or requests to toll the statute of limitations may all give rise to claims. The facility may also recognize a claim simply because the intuitions of the risk manager indicate that one may be filed.

If the facility is unable or unwilling to resolve the matter while it remains a claim, the matter may become a lawsuit. This may be a number of years after the incident. For the purposes of this article, a “lawsuit” includes any matter filed with an agency external to the facility, including administrative agencies or courts.

At some point, all such matters will be resolved in one way or another. The entity or its carrier may simply decide that the patient or claimant will take no further action, and the claim file will be closed. A payment (which may include writing off all or a part of the charges) may be made prior to suit being filed. A lawsuit can be dismissed, settled, or tried to a verdict. Dismissals and verdicts are often public information, although these are often not the final resolution (the parties agree to settle the dispute rather than undergo an appeals process). Unfortunately for the student of risk management data, most settlements are made behind closed doors and the parties keep their terms confidential, hence this information may be difficult to collect, or to externally validate once collected.

Reliability of the Data

Data Definition

There is no point in collecting data if the data are not uniform. Comparisons between facilities or programs will be ultimately futile unless there are common definitions for all of the parts of the claim resolution continuum. If one facility does not open a claim file until a written demand is received, but another opens a claim file as soon as it believes that a claim may be asserted, any comparison of the number of claim files between the two will be meaningless. Similarly, facilities in states that have statutes requiring a "notice of intent to file suit" may count these as lawsuits, but another facility may not. While precise definitions for pieces of risk management information are beyond the scope of this article (and should be developed by consensus), it must be recognized that there are distinct political ramifications inherent in developing such definitions. Entities wishing to benchmark data against each other would be well advised to begin the process by completely and thoroughly discussing their respective practices and procedures to find dissimilarities and incongruities before progressing to the discussion of data collection, analysis, and comparison.

Unfortunately, the process of defining data elements will not be simple. The adoption of definitions may alter the manner in which risk management programs are conducted, and the change may be fundamental. For example, changing a facility’s definition of “claim” may alter the way in which it handles claims, may cause its reserves to appear inflated or inadequate, may affect its insurance premium, and may render historical data meaningless when compared with current data. Most facilities have adopted programs that make sense and work for them, and they may be resistant to change. As a general rule, everyone is interested in standardization, as long as the standardized result is the same as his or her current practices.

Data Reliability

As a general rule, as one moves forward from the incident to the resolution of the claim, the data become more objective and quantifiable. Although it will never be possible to correct for all factors that may cause the data to be variable, it becomes increasingly possible to determine the actual numbers of the pieces of the puzzle as the claim progresses. A facility will never know how many incidents it had in a given month, but it should know how many incident reports it received in a given month, or how many open lawsuits it has at any given time (except for those filed but not served), or how many lawsuits it resolved in the past year.

Incidents

There are a number of levels of unreliability inherent in incident data. There is, of course, an important distinction between the number of incidents and the number of reported incidents. An incident may not be recognized as such, staff may be unaware of it, or it may simply not be reported. It may never be possible to determine, with any level of certainty, the actual number of incidents within a given facility in any given period. One commentator has stated that reported incidents constitute approximately 15% of all incidents, but this is probably speculation (ref. 2). It is possible to determine the number of reported incidents in a given period, since this is merely a function of counting the reports. It seems that it should be possible to extrapolate the total number of incidents based on the number of reported incidents, but this would require the determination of the ratio between reported and unreported incidents, which is not possible because the total number of incidents (reported plus unreported incidents) will never be known. Even if a facility could accurately calculate this ratio, this may not lead to comparable data, because the ratio may change over time or the ratio for other facilities may not be comparable.
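The extrapolation itself would be simple arithmetic if the reporting ratio were known; the following minimal sketch (hypothetical counts, taking the speculative 15% figure from ref. 2 at face value) shows both the calculation and where it breaks down:

    # Hypothetical sketch: extrapolating total incidents from reported
    # incidents. The 0.15 ratio is the speculative figure from ref. 2;
    # the whole estimate stands or falls on that unverifiable assumption.
    reported_incidents = 180        # incident reports counted this period
    assumed_reporting_ratio = 0.15  # reported / (reported + unreported)

    estimated_total = reported_incidents / assumed_reporting_ratio
    print(f"Estimated total incidents: {estimated_total:.0f}")                     # 1200
    print(f"Estimated unreported: {estimated_total - reported_incidents:.0f}")     # 1020

The arithmetic is trivial; the problem the article identifies is that nothing in a facility's own data can validate the 0.15 input.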

Assuming that appropriate denominators (ref. 3) can be used (e.g., number of reported medication errors per x number of medication doses), it is certainly possible to compare the number of incident reports generated within one facility versus the number generated in another facility. The comparability of these data may, however, be suspect. For example, if the staff of the emergency department (ED) at hospital A is extremely diligent in recognizing and reporting incidents, but the staff of the ED at hospital B is largely unaware of the need for reporting, the rate of reported incidents between the two facilities may be comparable, but the comparison will be meaningless. Any conclusions regarding the quality of care or the effectiveness of risk management that are drawn from these data alone will be useless, if not dangerous.
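As an illustration of denominator adjustment (invented figures and a hypothetical helper, not from the article), rates like those listed in ref. 3 might be computed as follows:

    # Hypothetical sketch: denominator-adjusted incident rates, so that
    # facilities of different volumes can be set side by side.
    def rate_per(events: int, denominator: int, scale: int = 1_000) -> float:
        """Events per `scale` units of activity (e.g., per 1,000 ED visits)."""
        return events / denominator * scale

    hospital_a = rate_per(events=45, denominator=30_000)  # 45 reports, 30,000 ED visits
    hospital_b = rate_per(events=12, denominator=8_000)   # 12 reports, 8,000 ED visits
    print(f"A: {hospital_a:.2f}, B: {hospital_b:.2f} reports per 1,000 ED visits")
    # Both print 1.50 -- yet if A reports diligently and B under-reports,
    # the identical rates say nothing comparable about the underlying care.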

Incident report data can often be validated by other means. For example, a comparison of ED incident reports between entities, coupled with a comparison of the number of patients who return to the respective EDs within 24 hours of treatment at the comparing entities, may assist in determining whether the number of ED incident reports is elevated due to good reporting or is inaccurately low.

Claims

A determination of the number of claims presently pending against a given healthcare facility will be highly variable depending on the facility's definition of claim or the risk manager's level of suspicion (paranoia). If one restricts the definition of claim to those matters in which a demand for damages has been made, or at least suggested, the actual number may be ascertainable and only minimally variable due to subjectivity. If one considers only those matters that have been reported to the entity's insurance carrier and on which an indemnity reserve has been placed, the number may be ascertainable, but will be highly variable between entities due to subjectivity. If one considers all matters involving past events on which money may be paid to be claims, the number may never be ascertainable.

The definitional issues can be alleviated, to some extent, by having various classes of claims. At a minimum, claims can be segregated into two major categories: those that were opened because a written or verbal demand for damages was received ("demand" or "active" claims), and those that were opened simply because the facility believed that a claim might result ("suspicion" or "potential" claims). This will allow facilities to share appropriate data by standardizing the data between the reporting facilities.

Lawsuits

As long as the facility applies a standard definition to the term "lawsuit," this number should be readily ascertainable. The rate of litigation may vary geographically or by jurisdiction, which may make it difficult to compare numbers of lawsuits or the rate of filing of lawsuits between locales or jurisdictions. This variance may cause the data to be unreliable when trying to use this number as an indicator of the quality of care provided.

Indemnity Payments

It is certainly possible to compare the amount of money paid in indemnity by or on behalf of a given entity in a given year with similar numbers from a comparable facility. It is also possible to stratify payments in terms of payments per resolved lawsuit, per closed claim, or per occurrence.

It must be remembered, however, that these payment rates are subject to variance due to a great number of factors. Few, if any, healthcare entities capture every dollar of indemnity payment that they make or that are made on their behalf. One must be careful to ensure that the data collection practices are similar between entities sharing such information. For example, many entities do not capture information on write-offs (whether total or partial) due to potential liability issues in their claims database (it may be captured elsewhere). If the write-offs are captured, are they calculated based on the total bill or only the amount recoverable from the patient's insurance carrier? Are payments made from a petty cash/slush fund to resolve liability issues captured on a claim-by-claim basis? Are payments made within a deductible or self-insured retention layer adequately captured? In short, do the entity's indemnity payments accurately reflect the cost of risk, or, at the least, are the elements captured in each entity's data comparable?

Multivariate Analysis

It may be possible to remove some of the subjectivity in the entire process of collecting and analyzing risk management data by studying various parts in relation to other parts. None of these factors is determinative, in and of itself, of the success or effectiveness of a risk management program. If a systematic approach is taken to this analysis, objective indicators may be demonstrated.

Relationship of Incident Reports to Claims. One can look at the ratio of incident reports to the opening of claim files. For example, in 1998 an entity may have received 2,500 incident reports and opened 25 claim files (1 claim file per 100 incident reports), while in 1997 the entity received 2,600 incident reports and opened 13 claim files (1 claim file per 200 incident reports). This may be an indicator of the severity of the incidents that were the subject of the reports (which can be validated if the severity of the incident is independently captured), but the analysis will suffer from a lack of temporal congruency. For example, it is entirely possible, and actually very likely, that the claim files opened in 1998 did not relate to incidents that occurred in 1998. One may look at the number of claim files opened for which an incident report had previously been filed, regardless of the lag time, but this analysis is somewhat difficult and may skew the data from more mature years as opposed to more recent years. This can be alleviated to some extent by creating a loss triangle (ref. 4), but this is also complicated and is a better historical, rather than real-time, approach to analyzing the data. Thus, this measure may not give the reviewer a reliable or a real-time analysis of the current state of the risk management program.
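A minimal sketch of this ratio, using the illustrative 1997-1998 figures above:

    # The reports-to-claims ratio from the example in the text. Note the
    # temporal-congruency caveat: the 1998 claim files mostly do not
    # arise from 1998 incidents, so the ratio is suggestive at best.
    years = {
        1997: {"incident_reports": 2_600, "claim_files": 13},
        1998: {"incident_reports": 2_500, "claim_files": 25},
    }
    for year, d in sorted(years.items()):
        ratio = d["incident_reports"] / d["claim_files"]
        print(f"{year}: 1 claim file per {ratio:.0f} incident reports")
    # 1997: 1 per 200; 1998: 1 per 100.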

One may also study the opening of claim files with relation to the existence of an incident report at the time of the incident. While there will always be a background rate of claims for which no incident report was filed (e.g., bad outcomes that occurred after the episode of care was completed), this rate should be, to some extent, measurable. However, if one measures the number of claim files opened that had an incident report previously filed, versus claim files that did not, one can test the effectiveness of the reporting program. This analysis will be more effective if one also analyzes the claims and classifies them as claims in which one would expect to have had an incident report filed, versus those in which one would not normally expect to have received an incident report. If accomplished carefully, it may be possible to compare these ratios with other facilities, and they could be used to measure the effectiveness of educational programs or other interventions on reporting.
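A sketch of that classification (invented records and field names; the two-way split mirrors the text):

    # Hypothetical sketch: reporting-program effectiveness, restricted to
    # claims where an incident report would normally have been expected.
    claims = [
        {"id": "C-01", "report_expected": True,  "had_prior_report": True},
        {"id": "C-02", "report_expected": True,  "had_prior_report": False},
        {"id": "C-03", "report_expected": False, "had_prior_report": False},  # e.g., post-discharge bad outcome
        {"id": "C-04", "report_expected": True,  "had_prior_report": True},
    ]
    expected = [c for c in claims if c["report_expected"]]
    captured = sum(c["had_prior_report"] for c in expected)
    print(f"Capture rate where a report was expected: {captured}/{len(expected)}")
    # 2/3 -- the denominator deliberately excludes claims that no
    # reporting system could have anticipated.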

Relationship of Claims to Lawsuits. There will always be lawsuits that are filed for which no claim file had previously been opened, and there will always be claim files in which no lawsuit will be filed. However, a comparison of the rates may be useful in assessing the risk management program.

For example, in a facility in which the risk manager becomes actively involved in managing claims (however they are defined, and these measurements could be taken on the various classes of claims) prior to suit, and is successful in resolving them, one would expect a low rate of claims turning into lawsuits. Examining the indemnity payment (including write-offs) per claim closed without suit, or the percent of claims closed without suit in which an indemnity payment was made, could also validate the effectiveness of active claim management. If, on the other hand, the facility opens both suspicion and demand claim files, and can compare these rates separately with another facility, an increased rate of suspicion files being closed prior to suit and a higher average indemnity payment per claim may indicate that the risk manager is being too aggressive in settling claims prior to suit.

Relationship of Incident Reports to Lawsuits. It is very difficult to synchronize incident reports and lawsuits due to the lag time in the filing of a lawsuit after an incident. However, one may be able to predict the average lag time based on the entity's experience and the statute of limitations for a given jurisdiction. Thus, in a state with a one-year statute of limitations, an entity might discover that 95% of its suits (in which there was a discernible incident) were filed between 7 months and 18 months after the incident, with the average being 11 months. Using 11 months as a reference, the entity could look at lawsuits filed in one year versus incident reports in the preceding year. This figure may be comparable between facilities, at least those within a given jurisdiction.
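A sketch of the lag estimate (all dates invented; a real analysis would draw on the entity's closed files):

    # Hypothetical sketch: estimating the incident-to-suit lag in months.
    from datetime import date
    from statistics import mean

    suits = [
        (date(1997, 3, 10), date(1998, 2, 20)),   # (incident, suit filed)
        (date(1997, 6, 2),  date(1998, 5, 11)),
        (date(1996, 11, 8), date(1997, 10, 30)),
    ]
    lags = [(f.year - i.year) * 12 + (f.month - i.month) for i, f in suits]
    print(f"Average lag: {mean(lags):.0f} months")  # 11
    # With an 11-month average lag, suits filed in year N are best set
    # against incident reports from year N-1, as the text suggests.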

Indemnity Payments per Closed Claim. This figure can be a good indicator of the effectiveness of the risk management program, especially one that handles claims aggressively. However, one must be careful to ensure that the entities that are comparing data are capturing comparable data, given the discussion above on all variables attendant to the definition of claim and the capture of indemnity payment information.

Indemnity Payments per Resolved Lawsuit. This ratio can be compared with data from other entities, as long as the data are comparable. One of the obstacles to comparing these data, at least on an individual entity level, is that the number of lawsuits resolved by an individual entity in a given year will be very small. Thus, the lawsuits resolved by one entity may be wholly dissimilar to another entity’s lawsuits. An outlier, such as a catastrophic case in which high damages were paid, can dramatically skew these numbers. This problem can be rectified, to some extent, by applying some form of “capping” (e.g., use $200,000 as the maximum payment) to the indemnity payments to reduce the skewing that large losses can cause. If the types of lawsuits are roughly comparable, or the number of lawsuits is very large, and the skewing effect of large losses can be minimized, this can be a useful comparison.
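A sketch of the capping device, using the $200,000 cap mentioned above (payment amounts invented):

    # Capping large losses to damp outlier skew.
    CAP = 200_000
    payments = [15_000, 40_000, 85_000, 60_000, 2_500_000]  # one catastrophe

    raw_mean = sum(payments) / len(payments)
    capped_mean = sum(min(p, CAP) for p in payments) / len(payments)
    print(f"Raw mean: ${raw_mean:,.0f}; capped mean: ${capped_mean:,.0f}")
    # Raw mean: $540,000; capped mean: $80,000. The capped figure
    # describes the typical lawsuit; the raw mean describes the outlier.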

Loss-Adjusting Expense per Resolved Lawsuit. Loss-adjusting expense (LAE) can be subject to a number of variables, some of which may be controlled by the entity or its insurance carrier, but many of which cannot. Hourly rates charged by defense counsel, court reporters, and expert witnesses can usually be affected to a small extent by bargaining, or to a slightly greater extent by the selection of the provider. Bills from the providers can be audited pursuant to the entity's guidelines and suspect charges eliminated. But, for the most part, LAE will be driven by the nature of the case and the going rate for services.

It is entirely possible to compare LAEs between adjusters, between law firms, or between healthcare entities or providers. However, these expenses will normally not arise out of a large number of cases and, in many respects, professional liability cases tend to have unique defense requirements. If one wished to compare LAEs for a thousand slip and fall cases, the expenses could be comparable. When one looks at LAE for an entire range of professional and general liability claims, especially when there are only a few claims to be compared, the comparability of the LAE will be diminished.

If the obstacles of hourly rate variation (e.g., the LAE could be reduced to the number of hours spent by attorneys or paralegals to resolve the matter) and the dissimilarity of claims can be overcome, the comparison of LAEs between healthcare entities may be enlightening. An entity that sets solid guidelines for outside counsel, carefully audits legal bills for conformity with its guidelines, scrutinizes the use of expert witnesses, and judiciously manages its legal resources, should show a reduced LAE per claim when compared with an entity that does not undertake such efforts.

In this, as in many aspects of this discussion, LAE should not be viewed in isolation. It is extremely easy to reduce LAE by settling claims early. If one can achieve an appropriate settlement by settling claims early, this is all to the good. If, however, early settlement of claims drives up the indemnity payment to settle the claim, the reduction in LAE is false economy. Similarly, law firm A may be able to resolve 20 cases for an average cost of $15,000 per claim, while law firm B charges an average of $20,000 per claim for the resolution of a similar number of cases. If the cases resolved by law firm A involve an average indemnity payment of $60,000, while the average indemnity payment in the cases resolved by law firm B averages $40,000, all other things being equal, a healthcare entity would save $300,000 by having law firm B handle these 20 cases.
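The law-firm arithmetic, worked explicitly with the figures from the text:

    # Total cost of resolution: LAE alone is a misleading scoreboard.
    CASES = 20
    firms = {
        "Firm A": {"lae": 15_000, "indemnity": 60_000},
        "Firm B": {"lae": 20_000, "indemnity": 40_000},
    }
    totals = {n: CASES * (d["lae"] + d["indemnity"]) for n, d in firms.items()}
    for name, total in totals.items():
        print(f"{name}: ${total:,}")
    print(f"Savings with Firm B: ${totals['Firm A'] - totals['Firm B']:,}")
    # Firm A: $1,500,000; Firm B: $1,200,000; savings: $300,000.
    # The cheaper hourly bill (Firm A's LAE) is not the cheaper firm.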

Indemnity Reserves per Claim File or Lawsuit. Indemnity reserves can be a useful device for comparing the severity of claims asserted against healthcare entities. However, this study is once again fraught with subjectivity that must be minimized or avoided altogether. It is apparent that the comparing entities must have similar definitions and categories for claims because the reserve placed on a suspicion claim will usually be far less than that placed on a demand claim. It is also imperative that the entities follow similar reserving practices because the practices vary widely across the industry. Some entities place a final reserve on a claim on opening it or within a short period thereafter, while others prefer to adjust the reserve as information about the severity of the claim is received. Some entities place a “worst case” reserve on claims in order to be covered in the event that the claim turns out to be catastrophic, while others prefer to place a realistic reserve that depicts what the entity actually expects to pay. Before any comparison of indemnity reserves is made, the comparing entities must understand the ways in which their practices are similar and the ways in which they are not.

Loss-Adjusting Expense Reserves per Claim File or Lawsuit. It is difficult to compare LAE reserves between entities because of the wide variation in practices. Some entities do not reserve for LAE at all. Some entities merely place an LAE reserve that is a set percentage of the indemnity reserve. Some entities set the LAE reserve independently based on their perception of the severity of the claim and the expense necessary to resolve it. Some entities use internal resources to litigate claims and lawsuits, hence any LAE reserve placed on a claim might not be accurate. In all cases, as the case progresses, the LAE reserve will vary with the payment of submitted legal bills, hence reserves will not necessarily indicate anything about the underlying claim. In short, there may be no practical way to benchmark LAE reserves.

Total Incurred per Claim File or Lawsuit. Total incurred may be defined, for the purposes of this article, as the total of all reserves and all payments to date. If reserves are reduced by the amount of a payment, the total incurred amount for a given claim does not vary unless the reserves are adjusted. While these data could be used for benchmarking purposes, because they mix payments and reserves they may lack the subtlety necessary to pinpoint the reasons for the loss history of the entity.
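A sketch of the identity described above (figures invented):

    # Total incurred = reserve + payments to date. If each payment draws
    # the reserve down by the same amount, the figure moves only when the
    # reserve itself is re-estimated.
    reserve, paid = 100_000, 0
    print(reserve + paid)   # 100,000

    paid += 30_000          # a $30,000 payment goes out...
    reserve -= 30_000       # ...and the reserve is reduced in step
    print(reserve + paid)   # still 100,000

    reserve += 25_000       # only an explicit reserve adjustment moves it
    print(reserve + paid)   # 125,000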

Pitfalls in Comparing Data

Even assuming that facilities have adopted common definitions for the various parts of the claim resolution process, and have standardized their procedures well enough to make them comparable, there may still be hazards associated with sharing data between them.

Controlling Occurrences, Not Reports

It is extremely difficult to measure the success or failure of a risk management program or intervention, particularly in the short term. The activities of the risk manager today may affect the current incident rate, but will not affect the number of claims for a number of months or years, and will not affect the number of lawsuits for a number of years. Since the lag time following the implementation of a program is so great, it is tempting to look for success in current measurements. One area in which success may be seen is a reduction in the number of incidents.

Unfortunately, it is very difficult to reduce the number of incidents, but extremely easy to reduce the number of incident reports. If sufficient emphasis (read "pressure") is applied to reducing the number of incidents, the response may simply be to report them less frequently. This is commonly seen when incident reports are used as the basis for disciplinary action. No one wants to incriminate himself or herself unnecessarily. It is clearly a laudable goal to reduce the number of incidents in a facility, but the facility must be extremely careful to ensure that it is reducing the number of incidents, not just the number of reports.

Improper Corrective Action(s)

If risk management data are viewed as having some empiric or absolute, rather than merely relative, value, it is entirely possible that a facility would take action based on an incorrect interpretation of the data. For example, a comparison of the incident reports generated in 100 hospitals might show that the average reporting rate is one incident report per occupied bed per month. A given facility may, however, have an incident report rate of 1.8 reports per occupied bed per month. This may mean that the hospital is experiencing too many incidents per month, or it may mean that its reporting system is very good when compared with the other facilities. The former conclusion would be bad news, but the latter may simply be indicative of a good risk management program (this may be extremely difficult to explain to administration, however). It is extremely difficult, especially when one is outside the risk management process, to determine which conclusion is correct. If the facility decides, on the basis of this information, that it needs to replace its risk management program, and the latter conclusion above was actually the correct one, it will certainly not have done itself (or its risk manager) a service.
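A sketch of that benchmark comparison (the 1.0 and 1.8 rates from the text; the bed count is invented):

    # Reports per occupied bed per month, against a pooled benchmark.
    def reports_per_bed_month(reports: int, occupied_beds: int) -> float:
        return reports / occupied_beds

    benchmark = 1.0  # pooled average across the hypothetical 100 hospitals
    ours = reports_per_bed_month(reports=540, occupied_beds=300)  # 1.8
    print(f"Our rate: {ours:.1f} vs. benchmark {benchmark:.1f}")
    # The 0.8 excess is ambiguous on its face: more incidents, or better
    # reporting? The number alone cannot say which.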

Failing to Take Corrective Action(s)

The goal of comparing data is to assist in determining where the facility's problems may lie. However, this is not the end of the matter. It is apparent that, whenever data are compared between facilities, some facilities may appear to be performing at a higher level than the others. It is entirely possible that these facilities are performing at a higher level. However, it is also possible that the facility may be lulled into a false sense of security and will fail to take action when action is warranted. Healthcare facilities must remain vigilant, even in the face of comforting statistics, to prevent injury to patients and visitors.

Law of Small Numbers

It is usually impossible to draw statistically significant conclusions regarding a process when the population of observed events (its "n") is small. Healthcare facilities traditionally attempt to "drill down" to determine the root cause of a trend, which can certainly be helpful, but which may serve to diminish its n with respect to the portion of the process being scrutinized. If the number of incidents reported in a hospital's ED normally ranges between three and five per month, does it mean that there is a trend if there are six incidents in a given month? Similarly, if a department has not reported an incident in 14 months, is it a trend when it does report one? If neither of these is considered a trend, does that mean that the departments can safely be ignored?
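One way to put a number on the six-in-a-month question is a simple probability model; the Poisson assumption below is ours, not the article's:

    # Sketch: how surprising are six ED reports in a month if the
    # long-run average is four (midpoint of the 3-5 range)?
    from math import exp, factorial

    def poisson_tail(k: int, lam: float) -> float:
        """P(X >= k) for a Poisson(lam) count."""
        return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

    print(f"P(6 or more) = {poisson_tail(6, 4.0):.2f}")  # ~0.21
    # Roughly one month in five would reach six reports by chance alone --
    # hardly evidence of a trend.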

Similarly, it is conventional wisdom that 20% of all occurrences will cause 80% of the losses. When dealing with professional liability claims, it is generally true that a very small percentage of all claims will cause an extremely disproportionate percentage of the losses. If a risk management program prevents two major claims in a given year, it will have been highly effective, even if this represents less than one percent of the facility's total reported incidents.

Conclusions and Recommendations

There are clearly any number of obstacles to implementing a system that would allow the sharing of objective, reliable, and quantifiable data, and there are any number of hazards associated with the sharing of unreliable data. Consequently, there are a few imperatives that must be followed if any project of this sort is to succeed.

The data must be consistently defined, collected, and compiled. Common definitions must be agreed on, and each facility must be willing to change its approach. At a minimum, entities must be willing to adopt a classification scheme that will properly stratify their data so that data in certain strata can be compared. The system must provide information that is truly comparative of the risk management processes at the facilities being studied, and this will only occur if comparable data are analyzed.

The participants in any study must agree to refrain from attempting to influence the data in artificial ways. For example, an entity could reduce its average indemnity payment per claim by opening a number of claim files in which it does not believe that any payment will need to be made. When these files are closed without payment, its average indemnity payment per claim file closed will go down. Similarly, an entity may fail to report the full number of claims, indemnity payments or reserves, or large verdicts in order to make its risk management program appear more favorable in the study. Such behavior has little utility in the attempt to gain valid information.
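The dilution tactic is easy to show with invented numbers:

    # Padding the claim count with files that close without payment
    # halves the average without changing how any claim is handled.
    payments = [50_000, 120_000, 30_000]   # genuine closed claims
    padded = payments + [0, 0, 0]          # three no-payment files

    print(f"${sum(payments) / len(payments):,.0f}")  # $66,667
    print(f"${sum(padded) / len(padded):,.0f}")      # $33,333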

The data must be properly normalized for geographic location, level of services offered, and volume or activity level. The sharing of data will be meaningless, and probably unacceptable, if the information cannot be reduced to a rate that is meaningful to the facility, and which allows for meaningful comparisons between facilities.

All data elements need to be adjusted for severity. A high number of incidents may not be significant if none of them resulted in significant injuries. A high number of lawsuits may not be an indicator of the effectiveness of the risk management program if the indemnity payment or reserve per claim is very low, or if it is believed that most of the lawsuits are frivolous.

No data element can be studied in isolation. The number of incident reports alone will not give a complete picture, but neither will the number of claims or lawsuits. Facilities will need to look at all of the data to determine an accurate picture of their loss exposure. For example, the effectiveness of the incident reporting system can be verified by determining the number of claims that arise from other reporting sources (such as patient complaints, patient satisfaction surveys, requests for records from attorneys, etc.) or the number of lawsuits for which no incident report was ever filed (but in which an incident occurred). The number of reported incidents that become claims can be an indicator of severity or an indicator of diligent reporting. The number of reported incidents or claims must be considered in light of the number of pending lawsuits. The number of lawsuits pending at any given time may be an indicator of the effectiveness of risk management, but this must be considered in light of the average indemnity payment or reserve per lawsuit. In short, all of the data are mere indicators of effectiveness, but none are of overriding importance alone.

One should use all of the data in the continuum between incident and resolution to validate the other, often less reliable, data in the continuum. For example, if a facility has an incident report rate that is statistically significantly higher than the average rate, but its rates of claims and lawsuits (both first notice and conventional), as well as its indemnity payments (including indemnity per resolved claim), are at or below the average rate, one may well conclude that the facility has a higher than average reporting rate, but not a higher than average incident rate.

All participants should agree to carefully examine the data that are received back from the study and make good faith efforts to determine the cause of aberrant data. If the problem is due to processes within the entity, these should be addressed. Problems with the data collection or analysis process should be evaluated to see if corrections should be made. The participants should not cavalierly dismiss the results on the grounds that their situation is different and hence the data are not comparable.

As noted at the outset of this article, the sharing of data in risk management can be an extremely valuable tool, but it can also present enormous risks. Risk management needs to carefully consider its options, and it needs to proceed with care. A system for sharing information requires commitment, careful planning, careful implementation, careful monitoring, and a complete understanding of the lessons that the information can, and cannot, teach.

References

1. Vanagunas, A.M. Systems for risk identification. In Carroll, R. (ed.) Handbook of Health Care Risk Management, 2nd edition.

2. Serb, C. The uncalculated risks: Why risk managers won't benchmark. Hospitals & Health Networks. 71(13):28-30, July 5, 1997.

3. Some possible ratios are:

- Total incidents per 1,000 adjusted (or weighted) patient days
- Medication errors per 10,000 doses dispensed
- Falls per 1,000 adjusted (or weighted) patient days
- Emergency department incidents per 1,000 ED visits
- IV incidents per 1,000 IV sets dispensed
- Surgical incidents per 1,000 surgical cases

The scale of the rate is important, but there is nothing magic about using 100, 1,000, or 10,000 in the denominator. It is simply easier to relate to rates like 325 medication errors per 10,000 doses dispensed, rather than 0.0325 medication errors per dose dispensed.

4. A “loss triangle” recognizes that losses “mature” with the passage of time. It measures levels of activity in one period versus levels of activity in another period, with the measurements taken at standard points in time. For example, the loss triangle for incidents giving rise to claims could look like the following:

[Loss triangle largely lost in extraction: rows were incident years (1994-1999), columns the years in which claim files were opened, with row totals at the right. Only the final row total (1999: 19) survived; the 1994 row and the yearly totals are restated in the text below.]

Thus, for incidents occurring in 1994, 21 claim files were opened during that year, 12 were opened in 1995, 2 in 1996, and 1 was opened in 1999, for a total of 36 claim files. If one neglects the maturation of years, it would appear that the number of claims has gone down in recent years (1994 = 36, 1995 = 33, 1996 = 37, 1997 = 35, 1998 = 28, 1999 = 19), when, in fact, the number may be very stable. It merely takes an additional two years after the year in which the incident occurred for the majority of claim files to be opened.
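To make the mechanics concrete, a loss triangle can be tallied from paired incident and file-opening years; the sketch below uses invented pairs, not the article's table:

    # Hypothetical sketch: a loss triangle of claim files opened, keyed
    # by incident year (rows) and file-opening year (columns).
    from collections import defaultdict

    pairs = [  # (incident year, year claim file opened) -- invented
        (1994, 1994), (1994, 1994), (1994, 1995), (1994, 1996), (1994, 1999),
        (1995, 1995), (1995, 1996), (1995, 1997),
        (1996, 1996), (1996, 1997),
    ]
    triangle = defaultdict(lambda: defaultdict(int))
    for incident_year, opened_year in pairs:
        triangle[incident_year][opened_year] += 1

    for incident_year in sorted(triangle):
        row = triangle[incident_year]
        cells = "  ".join(f"{y}: {n}" for y, n in sorted(row.items()))
        print(f"{incident_year} -> {cells}  (total {sum(row.values())})")
    # Reading across a row shows how a year's incidents "mature" into
    # claim files; reading only the totals column hides that maturation.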