
DISCUSSION: Errors, Misunderstandings, and Fraud

George W. Williams, PhD Merck Research Laboratories, West Point, Pennsylvania

ABSTRACT: Although considerable attention has recently focused on dramatic instances of misconduct in clinical trials, a larger impact on science may come from inadequate experiment design and poor execution. Although it may be impossible to avoid all errors and inconsistencies in data resulting from clinical trials, careful attention should be devoted during the design, conduct, and analysis stages of clinical trials to minimize these problems. In terms of design, issues such as randomization, sample size, and blinding need to be considered. Protocols should be realistic in the amount of data to be collected and the ability of patients and investigators to comply. In terms of conduct, investigators need to be carefully selected. Trials require adequate funding and resources and well-trained investigators, data managers, and laboratory personnel. Data monitoring needs to be implemented early in the course of a clinical trial so that corrective actions can be taken. Auditing of clinical trial data against source documents can aid in the identification of errors and fraud but is quite expensive when done for large percentages of the collected data. At the analysis stage, intention-to-treat approaches to analyses can reduce bias. Despite all efforts to avoid problems of errors, misunderstanding, and fraud, such problems can arise. They must be addressed promptly and their impact evaluated. Controlled Clin Trials 1997;18:661-666 © Elsevier Science Inc. 1997

KEY WORDS: Design, quality control, auditing, analysis

INTRODUCTION

As noted by Alberts and Shine in a 1994 paper in Science [1], “The scientific research enterprise is built on a foundation of trust: trust that the results reported by others are valid.” Unfortunately, over the last several years, public attention has focused on some dramatic instances of misconduct: occasional cases of fabrication and plagiarism. For example, there have been investigations concerning a 1986 paper in Cell co-authored by David Baltimore that created the celebrated “Baltimore Case,” in which a coauthor of the renowned scientist was accused of fabricating data. Considerable attention has been given to the fabrication of data by Dr. J.R. Darsee at Harvard. More recently, in March 1994, the Chicago Tribune published an account of fraud in the National Surgical Adjuvant Breast and Bowel Project (NSABP).

Address reprint requests to: George W. Williams, PhD, Merck Research Laboratories, BL 3-2, West Point, PA 19486.

Received August 19, 1996; revised January 12, 1997; accepted January 28, 1997.

Controlled Clinical Trials 18:661-666 (1997) © Elsevier Science Inc. 1997, 655 Avenue of the Americas, New York, NY 10010

0197-2456/97/$17.00 PII S0197-2456(97)00001-9

As Smigel notes in a 1994 article in the Journal of the National Cancer Institute [2], “From this one event, the validity of lumpectomy plus radiation as treatment for early-stage breast cancer became an issue, the NSABP chairman was asked to step down, and a massive audit of all cases in the trial was undertaken by both the National Cancer Institute and NSABP itself. The NSABP restructured itself and the NCI renovated its auditing enforcement and other clinical trials accountability measures.”

However, as noted in a 1993 paper in the Journal of the Royal College of Physicians [3], the conduct of most clinical research is honest and honorable. Occasionally, however, the sponsor of a clinical study may be faced with data that are suspect. Such data might or might not be fraudulent. Lederberg, in a 1995 article in Science [4], notes that a larger toll on the scientific enterprise is extracted by inadequate experiment design and sloppy execution. It was noted in a session at the recent Society for Clinical Trials meeting in Seattle that clinical trials will always be imperfect and that there will be errors and inconsistencies. Patients will not follow protocols perfectly.

David Harrington noted at the recent annual meeting of the American Statistical Association (ASA) [5] that there is a profound lack of understanding of clinical trials at all levels; considerable education of study investigators, patients, sponsors, and the public is needed. I am confident that this session will aid us in clarifying the distinctions among errors, misunderstandings, and fraud in clinical trials and in suggesting methods for avoiding these problems.

In this session, Dr. David DeMets has clarified the distinctions among fraud, errors, incompetence, misunderstanding, and bias by reference to several specific examples from clinical trials. He has noted the robustness of clinical trials to many, but not all, of these problems. In all of these areas he has emphasized the importance of early detection of problems so that corrections can be made as soon as possible, and the importance of minimizing the occurrence of these problems through training and study design. When problems do occur, however, Dr. DeMets has stressed that it is critical to report what happened and to evaluate the impact on the validity of the trial.

In this session, Dr. Robert Califf has suggested the importance of appropriately large trials focused on key questions. He has also emphasized an appropriate balance between answering the key study questions and devoting resources to quality control. He has presented very interesting data on costs.

I would like to focus my comments by underscoring some of the ways to prevent, detect, and remedy the problems of errors, misunderstandings, and fraud in clinical trials that have been mentioned by Drs. DeMets and Califf.

The best approach to dealing with these problems in clinical trials is to prevent them in the first place. Such prevention should begin at the design stage of a clinical trial.

DESIGN

As Dave DeMets has noted in this session, fundamental study design features such as randomization and blinding are important for avoiding systematic error or bias. Moreover, trials should be appropriately designed in terms of sample size to accommodate random variation.
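Dr. DeMets' point about sample size can be made concrete with a short sketch. The code below applies the standard normal-approximation formula for comparing two proportions; the event rates, significance level, and power shown are illustrative assumptions only, not figures from any trial discussed here.

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for detecting a difference
    between two proportions (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative event rates only: 20% on control versus 15% on treatment.
print(two_proportion_sample_size(0.20, 0.15))  # about 900 per group
```

Note how quickly the required sample size grows as the difference to be detected shrinks; this is one reason underpowered trials are so vulnerable to random variation.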


Protocols should be realistic in terms of patients’ and investigators’ ability to comply. Overly complex protocols can be expected to lead to errors in execution. As Dave DeMets noted at the International Biometric Society Eastern North American Region (ENAR) spring meeting [6], we collect too much data “just in case” the FDA might ask us a question. Peto [7] has advocated large but simple trials in order to focus on the critical primary questions to be addressed by the trial and to accommodate random variation.

Patient eligibility definitions should be simple. Clear definitions of entry and diagnostic criteria and methodology need to be written so that they can be applied consistently, improving the reliability of measurements. Rob Califf referred to some of the problems with reliability.

Let us now turn our attention to the conduct of clinical trials, where problems can likewise be minimized.

CONDUCT

Careful attention should be given to the qualifications of investigators in the selection of clinical sites and central facilities for participation in the clinical trial. Many factors need to be considered in the selection of investigators, including their expertise, scientific training, ability to recruit patients, and willingness to conform to the protocol. Investigators need to have sufficient time to devote to the trial. Conflict of interest (real or apparent) on the part of all participants should be avoided. As Meinert has noted in his textbook [8], a staff that appreciates honesty and integrity is the first step in assuring such characteristics of the trial itself.

Dr. Harlan noted during a Johns Hopkins course on fraud in clinical trials [9] that it is important to set realistic goals for investigators regarding recruitment, randomization, and resources needed.

There simply must be adequate funding and time for investigators, data management staff, and central laboratory staff. Dr. Califf has emphasized these resource constraints.

Endpoint committees, central reading centers, or laboratories can be quite helpful in standardization and should be blinded to study treatment codes. Central randomization with checks on patient eligibility can reduce the enrollment of ineligible patients. Study forms need to be well designed and pretested. Manuals of operations need to be prepared.
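To illustrate how central randomization with an eligibility gate might be organized, consider the following minimal sketch. The entry criteria, block size, and treatment codes are hypothetical; a production system would add stratification, audit logging, and secure treatment-code handling.

```python
import random

# Hypothetical entry criteria for illustration only; a real protocol
# defines these precisely.
def is_eligible(patient):
    return (18 <= patient["age"] <= 75
            and patient["diagnosis_confirmed"]
            and not patient["prior_treatment"])

class CentralRandomizer:
    """Central randomization sketch: assignments come from pre-generated
    permuted blocks, and none is released until eligibility is confirmed."""

    def __init__(self, block_size=4, seed=19970128):
        self.rng = random.Random(seed)
        self.block_size = block_size
        self.queue = []

    def _next_block(self):
        block = ["A", "B"] * (self.block_size // 2)
        self.rng.shuffle(block)  # permuted block keeps the arms balanced
        return block

    def assign(self, patient):
        if not is_eligible(patient):
            raise ValueError("patient does not meet entry criteria")
        if not self.queue:
            self.queue = self._next_block()
        return self.queue.pop(0)

randomizer = CentralRandomizer()
patient = {"age": 54, "diagnosis_confirmed": True, "prior_treatment": False}
print(randomizer.assign(patient))  # "A" or "B"
```

The essential design point is that the eligibility check happens before any assignment is released, so an ineligible patient never receives a treatment code.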

Training of investigators, data managers, and laboratory personnel is critical and should include appropriate attention to guidelines for good clinical practice. Training sessions and certification procedures will promote standardization and minimize error. Dave DeMets has appropriately emphasized the importance of periodic retraining and recertification. With regard to the logistics of training, it may be useful to consider the number of clinical centers included in a multicenter trial, or at least to consider the advantages of a sufficiently large number of patients per site.

Data monitoring aids in catching errors early in the course of a clinical trial so that corrective actions may be taken before such errors propagate throughout the trial. Data monitoring should be initiated as early as possible in the trial and should include clinical centers, central laboratories, etc. The data coordinating center should monitor data for consistency among clinics and variability within clinics. Study forms should be monitored for completeness, internal consistency, consistency with other forms, and consistency over time. Analyses of the percentage of missing values by institution can be helpful, as the rate of missing data frequently reflects the overall quality of the data. Completeness of follow-up information is particularly important. In general, examining institutional differences can be used as a way of assessing data quality and revealing problem areas.
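A simple sketch of one such check, the percentage of missing values by institution, follows; the field names and records are invented for the example.

```python
# Toy case-report-form records; None marks a missing field value.
records = [
    {"site": "01", "sbp": 142, "weight": 81.0, "lab_ldl": None},
    {"site": "01", "sbp": 128, "weight": None, "lab_ldl": 130},
    {"site": "02", "sbp": None, "weight": 74.5, "lab_ldl": None},
]

def missing_rate_by_site(records, fields):
    """Percentage of missing field values per clinical site."""
    totals, missing = {}, {}
    for rec in records:
        site = rec["site"]
        totals[site] = totals.get(site, 0) + len(fields)
        missing[site] = missing.get(site, 0) + sum(rec[f] is None for f in fields)
    return {site: 100.0 * missing[site] / totals[site] for site in totals}

print(missing_rate_by_site(records, ["sbp", "weight", "lab_ldl"]))
# roughly {'01': 33.3, '02': 66.7}: site 02 merits a closer look
```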

Data monitoring within a pharmaceutical company consists of ongoing data management review and involves both field staff and in-house staff who are blinded to treatment assignment.

Visits to study sites provide the opportunity to observe clinic procedures and to check consistency of data with source documents. As Cohen notes in a 1994 Science article [10], the most rigorous check of data in a multi-site study is auditing at the trial site by comparison of the trial’s case report forms with original patient records. On-site auditing can seek out sloppiness, carelessness, and fraud by comparing the patient’s clinical record with the data entered in the trial. However, as Rob Califf mentioned and as Steve George noted at the recent ASA conference [5], the audit process is limited in its ability to detect fraud since, in many NIH trials, only approximately 10% of study forms are checked against source documents. In the pharmaceutical industry, this percentage is higher.
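The roughly 10% checking rate mentioned above amounts to drawing a random sample of case report forms for source-document verification; a minimal sketch follows, with hypothetical form identifiers. Weighting the sample toward endpoint-critical forms, as Meinert suggests below, would be a natural refinement.

```python
import random

def select_audit_sample(form_ids, fraction=0.10, seed=1994):
    """Simple random sample of case report forms for on-site
    source-document verification."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    k = max(1, round(fraction * len(form_ids)))
    return sorted(rng.sample(form_ids, k))

form_ids = ["CRF-%04d" % i for i in range(1, 501)]
print(select_audit_sample(form_ids))  # 50 of the 500 forms
```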

In addition, quality assurance audits are done at selected sites and are usually performed at the completion of the trial. There is a review of computer files and written reports as compared to patient charts and a review of compliance with regulations and procedures.

As Meinert noted during the Johns Hopkins course [11], auditing has a high false positive rate (i.e., lots of discrepancies but few indications of fraud). Most discrepancies do not affect conclusions. Meinert suggests that audits should address the data that are most important to the conclusions of the trial.

Data monitoring boards are frequently established to aid in data monitoring when severe adverse events are expected or life-threatening endpoints are being studied. In some trials, independent groups have been established to verify study results, as in the Persantine-Aspirin Reinfarction Study (PARIS) [12].

To summarize these comments on data monitoring and auditing, comments made by Clarke in a textbook on clinical data management [13] are relevant: “Defining, setting up, and applying validation is a time-consuming and labor-intensive process, so it is important to assess the value of the effort put into validation against the resulting improvement in the data. . . . Validation should ease when there is a fair degree of certainty that any remaining errors will not materially affect the results or conclusions of any analyses or the reliability of the trial.”

ANALYSIS

Although I will not concentrate on analysis issues, it is important to note, as Dave DeMets has done, the benefit of intention-to-treat analyses to avoid bias. Moreover, appropriate analytical approaches are essential for valid inference from a clinical trial. These analytical approaches should be described prospectively in a well-developed data analysis plan.
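The intention-to-treat principle can be illustrated with a toy comparison of event rates grouped by randomized arm versus by treatment actually received; the records below are invented for the example.

```python
# Toy records: randomized arm, treatment actually received, outcome (1 = event).
patients = [
    {"randomized": "A", "received": "A", "event": 0},
    {"randomized": "A", "received": "B", "event": 1},   # crossover
    {"randomized": "B", "received": "B", "event": 1},
    {"randomized": "B", "received": None, "event": 0},  # never treated
]

def event_rates(patients, key):
    """Event rate per group, grouping patients by the given key."""
    groups = {}
    for p in patients:
        arm = p[key]
        if arm is None:
            continue  # grouping by "received" silently drops this patient
        n, events = groups.get(arm, (0, 0))
        groups[arm] = (n + 1, events + p["event"])
    return {arm: events / n for arm, (n, events) in groups.items()}

print(event_rates(patients, "randomized"))  # intention-to-treat comparison
print(event_rates(patients, "received"))    # "as-treated", prone to bias
```

Grouping by randomized arm preserves the comparability that randomization created; grouping by treatment received discards patients and reshuffles them in ways that can be related to prognosis.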


Despite all of the above efforts to avoid problems resulting from errors, misunderstandings, and fraud, such problems can arise and must be addressed.

At the recent ASA meeting, a panel chaired by Dr. Zelen [5] responded to the question of what should be done if one is notified of the possibility that fraud has occurred. There was a clear consensus that the response should be swift and clear. A team should be dispatched to the site quickly. Dave DeMets has emphasized the need for quick but careful assessments, as well.

The impact of such problems must be evaluated, and reanalysis of the clinical trial excluding the data that are in question may be necessary. When one learns that data are incorrect in a published paper, the question of the appropriate method for informing the scientific community needs to be addressed.

SUMMARY

As noted in the textbook by Buyse et al. on cancer clinical trials [14], “Quality must be consciously built into the scientific process. It will not happen by chance.” We know from well-developed clinical trial methodology how to build quality into clinical trials. However, we must keep an appropriate balance between answering the key research question and devoting resources to quality control at the various stages of research.

REFERENCES

1. Alberts B, Shine K. Scientists and the integrity of research. Science 1994;266:1660-1661.

2. Smigel K. Top cancer-related news stories focus on fraud, breast cancer, and the hope of early detection. J Natl Cancer Inst 1995;87:12-14.

3. Association of the British Pharmaceutical Industry. Fraud and malpractice in the context of clinical research. J Roy Coll Phys London 1993;27:45-46.

4. Lederberg J. Sloppy research extracts a greater toll than misconduct. Science 1995;9:13.

5. Zelen M, Carbone P, Harrington D, et al. Data integrity and fraud in cancer clinical trials: what are the options? Panel Discussion. Joint Statistical Meetings, Orlando, Florida, August 14, 1995.

6. DeMets DL. Data integrity in clinical trials: looking back at 25 years. Presentation at International Biometric Society Eastern North American Region Spring Meeting, Birmingham, Alabama, March 28, 1995.

7. Peto R. Statistics of chronic disease control. Nature 1992;356:557-558.

8. Meinert CL. Clinical Trials: Design, Conduct, and Analysis. New York: Oxford University Press; 1986.

9. Harlan WR. Fraud in Clinical Trials: Prevention, Detection, and Consequences. Johns Hopkins Medical Institutions, Baltimore, MD, June 1995.

10. Cohen J. Clinical trial monitoring: hit or miss? Science 1994;264:534-537.

11. Meinert CL. Fraud in clinical trials: Introduction; Quality assurance: what is it and who does it?; Record auditing: what is it and what does it produce? Fraud in Clinical Trials: Prevention, Detection, and Consequences. Johns Hopkins Medical Institutions, Baltimore, MD, June 1995.


12. Persantine-Aspirin Reinfarction Study Research Group. Persantine and aspirin in coronary heart disease. Circulation 1980;62:449-461.

13. Clarke PA. Data validation. In: Rondel RK, Varley SA, Webb LF, eds. Clinical Data Management. New York: John Wiley and Sons; 1993.

14. Buyse ME, Staquet MJ, Sylvester RJ. Cancer Clinical Trials: Methods and Practice. New York: Oxford University Press; 1984.