Whither Meta-analysis?

WHILST improvement in the success rate achieved by a new method of treatment may be small, such improvement may still be worthwhile. Small gains are especially important in a disease such as cancer, which can affect many people for whom the consequence of therapeutic failure is premature death. A prospective randomised clinical trial is the best method of showing whether one treatment is better than another, but unless a large number of patients can be recruited it will lack sufficient statistical power to detect the small difference that may reasonably be expected. Thus several trials with limited patient entry may each end up failing to refute convincingly the null hypothesis, even though a real and useful difference exists between the results of the two treatments being compared. A method of overcoming this problem is to analyse the pooled results of a number of similar trials-a procedure known as an overview or meta-analysis. Cuzick and colleagues1 have now reported a good example of the technique and its associated difficulties.

Cuzick and co-workers studied the possible effect on survival of postoperative radiotherapy in breast cancer. There is little doubt that radiotherapy reduces local recurrence, but there have been suggestions2 that irradiation might adversely affect survival. The first question to be decided in undertaking an overview is which studies should be included. Obviously they should be prospective and randomised to avoid bias, and should all unambiguously address the same question-in this case the benefit or otherwise of adjuvant radiotherapy. Thus, although there might be differences between trials in the primary surgical technique used, the only difference between the two arms in any one trial should be that one received radiotherapy and the other did not.

The next important point is that, as far as possible, all trials that have been carried out should be included in the meta-analysis whether their results have been published or not. This precaution is to avoid publication bias-ie, a tendency not to publish negative results, so that the published papers will be biased in favour of showing a difference. Simes3 has lately shown this effect in trials of combination chemotherapy versus single-agent chemotherapy in advanced ovarian cancer: an overview of published trials showed a significant advantage for combination chemotherapy, but an overview of trials registered with a trial data bank (not all published) showed no clear advantage. Cuzick et al1 carried out their overview on ten trials, two of which have not yielded published results. Two further trials could not be included because of insufficient follow-up information beyond five years. It is probably best in an overview not to exclude any patients who have been randomised, even if they have been excluded by the individual trial organisers from their analyses. This approach will ensure that no bias is introduced by treatment-dependent patient exclusion, and was the procedure adopted by Cuzick and colleagues.1

The method of analysis must be such that comparisons are made only between patients within the same trial. Cuzick et al1 fulfilled this requirement by calculating the observed minus the expected deaths in each trial separately; they then obtained a summary log-rank statistic by adding these differences across trials. This method assigns more weight to the studies with large numbers, but, in the calculation of the confidence intervals, makes no adjustment for possible study-to-study heterogeneity. As discussed by Gelber and Goldhirsch,4 the confidence interval may then be underestimated.
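
As a rough illustration (not taken from the Cuzick report itself), the short Python sketch below pools hypothetical per-trial log-rank summaries in the fixed-effect manner just described: the observed minus expected deaths are summed across trials, larger trials carry more weight through their variances, and the confidence interval reflects only within-trial sampling variation, with a separate statistic gauging study-to-study heterogeneity. All trial names and numbers are invented for illustration.

# Minimal sketch of fixed-effect "observed minus expected" pooling, assuming
# that each trial supplies the log-rank observed deaths (O), expected deaths
# (E), and variance (V) for one arm.  Figures are hypothetical, not data from
# Cuzick et al.
from math import exp, sqrt

trials = {
    "Trial A": (60, 52.0, 24.0),   # (O, E, V)
    "Trial B": (85, 80.5, 38.2),
    "Trial C": (40, 33.1, 16.8),
}

sum_o_minus_e = sum(o - e for o, e, v in trials.values())
sum_v = sum(v for o, e, v in trials.values())

# Pooled log odds ratio and its standard error: comparisons are made only
# within trials, and larger trials get more weight through their variances.
log_or = sum_o_minus_e / sum_v
se = 1.0 / sqrt(sum_v)
ci_low, ci_high = exp(log_or - 1.96 * se), exp(log_or + 1.96 * se)

# Simple heterogeneity statistic, roughly chi-squared on k-1 degrees of
# freedom under homogeneity; the pooled confidence interval above takes no
# account of any excess, which is the caveat raised by Gelber and Goldhirsch.
q = sum((o - e) ** 2 / v for o, e, v in trials.values()) - sum_o_minus_e ** 2 / sum_v

print(f"pooled odds ratio {exp(log_or):.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
print(f"heterogeneity Q = {q:.2f} on {len(trials) - 1} df")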

The main finding of the overview by Cuzick et al1 was that, although there was hardly any difference in survival between radiotherapy and no radiotherapy arms in the first ten years, follow-up of the ten-year survivors showed a significant worsening of survival for the radiotherapy-treated patients. All but one trial found a difference in favour of no radiotherapy, and in three the difference was individually significant. The only difference in the other direction was in a trial in which the number of events after ten years was small and the 95% confidence interval correspondingly large. There seemed, therefore, to be no pronounced heterogeneity between trials.

It is very important in the published report of an overview that the summary outcome measures from each individual study should be presented so that readers can judge for themselves the extent of any study heterogeneity and whether the overall findings are greatly influenced by a large outlying trial. The report should also give enough detail about each trial to enable the quality of its contribution to be assessed, and the pre-defined protocol used in carrying out the overview should be clearly stated. The full report should be published with a minimum of delay once the analysis has been completed; results presented verbally at meetings may be widely cited and have great influence, but are no substitute for a comprehensive written report. It is regrettable that one of the most important overviews in the breast cancer field-the analysis of trials of adjuvant chemotherapy and tamoxifen-has so far been the subject of only a very brief published report, although the data have frequently been given at meetings. Individuals and groups who contribute their results to an overview should be prepared to give every facility for the early publication of a full account.

Interpretation of the result of an overview may not always be straightforward. When the onset dates of the individual trials span a wide range, the survival data at long follow-up will come largely from patients treated in the earlier trials. In the analysis of Cuzick et al1 the onset dates spanned twenty-five years. The finding of a significant difference after ten years in favour of no radiotherapy was therefore influenced by the results of the two earliest trials, when orthovoltage irradiation was used rather than the megavoltage irradiation of more modern practice. Another problem of interpretation concerns the magnitude of the effect found. It is common in meta-analyses to combine results from trials in which, although the difference between the two treatment arms was solely the addition or omission of the agent being tested, the treatment with which that agent was combined may have been vastly different in each trial. Thus, in the overview of adjuvant tamoxifen trials,5 there were several other treatments to which tamoxifen was added; it is possible that tamoxifen might be more beneficial when combined with one treatment than with another. If so, the estimate of the magnitude of the treatment effect found in the overview will be conservative by comparison with the optimum effect obtainable by the agent.

Overviews are undoubtedly valuable and can demonstrate a real and useful treatment effect when single trials have produced non-significant results. An important contribution has been made to resolving some of the uncertainties about the use of adjuvant treatments (radiotherapy, chemotherapy, hormone therapy) in early breast cancer. In another area, the survival benefits of long-term beta-blockade after myocardial infarction have been documented.6 Nevertheless, the possibility of overview analysis must not be allowed to discourage the organisation of large trials. One trial based on a single protocol, provided that sufficient patient numbers can be acquired, will always avoid some of the heterogeneity disadvantages inherent in an overview.

1. Cuzick J, Stewart H, Peto R, et al. Overview of randomized trials of postoperative adjuvant radiotherapy in breast cancer. Cancer Treat Rep 1987; 71: 15-29.
2. Stjernswärd J. Decreased survival related to irradiation postoperatively in early operable breast cancer. Lancet 1974; ii: 1285-86.
3. Simes RJ. Confronting publication bias: a cohort design for meta-analysis. Stat Med 1987; 6: 11-29.
4. Gelber RD, Goldhirsch A. The concept of an overview of cancer clinical trials with special emphasis on early breast cancer. J Clin Oncol 1986; 4: 1696-703.
5. Anon. Review of mortality results in randomised trials in early breast cancer. Lancet 1984; ii: 1205.
6. Yusuf S, Peto R, Lewis J, Collins R, Sleight P. Beta blockade during and after myocardial infarction: an overview of the randomised trials. Prog Cardiovasc Dis 1985; 27: 335-71.

Postgraduate and Continuing Education in Need of Orchestration

TWENTY-FIVE years to the day (Dec 16, 1961) after a memorable conference at Christ Church, Oxford,1 the UK Conference of Postgraduate Deans, in association with the National Association of Clinical Tutors, held a commemorative seminar at Green College of the same university to review the present state of postgraduate medical education and training and to put forward proposals for new initiatives. The Christ Church conference was notable for its catalytic effect on the development of postgraduate medical activities within the regions and districts of the National Health Service, and, in particular, on the growth of what is known as the postgraduate medical centre movement. The timing was opportune, the participants were influential, and objectives were defined which were practical and reasonably attainable. The aims were to provide an educational atmosphere in each NHS region, to encourage all consultants to recognise their responsibilities for training their junior staff, and to provide appropriate facilities in district hospitals for the education of all doctors in their neighbourhood. The need for postgraduate medical centres was widely accepted and a regional organisation of postgraduate deans and clinical tutors became firmly established. Proposals were also made for a national body to coordinate the three major but independent interests in postgraduate education-the royal colleges and their faculties, the university medical schools, and the NHS.

Almost all doctors in the UK now have access to a postgraduate centre within reasonable reach of their place of work, and clinical tutors have the support of an increasingly complex network of specialty tutors. Arrangements for general practice training have included appointment of regional advisers and their associates, course organisers, and Royal College of General Practitioners’ tutors; hospital-based specialties have also appointed tutors to oversee training in their disciplines. Organisational growth has brought with it increasing activity in postgraduate training and continuing education, which has changed and complicated the clinical tutors’ role. Their original major responsibilities for providing refresher courses for general practitioners have now been largely assumed by the general practice organisation, and the tutors have acquired the complex task of coordinating the many postgraduate activities within their district.

The participants in the Green College seminar, who hold senior and responsible positions in postgraduate medical education, were pleased that much had been done, but expressed dissatisfaction

1. Anon. Postgraduate medical education. Conference convened by the Nuffield Provincial Hospitals Trust. Lancet 1962; i: 367-68.