Studies of Diagnostic Tests Thomas B. Newman, MD, MPH October 11, 2012


Page 1

Studies of Diagnostic Tests

Thomas B. Newman, MD, MPH

October 11, 2012

Page 2

Reminders/Announcements

OK (encouraged!) to help each other, but give credit

HW: Write down answers to as many of the problems in the book as you can (not just those assigned) and check your answers!

Homework/exam problem due by 11/15 (preferably sooner)

Final exam to be passed out 11/29, reviewed 12/6

Tom and Michael away next week at meeting of the Society for Medical Decision Making
– Screening lecture by Dr. Andi Marmor

Page 3

Overview

Common biases of studies of diagnostic test accuracy

Prevalence, spectrum and nonindependence

Meta-analyses of diagnostic tests

Checklist & systematic approach

Examples:
– Pain with percussion, hopping or cough for appendicitis
– Clinical diagnosis of pertussis

Page 4

Bias #1 Example

Study of BNP to diagnose congestive heart failure (CHF; Chapter 4, Problem 3)

Page 5

Bias #1 Example

Gold standard: determination of CHF by two cardiologists blinded to BNP

“The best clinical predictor of congestive heart failure was an increased heart size on chest roentgenogram (accuracy, 81 percent)”

Is there a problem with assessing accuracy of chest x-rays to diagnose CHF in this study?

*Maisel AS, Krishnaswamy P, Nowak RM, McCord J, Hollander JE, Duc P, et al. Rapid measurement of B-type natriuretic peptide in the emergency diagnosis of heart failure. N Engl J Med 2002;347(3):161-7.

Page 6

Bias #1: Incorporation bias

Cardiologists not blinded to chest x-ray

Probably used (incorporated) chest x-ray to make final diagnosis

Incorporation bias for assessment of chest x-ray (not BNP)

Biases both sensitivity and specificity upward

[Figure credit: ©2000 British Medical Journal Publishing Group]

Page 7

Bias #2 Example: Visual assessment of jaundice in newborns
– Study patients who are getting a bilirubin measurement
– Ask clinicians to estimate extent of jaundice at time of blood draw
– Compare with blood test

Page 8

Visual Assessment of jaundice*: Results

*Moyer et al., Arch Pediatr Adolesc Med 2000; 154:391

Sensitivity of jaundice below the nipple line for bilirubin ≥ 12 mg/dL = 97%

Specificity = 19%

What is the problem?

Editor’s Note: The take-home message for me is that no jaundice below the nipple line equals no bilirubin test, unless there’s some other indication.

--Catherine D. DeAngelis, MD
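Converting the reported sensitivity and specificity into likelihood ratios makes the editor's point concrete. A minimal Python check, using only the two figures reported above (and bearing in mind that these figures are themselves distorted by the bias discussed on the following slides):

```python
# Reported accuracy of "jaundice below the nipple line" for TSB >= 12 mg/dL
sens, spec = 0.97, 0.19

# Likelihood ratios computed from sensitivity and specificity
lr_pos = sens / (1 - spec)   # positive LR: how much a positive finding raises odds
lr_neg = (1 - sens) / spec   # negative LR: how much a negative finding lowers odds

print(round(lr_pos, 2), round(lr_neg, 2))
```

A positive finding barely moves the odds (LR+ ≈ 1.2), but the absence of jaundice below the nipple line is moderately reassuring (LR− ≈ 0.16), consistent with the editor's take-home message.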

Page 9

Bias #2: Verification Bias* – 1

Inclusion criterion for study: gold standard test was done – in this case, blood test for bilirubin

Subjects with positive index tests are more likely to get the gold standard and to be included in the study
– clinicians usually don't order a blood test for bilirubin if there is little or no jaundice

How does this affect sensitivity and specificity?

*AKA Work-up, Referral Bias, or Ascertainment Bias

Page 10

Bias #2: Verification Bias

                           TSB ≥ 12   TSB < 12
Jaundice below nipple          a          b
No jaundice below nipple       c          d

Sensitivity, a/(a+c), is biased ___.

Specificity, d/(b+d), is biased ___.

*AKA Work-up, Referral Bias, or Ascertainment Bias
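The direction of the bias can be checked numerically. Below is a minimal Python sketch; all counts and verification probabilities are hypothetical (not from the Moyer study). A full cohort's 2×2 table is thinned so that test-positives get the gold standard far more often than test-negatives, as happens when only verified subjects enter the study:

```python
# Hypothetical full-cohort 2x2 table (index test vs. true disease status)
tp, fp, fn, tn = 80, 100, 20, 800

# Hypothetical verification probabilities: test-positives almost always get
# the gold standard; test-negatives rarely do
p_verify_pos, p_verify_neg = 0.95, 0.20

# Expected counts among verified subjects (the only ones in the study)
v_tp, v_fp = tp * p_verify_pos, fp * p_verify_pos
v_fn, v_tn = fn * p_verify_neg, tn * p_verify_neg

true_sens, true_spec = tp / (tp + fn), tn / (tn + fp)
obs_sens, obs_spec = v_tp / (v_tp + v_fn), v_tn / (v_tn + v_fp)

print(f"sensitivity: {true_sens:.2f} -> {obs_sens:.2f}")  # biased upward
print(f"specificity: {true_spec:.2f} -> {obs_spec:.2f}")  # biased downward
```

With these made-up numbers, observed sensitivity rises from 0.80 to 0.95 while observed specificity falls from about 0.89 to 0.63 — the pattern seen in the jaundice study.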

Page 11

Visual Assessment of jaundice*: Results

*Moyer et al., Arch Pediatr Adolesc Med 2000; 154:391

Recall “Gold Standard” was bilirubin ≥ 12 mg/dL

Specificity = 19%. This low specificity was a clue! What does it mean?

NIH: 19% of newborns who don't have a bilirubin ≥ 12 mg/dL are not jaundiced below the nipple line

81% of babies with bilirubin < 12 mg/dL are jaundiced below the nipple line

Page 12


Does This Child Have Appendicitis? JAMA. 2007;298:438-451.

RLQ Pain: Sensitivity = 96% Specificity = 5% (1 – Specificity = 95%)

Likelihood Ratio =1.0

RLQ pain was present in 96% of those with appendicitis and 95% of those without appendicitis.
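The LR of 1.0 falls straight out of the definition; a one-line Python check using the reported figures:

```python
sens, spec = 0.96, 0.05

# LR+ = sensitivity / (1 - specificity): probability of the finding in
# disease divided by its probability in non-disease
lr_pos = sens / (1 - spec)

print(round(lr_pos, 1))  # 1.0 -> the finding carries no diagnostic information
```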

Page 13

Bias #3 Example: PIOPED study of accuracy of ventilation/perfusion (V/Q) scan to diagnose pulmonary embolism*

Study Population: All patients presenting to the ED who received a V/Q scan

Test: V/Q Scan

Disease: Pulmonary embolism (PE)

Gold Standards:
– 1. Pulmonary arteriogram (PA-gram) if done (more likely with more abnormal V/Q scan)
– 2. Clinical follow-up in other patients (more likely with normal V/Q scan)

*Blood clot in the lungs. PIOPED. JAMA 1990;263(20):2753-9.

Page 14

Double Gold Standard Bias

Also called differential verification bias

Two different “gold standards”:
– One gold standard (usually an immediate, more invasive test, e.g., angiogram, surgery) is more likely to be applied in patients with a positive index test
– Second gold standard (e.g., clinical follow-up) is more likely to be applied in patients with a negative index test.

Page 15

Double Gold Standard Bias

There are some patients in whom the two “gold standards” do not give the same answer
– Spontaneously resolving disease (positive with immediate invasive test, but not with follow-up)
– Newly occurring or newly detectable disease (positive with follow-up but not with immediate invasive test)

Page 16

Effect of Double Gold Standard Bias 1: Spontaneously resolving disease

Test result will always agree with gold standard

Both sensitivity and specificity increase

Example: Joe has a small pulmonary embolus (PE) that will resolve spontaneously.
– If his V/Q scan is positive, he will get an angiogram that shows the PE (true positive)
– If his V/Q scan is negative, his PE will resolve and we will think he never had one (true negative)

V/Q scan can't be wrong!
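A back-of-the-envelope sketch of why sensitivity rises; the cohort size, resolving fraction, and true sensitivity below are invented for illustration:

```python
# Hypothetical: 100 patients with true PE, 10 of which would resolve
# spontaneously; the V/Q scan's true sensitivity is 0.80 throughout
n_pe, n_resolving, true_sens = 100, 10, 0.80

# Single gold standard (angiogram for everyone): sensitivity measured correctly
sens_single = true_sens * n_pe / n_pe                # 0.80

# Double gold standard: scan-negative resolving PEs get follow-up instead,
# resolve, and are (wrongly) counted as true negatives -> they leave D+
fn_lost = (1 - true_sens) * n_resolving              # 2 cases vanish from D+
sens_double = true_sens * n_pe / (n_pe - fn_lost)    # 80 / 98 ≈ 0.82

print(round(sens_single, 2), round(sens_double, 2))
```

Specificity rises too: those misclassified cases land in the true-negative cell without adding any false positives.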

Page 17

Effect of Double Gold Standard Bias 2: Newly occurring or newly detectable disease

Test result will always disagree with gold standard

Both sensitivity and specificity decrease

Example: Jane has a nasty breast cancer but it is currently undetectable by biopsy
– If her mammogram is positive, she will get biopsies that will not find the tumor (mammogram will look falsely positive)
– If her mammogram is negative, she will return in several months and we will think the tumor was initially missed (mammogram will look falsely negative)

Mammogram can't be right!

Page 18

Spectrum of Disease, Nondisease and Test Results

Disease is often easier to diagnose if severe

“Nondisease” is easier to diagnose if patient is well than if the patient has other diseases

Test results will be more reproducible if ambiguous results excluded

Page 19

Spectrum Bias

Sensitivity depends on the spectrum of disease in the population being tested.

Specificity depends on the spectrum of non-disease in the population being tested.

Example: Absence of Nasal Bone (on 13-week ultrasound) as a Test for Chromosomal Abnormality

Page 20

Spectrum Bias Example: Absence of Nasal Bone as a Test for Chromosomal Abnormality*

Sensitivity = 229/333 = 69%

BUT the D+ group only included fetuses with Trisomy 21

Cicero et al., Ultrasound Obstet Gynecol 2004; 23: 218-23

Nasal Bone Absent     D+      D-     Total     LR
Yes                  229     129      358    27.8
No                   104    5094     5198    0.32
Total                333    5223     5556
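The LR column follows directly from the counts; a minimal Python check of the slide's arithmetic:

```python
# Counts from the table (D+ = Trisomy 21 only)
tp, fp, fn, tn = 229, 129, 104, 5094

sens = tp / (tp + fn)        # 229/333 ≈ 0.69
spec = tn / (tn + fp)        # 5094/5223 ≈ 0.975

lr_pos = sens / (1 - spec)   # LR for nasal bone absent
lr_neg = (1 - sens) / spec   # LR for nasal bone present

print(round(lr_pos, 1), round(lr_neg, 2))  # 27.8 0.32
```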

Page 21

Spectrum Bias: Absence of Nasal Bone as a Test for Chromosomal Abnormality

The D+ group excluded 295 fetuses with other chromosomal abnormalities (mainly Trisomy 18)

Among these fetuses, the sensitivity of nasal bone absence was 32% (not 69%)

What decision is this test supposed to help with?
– If it is whether to test chromosomes using chorionic villus sampling or amniocentesis, these 295 fetuses should be included!

Page 22

Spectrum Bias: Absence of Nasal Bone as a Test for Chromosomal Abnormality, effect of including other trisomies in D+ group

Sensitivity = 324/628 = 52% vs. 69% obtained when the D+ group only included fetuses with Trisomy 21

Nasal Bone Absent     D+                 D-      LR
Yes                   229 + 95 = 324     129    20.4
No                    104 + 200 = 304   5094    0.50
Total                 333 + 295 = 628   5223

Page 23

Quiz: What if we considered nasal bone absence as a test for Trisomy 21 (only)?

Then instead of excluding subjects with other chromosomal abnormalities or including them as D+, we should count them as D-. Compared with excluding them,

What would happen to sensitivity?

What would happen to positive predictive value?

Page 24

Quiz: What if we considered the nasal bone absence as a test for Trisomy 21?

Nasal Bone Absent     D+      D-
Yes                   229     129 + 95 = 224
No                    104     5094 + 200 = 5294
Total                 333     5223 + 295 = 5518

What would happen to sensitivity? What would happen to positive predictive value?

Compared with excluding patients with other trisomies, sensitivity is unchanged. PPV would decrease (95 more false positives) from 64% to 51%.
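The PPV arithmetic behind that answer, as a quick Python check:

```python
tp = 229                  # Trisomy 21 fetuses with absent nasal bone
fp_excluded = 129         # false positives when other trisomies are excluded
fp_as_dneg = 129 + 95     # other trisomies with absent nasal bone counted as D-

ppv_excluded = tp / (tp + fp_excluded)   # 229/358 ≈ 0.64
ppv_as_dneg = tp / (tp + fp_as_dneg)     # 229/453 ≈ 0.51

print(round(ppv_excluded, 2), round(ppv_as_dneg, 2))
```

Sensitivity is untouched because the 295 reclassified fetuses never sat in the D+ column of this version of the table.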

Page 25

Bias                   Description                                             Sensitivity is falsely …   Specificity is falsely …

Incorporation          Gold standard incorporates index test.

Spectrum               D+ only includes “sickest of the sick”;
                       D- only includes “wellest of the well”

Verification           Positive index test makes gold standard more likely.

Double Gold Standard   Disease resolves spontaneously;
                       disease becomes detectable during follow-up

Page 26

Prevalence, spectrum and nonindependence

Prevalence (prior probability) of disease may be related to disease severity

One mechanism is different spectra of disease or nondisease

Another is that whatever is causing the high prior probability is related to the same aspect of the disease as the test

Page 27

Prior probability, spectrum and nonindependence: examples

Diseases identified by screening or incidentally – higher prevalence associated with lower severity
– Prostate cancer
– Thyroid cancer

Diseases where higher prevalence is associated with greater severity
– Fe deficiency
– Higher prevalence of TB where HIV is more prevalent; TB also more severe there

Page 28

Prior probability, spectrum and nonindependence: examples

Symptoms of disease associated with the aspect of disease being tested: urinalysis as a test for UTI in women with more and fewer symptoms (high and low prior probability)*

*EBD Table 5.3, from Lachs, Ann Int Med 1992; 117:135-40

Page 29

Overfitting

Page 30

Overfitting

Choosing the best cutoff based on the data (small problem)

Choosing the best cutoffs and combination of multiple tests (big problem; covered in 2 weeks)

Page 31

Meta-analyses of Diagnostic Tests

Systematic and reproducible approach to finding studies

Summary of results of each study

Investigation into heterogeneity

Summary estimate of results, if appropriate

Unlike other meta-analyses (risk factors, treatments), results aren't summarized with a single number (e.g., RR), but with two related numbers (sensitivity and specificity)

These can be plotted on an ROC plane
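Mapping each study onto the ROC plane is just a coordinate transform: x = 1 − specificity, y = sensitivity. A minimal Python sketch (study names and numbers are invented for illustration):

```python
# Hypothetical per-study meta-analysis results: (sensitivity, specificity)
studies = {"Study A": (0.90, 0.80), "Study B": (0.75, 0.95), "Study C": (0.85, 0.85)}

# Each study becomes one point on the ROC plane
roc_points = {name: (1 - spec, sens) for name, (sens, spec) in studies.items()}

for name, (x, y) in roc_points.items():
    print(f"{name}: 1-spec = {x:.2f}, sens = {y:.2f}")
```

Scatter these points (and a fitted SROC curve) to visualize heterogeneity across studies, as the figures on the following slides do.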

Page 32

MRI for the diagnosis of MS

Whiting et al. BMJ 2006;332:875-84

Page 33

Figure 1. Graph showing the summary receiver operating characteristic (SROC) curve for the 25 stress echocardiography studies (closed diamonds) or the 50 stress nuclear scintigraphy studies (open squares).

Beattie W S et al. Anesth Analg 2006;102:8-16

©2006 by Lippincott Williams & Wilkins

SROC

Predicting post-op MI or death in elective noncardiac surgery patients

Page 34

Dermoscopy vs Naked Eye for Diagnosis of Malignant Melanoma

Br J Dermatol. 2008 Sep;159(3):669-76

Dermoscopy performed unequivocally better in 7 of the 9 studies. Can you call out the coordinates for the 2 studies for which this was not the case?

Page 35

Studies of Diagnostic Test Accuracy: Checklist

Was there an independent, blind comparison with a reference (“gold”) standard of diagnosis?

Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom we would use it in practice)?

Was the reference standard applied regardless of the diagnostic test result?

Was the test (or cluster of tests) validated in a second, independent group of patients?

From Sackett et al., Evidence-Based Medicine, 2nd ed. (NY: Churchill Livingstone), 2000, p. 68

Page 36

Systematic Approach
– Authors and funding source
– Research question
– Study design
– Study subjects
– Predictor variable
– Outcome variable
– Results & Analysis
– Conclusions

Page 37

A clinical decision rule to identify children at low risk for appendicitis (Problem 5.6)*

Study design: prospective cohort study

Subjects
– 4140 patients 3-18 years presenting to Boston Children's Hospital ED with abdominal pain
– 767 (19%) received surgical consultation for possible appendicitis
  • 113 excluded (chronic diseases, recent imaging)
  • 53 missed
– 601 included in the study (425 in derivation set)

*Kharbanda et al. Pediatrics 2005; 116(3): 709-16

Page 38

A clinical decision rule to identify children at low risk for appendicitis

Predictor variables
– Standardized assessment by pediatric ED attending
– Focus on “Pain with percussion, hopping or cough” (complete data in N=381)

Outcome variable:
– Pathologic diagnosis of appendicitis (or not) for those who received surgery (37%)
– Follow-up telephone call to family or pediatrician 2-4 weeks after the ED visit for those who did not receive surgery (63%)

Kharbanda et al. Pediatrics 116(3): 709-16

Page 39

A clinical decision rule to identify children at low risk for appendicitis

Results: Pain with percussion, hopping or cough

78% sensitivity and 83% NPV seem low to me. Are they valid for me in deciding whom to image?

Kharbanda et al. Pediatrics 116(3): 709-16

Page 40

Checklist

Was there an independent, blind comparison with a reference (“gold”) standard of diagnosis?

Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom we would use it in practice)?

Was the reference standard applied regardless of the diagnostic test result?

Was the test (or cluster of tests) validated in a second, independent group of patients?

From Sackett et al., Evidence-Based Medicine, 2nd ed. (NY: Churchill Livingstone), 2000, p. 68

Page 41

In what direction would these biases affect results?

Sample not representative (population referred to pedi surgery)?

Verification bias?

Double-gold standard bias?

Spectrum bias?

Page 42

For children presenting with abdominal pain to SFGH 6-M

Sensitivity probably valid (not falsely low)
– But whether all of the kids in the study tried to hop is not clear

Specificity probably low

PPV is too high

NPV is too low

Does not address surgical consultation decision

Page 43

Does this coughing patient have pertussis?*

RQ (for us): what are the LRs for coughing fits, whoop, and post-tussive vomiting in adults with persistent cough?

Design (for one study we reviewed**): prospective cross-sectional study

Subjects: 217 adults ≥ 18 years with cough 7-21 days, no fever or other clear cause for cough, enrolled by 80 French GPs.
– In a subsample from 58 GPs, of 710 who met inclusion criteria only 99 (14%) enrolled

*Cornia et al. JAMA 2010;304(8):890-896
**Gilberg S et al. J Inf Dis 2002;186:415-8

Page 44

Pertussis diagnosis

Predictor variables: “GPs interviewed patients using a standardized questionnaire.”

Outcome variable: laboratory evidence of pertussis based on any of:
– Culture (N=1)
– PCR (N=36)
– ≥ 2-fold change in anti-pertussis toxin IgG (N=40)
– Total N = 70/217 with evidence of pertussis (32%)

*Gilberg S et al. J Inf Dis 2002;186:415-8

Page 45

Results

89% in both groups (with and without laboratory “evidence of pertussis”) met CDC criteria for pertussis*

*Gilberg S et al. J Inf Dis 2002;186:415-8

Page 46

Issues

Verification bias: only 14% of eligible subjects included– Subjects with more pertussis symptoms

probably more likely to be included Questionable “gold standard”

Page 47

What is wrong with this picture?

Outcome variable: evidence of pertussis based on any of:
– Culture (N=1)
– PCR (N=36)
– ≥ 2-fold change in anti-pertussis toxin IgG (N=40)
– Total N = 70/217 with evidence of pertussis (32%)

Protocol apparently included serologic tests and PCR on all, but culture only if it could be plated in < 4 hours

Not much overlap!

Page 48

Issues

Correlation between serologic and PCR pertussis tests (derived from Table 1 of Gilberg et al.*)

. tab PT PCR [fw=pop]

                       PCR
PT_IgG_change |    POS     NEG |   Total
--------------+----------------+--------
          POS |      6      30 |      36
          NEG |     34      53 |      87
--------------+----------------+--------
        Total |     40      83 |     123

*Gilberg S et al. J Inf Dis 2002;186:415-8

Page 49

Issues

Nice illustration of difficulty doing a systematic review!

Important take-home message: you can't judge study quality only by looking at the methods! You need to look at results, too!

. kap PT PCR [fw=pop]

             Expected
Agreement    Agreement     Kappa   Std. Err.       Z    Prob>Z
--------------------------------------------------------------
  47.97%       57.25%    -0.2171      0.0899    -2.41    0.9921
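The negative kappa can be reproduced by hand from the 2×2 table; a short Python sketch of the standard Cohen's kappa calculation:

```python
# Agreement table between serology (PT IgG change) and PCR, from Table 1
a, b, c, d = 6, 30, 34, 53        # PT+/PCR+, PT+/PCR-, PT-/PCR+, PT-/PCR-
n = a + b + c + d                 # 123

observed = (a + d) / n            # observed agreement ≈ 47.97%

# Expected agreement under independence, from the row/column marginals
expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # ≈ 57.25%

kappa = (observed - expected) / (1 - expected)              # ≈ -0.2171

print(f"observed {observed:.2%}, expected {expected:.2%}, kappa {kappa:.4f}")
```

Agreement between the two "gold standard" components is actually worse than chance (kappa < 0) — the "not much overlap" problem flagged on the previous slide.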

Page 50

Table 1 from paper

*Gilberg S et al. J Inf Dis 2002;186:415-8

Page 51

Questions?

Page 52

Additional slides

Page 53

Double Gold Standard Bias: effect of spontaneously resolving disease

             PE +   PE -
V/Q Scan +     a      b
V/Q Scan -     c      d

Sensitivity, a/(a+c), biased __
Specificity, d/(b+d), biased __

Double gold standard compared with immediate invasive test for all

Double gold standard compared with follow-up for all

Page 54

Double Gold Standard Bias: effect of newly occurring cases

             PE +   PE -
V/Q Scan +     a      b
V/Q Scan -     c      d

Sensitivity, a/(a+c), biased __
Specificity, d/(b+d), biased __

Double gold standard compared with PA-Gram for all

Double gold standard compared with follow-up for all