
1

Chapter 6

Experimental Studies

2

Chapter 6 Outline

6.1 Introduction
6.2 Historical perspective
6.3 General concepts
6.4 Data analysis

3

Epi Experiments (“Trials”)

Trials - from the French trier (to try)

• Clinical trial – tests therapeutic interventions applied to individuals (e.g., chemotherapy trial)

• Field trial – tests preventive interventions applied to individuals (e.g., vaccine trial)

• Community trial – tests interventions applied at the aggregate level (e.g., fluoridation of public water)

4

Illustrative Example 6.1: WHI Clinical Trial

• 40 US clinical centers
• Recruitment: 1993–1998
• Exposure randomized, double blinded: estrogen + progestin vs. identical-looking placebo
• Average follow-up: 5.2 years
• 1˚ outcome = Coronary Heart Disease (CHD)

Risk of CHD in the exposed cohort:
R1 = 164 / 8506 = 0.01928 = 19.3 per 1000

Risk of CHD in the nonexposed cohort:
R0 = 122 / 8102 = 0.01506 = 15.1 per 1000
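As a quick check of the two risks above, here is a minimal Python sketch (the variable names are illustrative only; the counts are those shown on the slide):

# Incidence proportion (average risk) = cases / number at risk
def risk(cases, n):
    return cases / n

r1 = risk(164, 8506)         # exposed (estrogen + progestin) arm
r0 = risk(122, 8102)         # placebo arm
print(round(r1 * 1000, 1))   # 19.3 CHD cases per 1000 women
print(round(r0 * 1000, 1))   # 15.1 CHD cases per 1000 women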

5

Survival curves: WHI estrogen trial (figure)

6

Illustrative Example 6.2: Vitamin A Community Trial

• 450 Sumatran villages with high childhood mortality rates

• Exposure = Vitamin A supplementation program vs. no intervention

• Random allocation of intervention: 229 treatment villages, 221 control villages

Childhood mortality rate in the exposed villages:
R1 = 53 / 10,919 = 4.9 per 1000

Childhood mortality rate in the control villages:
R0 = 75 / 10,231 = 7.3 per 1000

7

Historical Perspective (read in text)

• Biblical reference
• Van Helmont's proposal (1662)
• James Lind's scurvy experiment (1753)
• Modern trials
  – Polio trial (1954)
  – MRFIT (1982)
  – WHI (2002)

8

“Natural Experiments”

• Natural conditions that mimic an experiment
• Example: French surgeon Paré (1510–1590) ran out of boiling oil to treat wounds → was forced to use an innocuous lotion instead → noticed vastly improved results
• Not a true experiment because the intervention was not allocated by study protocol

9

Selected Concepts: Experimental Design

1. The control group (and the placebo effect)

2. Randomization & comparability

3. Follow-up and outcome ascertainment

4. Intention-to-treat vs. per-protocol analysis

10

The effects of an exposure can only be judged in comparison to what would happen in its absence

Treatment group: exposed to the intervention

Control group: not exposed to the intervention

11

Illustration: “MRFIT”

• Multiple Risk Factor Intervention Trial (1982)
• 12,855 high-risk men, 35 to 57 years old
• Randomly assigned to a multi-factor intervention (“special intervention”) group or a usual care group
• Study endpoints: Coronary Heart Disease (CHD) mortality and overall mortality
• Results described here: http://www.ncbi.nlm.nih.gov/pubmed/7050440
• No significant difference in endpoint rates
• Also, lower than expected rates in both groups
• Had no control group been used, the intervention might unjustifiably have been declared a success

12

Polio Field Trial (1954)

Polio rates (per 100,000):
• Placebo group: 69
• Refusers: 46
• Vaccinated group: 28

Had refusers been used as the control group, the effects of the intervention would have been underestimated. (Am J Pub Health, 1957, 47: 283-7)

Photo: Dr. Jonas Salk, 1953

13

The Placebo Effect

Improvements attributed to an inert intervention.

Despite popular belief, placebos have no real effect. False impressions of placebo effects can be explained by spontaneous improvement, fluctuation of symptoms, regression to the mean, additional treatment, conditional switching of placebo treatment, scaling bias, irrelevant response variables, answers of politeness, experimental subordination, conditioned answers, neurotic or psychotic misjudgment, psychosomatic phenomena, misquotation, etc. (Kienle & Kiene, 1997)

14

The Hawthorne Effect

Improvements in behavior occur because subjects know they are being observed → effects unrelated to the intervention

Initially observed in industrial psychology experiments in the 1930s

A comparable attention bias effect is seen in trials

15

Randomization and Comparability

Randomization works by balancing potential confounding factors in the treatment and control groups

→ “like-to-like” comparisons

→ differences observed at the completion of the trial are due to the treatment or to “chance”

16

Checking Group Comparability: WHI Trial

17

Follow-up & Outcome Ascertainment

• Follow-up: screening for study outcomes and confirming the outcomes as true (adjudication)

• Study outcomes are based on case definitions (uniform and valid criteria for case ascertainment)

• The importance of blinding
  – Single blinding
  – Double blinding
  – Triple blinding

18

Intention-to-treat vs. Per-protocol Analysis

• Intention-to-treat (ITT) = “analyze as randomized” (regardless of compliance)

• Per protocol (PP) = analyze only those who completed the protocol (contrast with ITT in the sketch below)

• Effectiveness = the effect under “real world” conditions (including non-compliance)

• Efficacy = the effect under ideal conditions (e.g., complete compliance)
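The grouping difference can be made concrete with a toy sketch. The participant records and field names below are hypothetical, invented purely for illustration: ITT groups people by the arm they were randomized to, while per-protocol restricts the analysis to those who completed their assigned regimen.

# Toy illustration of ITT vs. per-protocol grouping (hypothetical data)
participants = [
    {"assigned": "treatment", "completed": True,  "event": 0},
    {"assigned": "treatment", "completed": False, "event": 1},
    {"assigned": "control",   "completed": True,  "event": 1},
    {"assigned": "control",   "completed": True,  "event": 0},
]

def arm_risk(records, arm):
    # proportion of people in the given arm who experienced the event
    group = [p for p in records if p["assigned"] == arm]
    return sum(p["event"] for p in group) / len(group)

# Intention-to-treat: analyze as randomized, regardless of compliance
itt_risk = arm_risk(participants, "treatment")

# Per-protocol: restrict to those who completed the assigned protocol
completers = [p for p in participants if p["completed"]]
pp_risk = arm_risk(completers, "treatment")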

19

Human Subjects Ethics (now covered in Ch 5)

• The Belmont Report
  – Respect for individuals
  – Beneficence
  – Justice
• IRB oversight
• Data Safety Monitoring Board (DSMB)
• Informed consent
• Equipoise

20

Equipoise

• Equipoise ≡ balanced doubt
• Cannot knowingly expose a participant to harm
• Cannot withhold a known benefit from study subjects
• What's left? (Answer: equipoise)

Is equipoise the over-riding principle of trial ethics?

21

Advocacy vs. Scientific Ethics

• Advocacy, partisan, corporate, advertising, and political ethics: “Plan with the end result in mind.”

• Scientific ethics: a bending over backwards to prove oneself wrong.

“I cannot give any scientist of any age any better advice than this: The intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”
— Sir Peter Medawar

22

Simple Analysis: Relative Effect

• Data = WHI trial
• E = HRT vs. placebo
• D = CHD (yes or no)
• Average follow-up: 5.2 years

R1 = 164 / 8506 = 0.01928 = 19.28 per 1000

R0 = 122 / 8102 = 0.01506 = 15.06 per 1000

RR = R1 / R0 = (19.28 per 1000) / (15.06 per 1000) = 1.28

How to say it: HRT increased the risk of CHD by 28% in relative terms.
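A minimal sketch of the same calculation in Python (variable names are illustrative only; the counts are those shown above):

# Risk ratio (relative risk) from the WHI counts
r1 = 164 / 8506          # risk in the HRT (estrogen + progestin) arm
r0 = 122 / 8102          # risk in the placebo arm
rr = r1 / r0
print(round(rr, 2))      # 1.28 -> a 28% relative increase in CHD risk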

23

Simple Analysis: Absolute Effect

• Data = WHI trial
• E = HRT vs. placebo
• D = CHD (yes or no)
• Average follow-up: 5.2 years

R1 = 164 / 8506 = 0.01928 = 19.28 per 1000 women

R0 = 122 / 8102 = 0.01506 = 15.06 per 1000 women

RD = R1 − R0 = 19.28 per 1000 − 15.06 per 1000 = 4.22 per 1000

How to say it: In absolute terms, there were an additional 4.22 CHD cases for every thousand women using HRT over 5.2 years.
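The same figure can be checked with a short sketch (names are illustrative; counts as above):

# Risk difference from the WHI counts
r1 = 164 / 8506
r0 = 122 / 8102
rd = r1 - r0
print(round(rd * 1000, 2))   # 4.22 -> about 4.22 extra CHD cases per 1000 women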

24

Simple Analysis: Efficacy (same as the RRD but without the minus sign)

R1 = 53 / 10,919 = 0.004853 = 4.853 per 1000

R0 = 75 / 10,231 = 0.007329 = 7.329 per 1000

RR = (4.853 per 1000) / (7.329 per 1000) = 0.66

Efficacy = 1 − RR = 1 − 0.66 = 0.34

This provides a suitable taking-off point for the discussion of Rothman, K. J., Adami, H. O., & Trichopoulos, D. (1998). Should the mission of epidemiology include the eradication of poverty? Lancet, 352(9130), 810-813.

450 Sumatran villages were randomly assigned to either a vitamin A supplementation program or no intervention.

How to say it: Vitamin A supplementation was 34% effective in preventing childhood mortality.
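A minimal sketch of the efficacy calculation (illustrative names; counts from the trial above):

# Efficacy = 1 - RR, using the vitamin A trial counts
r1 = 53 / 10919            # childhood mortality, supplemented villages
r0 = 75 / 10231            # childhood mortality, control villages
rr = r1 / r0
efficacy = 1 - rr
print(round(rr, 2))        # 0.66
print(round(efficacy, 2))  # 0.34 -> roughly 34% effective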

25

Simple Analysis: Absolute Effect

450 Sumatran villages randomly assigned to either vitamin A supplementation or control.

R1 = 53 deaths / 10,919 children = 4.85 per 1000

R0 = 75 deaths / 10,231 children = 7.33 per 1000

RD = R1 − R0 = 4.85 per 1000 − 7.33 per 1000 = −2.47 per 1000

How to say it: The effect was to reduce mortality by 2.47 deaths per 1000 children over the period of observation.
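A quick numeric check (names illustrative; counts as above):

# Rate difference for the vitamin A trial
r1 = 53 / 10919
r0 = 75 / 10231
rd = r1 - r0
print(round(rd * 1000, 1))   # -2.5 -> mortality about 2.5 per 1000 lower in supplemented villages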

26

OpenEpi.com for data analysis

• “Counts” menu for incidence proportions, prevalences, and case-control data

• “Person Time” menu for rate data

• Descriptive and inferential statistics (confidence intervals and P-values)

• Can be used as a learning tool
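For the interval estimates that OpenEpi reports, a rough idea of the computation can be sketched with the standard large-sample (Katz log) 95% confidence interval for a risk ratio, shown here for the WHI counts. This is a generic textbook approximation, not necessarily the exact method OpenEpi implements; variable names are illustrative.

import math

# 2x2 counts from the WHI example: a/n1 = exposed arm, c/n0 = unexposed arm
a, n1 = 164, 8506
c, n0 = 122, 8102

rr = (a / n1) / (c / n0)                           # risk ratio
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)     # large-sample SE of ln(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(round(rr, 2), round(lo, 2), round(hi, 2))    # about 1.28 (1.02, 1.62)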

27

6.1 Bicycle helmet campaign

You want to test whether a public awareness campaign about bicycle safety at elementary schools will increase bicycle helmet use among school-aged children. To test this intervention, you identify 12 elementary schools, half of which will be randomly assigned to participate in a school-wide bicycle helmet awareness program. The other 6 schools will serve as controls and will receive no special intervention. Research assistants will determine the percentage of bicyclists wearing helmets at standard locations in the neighborhoods of each of the schools before and after the intervention.

(A) What is the unit of intervention in this study? (The “unit of intervention” refers to the level at which the intervention is randomized. This may differ from the “unit of observation,” which is the unit upon which the outcome is measured.)

(B) What is the unit of observation in this study?

(C) Even though the intervention was randomized in this study, there were only 6 treatment schools and 6 control schools. Therefore, there is a good chance that the treatment and control schools will differ with respect to important characteristics such as socioeconomic status. Can you think of a way to control for socioeconomic status through a randomization or study design approach?