
STATISTICS IN MEDICINE, VOL. 12, 191-192 (1993)

EDITORIAL

In 1946 a small, wartime mosquito-eradication effort of World War II was turned into what is today the Centers for Disease Control. This effort eventually became the agency of the Public Health Service that spearheaded attempts to prevent diseases such as malaria, polio, smallpox, toxic shock syndrome, Legionnaires' disease and, more recently, AIDS. Early on, most efforts focused on prevention and control of unnecessary morbidity and mortality from infectious diseases of public health importance. These responsibilities have expanded over the years to include contemporary threats to health, such as injury, environmental and occupational hazards, and behavioural risks.

Demonstration of the efficacy of a public health programme is essential before we can implement prevention or intervention strategies on a broad basis. It was surely less complicated to design and carry out programmes to test the effectiveness of a vaccine to prevent measles or polio than it was to assess the long-term health effects of environmental and occupational hazards. Determining the long-term effects of smoking, lack of exercise, stress and workplace hazards often requires years of study.

The evaluation of a prevention or intervention strategy is a complex undertaking, often confounded by many uncontrollable, external factors. Administrative, political and economic obstacles are usually present. In most cases, it is critical to distinguish two levels of evaluation. The first addresses control, change or prevention of disease in individuals, while the second concerns prevention or control in the general population. The two approaches require different study designs and analyses. At the first level we may need a trial to determine the efficacy of a drug, vaccine or other intervention at the individual level. The second level is to show that we can control or prevent disease, alcohol or other drug misuse, or other risk behaviour in a community.
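As a concrete illustration of the first, individual level of evaluation, the following is a minimal sketch of estimating vaccine efficacy from a two-arm trial using the standard definition, efficacy as one minus the ratio of attack rates. The function name and all counts are hypothetical and invented for illustration; they are not drawn from the Symposium.

```python
# Minimal sketch: vaccine efficacy from a hypothetical two-arm trial.
# All counts are invented for illustration only.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_unvaccinated, n_unvaccinated):
    """Vaccine efficacy as 1 minus the relative risk (ratio of attack rates)."""
    risk_vaccinated = cases_vaccinated / n_vaccinated
    risk_unvaccinated = cases_unvaccinated / n_unvaccinated
    return 1.0 - risk_vaccinated / risk_unvaccinated

# Hypothetical trial: 10 cases among 1,000 vaccinated, 50 among 1,000 unvaccinated.
print(vaccine_efficacy(10, 1000, 50, 1000))  # 0.8, i.e. 80 per cent efficacy
```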

While this second stage of evaluation is important if we expend large sums of public funds to implement new programmes, it is also more complex and usually more costly. It requires attention to surveillance measures that are culturally appropriate to the subgroups of the population at risk. It requires decisions on whether to use sampling or total reporting. It requires attention to the sensitivity and specificity of diagnostic procedures and tests. It requires assurance that the intervention does not artificially affect the surveillance system. It requires assurance that the surveillance system is impartial and unbiased in reaching all subgroups of the population. And it requires knowledge of those intervention or prevention strategies most appropriate for different subgroups at risk. Finally, it requires that the community that receives the intervention has trust in those who deliver the interventions. Integrity of science is crucial to successful intervention; this includes the study design, the collection and management of data, and the analysis and interpretation of results.
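As a brief illustration of the sensitivity and specificity considerations noted above, here is a minimal sketch computing both quantities from a hypothetical comparison of a surveillance test against a reference standard. The function name and the counts are invented for illustration and do not refer to any study in these proceedings.

```python
# Minimal sketch: sensitivity and specificity of a diagnostic test from a
# hypothetical 2x2 table (test result versus reference standard). Counts are invented.

def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical surveillance test: 90 true positives, 10 false negatives,
# 950 true negatives, 50 false positives.
print(sensitivity_specificity(90, 10, 950, 50))  # (0.9, 0.95)
```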

Because information on methods for evaluating the effectiveness of intervention and prevention strategies has not been readily available, the Centers for Disease Control elected to sponsor this Symposium on Statistical Methods for Evaluation of Intervention and Prevention Strategies. The aim of this Symposium was to provide a forum for presenting current research in statistical methods for evaluation and in innovative applications of statistical methods to the evaluation of public health programmes. The proceedings from this Symposium reflect a wide range of applications in diverse subject areas, including mental health, sexual behaviour, vaccine efficacy, burn-care management, prevention of tobacco use, colon polyps, and others.

The interventions run the gamut from classic interventions (for example, enhanced antenatal care and antihypertensive medication), to behavioural and psychosocial interventions (for example, sexual partner notification, maintenance of safe-sex behaviour, use of seat belts, and diet), environmental interventions (for example, lead abatement in water and paint), and those that deal with training and education (counselling in conjunction with methadone maintenance, drug education in schools, and training of servers in licensed liquor establishments). One can group the interventions as single-minded (such as vaccinations, sexual partner notification or drug education) or multifaceted (such as a statewide diabetes control programme). Clearly the multifaceted intervention poses challenging statistical problems if one wishes to separate the effects of each of the various components of the intervention plan.

The outcomes of concern in this Symposium also span a wide range. At one extreme, we have mortality and morbidity endpoints (infectious diseases such as measles, syphilis, HIV infection and burn wound infection, as well as noninfectious diseases such as cardiovascular disease, diabetes and motor vehicle accident injury). Other endpoints concern reproductive events (preterm births, stillbirths, low-birthweight infants, birth defects). Some outcomes involve unhealthy, morbid conditions (elevated blood lead levels, visual impairment, colon polyps, irregular menstrual function), while others pertain to unhealthy lifestyles (substance use). We also have social behavioural conditions (adolescent psychopathology, delinquency, learning problems, and shy and aggressive behaviour).

We thank our Symposium participants for presenting their material and for providing manuscripts describing their work. We thank the Organizing Committee for their hard work in planning this Symposium, and the reviewers for their peer review of the manuscripts submitted. We hope that publication of the proceedings of this Symposium will stimulate readers to embark on further statistical research in this most important and fascinating area of public health.

GLADYS REYNOLDS
THEODORE COLTON