
Prevention Science, Vol. 2, No. 4, December 2001 (© 2002)

Commentary

Building Capacity for Prevention’s Next Generation

William B. Hansen¹,² and Linda Dusenbury¹

Federal agencies that fund prevention practice—the Center for Substance Abuse Prevention (CSAP) and the U.S. Department of Education—now actively promote the adoption of research-based programs with proven or promising track records. This represents a significant change in policy and was clearly influenced by the availability of expert reviews of programs, including Making the Grade: A Guide to School Drug Prevention Programs (Drug Strategies, 1996, 1999), the Blueprints for Violence Prevention (Elliott & Mihalic, 1997) of the Office of Juvenile Justice and Delinquency Prevention (OJJDP), and Preventing Drug Use Among Children and Adolescents: A Research-Based Guide by the National Institute on Drug Abuse (NIDA, 1997).

Policies promoting the adoption and implementation of research-based programs are justified and are likely to increase the credibility of prevention practice. At the same time, the dissemination of research-based programs brings new challenges, many of which were not anticipated in the previous era, when the field was focused on proving that prevention works.

ESTABLISH UNIVERSAL STANDARDS FOR EVALUATION

The Institute of Medicine (Mrazek & Haggerty, 1994) has established a structure for grading evidence of effectiveness. At the highest level—multiple independent replications of longitudinal randomized studies—inconsistencies in findings are to be expected. Currently, methods vary across studies so as to make direct comparisons of studies nearly impossible. When methods and findings are confounded, it is challenging to make sense of results. The group most affected by this is consumers. Practitioners who are now required to evaluate programs need clear standards for designing evaluations and for interpreting results.

¹ Tanglewood Research, Greensboro, North Carolina.
² Correspondence should be directed to William Hansen, Tanglewood Research, 7017 Albert Pick Road, Suite D, Greensboro, North Carolina 27409.

At a minimum, standard measures of behavior and mediators, and standard methods for analysis and reporting, need to be established. We know how to measure behaviors and mediators that are targeted by programs. Measures of adherence and quality have become increasingly important, but are not yet standardized and have been given insufficient attention in published research. There are now a wide variety of complex analytic approaches and tools. However, basic analytic strategies that provide minimal evidence of effectiveness need to be specified, and standard methodologies need to be agreed upon and used.

For example, a number of organizations are developing evaluation systems for wide-scale use. These systems use standardized measures of behaviors and mediators and include evaluation designs that incorporate appropriate control groups. Analyses are standardized to include similar cross-group comparison strategies. These systems are attempting to establish minimum standards for evaluation. However, independent decisions by different organizations about minimum standards for evaluation may not be the best approach. The Society for Prevention Research should be at the forefront in establishing universal standards for evaluation.

INTEGRATE PREVENTION INTO EXISTING SYSTEMS

Fidelity of implementation has recently received increased attention. Research strongly suggests that programs are more likely to be effective when they are implemented as planned (Abbott et al., 1998; Battistich et al., 1996; Botvin et al., 1990). Unfortunately, we have also learned that prevention programs are rarely implemented as planned.

Much of our effort to address poor fidelity has centered on how best to promote strict adherence to programs. The problem with this approach is that efforts to increase adherence and eliminate reinvention may reduce the likelihood that any given program will be institutionalized (Blakely et al., 1987). Most prevention approaches have been developed as stand-alone programs. Given an educational climate that stresses accountability, it is unreasonable and unrealistic to expect teachers to devote time to discrete prevention programming.

Research has identified the critical elements of prevention. Current efforts focus on how to persuade teachers to implement programs as planned. We recommend that program developers redesign programs to meet teachers’ needs concerning academic goals at the same time they cover critical prevention elements. We need to become much more creative in how we package and deliver prevention and how we integrate prevention into existing systems.

TAKING PREVENTION TO SCALE

There is an increasing desire by policymakers and practitioners alike to promote research-based prevention. A major task of bringing prevention to scale has already begun through the active commercialization of existing products—an activity that current policies of the U.S. Department of Education and CSAP encourage and support. The Society for Prevention Research needs to establish standards for dealing with the commercialization of products by its members. Scientific integrity needs to be maintained as commercialization proceeds. The rules of the road need to be specified.

However, there is a larger issue that the field must also address. Any enterprise that succeeds in reaching large numbers of people requires the development of systems to sustain it. When Thomas Edison invented the light bulb, he also invented an entire system to manufacture and distribute light bulbs and provide electricity to homes—generators, light bulb receptacles, and wires that traveled down the street.

Prevention researchers have often limited their focus to thinking about “the program.” To bring prevention to scale, there will need to be equal effort given to developing a system that supports and sustains it. Researchers need to expand their activities to include developing infrastructure components, including a broad variety of resources. This will include undergraduate training for teachers, diagnostic tools, and materials packaged in a variety of forms and distributed through a variety of means, including the Internet. Prevention research needs to become product oriented.

CONCLUSION

The next decade is likely to bring significant progress in the development and distribution of effective prevention practices. Immediate challenges for the field are (1) to develop universal standards for evaluation, (2) to adapt programs to fit the marketplace, and (3) to take programs to scale. Prevention researchers must play an important role in meeting each of these challenges. The role of prevention researchers in establishing standards for evaluation is obvious and natural. Our role in adapting programs to meet the needs of the marketplace and bringing programs to scale is both new and necessary. Ultimately, the success of prevention depends on our willingness to become partners with practitioners and advocates for advancing effective approaches.

REFERENCES

Abbott, R. D., O’Donnell, J., Hawkins, J. D., Hill, K. G., Kosterman, R., & Catalano, R. F. (1998). Changing teaching practices to promote achievement and bonding to school. American Journal of Orthopsychiatry, 68, 542–552.

Battistich, V., Schaps, E., Watson, M., & Solomon, D. (1996). Prevention effects of the Child Development Project: Early findings from an ongoing multisite demonstration trial. Special issue: Preventing adolescent substance abuse. Journal of Adolescent Research, 11, 12–35.

Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W., Roitman, D. B., & Emshoff, J. G. (1987). The fidelity-adaptation debate: Implications for the implementation of public sector social programs. American Journal of Community Psychology, 15, 253–268.

Botvin, G. J., Baker, E., Dusenbury, L., Tortu, S., & Botvin, E. M. (1990). Preventing adolescent drug abuse through a multimodal cognitive-behavioral approach: Results of a 3-year study. Journal of Consulting and Clinical Psychology, 58, 437–446.

Drug Strategies (1996). Making the grade: A guide to school drug prevention programs. Washington, DC: Author.

Drug Strategies (1999). Making the grade: A guide to school drug prevention programs (2nd ed.). Washington, DC: Author.

Elliott, D., & Mihalic, S. (1997). Blueprints for violence prevention and reduction: The identification and documentation of successful programs. Boulder, CO: Center for the Study and Prevention of Violence.

Mrazek, P. J., & Haggerty, R. J. (Eds.) (1994). Reducing risks for mental disorders: Frontiers for preventive intervention research. Washington, DC: National Academy Press.

National Institute on Drug Abuse (1997). Preventing drug use among children and adolescents: A research-based guide (NIH Publication No. 97-4212). Washington, DC: U.S. Department of Health and Human Services.