
Contents

PREFACE xi

1 Introduction to Research 1
    Chapter Outline 1
    Learning Objectives 2
    Introduction 3
    Importance of Research in Communication Disorders 3
    Historical Evolution of Research in Communication Disorders 4
    Sources of Knowledge 6
    Types of Research 6
        Descriptive Research 7
        Exploratory Research 11
        Experimental Research 13
        Survey Research 16
    Summary 18
    Discussion Questions 18
    References 19

2 Research Ethics in Speech-Language Pathology and Audiology 23
    Chapter Outline 23
    Learning Objectives 24
    Need for Ethical Guidelines 25
    Historical Background 25
    Research Misconduct 26
    Issues in Research Ethics 30
        Planning Research 30
        Confidentiality 31
        Informed Consent 31
        Deception 32
        Institutional Approval 32
        Control Groups 32
        Conflicts of Interest 35
        Mentoring 36
        Maintenance of Research Records 36
        Referencing of Sources 36
        Authorship 37
        Peer Review 38
        Publication Correction 39
        Evidence-Based Practice 40
    AAA Code of Ethics 40
    ASHA Code of Ethics 40
    Institutional Review Board 40
    Teaching Research Ethics 43
        Content 44
        Methods 44
    Summary 48
    Discussion Questions 48
    References 50

3 Research Problems 55
    Chapter Outline 55
    Learning Objectives 56
    Basic Concepts and Terms 57
    Control of Variables 58
    Selecting a Topic 59
    Selecting a Research Problem 60
    Hypotheses and Theories 61
    Feasibility of a Research Project 62
    Budget Considerations and Preparation 63
    Summary 65
    Discussion Questions 66
    References 66

4 Locating, Accessing, and Assessing Information 67
    Chapter Outline 67
    Learning Objectives 68
    Introduction 69
    Locating Information 69
    Databases 69
    Online Journals 70
    Use of the World Wide Web 72
    Interlibrary Loan 73
    Manual Searches 73
    Evaluating Research 73
    Rating the Evidence 74
    Levels of Evidence 75
    Critical Appraisal 76
    Summary 79
    Discussion Questions 79
    References 79

5 Literature Reviews 81
    Chapter Outline 81
    Learning Objectives 82
    Organizing Literature Reviews 83
        Narrative Reviews 86
        Systematic Reviews 86
    Differences Between Narrative and Systematic Reviews 91
    Meta-Analysis 91
    Best-Evidence Synthesis 93
    Practice Guidelines 93
    Summary 96
    Discussion Questions 96
    References 97

6 Measurement 101
    Chapter Outline 101
    Learning Objectives 102
    Scales of Measurement 103
        Nominal Level of Measurement 103
        Ordinal Level of Measurement 104
        Interval Level of Measurement 104
        Ratio Level of Measurement 104
    Validity of Measurement 105
    Reliability of Measurement 108
    Summary 109
    Discussion Questions 110
    References 110

7 Research Design and Strategy 111
    Chapter Outline 111
    Learning Objectives 112
    Introduction 113
    Characteristics of a Good Design 113
    Group Designs 115
        Between-Subjects Design 115
        Within-Subjects Design 116
        Mixed-Group Design 117
        Advantages and Disadvantages of Group Designs 118
    Single-Subject Designs 119
    Sequential Clinical Trials 121
    Technologic Applications and Research Designs 122
    Summary 124
    Discussion Questions 124
    References 124

8 Quantitative Research 127
    Chapter Outline 127
    Learning Objectives 128
    Characteristics of Quantitative Research 129
    Quantitative Research Designs 129
        Nonexperimental Designs 129
        Pre-experimental Designs 129
        Quasiexperimental Designs 133
        Single-Subject Designs 134
        Experimental Designs 134
    Quantitative Analysis 134
        Descriptive Statistics 136
        Inferential Statistics 143
        Multivariate Statistics 149
        Meta-Analysis 149
    Summary 151
    Discussion Questions 151
    References 152

9 Qualitative Research Methods 153
    Chapter Outline 153
    Learning Objectives 154
    Characteristics of Qualitative Research 157
    Advantages and Disadvantages of Qualitative Research 157
    Myths About Qualitative Research 159
    Qualitative Research Designs 159
    Data Collection 160
        Self-Reports 160
        Observation 160
        Interviewing 162
        Focus Groups 163
        Document Analysis 163
    Sampling 164
    Analyzing Qualitative Data 165
    Evaluation of Qualitative Research 165
    Summary 166
    Discussion Questions 167
    References 167

10 Multimethod Research 171
    Chapter Outline 171
    Learning Objectives 172
    Sequencing 174
    Research Designs 174
    Advantages and Disadvantages 178
    Data Analysis 180
    Evaluating Multimethod Research 180
    Summary 181
    Discussion Questions 181
    References 181

11 Reporting and Disseminating Research 185
    Chapter Outline 185
    Learning Objectives 186
    Reasons for Reporting Research 187
    Myths About Research Reports 188
    Time Management for Reporting Research 188
        Procrastination 192
    Format of Research Reports 192
        Abstracts 193
        Key Words (Indexing Terms) 194
        Author Contributions 194
        Tables and Figures 195
    Writing Style 195
        APA Format 196
        References 197
        Personal Pronouns 197
        Avoiding Bias 197
        Permissions 198
    Rewriting and Revising 200
    Types of Research Reports 201
        Journal Articles 201
        Theses and Dissertations 204
        Textbooks 205
        Presentations at Professional Meetings 206
    Evaluating and Reviewing Research Reports 209
    Summary 210
    Discussion Questions 210
    References 211

12 Evaluating Tests and Treatments 215
    Chapter Outline 215
    Learning Objectives 216
    Evaluation Issues 218
    Tests 218
    Treatment 225
    Summary 228
    Discussion Questions 228
    References 228

13 Evidence-Based Practice: Application of Research to Clinical Practice 231
    Chapter Outline 231
    Learning Objectives 232
    Defining Evidence-Based Practice and Research Utilization 233
    Myths About Evidence-Based Practice 235
    Research Utilization in Speech-Language Pathology and Audiology 236
    Efforts to Implement Evidence-Based Practice 237
    Barriers to Evidence-Based Practice 245
    Strategies for Implementing Evidence-Based Practice 247
    Communicating the Evidence 249
    Summary 249
    Discussion Questions 250
    References 250

14 Research Grants 257
    Chapter Outline 257
    Learning Objectives 258
    Introduction 259
    Types of Awards 259
    The Grants Acquisition Process 260
    Grant Seeking 260
        General Principles of Grant Seeking 265
    Grant Proposal Writing 265
        Preliminary Considerations 265
        The Grant Proposal 266
        Budget 266
        The Idea/Problem 267
        Unsolicited and Solicited Proposals 267
        Basic Principles of Grant Proposal Writing 267
        Suggestions for Grant Proposal Writing 268
        Characteristics of a Fundable Research Grant Proposal 268
        The Proposal Review Process 269
    Grant Management 269
    Summary 270
    Discussion Questions 270
    Study Exercises 271
    References 272

GLOSSARY 273
INDEX 285


8

Quantitative Research

CHAPTER OUTLINE

Characteristics of Quantitative Research

Quantitative Research Designs
    Nonexperimental Designs
    Pre-experimental Designs
    Quasiexperimental Designs
    Single-Subject Designs
    Experimental Designs

Quantitative Analysis
    Descriptive Statistics
    Inferential Statistics
    Multivariate Statistics
    Meta-Analysis

Summary
Discussion Questions
References


LEARNING OBJECTIVES

Upon completion of this chapter the reader will be able to:

■ Describe characteristics of quantitative research

■ Identify various types of quantitative research

■ Select an appropriate statistic

■ Explain different methods of descriptive statistics

■ Describe methods used for inferential statistics

■ Apply special methods such as meta-analysis


Research can be classified as either qualitative or quantitative. The latter involves investigation of phenomena that lend themselves to precise measurement and quantification, often under controlled conditions that can be subjected to statistical analysis (Maxwell & Satake, 2006; Polit & Beck, 2004). This chapter describes various aspects of quantitative research, including descriptive and inferential statistics.

Characteristics of Quantitative Research

Quantitative research stresses numbers, measurement, deductive logic, control, and experiments (McMillan, 2004). Figure 8–1 summarizes the various characteristics of quantitative research. A great strength of quantitative research is its credibility, that is, the extent to which the data, data analysis, and conclusions are believable and trustworthy. French, Reynolds, and Swain (2001) described several advantages and disadvantages of quantitative research. These advantages and disadvantages are summarized in Table 8–1.

Quantitative Research Designs

There are major differences in credibility between nonexperimental and experimental designs. Quantitative designs may be viewed on a continuum: true experimental designs are at one end of the continuum and nonexperimental studies are at the other end, as shown in Figure 8–2.

Five major categories of research designs have been identified: nonexperimental, pre-experimental, quasiexperimental, experimental (true experimental), and single-subject.

Nonexperimental Designs

Nonexperimental designs are the weakest of all designs because they do not involve randomization, manipulation, or use of control groups. Furthermore, causal relations cannot be established (Maxwell & Satake, 2006). This type of research is undertaken when: (1) a number of independent variables, such as gender and height, are not amenable to randomization; (2) some variables cannot be ethically manipulated; (3) there are practical constraints to manipulating variables; and (4) avoiding manipulation achieves a more realistic understanding (Polit & Beck, 2004). There are a variety of nonexperimental research designs, which are summarized in Table 8–2.

Pre-experimental Designs

Pre-experimental designs are sometimes referred to as pseudoexperimental designs because they do not meet at least two of the three criteria for true experiments: randomization, manipulation, or control. Furthermore, pre-experimental studies are limited to describing outcomes because appropriate statistical analysis cannot be performed. Thus, the goal of such studies is to explore or describe new phenomena rather than explain their causes. Currently, the use of pre-experimental designs is limited due to inadequate control of numerous extraneous variables.


[Figure 8–1. Characteristics of quantitative research, summarizing its design, structure, goals, key concepts, sample, data, techniques or methods, data analysis, and associated terms or phrases. *Positivist or post-positivism is based on the assumption that phenomena should be studied objectively; the goal is obtaining a single true reality, or at least reality within known probabilities. Adapted from Educational Research, by J. H. McMillan, 2004. Boston: Pearson.]

Maxwell and Satake (2006) noted that pre-experimental designs "... give the impression of constituting credible scientific studies but are characterized by numerous sources of invalidity" (p. 204). These designs were described as "fool's gold" because of the "... misleading impression of belonging to more rigorous and powerful kinds of experimental methodologies ..." (p. 203). Schiavetti and Metz (2002) pointed out that these designs are weak in both internal and external validity.

There are four types of pre-experimental designs: single group posttest only; single group pretest-posttest; time series design; and nonequivalent groups posttest only. These designs are summarized in Figure 8–3.

The weakest of the pre-experimental designs is the one-group posttest design, which is also known as the single group posttest only or one-shot case study. This design involves study of the presumed effect of an independent variable in one group of subjects by administering a posttest after some treatment.

The single or one-group pretest-posttest design, also known as a before-after design, compares pretest data with posttest data obtained subsequent to treatment. There is inadequate control for internal and external validity because there is no control group.


Table 8–1. Quantitative Research: Advantages and Disadvantages

Advantages
• Economy: Less time to collect and analyze data
• Statistical analysis: Available
• Comparing groups and individuals: Easier to compare
• Comparing with norms: Norms may be available against which current data can be compared

Disadvantages
• Oversimplification: May not really capture required information
• Artificial: Little relation to real world
• Restricts focus: Predetermines nature of data

Source: Adapted from Practical Research: A Guide for Therapists, by S. French, F. Reynolds, and J. Swain, 2001. Boston: Butterworth-Heinemann.

[Figure 8–2. Continuum of quantitative research designs, from true experimental at one end through single-subject, quasiexperimental, and pre-experimental to nonexperimental at the other.]


Table 8–2. Nonexperimental Quantitative Research Designs

• Case study: In-depth analysis of a single individual, a few individuals, or an institution.
• Case control: Retrospective comparison of a case and a matched control.
• Case series: Three or more cases in a case study.
• Cohort: Focuses on a specific subpopulation from which different samples are selected at different points in time.
• Correlational: Study of interrelationships among variables of interest without any active intervention or inferring causality.
• Descriptive: Summarizes status of phenomena, that is, the characteristic and/or frequency with which certain phenomena occur.
• Ex post facto: Presumed cause that occurred in the past. Also called causal comparison.
• Retrospective: Begins with outcome and looks back in time for antecedent causes.
• Prospective: Begins with examination of presumed causes and then goes forward in time for its effect.
• Natural experiments: Comparison of groups in which one group is affected by a seemingly random event.
• Path analysis: Uses correlation of several variables to study causal patterns.
• Causal comparison: Dependent variable already occurred, so its relationship to other variables can only be studied. Also called ex post facto.
• Structural equation modeling: Relatively new method, more powerful than path analysis for studying correlations to identify causal patterns.
• Surveys: Focuses on obtaining information about activities, beliefs, preferences, and attitudes by direct questioning.
• Cross-sectional survey: Survey given at one time.
• Longitudinal survey: Same or similar subjects surveyed over time.

Source: From Research and Statistical Methods, by D. L. Maxwell and E. Satake, 2006. Clifton Park, NY: Thomson-Delmar Learning; Educational Research, by J. H. McMillan, 2004. Boston: Pearson; and Nursing Research: Principles and Methods, by D. F. Polit and C. T. Beck, 2004. Philadelphia: Lippincott, Williams, & Wilkins.


The time series design involves repeated measures before and after treatment using the same instruments with the same or similar subjects over an extended period of time. McMillan (2004) believes this is "... a good design for frequently occurring measures of the dependent variable at regular intervals" (p. 218). There is some confusion about the classification of time series designs. This design is classified by some as pre-experimental (McMillan, 2004; Mertens, 2005) and by others as quasiexperimental (DePoy & Gitlin, 2005; Maxwell & Satake, 2006; Meline, 2006; Polit & Beck, 2004). The next section, on quasiexperimental designs, provides further information about time series designs.

The nonequivalent groups posttest-only design has a comparison (control) group; both groups are tested after treatment and not before treatment. Another type of pre-experimental design is the static group comparison, in which the performance of two groups is compared: one group receives treatment and the other does not (Maxwell & Satake, 2004).

Quasiexperimental Designs

Quasiexperimental designs are commonly used in speech-language pathology and audiology research (Maxwell & Satake, 2004; Meline, 2006). These designs are almost true experiments, but not quite, because of the lack of randomization and a control group; that is, subjects are not randomly assigned to groups. Quasi- or semiexperimental designs combine some of the characteristics of both experimental and nonexperimental research (Kumar, 1996). These designs are used when true experiments are impractical or impossible. Quasiexperiments are not as powerful as true experiments in establishing relationships between treatments and outcomes (Polit & Beck, 2004).

The nonequivalent group pretest-posttest design involves the use of two nonrandomized comparison groups, both of which are tested before and after treatment. The most serious threat to the validity of this design is that subjects are not randomly assigned to these groups (DePoy & Gitlin, 2005). This design is similar to the nonequivalent groups posttest-only design and the static group comparison except for the addition of a pretest (Maxwell & Satake, 2006; McMillan, 2004).


[Figure 8–3. Nonexperimental research designs from highest to lowest: nonequivalent groups posttest only, static group comparison, single group pretest-posttest, and single group posttest only (O = observation; X = treatment). Adapted from Educational Research, by J. H. McMillan, 2004. Boston: Pearson, and Research and Statistical Methods, by D. L. Maxwell and E. Satake, 2006. Clifton Park, NY: Thomson-Delmar Learning.]


Another type of quasiexperimental research is the single time series design, which is sometimes referred to as an interrupted time series design. These designs involve repeated data collection over time for a single group (DePoy & Gitlin, 2005).

The advantage of this design over the nonequivalent comparison group design, which has single pretest and posttest measures, is the use of several pretest and posttest measures (Maxwell & Satake, 2006). This, however, could threaten validity because administration of several pretest and posttest measures could result in a greater degree of test sensitization. Another disadvantage is the lack of a control group.

Single-Subject Designs

Single-subject designs can be used to study one subject or a small number of subjects for an extended period of time before and after treatment. This design involves a variation of several group designs (repeated measures; time series). Single-subject designs are classified as quasiexperimental (Maxwell & Satake, 2006) or experimental (McMillan, 2004). They have been described as a special application of experimental research. These designs are described in more detail in Chapter 7.

Experimental Designs

True experimental designs are considered by many as the gold standard, that is, the strongest of the research designs (Maxwell & Satake, 2006; Polit & Beck, 2004). These designs can provide information about cause-and-effect relationships among independent and dependent variables.

A true experimental design is characterized by manipulation (treatment or intervention), randomization or random assignment (equal chance of being represented), and use of a control group. If a research design does not meet these three criteria, it is probably a quasiexperimental design. There are several types of experimental designs. Some of the more common designs are summarized in Table 8–3. Additional information about experimental designs is available in Isaac and Michael (1987), Leedy and Ormrod (2005), Schiavetti and Metz (2002), or Vogt (2006).

Quantitative Analysis

Quantitative, or statistical, analysis is the organization and integration of quantitative data according to systematic mathematical rules and procedures. Statistical analysis is guided by and dependent on all previous steps of the research process, including the research problem, study design, number of study variables, sampling procedures, and number of subjects. Each of these steps should lead to the selection of appropriate statistical analysis (DePoy & Gitlin, 2005). Statistical analysis and statistic warrant definition. Statistical analysis is concerned with summarizing numerical data, assessing its reliability and validity, determining the nature and magnitude of relationships among sets of data, and making generalizations from current to future events (Nicolosi, Harryman, & Kresheck, 2004).


Table 8–3. Experimental Designs

• Between-subjects: Comparison of groups treated differently on some independent variable.
• Counterbalance: Variation of experimental design in which more than one treatment is tested and the order of participation in each treatment is manipulated.
• Factorial (ANOVA): Any design in which more than one treatment factor is investigated.
• Latin square: Repeated measures design in which presentation of conditions is counterbalanced so that each occurs in each sequential position of a block.
• Parallel: Experiments that generally have at least two randomly assigned independent groups of participants, each of whom receive only one of the treatments (independent variables) under investigation.
• Posttest only (after only): Data collected from subjects only after treatment. Most basic of the experimental designs.
• Pretest-posttest (before-after): Data collected from subjects both before and after treatment. Most commonly used true experimental design.
• Randomized block: Involves two or more factors (independent variables), only one of which is experimentally manipulated.
• Repeated measures (crossover): One group of subjects is exposed to more than one condition or treatment in random order.
• Randomized clinical trial (RCT): Experimental test of a new treatment, involving random assignment to treatment groups; typically a large and diverse sample. Also known as a phase III clinical trial.
• Solomon four group: Uses a before-after design for one pair of experimental and control groups, and an after-only design for a second pair.
• Split plot: Uses both within-subject and between-subject design elements of statistical analysis of treatment effects.
• Within-subjects: Comparison within the same subjects under circumstances in which they were exposed to two or more treatment conditions.

Sources: From Introduction to Research, by E. DePoy and L. N. Gitlin, 2005. Philadelphia: Elsevier Mosby; Nursing Research: Principles and Methods, by D. F. Polit and C. T. Beck, 2004. Philadelphia: Lippincott, Williams, & Wilkins; and Research and Statistical Methods in Communication Sciences and Disorders, by D. L. Maxwell and E. Satake, 2006. Clifton Park, NY: Thomson-Delmar Learning.


A statistic, according to Maxwell and Satake (2006), "... is a number derived by counting or measuring sample observations drawn from a population that is used in estimating a population parameter" (p. 529). DePoy and Gitlin (2005) described a statistic as a "... number derived from a mathematical procedure as part of the analytic process in experimental type research" (p. 324).
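To make the distinction between a statistic and the population parameter it estimates concrete, the following minimal Python sketch (an editorial illustration, not drawn from the text; the population values and sample size are invented) generates a small simulated population, draws a random sample, and compares the sample mean (a statistic) with the population mean (the parameter).

```python
import random
import statistics

# Hypothetical population of 500 scores (invented for illustration only).
random.seed(1)
population = [random.gauss(100, 15) for _ in range(500)]

# The parameter: computed from every member of the population.
population_mean = statistics.mean(population)

# The statistic: computed from a sample of 30 observations drawn from
# the population, and used to estimate the population parameter.
sample = random.sample(population, k=30)
sample_mean = statistics.mean(sample)

print(f"Population mean (parameter): {population_mean:.2f}")
print(f"Sample mean (statistic):     {sample_mean:.2f}")
```

The two values will not be identical; the discrepancy between them is the sampling error that inferential statistics are designed to take into account.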

Descriptive Statistics

Descriptive statistics are used to describe and synthesize quantitative data. Table 8–4 summarizes the various types of descriptive statistics.


Table 8–4. Descriptive Statistics

Shape of distribution
• Skewed: Asymmetrical distribution
• Positively skewed: Asymmetric distribution; tail points to the right (positive) side because the frequency of low scores greatly outnumbers high scores
• Negatively skewed: Asymmetric distribution; tail points to the left (negative) side because the frequency of high scores greatly outnumbers low scores
• Modality: Describes number of peaks, values with high frequencies
• Unimodal: Distribution of values with one peak (high frequency)
• Bimodal: Distribution of values with two peaks
• Multimodal: Distribution of values with more than one peak
• Leptokurtic: Too peaked
• Platykurtic: Too flat

Central tendency
• Mode: Value that occurs most frequently
• Median: Point in distribution where 50% of scores fall above and 50% below; mid-score
• Mean: Arithmetic average

Variability
• Range: Difference between lowest and highest score
• Interquartile range: Range of middle 50% of scores
• Semiquartile: Same as interquartile range
• Sum of squares: Difference between each score and the mean
• Variance: Mean of squared deviation from the mean
• Standard deviation: Square root of the variance; indicates average deviation of scores around the mean

Bivariable design
• Contingency table: Two-dimensional table which illustrates frequencies of responses for two or more nominal or quantitative variables
• Correlation: Describes relationship between two or more variables
• Pearson's product moment correlation: Uses interval or ratio data; yields a score between −1 and +1
• Spearman's rho: Uses ordinal data; yields a score between −1 and +1; preferable to product moment for numbers under 20; nonparametric equivalent of Pearson r
• Kendall's tau: Used with ordinal data; yields a score between −1 and +1; preferable to rho for numbers under 10
• Point biserial: Used to examine the relationship between a nominal variable and an interval-level measure; yields a lower correlation than r and much lower than the biserial r
• Phi coefficient: Describes the relationship between two dichotomous variables
• Tetrachoric correlation coefficient: Used when both variables are artificial dichotomies
• Cramer's V: Describes relationship between nominal-level data; used when the contingency table to which it is applied is larger than 2 × 2
• Contingency coefficient: Two dichotomous variables on a nominal scale; closely related to chi square
• Multiple correlation: One single variable and some combination of two or more other variables
• Partial correlation: Two variables studied; influence of a third or several other variables held constant. Also called first-order correlation

Sources: From Foundations of Clinical Research, by L. G. Portney and M. B. Watkins, 2000. Upper Saddle River, NJ: Prentice-Hall Health; Handbook in Research and Evaluation, by S. Isaac and W. B. Michael, 1987. San Diego, CA: Edits Publishers; Nursing Research: Principles and Methods, by D. F. Polit and C. T. Beck, 2004. Philadelphia: Lippincott, Williams & Wilkins; Reading Statistics and Research, by S. W. Huck, 2004. Boston: Pearson; and Research and Statistical Methods, by D. L. Maxwell and E. Satake, 2006. Clifton Park, NY: Thomson-Delmar Learning.
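As a small illustration of the correlation coefficients listed in Table 8–4, the sketch below computes a Pearson product-moment correlation and a Spearman rank-order correlation for a set of invented paired scores. The functions are written out in full so that no particular statistics package is assumed; the data are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation for interval or ratio data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(values):
    """Assign ranks (1 = smallest); tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson r computed on the ranked data."""
    return pearson_r(ranks(x), ranks(y))

# Hypothetical paired scores for ten participants (invented for illustration).
test_a = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]
test_b = [34, 40, 28, 52, 45, 30, 38, 48, 27, 41]

print(f"Pearson r:      {pearson_r(test_a, test_b):+.3f}")
print(f"Spearman's rho: {spearman_rho(test_a, test_b):+.3f}")
```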


The most basic descriptive statistic is the frequency distribution, which is a systematic arrangement of values from lowest to highest with a count of the number of times each value was obtained.
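The sketch below is a minimal illustration of these ideas using Python's standard statistics module and a small set of invented scores: it builds a frequency distribution and then reports the measures of central tendency and variability defined in Table 8–4.

```python
from collections import Counter
import statistics

# Hypothetical test scores for twelve participants (invented for illustration).
scores = [12, 15, 15, 16, 17, 17, 17, 18, 19, 21, 21, 23]

# Frequency distribution: each value, from lowest to highest, with its count.
frequency_distribution = Counter(scores)
for value in sorted(frequency_distribution):
    print(f"{value}: {frequency_distribution[value]}")

# Central tendency.
print("mode   =", statistics.mode(scores))            # value occurring most often
print("median =", statistics.median(scores))          # mid-score
print("mean   =", round(statistics.mean(scores), 2))  # arithmetic average

# Variability.
print("range    =", max(scores) - min(scores))                 # highest minus lowest
print("variance =", round(statistics.pvariance(scores), 2))    # mean squared deviation
print("std dev  =", round(statistics.pstdev(scores), 2))       # square root of variance
```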

Graphs can be used to display frequency or relative frequency (percentages). The most frequently used graphs are histograms (bar graphs) and frequency polygons (line graphs). Other graphs for displaying distributions include pie graphs and trend charts.

Pie charts are used to represent the proportion of data falling into certain categories in the form of a circle containing segments, which are also called slices, sections, or wedges. These graphs are useful in illustrating percentages in relation to each other and to the whole (Nicol & Pexman, 2003). They are also referred to as pie diagrams, cake charts, circle graphs, percentage graphs, or 100% graphs. An example of a pie chart is in Figure 8–4. The pie on the right is an exploded pie chart, which emphasizes the proportion of time devoted to research (Bordens & Abbott, 1988).

Trend charts also can be used to illustrate frequencies or percentages of change in a data set that is organized in a developmental or temporal order. A trend chart is shown in Figure 8–5.

Shapes of Distributions

Data can be described relative to its shape. Several shapes have been described; these are illustrated in Figure 8–6 and defined in Table 8–4. Some distributions occur so frequently that they have special names. The normal curve or normal distribution is a symmetrical bell-shaped curve that has a concentration of scores in the middle of the distribution with fewer scores to the right and left sides of the distribution (Figure 8–7). An important characteristic of the normal curve is that predictable percentages of the population are within any given portion of the curve (Leedy & Ormrod, 2005). About two-thirds (68.2%) of the population fall within plus or minus one standard deviation of the mean, 95% fall within plus or minus two standard deviations of the mean, and 99% of the population fall within plus or minus three standard deviations of the mean.
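These percentages can be checked directly. The short sketch below (an editorial illustration, not from the text) uses Python's statistics.NormalDist to compute the proportion of a normal distribution lying within one, two, and three standard deviations of the mean.

```python
from statistics import NormalDist

# A standard normal distribution (mean 0, standard deviation 1).
nd = NormalDist(mu=0, sigma=1)

for k in (1, 2, 3):
    # Proportion of the population between mean - k*SD and mean + k*SD.
    proportion = nd.cdf(k) - nd.cdf(-k)
    print(f"within ±{k} SD: {proportion:.1%}")

# Prints roughly 68.3%, 95.4%, and 99.7%, which correspond to the
# approximate figures (68.2%, 95%, 99%) cited for the normal curve.
```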


[Figure 8–4. Pie chart and exploded pie chart, with segments for teaching, research, service, and other.]


Figure 8–5. Trend chart showing distribution of average hearing levels for children with OME in years 1, 2, and 3. Data are presented according to hearing level (HL) categories in decibels (dB). Four-frequency (500, 1000, 2000, and 4000 Hz) average values displayed were derived by categorizing each participant's mean hearing levels across each study year. Reprinted by permission from "Effects of Otitis Media with Effusion on Hearing in the First 3 Years of Life," by J. S. Gravel and I. F. Wallace, 2000, p. 638, Journal of Speech, Language, and Hearing Research, 43. Copyright 2000 by American Speech-Language-Hearing Association. All rights reserved.

Figure 8–6. Frequency distributions. From Educational Research: Fundamentals for the Consumer (4th ed.), by James H. McMillan, 2004, p. 129. Published by Allyn and Bacon, Boston, MA. Copyright 2004 by Pearson Education. Reprinted by permission of the publisher.


Figure 8–7. Normal distribution curve. From Test Service Notebook #148. Copyright 1980 by Harcourt Assessment, Inc. Reproduced with permission. All rights reserved.



13

Evidence-Based Practice: Application of Research to Clinical Practice

CHAPTER OUTLINE

Defining Evidence-Based Practice and Research Utilization
Myths About Evidence-Based Practice
Research Utilization in Speech-Language Pathology and Audiology
Efforts to Implement Evidence-Based Practice
Barriers to Evidence-Based Practice
Strategies for Implementing Evidence-Based Practice
Communicating the Evidence
Summary
Discussion Questions
References


LEARNING OBJECTIVES

Upon completion of this chapter the reader will be able to:

■ Define evidence-based practice

■ Understand myths surrounding evidence-based practice

■ Describe efforts to implement evidence-based practice

■ Identify barriers to evidence-based practice

■ Implement evidence-based practice.


Evidence-based practice (EBP) is the application of research data to clinical decisions. Speech-language pathologists and audiologists who base their practice on the best available evidence use a systematic approach to selecting assessment and treatment procedures (Cornett, 2001). The most compelling reason for speech-language pathologists and audiologists to be evidence-based practitioners is to ensure that clients receive the best possible services (Johnson, 2006).

Speech-language pathologists and audiologists in all work settings should be aware of the advantages of evidence-based practice or research-based practice. Meline (2006) believes evaluating research for its application to clinical practice is one of the most important resources available for ensuring best clinical practice in speech-language pathology and audiology. There is growing interest in developing an evidence-based practice through research that is used in making clinical decisions.

During the past 10 years, changes in education and research have led to an awareness of the need for a better evidence base for the clinical practice of speech-language pathology and audiology. Most training programs have modified their research curricula to include evidence-based practice and critical analysis of the research literature. Moreover, standards for the Certificate of Clinical Competence (CCC) in Speech-Language Pathology (ASHA, 2004a) require "... knowledge of processes used in research and integration of research principles into evidence-based clinical practice" (p. 7). Standards for the CCC-Audiology (ASHA, 2005a) also require knowledge of research and its application to making clinical decisions. There has also been an increased focus on research related to clinical practice. Both students and practicing professionals need hands-on, interactive, practical experience in translating research into clinical practice (Gallagher, 2001).

There are other factors which may account for the increasing use of evidence-based practice. Among these factors are the development of strategies for efficiently finding evidence, systematic clinical practice guidelines, electronic databases, and continuing education (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2001).

In spite of these developments, there have not been major changes in research utilization among clinicians; underutilization of research continues to be a serious problem. Research evidence is not routinely used in making clinical decisions about assessment and treatment. Unfortunately, too many clinicians continue to make clinical decisions in the absence of evidence, on the basis of opinion, testimony, and/or advertisements. Opinions, singly or in groups, even from respected authorities, should be viewed with skepticism (Dollaghan, 2003a). In this chapter, various aspects of research utilization are discussed.

Defining Evidence-Based Practice and Research Utilization

Evidence-based practice is utilization of the best available research to make clinical decisions about patient care. It is based on critical appraisal of research findings and applying what is known to clinical practice (Frattali & Worrall, 2001). Research utilization is the application of some aspect of research to clinical practice.


Polit and Beck (2004) described research utilization as a continuum ranging from direct application of research findings to clinical practice (instrumental utilization) through situations in which research findings are ignored or not used. Research utilization is a matter of degree; clinicians, even with limited effort, can accomplish some degree of evidence-based practice (Schlosser, 2004). In essence, evidence-based practice is the utilization of research findings to make decisions about patient care.

The major steps in using evidence-based practice are:

■ selecting a topic or problem (Schlosser & O'Neil-Pirozzi, 2006)

■ assembling and evaluating evidence (Dennis & Abbott, 2006; Law & Plunkett, 2006; Meline, 2006)

■ assessing for potential implementation (Nye & Harvey, 2006)

■ developing or identifying evidence-based guidelines or protocols (Law & Plunkett, 2006)

■ implementing the treatment

■ evaluating outcomes (Herder, Howard, Nye, & Vanryckeghem, 2006; Bernstein-Ratner, 2006)

■ deciding to adopt or modify the treatment or revert to prior practice (Konnerup & Schwartz, 2006).

An important aspect of evidence-based practice is rating the quality and credibility of evidence. Speech-language pathologists and audiologists should identify the level of evidence and consider its usefulness in making clinical decisions (Meline, 2006). The hierarchy of evidence ranges from low to high (ASHA, 2004d). These levels of evidence are described in Figure 13–1. Lower levels are less credible and include opinions, committee reports, and case studies. Higher levels are more credible; these levels include systematic reviews or meta-analyses and randomized controlled studies.


Figure 13–1. Levels of evidence. Adapted with permission from Evidence-Based Practice in Communication Disorders: An Introduction [Technical report], by the American Speech-Language-Hearing Association. Available from www.asha.org/policy. Copyright 2004 by ASHA. All rights reserved.

Ia: Systematic reviews, meta-analyses
Ib: Randomized controlled study
IIa: Well-designed controlled study without randomization
IIb: Well-designed quasiexperimental study
III: Well-designed nonexperimental study (correlational, case studies)
IV: Committee reports, opinions


In other words, there is good peer-review evidence at the higher levels. Unfortunately, high levels of evidence do not exist for many clinical problems. Another related issue is the lack of research data to use for some clinical decisions.
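One way to make the hierarchy operational is to encode the levels in Figure 13–1 as an ordered mapping and use it to sort a set of located studies from strongest to weakest evidence. The sketch below is only an illustration (the study entries are invented and this is not an ASHA tool).

```python
# Levels of evidence from Figure 13-1 (Ia strongest ... IV weakest),
# encoded as ranks so located studies can be ordered by credibility.
EVIDENCE_LEVELS = {
    "Ia": "Systematic review / meta-analysis",
    "Ib": "Randomized controlled study",
    "IIa": "Well-designed controlled study without randomization",
    "IIb": "Well-designed quasiexperimental study",
    "III": "Well-designed nonexperimental study",
    "IV": "Committee reports, opinions",
}
RANK = {level: i for i, level in enumerate(EVIDENCE_LEVELS)}

# Hypothetical studies located for a clinical question (invented examples).
studies = [
    ("Expert panel opinion on treatment X", "IV"),
    ("RCT of treatment X vs. no treatment", "Ib"),
    ("Case-control study of treatment X", "III"),
    ("Meta-analysis of treatment X trials", "Ia"),
]

# Sort from most to least credible evidence and print the ordered list.
for title, level in sorted(studies, key=lambda s: RANK[s[1]]):
    print(f"{level:>3}  {title}")
```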

A second classification is based on clinical service, that is, separate categories for diagnosis or treatment (ASHA, 2005b). There are differences in the number of levels for diagnosis and treatment, four and seven, respectively. Another classification of evidence levels comes from the U.S. Preventive Services Task Force's (1989) grades and recommendations for outcomes from systematic reviews. This classification system is presented in Figure 13–2.

Zipoli and Kennedy (2005) found that the first year of clinical practice in speech-language pathology was especially important in determining attitude toward and use of evidence-based resources. Clinical experience and opinion of colleagues were used more frequently than research studies or clinical practice guidelines.

Myths About Evidence-Based Practice

There are several myths surrounding evidence-based practice that should be considered. Table 13–1 illustrates some of these myths. These myths contribute to misunderstanding evidence-based practice and may make it difficult, if not impossible, to implement evidence-based practice. Eliminating or reducing these misconceptions increases the likelihood of using evidence-based practice to support clinical decision-making.


Figure 13–2. Outcomes from systematic review. Reprinted with permission from "Evidence-Based Practice in Schools: Integrating Craft and Theory with Science and Data," by L. M. Justice and M. E. Fey, 2004, ASHA Leader, 9, 4–5, 30–32. Copyright 2004 by American Speech-Language-Hearing Association. All rights reserved.

A: Good evidence for inclusion. Peer-review evidence supports use for treatment.
B: Fair evidence for inclusion. Peer-review evidence supports consideration of use for treatment.
C: Insufficient evidence. Lack of peer-review evidence; however, recommendation(s) for use possible on other grounds.
D: Fair evidence for exclusion. Peer-review evidence supports that treatment should be excluded.
E: Good evidence for exclusion. Peer-review evidence supports that treatment should be excluded.


Research Utilization in Speech-Language Pathology and Audiology

Information about research utilization by speech-language pathologists and audiologists to make clinical decisions is limited to speech-language pathologists. Pain, Magill-Evans, Darrah, Hagler, and Warren (2004) studied the effects of the profession on research utilization by 105 randomly selected Canadian occupational therapists, physical therapists, and speech-language pathologists. Speech-language pathologists had the most education and the highest ratings for use of research. However, Zipoli and Kennedy (2005) found that speech-language pathologists more often used traditional rather than evidence-based sources in making clinical decisions. Clinical experience and opinions were used more frequently than research studies or clinical practice guidelines. Apparently, speech-language pathologists have failed to use research findings for making clinical decisions.

There is considerable potential for integrating research and clinical practice in speech-language pathology and audiology. In providing clinical services, speech-language pathologists and audiologists collect relevant information, assess and diagnose speech-language-hearing problems, develop treatment and discharge plans, and evaluate the outcome of treatment. Golper, Wertz, and Brown (2006) pointed out that "the methods presented in single-case research reports are essentially those employed every day by clinicians as they seek to determine the outcome of an intervention" (p. 10).


Table 13–1. Myths of Evidence-Based Practice

• Myth: Already exists. Reality: Many clinicians take little or no time to review current research findings.
• Myth: Impossible to initiate. Reality: Possible through little work.
• Myth: Clinical experience irrelevant. Reality: Requires extensive clinical experience.
• Myth: Cost minimization. Reality: Emphasizes best available evidence.
• Myth: Expert opinion. Reality: Consider evidence.
• Myth: All research is related to clinical practice decisions. Reality: Basic research does not advance evidence for clinical decisions.
• Myth: Evidence is authority. Reality: Clinical experience and patient preference may also be considered.

Sources: From "Introduction to Evidence-Based Practice," by M. Law, 2002. In M. Law (Ed.), Evidence-Based Rehabilitation (pp. 2–12). Thorofare, NJ: Slack; and "Evidence-Based Medicine: What It Is and What It Isn't," by D. L. Sackett, W. M. Rosenberg, J. Gray, R. B. Haynes, and W. S. Richardson, 1996. British Medical Journal, 312(7023), 71–72.


Research conducted by speech-language pathologists and audiologists can improve the quality and efficiency of clinical services and reduce the costs of these services.

There is great potential for using research findings as a basis for making clinical decisions and for developing clinical services. There is concern, however, that speech-language pathologists and audiologists are not always aware of research results and do not integrate these results into their clinical practice(s). Some clinicians do not read research journals, nor do they attend professional programs where research results are reported. On the other hand, researchers may ignore the research needs identified by clinicians. For research utilization to improve, there must be communication between researchers and clinicians.

The gap between research and clinical practice is both long-standing and well-known (Hamre, 1972; Ringel, 1972; Silverman, 1998). There is a difference between research production and utilization in clinical practice. Obviously, there is a need to reduce this dichotomy. Costello (1979) believes "... it should be clinicians themselves who conduct research, and that researchers should be clinicians as well" (p. 7). Lum (2002) emphasized the commonalities between clinicians and researchers and that both should be "... informed consumers of published research" (p. 137). Logemann (1998) indicated that every clinician has a responsibility to participate in research and, at the very least, to collect systematic data on the effects of treatment for each client. Frattali and Worrall (2001) pointed out that "... it may be difficult to change the ways of some experienced clinicians, many of whom are victims of the clinical-research separation area" and "... there is promise in shaping the ways of students and clinicians to invest in a research-integrated clinical future" (p. x).

Efforts to Implement Evidence-Based Practice

Efforts to facilitate implementation of evidence-based practice have been undertaken by the American Speech-Language-Hearing Association (ASHA) and the Academy of Neurologic Communication Disorders and Sciences (ANCDS). ASHA has devoted considerable effort to increase research utilization through evidence-based practice. The major activities include a Web site for evidence-based practice (ASHA, 2004b), a position statement (ASHA, 2004c), a technical report (ASHA, 2004d), and establishment of a standing committee, the Advisory Committee on Evidence-Based Practice (ACEBP; Mullen, 2005a). In addition to ACEBP, ASHA has established the National Center for Evidence-Based Practice in Communication Disorders. The Center has a registry of clinical practice guidelines and systematic reviews (Mullen, 2006). Only guidelines and reviews with an overall rating of highly recommended or recommended with provisions are included in the registry. ASHA also has several practice guidelines, which provide information to assist clinicians in making decisions based on available research evidence and prevailing expert opinion. The purposes of such guidelines are to improve the quality of service, identify the most cost-effective treatment, prevent unfounded practices, and stimulate research (Golper et al., 2001). These guidelines are listed in Table 13–2.

ASHA also established the Communication Sciences and Disorders Clinical Trials Research Group (CSDRG), which is devoted solely to the development and conduct of clinical trials by audiologists; speech-language pathologists; and speech, language, and hearing scientists (Baum, Logemann, & Lilenfield, 1998; Logemann, 2004; Logemann & Gardner, 2005).


Table 13–2. ASHA’s Practice Guidelines

Audiologic Assessment of Children from Birth to 5 Years of Age (2004)

Audiologic Screening (1996)

Gender Equality in Language Use (1992)

Meeting the Communication Needs of Persons With Severe Disabilities (1991)

Role of the Speech-Language Pathologist in the Performance and Interpretation of Endoscopic Evaluation of Swallowing: Guidelines (2004)

Clinical Indicators for Instrumental Assessment of Dysphagia (2000)

Guidelines for Practice in Stuttering Treatment (1994)

Guidelines for Speech-Language Pathologists Serving Persons with Language, Socio-Communication, and/or Cognitive-Communication Impairments (1990)

Guidelines for Structure and Function of an Interdisciplinary Team for Persons With Brain Injury (1994)

Guidelines for the Roles and Responsibilities of the School-Based Speech-Language Pathologist (1999)

Guidelines for the Training, Use, and Supervision of Speech-Language Pathology Assistants (2004)

Instrumental Diagnostic Procedures for Swallowing (1992)

Oral and Oropharyngeal Prostheses: Guidelines (1992)

Orofacial Myofunctional Disorders: Knowledge and Skills (1992)

Roles and Responsibilities of Speech-Language Pathologists in the Neonatal Intensive Care Unit (2005)

Roles and Responsibilities of Speech-Language Pathologists with Respect to Reading and Writing for Children and Adolescents: Practice Guidelines (2000)

Terminology Pertaining to Fluency and Fluency Disorders (1998)

Training Guidelines for Laryngeal Videoendoscopy/Stroboscopy (1997)

Use of Voice Prostheses in Tracheotomized Persons With and Without Ventilatory Dependence (1992)

Workload Analysis Approach for Establishing Speech-Language Caseload Standards in the Schools (2002)

Acoustics in Educational Settings (2004)

Audiologic Assessment of Children from Birth Through 36 Months of Age (1990)

Audiologic Assessment of Children from Birth to 5 Years of Age (2004)

Audiologic Management of Individuals Receiving Cochleotoxic Drug Therapy (1993)

Audiology Service Delivery in Nursing Homes (1996)

Audiology Service Provision in and for Schools (2002)


Currently, CSDRG's clinical trials involve dysarthria, dysphagia, and language stimulation. Clinical trials are a type of research design that provides evidence about whether one evaluation or treatment is more effective than another, or which is more appropriate for a specific type of client. Clinical trials occur in five phases: (1) case studies, single-subject design studies, small-group pre-post studies, and retrospective studies; (2) expansion of phase 1; (3) test efficacy; (4) field research; and (5) cost-benefit analysis (Robey, 2004, 2005). These five phases are described in Table 13–3. An important aspect of clinical trials is that they are blinded or masked so that the outcome measures are recorded and analyzed independent of participating clinicians.

ASHA's Research and Scientific Affairs Committee recently introduced "Premieres in Research," a series of articles about research concepts and their application to speech-language pathology and audiology (Feeney, 2006). Topics to date are "Back to the Basics: Reading Research Literature" (Golper, Wertz, & Brown, 2006), "Developing a Research Question" (Nelson & Goffman, 2006), "Bias and Blinding" (Finn, 2006), and "Interpretation of Correlation" (Oller, 2006).

In addition, two ASHA journals focused on EBP. Contemporary Issues in Communication Science and Disorders (CICSD) (2006) described the specific task(s) needed to conduct a systematic review and meta-analysis that would facilitate EBP. The other journal, Language, Speech, and Hearing Services in Schools (LSHSS), discussed making evidence-based decisions for speech sound disorders, reading problems, and child language intervention. These papers are listed in Table 13–4.

The American Academy of Audiology emphasized EBP in a special 2005 issue of the Academy's journal, the Journal of the American Academy of Audiology.


Table 13–2. continued

Audiometric Symbols (1989)

Competencies in Auditory Evoked Potential Measurement and Clinical Applications (1998)

Determining Threshold Level for Speech (1987)

Education in Audiology Practice Management (1994)

Fitting and Monitoring FM Systems (1999)

Graduate Education in Amplification (1999)

Hearing Aid Fitting for Adults (1997)

Joint Audiology Committee Clinical Practice Statements and Algorithms (1999)

Manual Pure-Tone Threshold Audiometry (1977)

Role of Audiologist in Vestibular and Balance Rehabilitation: Guidelines (1998)

Note: Available from http://www.asha.org/members/deskref-journals/deskref/


Table 13–3. Five-Phase Outcome Research Model

Phase I
– develop hypothesis to be tested in later phases
– establish safety of treatment
– demonstrate if treatment is active (patients improve)
– brief, small sample size
– does not require external controls
– case studies, single-subject studies
– small group pre-post studies, retrospective studies

Phase II
– refine primary research hypothesis
– develop explanation for efficacy and effectiveness
– specify treatment protocol
– determine discharge criteria
– demonstrate validity and reliability of outcome measures
– case studies, single-subject studies, small single-group studies

Phase III
– efficacy of treatment under ideal conditions (ideal patients, ideal clinicians, ideal treatment, ideal intensity and duration, ideal outcome measures)
– typically randomized control trial (random assignment to treatment or no treatment)
– large samples
– multiple sites
– parallel group designs; single subject

Phase IV
– test treatment effectiveness
– average conditions
– day-to-day practice
– typical conditions (patients, clinicians, intensity, and duration)
– large samples
– external control (no-treatment) not required
– analysis of efficacy studies
– single-subject studies with multiple replication; large single-group designs
– examine variations in populations, intensity and duration of treatment, and level of clinician training

Phase V
– study of efficiency, cost-effectiveness, cost-benefit, cost utility
– patient and family satisfaction; quality of life
– large samples

Sources: "A Five-Phase Model for Clinical-Outcome Research," by R. R. Robey, 2004, Journal of Communication Disorders, 37(5), 401–411; "A Model for Conducting Clinical Outcome Research: An Adaptation of the Standard Protocol for Use in Aphasiology," by R. R. Robey, 1998, Aphasiology, 12, 787–810; and "An Introduction to Clinical Trials," by R. Robey, 2005, May 24, The ASHA Leader, pp. 6–7, 22–23.


These papers on the effectiveness of hearing rehabilitation are listed in Table 13–5.

ANCDS applied the principles of evidence-based practice to the development of practice guidelines with the support of ASHA's Special Interest Division, Neurophysiologic and Neurogenic Speech and Language Disorders (Golper et al., 2001).


Table 13–4. CICSD and LSHSS Special Issues on EBP

CICSD
• Dennis & Abbott: Information retrieved
• Herder, Howard, Nye, & Vanryckeghem: Effectiveness of behavioral stuttering treatment
• Konnerup & Schwartz: Translating systematic reviews into policy and procedure
• Law & Plunkett: Grading study quality in systematic reviews
• Meline: Selecting studies for systematic reviews
• Nye & Harvey: Interpreting and maintaining the evidence
• Owens, Baez, & Tillman: Lessons learned: The student experience
• Schlosser & O'Neil-Pirozzi: Problem formulation
• Swartz & Wilson: The art and science of building an evidence portfolio
• Turner & Bernard: Calculating and synthesizing effect sizes

LSHSS
• Fey: Commentary on Gillam and Gillam
• Gillam & Gillam: Making evidence-based decisions about child language intervention in schools
• Justice: Evidence-based practice, response to intervention, and the prevention of reading difficulties
• Kamhi: Combining research and reason to make treatment decisions
• Kamhi: Some final thoughts on EBP
• Kamhi: Treatment decisions for children with speech sound disorders
• Kent: EBP in communication disorders: Progress not perfection
• Ratner: EBP: An examination of its ramifications for the practice of speech-language pathology
• Tyler: Commentary to Kamhi
• Ukrainetz: Commentary on Justice


"The purpose of such guidelines is to improve and assure the quality of care by reducing unacceptable variation in its provision" (p. 2). The practice guidelines were completed when research evidence was available in the literature and followed by systematic reviews by writing committees within ANCDS (Golper et al., 2001; Yorkston et al., 2001a). These guidelines reflected a moderate degree of clinical certainty and are usually based on Class II evidence or strong consensus from Class III evidence (Golper et al., 2001). Table 13–6 presents the practice guidelines completed by ANCDS.

Reilly, Oates, and Douglas (2004) identified several advantages of practice guidelines, which are presented in Table 13–7. The advantages of practice guidelines should not be ignored.

Despite the overwhelming advantages of practice guidelines, there are limitations. First, some clinicians are reluctant to implement practice guidelines. Several reasons for not using practice guidelines were identified by Nicholson (2002). These reasons are listed in Table 13–8. Practice guidelines can be methodologically strong or weak, thus yielding either valid or invalid recommendations (Golper et al., 2001). Wertz (2002) pointed out that "... not all evidence is created equal because for some treatments there may be unassailable level I evidence but for other treatments the evidence may not surpass level III" (p. xi). Furthermore, how the evidence was found, reviewed, rated, or the diversity of sources may not be reported (Cook & Giacomini, 1999). Shaneyfelt, Mayo-Smith, and Rothwangl (1999) believe that there is need for improvement in the identification, evaluation, and synthesis of evidence in practice guidelines.


Table 13–5. Journal of the American Academy of Audiology (2005): EBP in Audiology

• Bentler: Effectiveness of directional microphones and noise reduction schemes in hearing aids
• Cox: EBP in provision of amplification
• Fabry: Creating the evidence: Lessons from cochlear implants
• Hawkins: Effectiveness of counseling-based adult group aural rehabilitation programs
• Killion & Gudmundsen: Using clinical prefitting speech measures
• Mueller & Bentler: Fitting hearing aids using clinical measures of loudness discomfort levels
• Palmer & Grimes: Effectiveness of signal processing strategies for the pediatric population
• Sweetow & Palmer: Efficacy of individual auditory training in adults
• Van Vliet: Current status of hearing care



Table 13–6. Practice Guidelines by ANCDS

Topic Source

Alzheimer’s dementia Bayles et al. (2005)

Apraxia of speech Wambaugh et al. (2006a, 2006b)

Dementia: Spaced retrieval training Hooper et al. (2005)

Dementia: Computer-assisted interventions Mahendra et al. (2005)

Dementia: Montessori-based interventions Mahendra et al. (2006)

Dysarthria: Respiratory/phonatory dysfunction Spencer, Yorkston, & Duffy (2003); Yorkston, Spencer, & Duffy (2003)

Direct attention training Sohlberg et al. (2003)

Spasmodic dysphonia Duffy & Yorkston (2003a, 2003b)

Speech supplementation Hanson, Yorkston, & Beukelman (2004)

Traumatic brain injury Kennedy et al. (2002)

Traumatic brain injury: Standardized assessment Turkstra et al. (2005)

Unilateral vocal fold paralysis Baylor et al. (2006)

Table 13–7. Advantages of Practice Guidelines

• Reduce amount of time searching for evidence because evidence-based practice guidelines are based on best available evidence following systematic review and critical appraisal

• Provide collection of evidence from diverse sources

• Provide advice when evidence is limited or unavailable, of poor quality, or conflicting

• Improve patient care by streamlining referral and discharge

• Encourage patient participation in making decisions

• Promote best practice by providing criteria or standards against which care may be monitored by clinical or quality audit(s)

Source: Adapted from "Evidence-Based Practice in Speech Pathology: Future Directions," by S. Reilly, J. Oates, and J. Douglas, 2004. In S. Reilly, J. Douglas, & J. Oates (Eds.), Evidence-Based Practice in Speech-Language Pathology (pp. 330–352). London: Whurr Publishers.


Yorkston and associates (2001a) also identified several limiting factors in developing and implementing practice guidelines. These factors included reliance on a single indicator of study quality, that is, marked emphasis on one type of research (randomized controlled trials); heterogeneous populations; incomplete evidence; and the fact that not all guidelines are well done. Further limitations of practice guidelines were described by Pring (2004), who indicated that many of the studies on which practice guidelines are based have methodological problems and frequently fail to adequately describe the treatment used, which makes it impossible to explain or compare different types of treatment. Furthermore, practice guidelines should be evaluated and reviewed on a regular basis as relevant evidence emerges (Reilly, Oates, & Douglas, 2004).

There are instruments available for evaluating practice guidelines. Snowball (2000) described a procedure and checklist for critical appraisal of clinical guidelines. This evaluation instrument consists of 37 items based on three dimensions: (1) rigor of development; (2) context/content; and (3) application.

The National Guideline Clearinghouse (NGC) (n.d.) Web site has several formats for reviewing practice guidelines: Brief Guideline Summary, Complete Guideline Summary, Guideline Comparisons, and Guideline Synthesis. Content of the Brief Summary includes a summary, recommendations, evidence supporting the recommendations, and identifying information and availability. There are 49 items in the Complete Guideline Summary.

There are several unanswered questions about practice guidelines, including: Do guidelines influence clinical practice? How do clinicians make decisions about diagnosis and treatment? To what extent are patients involved in clinical decisions? And why do some clinicians ignore practice guidelines?


Table 13–8. Reasons for Not Utilizing Practice Guidelines

• Development of guidelines is time consuming and labor intensive

• Current information and reporting systems are inadequate

• Intensive dissemination of guidelines is not followed by evaluation and follow-up reporting of variation from established practice(s)

• Suspicion because of conflicting guidelines

• Some guidelines are not executable, that is, too complicated, time consuming, or inconvenient

• Frequently no system for monitoring the impact of guidelines on practice, no feedback mechanism, and no process for iterative refinement

• Fear that guidelines may reduce individualized patient care

Source: Adapted with permission from SLACK Incorporated from "Practice Guidelines, Algorithms and Clinical Pathways," by D. Nicholson, 2002, pp. 195–220. In M. Law (Ed.), Evidence-Based Rehabilitation.



A decision-making process can be used to develop and implement evidence-based practice. Boswell (2005) and Kully and Langevin (2005) described similar procedures for evidence-based decision making. The steps included: (1) asking a clear, focused question; (2) finding the best evidence; (3) critically appraising the evidence; (4) integrating the evidence with clinical judgment and client values; and (5) evaluating the decision-making process. According to Schlosser, Koul, and Costello (2006), the first step in EBP is asking well-built questions, but this is often difficult for clinicians. To facilitate well-built questions, the PESICO template was proposed. PESICO stands for person (problem), environments, stakeholders, intervention, comparison, and outcome. Recently, Johnson (2006) provided specific examples for making evidence-based decisions about childhood speech-language disorders.

Threats (2002) suggested using the World Health Organization's International Classification of Functioning, Disability and Health for developing evidence-based practice, which would bridge the gap between researchers and clinicians by providing a common framework and language. The purposes are to: (1) collect statistical data about functioning and disability, such as in population studies; (2) conduct clinical research, such as measurement of outcomes, quality of life, or impact of environmental factors on disability; (3) use clinically for needs assessment, matching treatments with specific conditions, program outcome evaluation, and rehabilitation documentation; and (4) use for social policy, such as social security planning, governmental oversight of disability programs, and policy decision making.

The Canadian Association of Speech-Language Pathologists and Audiologists (CASLPA) is promoting the application of evidence-based practice in classrooms, clinics, and research settings (Orange, 2004). In 1996, CASLPA affiliated with the Canadian Cochrane Network and Centre. Evidence-based practice has been infused into the academic coursework, clinical practice, and thesis research at the University of Western Ontario.

The Australian Speech Pathology Association is publishing evidence on topics related to clinical practice (Baker, 2005). The first in this series of articles was "What Is the Evidence for Oral Motor Therapy" by Bowen (2005).

The American Speech-Language-Hearing Association (Mullen, 2007) recently developed levels of evidence for communication sciences and disorders. This classification considers eight factors: (1) study design, (2) blinding, (3) sampling, (4) subjects, (5) outcomes, (6) significance, (7) precision, and (8) intention to treat. There are four steps in this system: (1) appraisal of the quality of individual studies related to the topic being reviewed; (2) identification of each study's research stage; (3) assessment of the quality of a study within the context of its research stage; and (4) synthesis of the information into a table of evidence relative to research stage and quality.

Barriers to Evidence-Based Practice

Barriers to evidence-based practice should be considered so that strategies can be developed to reduce or eliminate these barriers, thus facilitating research use and evidence-based practice. The major barrier to evidence-based practice is failure to integrate research and clinical activities. There have also been delays between the completion of a research study and the time the results were reported, which might make the results no longer applicable to clinical practice.

The evidence base itself is frequently cited as a barrier (Mullen, 2005a, 2005b). Nonexistent, conflicting, or irrelevant evidence is considered to be a major or moderate barrier. Overreliance on RCTs is a source of controversy (Elman, 2006). Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) believe that evidence-based practice is not restricted to RCTs and meta-analysis, and it should involve finding the best evidence to answer clinical questions.

Other barriers to EBP may be related to bias about funding, publication, consumer-research mismatch, and reduced clinical applicability (Elman, 2006). There are limited research funds and there is a trend to allocate funds on the basis of evidence. There is a tendency for positive trials to be published more than once and the possibility of subjective publication decisions. Also, there may be a mismatch between the treatments that are researched and those that are desired or prioritized by consumers. Reduced clinical applicability results from a trade-off between subject selection criteria and clinical applicability. Clients with severe problems or comorbidities may be underrepresented in clinical trials that have a homogeneous subject selection.

The type of evidence also warrants consideration. The randomized controlled trial (RCT) is considered the gold standard or highest level of evidence, although the information needed to understand human problems is not necessarily amenable to an RCT (DePoy & Gitlin, 2005). A related limitation is that RCTs are not the only valid design.

There are other barriers to implementing evidence-based practice in speech-language pathology and audiology. Among these barriers are limited training in evidence-based practice and research utilization, negative attitudes about research, resistance to change, and limited collaboration and communication between researchers and clinicians. Some speech-language pathologists and audiologists do not read research journals or do not critically review published research findings. Lack of knowledge about how to access and critically review research evidence has also been identified as a barrier to using research evidence in clinical practice. On the other hand, research is sometimes reported in a way that makes findings inaccessible for clinicians; complex statistical information and dense research jargon may pose barriers to acquiring knowledge about evidence-based practice (Polit & Beck, 2004). Some speech-language pathologists and audiologists do not attend professional conferences where research is reported. Another consideration is that researchers sometimes ignore the research needs of clinicians.

Other barriers are related to time and change. Some clinicians are overwhelmed or overstretched because they believe they do not have enough professional time for evidence-based practice (Frattali & Worrall, 2001; Meline & Paradiso, 2003; Mullen, 2005a, 2005b; Zipoli & Kennedy, 2005). Some clinicians are resistant to change because it requires retraining and reorganization. Still other clinicians may lack administrative support to implement evidence-based practice (Rose & Baldac, 2004). Both time and money are reported as barriers to EBP by some speech-language pathologists (Upton & Upton, 2005).

Critical appraisal of research is fundamental to evidence-based practice, yet it may be difficult for some speech-language pathologists and audiologists. Worrall and Bennett (2001) identified six barriers to evidence-based practice, which are listed in Table 13–9. Elman (2006) recently identified several sources of bias that could restrict EBP. These biases are described in Table 13–10.

Other barriers to evidence-based practice are related to the shortage of Ph.D.-level researchers in speech-language pathology and audiology (ASHA, 2002; Justice & Fey, 2004). In addition, there are two associated problems: (1) almost half of new Ph.D.s choose nonacademic positions, which are probably nonresearch positions, and (2) an aging faculty facing imminent retirement (Meline & Paradiso, 2005).

Last, there are barriers related to organizations and the professions. Organizations may be reluctant to expend resources for teaching and/or using evidence-based practice. The professions may also have barriers related to a shortage of role models for evidence-based practice or historical baggage, that is, clinicians may perceive themselves as not being capable of doing research or of recommending changes based on research results (Polit & Beck, 2004).

Strategies for Implementing Evidence-Based Practice

To utilize EBP, speech-language pathologists and audiologists must: (1) discount the opinions of authorities when there is counterevidence; (2) focus on the research that is relevant to clinical practice; and (3) use rigorous criteria to evaluate the quality of evidence, including validity, importance, and precision (Dollaghan, 2004a).


Table 13–9. Barriers for Speech-Language Pathologists and Audiologists Undertaking Critical Appraisals

• Access to and ability to use Web-based databases of scientific literature

• All literature related to speech-language pathology and audiology is not necessarily listed in on-line databases

• Often there is a lack of evidence in the topic area

• Low level of evidence

• Evidence does not always match the reality of clinical services

• No databases of published critically appraised topics from speech-language pathology and audiology

Source: Adapted from "Evidence-Based Practice: Barriers and Facilitators for Speech-Language Pathologists," by L. E. Worrall and S. Bennett, 2001. Journal of Medical Speech-Language Pathology, 9(2), xi–xvi.


There are a number of strategies designed to improve the extent to which research is used to make clinical decisions. Some strategies that could facilitate implementation of evidence-based practice are as follows:

■ Create a culture for evidence-based practice

■ Implement evidence-based practice training and experience for both students and practicing professionals

■ Integrate evidence-based practice into the speech-language pathology and audiology curriculum (Forrest & Miller, 2001)

■ Eliminate the dichotomy between research and clinical practice

■ Promote awareness of misconceptions about evidence-based practice

■ Provide high levels of research evidence

■ Encourage collaboration between researchers and clinicians

■ Use systematic reviews to introduce evidence-based practice (McCauley & Hargrove, 2004; Mullen, 2005b)

■ Communicate research results widely and clearly

■ Specify clinical implications of research

■ Expect evidence that a diagnosis or treatment procedure is effective

■ Participate in a journal group

■ Form collaborative work groups (Johnson, 2006)

■ Use ASHA's registry of evidence-based practice guidelines and systematic reviews (Evidence-Based Practice Tool Available, 2006; Mullen, 2006)

■ Indicate level of evidence in professional presentations, publications, and continuing education

■ Seek a professional environment that supports evidence-based practice

■ Identify and eliminate sources of bias about EBP (Elman, 2006)

■ Volunteer to participate in clinical research trials (Logemann & Gardner, 2005)

■ Audit the degree and extent of research utilization

■ Establish on-site or online journal clubs (Betz, Smith, Melnyk, & Rickey, 2005)

■ Use formats for case presentations and critical reviews such as those suggested by Dollaghan (2004a, 2004b), Frattali and Worrall (2001), Threats (2002), and Worrall and Bennett (2001)


Table 13–10. Potential Biases in EBP

Funding bias: Lack of research in a particular area may be due to insufficient research funding; tendency to allocate resources on basis of evidence.

Publication bias: Nonsignificant result bias: tendency for positive results to be published more than negative results. Publication process itself: editorial review.

Consumer/research mismatch: Mismatch between treatments that are researched and those that are desired or prioritized by consumers.

Reduced clinical applicability: Certain populations, including those with severe problems or co-morbidity, may be under-represented in clinical trials having homogeneous subject selection criteria.

Source: From "Evidence-Based Practice: What Evidence Is Missing," by R. J. Elman, Aphasiology, 20, 103–109, 2006. Copyright 2006 Taylor & Francis. Reprinted by permission of the publisher, Taylor & Francis Ltd, http://www.tandf.co.uk/journals.



Evidence-based practice is a process by which the current best evidence is critically appraised, clinical expertise considered, and a course of action selected. Several decisions may be made. For example, what is the best and most current research evidence? How can the evidence be integrated with clinical expertise and client preferences?

Because of the relative newness of evidence-based practice in speech-language pathology and audiology, little evidence exists to guide identification of the best strategies for implementing evidence-based practice. Some strategies are successful in some settings and with some professional groups but not in other settings or with other groups (Ciliska, DiCenso, Melnyk, & Stetler, 2005).

Communicating the Evidence

Communication is an important aspect of implementing and developing evidence-based practice. Communicating evidence to clients and others is likely to improve understanding, involvement in decisions, and outcomes (Epstein, Alper, & Quill, 2004). The speech-language pathologist and audiologist should discuss possible options, including an explanation of how research evidence affected the recommendation(s). The aim is to provide sufficient information so that the client and/or family can make an informed decision (Johnson, 2006).

Methods for communicating evidence include decision aids, graphic representation, and, where available, quantitative translation of clinical evidence. Systematic reviews are evidence-based reviews that can be used to communicate evidence for clinical decision making among professionals, third-party payers, and policy makers. These reviews involve critical review and synthesis of research in a specific area.

Another approach to communicating evidence among professionals is the peer-reviewed journal Evidence-Based Communication Assessment and Intervention (EBCAI) (Taylor & Francis Group, 2006). The primary aims of EBCAI are: (1) promoting EBP in communication assessment and treatment; (2) appraising the latest evidence in communication evaluation and treatment; (3) providing a forum for discussions that advance EBP; and (4) disseminating EBP research.

Summary

There has been progress in implementing evidence-based practice in speech-language pathology and audiology, although this process has been slow, especially for those who advocate utilization of research for making clinical decisions. Clinicians do not appear to have increased their awareness of the need for evidence-based practice, nor their implementation of it in clinical practice.

Several organizations have undertaken efforts to facilitate implementation of evidence-based practice. Among these organizations are ASHA, ANCDS, and CASLPA.

There are barriers to evidence-based practice. The primary barriers are related to the evidence base itself, time, and resistance to change. On the other hand, there are a number of strategies for implementing evidence-based practice that may improve the extent to which research is used for making clinical decisions.

