Learning Disability Quarterly, Vol. 17, No. 3, Academic Instruction (Summer 1994), pp. 223-232. Published by Sage Publications, Inc. and the Hammill Institute on Disabilities. Stable URL: http://www.jstor.org/stable/1511075

EFFECTS OF READING COMPREHENSION INTERVENTIONS FOR STUDENTS WITH LEARNING DISABILITIES

Elizabeth Talbott, John Wills Lloyd, and Melody Tankersley

Abstract. We examined 48 studies of interventions designed to affect the reading comprehension of students with learning disabilities and coded their characteristics and effect sizes. Our analyses revealed that students who received the researchers' experimental methods obtained reading comprehension scores that were higher than those of 87% of students in comparison groups. Strong effects were more likely when students' performance was assessed on lower level comprehension measures, when the study author delivered the interventions, and when the experimental students' performance was compared to that of students who did not receive any special intervention. These results indicate that although effective reading comprehension interventions exist, there are gaps between their use in research and practice.

Students must engage in several cognitive tasks as they read. First, they must recognize words and associate them with familiar concepts and meanings; second, they must recognize isolated sentences and sentence pairs and use those to build a representation of the larger text (Perfetti, 1985). Poor readers often lack the skills necessary to recognize words and sentences; as a result, they are unable to understand text (Perfetti, 1985). And, readers who are able to recognize words and sentences do not consistently use strategies for remembering and interpreting as they read (Ryan, 1981).

Researchers have taught diverse reading skills to students with learning disabilities to improve their comprehension of words, sentences, and text. Further, they have taught students to identify vocabulary (e.g., Pany, Jenkins, & Schreck, 1982); to remember phrases and facts (e.g., Scruggs & Mastropieri, 1989); and to think about the purpose of their reading (e.g., Wong & Jones, 1982).

Students typically improve their skills when they participate in such studies, but they do not always maintain those skills in the classroom. As Kavale (1990) argued: "the context within which intervention is delivered is just as important as the intervention itself" (p. 22). But the experimental context in which students learn comprehension skills does not resemble that of the classroom. Granted, some researchers work in the classroom; for example, Mastropieri and Scruggs (1988) employed special education teachers to teach mnemonics to students with learning disabilities during history class. Unfortunately, however, other researchers have failed to consider the role of teachers as they test approaches to shaping comprehension.

In fact, teachers have rarely benefited from advances in comprehension instruction that researchers have been developing since the 1970s. We know, for example, that vocabulary instruction (e.g., Pany & Jenkins, 1978) and metacognitive instruction (e.g., Chan, 1991) improve reading comprehension; yet, teachers spend virtually no time engaged in those tasks or others to help students understand and interpret texts (Durkin, 1978; Ysseldyke, Thurlow, O'Sullivan, & Christenson, 1989).

ELIZABETH TALBOTT, M.Ed., is a Ph.D. candidate, Curry School of Education, University of Virginia. JOHN WILLS LLOYD, Ph.D., is Associate Professor, Curry School of Education, University of Virginia. MELODY TANKERSLEY, Ph.D., is Assistant Professor, Kent State University.

Teachers who fail to teach students with learning disabilities how to comprehend what they read often see disastrous results. Without explicit instruction, students with learning disabilities have no means of recognizing words, deriving meaning from sentences, or building a representation of the larger text.

The failure of research to change classroom practice in reading comprehension is due, in part, to the complexity of the task, but it is also due to an absence of dialogue among researchers and teachers. Discussions among researchers have been theoretical, failing to emphasize a purpose for practitioners (e.g., Bos & Anders, 1990b; Lenz, Bulgren, & Hudson, 1990; Palinscar & Brown, 1989); and the majority of studies in reading comprehension do not involve teachers.

Granted, this is not the case with all researchers. Bos and Anders (1990b) displayed the theoretical background for teaching reading comprehension and tested whether their strategies improved adolescents' recall of science facts (Bos & Anders, 1990a). Although these authors (1990b) did not employ teachers to conduct their experiment, they were sensitive to the reading tasks in which junior high students must engage: They must read from a science text and recall facts for an exam. Bos and Anders' work highlights the difference between comprehension instruction for adolescents versus instruction for elementary-aged students.

At the elementary level, students read fiction, and are not required to draw from prior knowledge in order to interpret text. As students advance in grade level, however, they must use reading to distill facts and information, and must employ prior knowledge to interpret new text (Perfetti, 1985). Fitzgerald and Spiegel (1983) found that low-achieving fourth graders learned rapidly how to identify narrative following instruction; by middle and high school, students no longer needed explicit instruction on the characteristics of narrative. Instead, older students need help integrating their knowledge of facts (of biology or history, for example) with the information they read in the text.

At every developmental level, students must decode in order to comprehend (Perfetti, 1985). Thus, comprehension instruction should not depend entirely upon a student's age or grade, but should also take into consideration her present reading abilities and present knowledge.

We examined studies designed to enhance reading comprehension of students with learning disabilities. Using the writings of diverse reading and special education researchers, we categorized studies along various dimensions (e.g., types of interventions, persons intervening, and comparison conditions), and then used meta-analysis to assess the contributions of these dimensions to the effectiveness of the interventions. Our purpose was to present a coherent picture of current reading comprehension research for students with learning disabilities and to provide guidance for future research.

METHOD

Selecting Studies

We searched the literature using conventional techniques: We reviewed ERIC and PSYCLit databases from January 1966 through May 1992, using reading comprehension, learning disabilities, and instruction as descriptors. We also examined the references of studies that we uncovered in those searches, as well as the references of literature reviews on reading comprehension, the references of special education texts, and comprehension sections in reading reference books (e.g., Yearbooks of the National Reading Conference). By hand, we searched the 1992 spring issues of special education, general education, and reading journals to obtain those studies that were not included in the databases. Using these search methods, we identified 120 studies.

From these 120 studies, we selected those that met all the following criteria: (a) researchers examined the effects of an intervention designed to improve comprehension, (b) subjects had an identified learning disability (regardless of the means by which it was identified), and (c) researchers included a control group or a second experimental group against which the intervention was tested.

These criteria eliminated the following types of studies from our analysis: (a) studies employing single-subject or qualitative designs; (b) descriptions of intervention techniques that reported anecdotal data; and (c) studies of remedial readers, students receiving Chapter I services, or students at risk. Employing these criteria, we retained a total of 48 studies for analysis.

Coding Studies

We developed a guide for coding the studies based on Glass, McGaw, and Smith's (1981) recommendations as well as Stock's (1994) guide to coding for research syntheses. We asked two special education researchers to assess the content validity of the coding guide and revised it according to their recommendations.

Coders recorded the following substantive information from each study: (a) subject characteristics, (b) experimental methods, (c) type of intervention, and (d) characteristics of the interventions.

Classifying student characteristics. Coders classified several characteristics of the students who participated in the studies. First, we recorded the total number of students in each study and the number of students identified as having learning disabilities, making note of the means by which they were identified (e.g., ability-achievement discrepancy, federal or state guidelines, specific deficits). Second, we classified students' gender, age, and ethnicity. Third, we coded students' grade level as elementary (grades K-5), middle (grades 6-8), or high school (grades 9-12); and their socioeconomic level as low, middle, or high, according to authors' reports. Fourth, we coded the average IQ of students in a study as (a) less than or equal to 90, (b) greater than 90 but less than 110, or (c) greater than or equal to 110; and we noted students' present levels of reading performance.

Classifying study methods. After identifying the means by which researchers assigned students to groups (i.e., random, matched, uncontrolled, or within subjects), we coded the control groups as follows: (a) active control (control students experienced an alternative reading intervention that was explicitly defined); (b) no-treatment (control students received experimental materials but no instructions, or they received unrelated instructions and materials); and (c) normal instruction (control students experienced their normal classroom routines).

We classified the nature of the outcome variables as well. Coders identified 13 types of dependent variables: (a) total reading or subtest scores from standardized tests, (b) inferential questions, (c) factual questions, (d) retell assessments, (e) student-generated questions, (f) strategy assessments, (g) vocabulary, (h) identification of errors and nonexamples, (i) reading rate, (j) informal reading inventories, (k) memory tests, (l) combination of factual and inferential questions, and (m) other. In addition, we noted whether authors reported the reliability of outcome measures, and if so, the means by which they computed reliability.

Classifying interventions. Coders classified interventions by comparing the procedures cited in each study to descriptions of interventions from (a) special education texts (e.g., Mastropieri & Scruggs, 1987); (b) chapters (e.g., Palinscar & Brown, 1989); and (c) review papers (e.g., Weisberg, 1988). The categories are not mutually exclusive due to an overlap among these types of intervention in the extant literature. Nevertheless, we obtained an agreement of 82% on their presence in the 48 target studies.

We categorized interventions into subgroups and large groups. The subgroups were: (a) strategy training, (b) schema, (c) information chunking, (d) mnemonic, (e) reciprocal teaching, (f) repeated readings, (g) cognitive-behavioral, (h) vocabulary training, (i) Direct Instruction, (j) pre- and mid-reading, (k) computer-assisted, and (l) other. These 12 subgroups were collapsed into seven larger groups: cognitive, cognitive-behavioral, vocabulary, pre- and mid-reading, Direct Instruction, computer-assisted, and other.

First, we categorized studies by comparing features of a study's intervention to features of models in the literature. Next, we grouped studies in the large groups according to their degree of fit with the general definition of the large groups. For example, Bos and Anders (1990a) created a detailed outline in the form of a matrix from a science text for students to use as they read. We labeled this a schematic intervention (subgroup), which in turn fit the criteria for a cognitive intervention (large group).

Cognitive interventions were the most prevalent among the large groups; they included authors' attempts to (a) teach specific problem-solving skills (e.g., Williams, 1990); (b) teach specific ways to approach a text, such as using specific schema or rules (e.g., Bos, Anders, Filip, & Jaffe, 1989); (c) provide students with instructional aids such as advance organizers and outlines (e.g., Billingsley & Wildman, 1988); (d) teach students means to remember facts from a text, such as history, science, or social studies (e.g., Scruggs & Mastropieri, 1989); and (e) adjust and fine-tune instruction according to students' abilities and present performance (e.g., Bos & Vaughn, 1988).

Cognitive-behavioral interventions included authors' attempts to teach students to be aware of or to regulate their thinking and behavior during reading (metacognition). Such interventions included, but were not restricted to, (a) monitoring one's own behavior during reading (e.g., Graves, 1986); (b) asking oneself questions about the text or about one's performance (e.g., Wong & Jones, 1982); and (c) evaluating one's own performance after reading.

Vocabulary interventions included those focusing on individual words that students read, rather than on concepts, ideas, or sentence and paragraph meaning. They included corrections of oral reading errors (e.g., Jenkins, Larson, & Fleisher, 1983) or instruction about the meanings and pronunciations of words in isolation and in context (e.g., Pany et al., 1982).

Pre- and mid-reading interventions were those that required students to engage in an activity (that did not fit any of the previous categories) prior to or in the midst of reading with the goal of facilitating comprehension. Such interventions tended to be brief and not as intrusive as cognitive, cognitive-behavioral, or Direct Instruction interventions. Questions about the story or brief previews of the story, such as those cited in teacher manuals accompanying basal readers (e.g., Sachs, 1983), were examples of mid- and pre-reading interventions.

Direct Instruction interventions were those drawn directly from the work of Engelmann and colleagues (see Engelmann, Becker, Carnine, & Gersten, 1988). They included (a) frequent teacher praise for student attending and responding, (b) signals to get and keep student attention, (c) programmed materials developed by Engelmann and colleagues, (d) rapid rate of oral responding, and (e) continuous evaluation and correction of inaccurate responding.

Computer-assisted interventions were those that tested the effectiveness of the computer to facilitate instruction. When researchers used computers for instruction, their approaches were not automatically considered computer interventions; for example, if students in one group received mnemonic instruction via the computer and students in another group received non-mnemonic computer instruction, the researcher was clearly testing the effects of mnemonic instruction. On the other hand, if researchers used the computer to present conventionally organized material in a novel way (e.g., Horton, Lovitt, Givens, & Nelson, 1989), then we classified the intervention as computer-assisted.

Other interventions included those that did not fit the previous criteria. Three studies fell into this category: a study of cooperative learning (Cosden, Pearl, & Bryan, 1985); a study of personal or social problem solving with high school students (Williams, 1990); and a study of processing skills from subtests of the Kaufman Assessment Battery (Brailsford, Snart, & Das, 1984).

Classifying intervention characteristics. We identified and recorded two characteristics of the intervention: the characteristics of the person who delivered the treatment and the length of the treatment in hours. Coders classified the experimenter as one of the following: (a) the author(s) of the study, (b) teachers, or (c) trained personnel (typically research assistants) who were not the authors of the study or teachers.

We established the length of treatment by multiplying the number of minutes for each experimental session by the number of experimental sessions. In many cases (n=21 studies), the authors did not provide sufficient information for calculating length of treatment. In those instances, we recorded the length of treatment as "not given."
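For illustration, the brief sketch below (in Python, used here purely as an editorial aid; the function name and sample values are hypothetical) shows the length-of-treatment bookkeeping described above, including the "not given" handling when either value is missing.

    # Length of treatment in hours: minutes per session times number of sessions,
    # recorded as "not given" when either value is missing. Values are hypothetical.
    from typing import Optional, Union

    def treatment_hours(minutes_per_session: Optional[float],
                        n_sessions: Optional[int]) -> Union[float, str]:
        if minutes_per_session is None or n_sessions is None:
            return "not given"
        return minutes_per_session * n_sessions / 60.0

    print(treatment_hours(45, 40))    # 30.0 hours
    print(treatment_hours(None, 12))  # "not given"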

Reliability of the Coding System

One of us coded information for all the studies. To permit us to assess intercoder agreement, a second person coded information for 25% of the studies. Each coder had more than four years of experience in special education teaching and research. Coders received a clean copy of each study and written instructions for how to code it. After discussing definitions of codes, coders worked independently.

We counted an agreement when coders selected the same category for an item on the coding sheet (i.e., type of intervention; method of assignment to groups), or when coders recorded the same information for an item on the coding sheet (i.e., length of treatment; number of subjects). Conversely, we counted a disagreement if coders selected different categories or recorded different information for an item. Total agreement ranged from 82% to 100%, with an average of 95%.
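A minimal sketch of the percent-agreement calculation described above, assuming simple counts of agreements and disagreements; the counts shown are hypothetical, not taken from the coded studies.

    # Percent agreement between two coders: agreements divided by the total
    # number of jointly coded items. The counts below are hypothetical.
    def percent_agreement(agreements: int, disagreements: int) -> float:
        return 100.0 * agreements / (agreements + disagreements)

    print(round(percent_agreement(38, 2), 1))  # 95.0% for 38 of 40 items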

Outcome Quantification

For each study, we recorded data from which we could calculate effect sizes: means, standard deviations, alpha levels, and degrees of freedom for F tests. Using this information, we computed effect sizes for each study according to the procedures suggested by Glass et al. (1981). Meta-analytic techniques rely on the effect size statistic (ES), which expresses the magnitude of experimental effect in standard deviation units (Glass et al., 1981; Kavale & Glass, 1981). Using the effect size allows one to transform individual study findings to a standardized, common metric and then compare findings across studies.

To calculate effect sizes, we chose the formula that Glass et al. (1981) and other authorities on integrative reviews (e.g., Rosenthal, 1991) have recognized as valid; namely, the difference between the mean of the experimental group and the mean of the control group divided by the standard deviation of the control group. In cases where authors did not cite means and standard deviations, but provided other statistical information (mean squares, degrees of freedom, and probability levels of F tests), we calculated effect sizes using the F test data (Glass et al., 1981; Rosenthal, 1991). Intercoder agreement on effect size was 100%.
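The sketch below illustrates the two routes to an effect size described above: Glass's formula (experimental mean minus control mean, divided by the control-group standard deviation) and a conversion from a two-group F test. The F-test conversion shown (t = sqrt(F), then d = t * sqrt(1/n1 + 1/n2)) is a standard approximation, not necessarily the exact procedure from Glass et al. (1981); all values are hypothetical.

    # Minimal sketch of the effect size (ES) computations described above.
    # The function names, sample values, and F-test conversion are illustrative
    # assumptions, not the authors' exact procedure.
    import math

    def es_from_means(mean_exp: float, mean_ctrl: float, sd_ctrl: float) -> float:
        """Glass's delta: experimental minus control mean, in control-group SD units."""
        return (mean_exp - mean_ctrl) / sd_ctrl

    def es_from_f(f_value: float, n_exp: int, n_ctrl: int) -> float:
        """Approximate ES from a two-group F test; the sign must be set from the
        direction of the group means, which F alone does not carry."""
        t = math.sqrt(f_value)
        return t * math.sqrt(1.0 / n_exp + 1.0 / n_ctrl)

    print(round(es_from_means(32.0, 25.0, 6.2), 2))  # 1.13
    print(round(es_from_f(7.8, 15, 15), 2))          # about 1.02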

RESULTS

Study Characteristics

We examined 48 studies; 47 had been published in journals, and one had been published in a book (see Williams, 1990). (Readers may obtain a list of the studies by writing to the authors.) Seventy-eight percent of the studies (n=37) had been published during the years 1986-1992, and 22% (n=11) between 1978 and 1985.

Studies had been published in the following special education, general education, and reading research journals: American Educational Research Journal (1 study), British Journal of Educational Technology (1 study), Exceptional Children (5 studies), Journal of Educational Psychology (2 studies), Journal of Educational Research (3 studies), Journal of Educational Technology Systems (1 study), Journal of Learning Disabilities (7 studies), Learning Disabilities Focus (1 study), Learning Disability Quarterly (15 studies), Learning Disabilities Research and Practice (1 study), Learning Disabilities Research (5 studies), Remedial and Special Education (2 studies), Rural Special Education Quarterly (1 study), and Reading Research Quarterly (2 studies).

We used means and standard deviations to calculate effect sizes for 80% of the studies (n=38); for 20% of the studies (n=10) we used data from F tests. We calculated a total of 255 effect sizes from the 48 studies; the number of effect sizes per study ranged from 1-36, with an average of 5 effect sizes per study. Effect sizes ranged from -1.3 to 15.1, with an average effect size across studies of 1.13 (standard deviation of 1.79). Thus, the average student who participated in a comprehension intervention in one of these studies scored at the 87th percentile on outcome measures. That is, her score on the outcome measure was higher than that of 87% of the students who participated in the control condition.
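Under a normality assumption, an average effect size of 1.13 corresponds to roughly the 87th percentile of the control-group distribution, as the short check below shows (scipy is used only for illustration and is not part of the original analysis).

    # Converting an average effect size to a percentile rank, assuming normally
    # distributed outcome scores.
    from scipy.stats import norm

    average_es = 1.13
    percentile = norm.cdf(average_es) * 100
    print(f"ES = {average_es} -> {percentile:.0f}th percentile")  # ~87th percentile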

Subject Characteristics

Together, these studies assessed the skills of 1,500 students; the number of students per study ranged from 4 to 70, with an average of 31 students per study (standard deviation of 15.7). In 29 of the 48 studies (60%), researchers reported the students' gender: these researchers worked with 288 females and 653 males. Thus, 63% of students (n=941) were identified as female or male.

The average age of students was 13 years (ages ranged from 9 to 17 years with a standard deviation of 2.5 years). This figure was based on just over half of the studies, however, because only 58% (n=28) reported the age of participating students. We found a significant positive correlation between students' age and effect size (r=.263, p<.01).

Likewise, we found significant differences in effect size among grade levels, F(2, 252)=12.47, p<.001. All studies identified the grade level of participating students: 52% (n=25) addressed elementary school students, 25% (n=12) addressed middle school students, and 23% (n=11) addressed high school students. The Student Newman-Keuls post-hoc test revealed that the mean effect size for high school students (M=2.26) was significantly greater than the mean effect size for students in elementary (M=.756) and middle school (M=.995).

In 35% of the studies (n=17), researchers described the ethnic backgrounds of 569 students. Among those studies, 67% (n=383) of the students were Caucasian, 25% (n=144) were African-American, 7% (n=39) were Hispanic, and 1% (n=3) were Native American or Asian.

Thirty-eight percent of the studies (n=18) reported information about the students' socioeconomic level, accounting for the socioeconomic status of 598 students. Among these studies, 56% (n=10) reported socioeconomic level as middle, 6% (n=1) reported socioeconomic level as high, and 38% (n=7) reported socioeconomic level as low. We found significant differences in effect size among the three socioeconomic levels, F(2, 113)=9.84, p<.001. The Student Newman-Keuls post-hoc test revealed that the mean effect size was significantly higher for students at the middle and upper socioeconomic levels (mean effect sizes of 1.41 and .912, respectively) than for students at the lower socioeconomic level (M=-.324).

Data from intelligence tests were reported for students in 83% (n=38) of the studies. Of these studies, 18% (n=7) reported that the average IQs of participating students were less than or equal to 90, 79% (n=30) reported IQs between 90 and 110, and 3% (n=1) reported IQs greater than or equal to 110. We did not find significant differences in effect sizes among studies based on participants' intelligence test scores, F(3, 208)=1.0, p<.40.

Study Methods

Researchers compared the effects of their reading comprehension interventions with the activities of control groups. In 45% (n=22) of the studies, researchers employed active controls; in 40% (n=19), they employed no-treatment controls; and in 15% (n=7) of the studies, they employed normal instruction controls.

We found significant differences in effect size depending upon the type of control group against which the intervention was measured, F(2, 252)=5.47, p<.01. Specifically, the post-hoc Student Newman-Keuls test revealed that studies in which members of control groups experienced no treatment yielded the greatest effect sizes (M=1.57); the mean effect size for these studies was significantly greater than for studies in which students experienced an active control (M=.845) or normal instruction (M=.769).
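For readers who want to reproduce this kind of comparison, the sketch below runs a one-way ANOVA on effect sizes grouped by type of control condition. The article used a Student Newman-Keuls post-hoc test, which common Python libraries do not provide, so Tukey's HSD is shown as a stand-in (requires a recent SciPy); the effect-size values are placeholders, not data from the studies reviewed.

    # One-way ANOVA on individual effect sizes grouped by control-group type,
    # followed by pairwise post-hoc comparisons (Tukey's HSD as a stand-in for
    # the Student Newman-Keuls test used in the article). Values are placeholders.
    from scipy.stats import f_oneway, tukey_hsd

    no_treatment   = [1.8, 1.4, 2.1, 0.9, 1.6]
    active_control = [0.7, 1.0, 0.6, 1.1, 0.8]
    normal_instr   = [0.5, 0.9, 0.7, 1.0, 0.8]

    f_stat, p_value = f_oneway(no_treatment, active_control, normal_instr)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

    # Pairwise follow-up comparisons among the three control-group types.
    print(tukey_hsd(no_treatment, active_control, normal_instr))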

In addition, we examined the means by which researchers assigned students to groups. Fifty-six percent (n=27) of studies employed random assignment to groups, 27% (n=13) employed within-subject designs, 11% (n=5) did not control assignment to groups, and 6% (n=3) employed matching procedures to assign subjects to groups.

We found significant differences in effect sizes depending upon the means by which students were assigned to groups, F(3, 251)=9.23, p<.001. The post-hoc Student Newman-Keuls test revealed that when students were assigned to groups by matching procedures, effect sizes were significantly greater (M=3.14) than when assignment was (a) random (M=.824), (b) not controlled (M=1.67), or (c) based on a within-subjects design (M=1.62).

Measures

Researchers used diverse means to measure study outcomes. We obtained 255 effect sizes, the majority of which were measured by factual questions (n=130 effects), retell assessments (n=71 effects), or strategy assessments (n=13 effects). The remaining effects (n=41) were measured by standardized reading tests, inferential questions, student-generated questions, assessment of vocabulary, detection of errors and nonexamples, reading rate, informal reading inventories, combinations of inferential and factual questions, memory tests, and other methods.

We did not find significant differences in effect size among the three most common types of outcome measures: factual questions, retell assessments, and strategy assessments, F(2, 209)=1.64, p<.20. Forty percent (n=19) of the studies reported assessing the reliability of the outcome measures; the average reliability was .87 (standard deviation of .125); it was assessed in a variety of ways, including coefficient alpha, phi, Pearson r, and Kuder-Richardson.

Intervention Characteristics

Forty-three percent of studies (n=21) employed cognitive interventions to improve reading comprehension; 17% (n=8) employed pre- or mid-reading interventions; 16% (n=8) employed computer interventions; 9% (n=4) employed cognitive-behavioral interventions; 7% (n=4) employed vocabulary interventions; 4% (n=2) employed other interventions; and 2% (n=1) employed Direct Instruction interventions.

We found significant differences for effect size among type of intervention, F(6, 248)=3.09, p<.01. The Student Newman-Keuls post-hoc test revealed that the mean effect size for other interventions (M=3.08) was significantly higher than the mean effect size for cognitive-behavioral (M=1.6), pre- or mid-reading (M=1.18), cognitive (M=1.0), computer-assisted (M=.876), vocabulary (M=.697), or Direct Instruction (M=.67) interventions.

Interventions varied widely in terms of duration of treatment. Fifty-four percent (n=26) of studies provided detailed information about duration of treatment; the average intervention required 30 hours to implement, with great variability in length of treatment (range of 2-400 hours; standard deviation of 76 hours). The relationship between length of treatment and effect size was not significant (r=-.012, p<.90).

In 56% of studies (n=27), researchers did not report sufficient information for us to determine who delivered treatment. In 19% (n=9) of studies, teachers delivered treatment; in 17% (n=8), trained assistants delivered treatment; and in 8% (n=4) of studies, authors delivered treatment.

We found significant differences in effect sizes depending upon the person who delivered treatment, F(3, 251)=14.55, p<.001. The Student Newman-Keuls post-hoc test revealed that studies in which authors delivered treatment yielded significantly greater effect sizes (M=3.75) than studies in which research assistants (M=1.16), teachers (M=.51), or undefined experimenters (M=1.0) delivered treatment.

DISCUSSION

The average effect size for students with learning disabilities who experienced an intervention in reading comprehension (M=1.1) was impressive. Specifically, the average student who received a reading comprehension intervention performed better than 87% of students in the control group(s). However, we must temper our enthusiasm about the effectiveness of interventions for the following reasons:

(a) effect sizes were significantly higher in studies where control students experienced no treatment;

(b) effect sizes were significantly higher in studies where students in the treatment and control groups were matched on some variable before being assigned to groups;

(c) effect sizes were significantly higher in studies where the authors delivered treatment to students; and

(d) the majority of experimental effects were measured by factual questions and retell assessments.

In other words, researchers can produce clear and substantial benefits from their reading comprehension interventions under specific circumstances. To produce such effects, they must: (a) compare their experimental treatment to a non-treatment, (b) assign students to experimental and control conditions using methods other than the most rigorous, (c) deliver the treatment themselves rather than have classroom teachers deliver it, and (d) assess the effects of the treatment on lower level measures of comprehension.

The finding that researchers obtained stronger effects when control groups experienced no treatment should not surprise us; instead, it should prompt us to employ active comparisons in our research. We are accomplishing little when we prove that our interventions work better than nothing. Indeed, only when we compare one intervention to another can we discern the characteristics of effective interventions. And, we advance the field of learning disabilities as we develop effective interventions (Lloyd, Tankersley, & Talbott, 1994).

Future research in reading comprehension for students with learning disabilities must include studies that are more rigorous, both in their method and in their reporting. Studies in which researchers employed random assignment to groups and active control conditions tended to yield lower effect sizes. And, too few studies presented detailed information about subjects, including gender, ethnicity, and socioeconomic status. For example, 29 studies provided information about gender (accounting for 941 of the 1,500 students); 17 studies provided information about ethnicity (accounting for 569 of the 1,500 students); and 18 studies provided information about socioeconomic status (accounting for 598 of the 1,500 students).

As a result, we do not know whether reading comprehension interventions are effective for diverse groups of learners: for girls and boys, or for students of diverse ethnic and economic backgrounds. Do we have interventions that are potent enough to work for students from diverse ethnic backgrounds, and at all economic levels? Do we have interventions for females with learning disabilities? We need to address these questions and describe the characteristics of our samples, so that we can develop effective interventions for all learners.

Researchers did provide us with information about students' grade and age. We found that studies involving older students yielded significantly greater effect sizes. Perhaps this is due to the increased emphasis on comprehension at higher grade levels, particularly literal comprehension in content area classes. Or, it may be because there are more opportunities for students to improve their skills as they progress through school.

Others have found that comprehension instruction is more effective in elementary school than in middle or high school (e.g., Neville & Searls, 1991; Searls & Neville, 1988). But, Fitzgerald and Spiegel's (1983) work indicates that the effectiveness of comprehension instruction depends upon its applicability to the demands of the grade levels. Clearly, we need to ascertain which interventions work best for elementary, middle, and high school students.

We did not find significantly different effect sizes in studies that employed the six major types of intervention: cognitive, cognitive-behavioral, vocabulary, pre- and mid-reading, Direct Instruction, and computer-assisted. Although we did not find differences (and we do not want to interpret null results), this does not mean that differences among these interventions do not exist. Our findings are limited because our intervention categories were not mutually exclusive and because we had a small number of studies per category. Thus, we must call for research in which researchers carefully define their interventions, and we must look for studies that compare two or more types of intervention (e.g., Graves, 1986).

Furthermore, when teachers conducted the interventions, we found weaker effects. Under these conditions, the mean effect size was .51; it was not as high as the overall average effect (M=1.1), nor as high as the effect when subjects were randomly assigned to groups (M=.824), or when experimenters used active controls (M=.845). Granted, students who experienced teachers' interventions performed better than 70% of students in control groups. But, teachers provided interventions for only 243 (16%) of the 1,500 students in these studies. Clearly, we need more investigations in which teachers deliver interventions to students; otherwise, our interventions are of limited use. Teacher-delivered interventions that are highly effective will obviously have greater external validity. Although interventions delivered by study authors produce large effects, they may be impossible for teachers to apply.

In the majority of studies (regardless of students' age), researchers assessed student recall of factual information from text. Sachs (1983) examined the effects of the intervention on diverse levels of reading comprehension (i.e., literal, inferential, evaluative, and appreciative). Sachs applied Barrett's (1976) reading comprehension taxonomy to design test questions that addressed the four levels of comprehension. The average effect size for Sachs' study (M=.690) was much smaller than the average across all studies (M=1.12); but, Sachs assessed the degree to which students improved their comprehension at all levels of difficulty (a greater challenge than improving literal comprehension alone). We found few studies (n=4 studies, 5 effect sizes) that reported the effects of interventions on students' higher level comprehension skills. This is another area for future research.

The most unusual result was the finding that studies of "other" interventions yielded significantly greater effect sizes than studies of interventions in the six major categories. We examined the characteristics of the three studies in the "other" category: (a) a study of processing skills from the Kaufman Assessment Battery for Children (Brailsford et al., 1984); (b) a study of cooperative learning (Cosden et al., 1985); and (c) a study of problem solving and social skills (Williams, 1990). Together, effect sizes from these studies (n=11) accounted for 4% of all effect sizes. They ranged from .266 to 15.1, with an average effect size of 3.08 and a median effect size of .668.

The Williams study clearly influenced the large mean effect size for this category: 6.2, compared to mean effect sizes for the other two studies of .403 (Cosden et al., 1985) and .658 (Brailsford et al., 1984). The median effect size in the three studies (.668) appears to be a more accurate reflection of the studies' effectiveness than the mean. Williams' results clearly skewed the average effect size for studies in the other category.

Williams (1990) asked high school students to read stories about adolescents with problems and then asked questions about the passages. Thus, Williams used materials (i.e., stories about peers' problems) that were distinctly different from typical academic materials used in other studies with adolescents (e.g., social studies, science, and history passages), materials requiring students to draw from their prior knowledge and to distill facts and information from their reading. In contrast, Williams' materials were novel and personal. When Williams compared the intervention to normal classroom instruction, she obtained extremely large effect sizes (M=6.19; standard deviation=5.36; median=5.64; range=1.85 to 15.1).

We note the following limitations of our meta-analysis: (a) we included only published studies, which tended to yield higher effect sizes than those that were not published (e.g., Searls & Neville, 1988); (b) we excluded studies with single-subject designs, yet recognize that exceptional work in comprehension uses these designs (e.g., Schumaker, Deshler, Alley, Warner, & Denton, 1982); and (c) we included only studies of students with identified learning disabilities, yet there is exceptional work in comprehension for poor readers who have not been so identified (e.g., Paris, Cross, & Lipson, 1984).

Although researchers have developed effective methods for teaching comprehension to students with learning disabilities, there is much work to be done. For example, we need to compare our interventions with active controls to discern the best among them; we need to employ rigorous methods (e.g., random assignment to groups and detailed subject description); we need to employ teachers to conduct interventions; and we need to develop means of teaching sophisticated comprehension skills, such as evaluation and appreciation (Smith & Barrett, 1974).

Teaching comprehension to students is more complex than teaching decoding, not only because the task is more abstract, but also because individuals with comprehension problems tend to demonstrate decoding fluency problems (Perfetti, 1985). Nevertheless, we can teach students with learning disabilities to comprehend what they read; we can teach so that students will outperform their peers who receive normal instruction. But, we must find methods for teachers to use, or our students will not continue to improve, nor will they maintain the levels of performance that they show in our experiments.

REFERENCES

Billingsley, B.S., & Wildman, T.M. (1988). The effects of pre-reading activities on the comprehension monitoring of learning disabled adolescents. Learning Disabilities Research, 4, 36-44.
Bos, C.S., & Anders, P.L. (1990a). Effects of interactive vocabulary instruction on the vocabulary learning and reading comprehension of junior-high learning disabled students. Learning Disability Quarterly, 13, 31-42.
Bos, C.S., & Anders, P.L. (1990b). Interactive practices for teaching content and strategic knowledge. In T.E. Scruggs & B.Y.L. Wong (Eds.), Intervention research in learning disabilities (pp. 116-185). New York: Springer-Verlag.
Bos, C.S., Anders, P.L., Filip, D., & Jaffe, L.E. (1989). Effects of an interactive instructional strategy for enhancing reading comprehension and content area learning for students with learning disabilities. Journal of Learning Disabilities, 22, 384-390.
Bos, C.S., & Vaughn, S. (1988). Strategies for teaching students with learning and behavior problems. Boston: Allyn and Bacon.
Brailsford, A., Snart, F., & Das, J.P. (1984). Strategy training and reading comprehension. Journal of Learning Disabilities, 17, 287-290.
Chan, L.K.S. (1991). Promoting strategy generalization through self-instructional training in students with reading disabilities. Journal of Learning Disabilities, 24, 427-433.
Cosden, M., Pearl, R., & Bryan, T. (1985). The effects of cooperative and individual goal structures on learning disabled and nondisabled students. Exceptional Children, 52(2), 103-114.
Durkin, D. (1978). What classroom observations reveal about reading comprehension instruction. Reading Research Quarterly, 4, 482-533.
Engelmann, S., Becker, W.C., Carnine, D., & Gersten, R. (1988). The Direct Instruction follow-through model: Designs and outcomes. Education and Treatment of Children, 11, 303-317.
Fitzgerald, J., & Spiegel, D.L. (1983). Enhancing children's reading comprehension through instruction in narrative structure. Journal of Reading Behavior, 14, 1-18.
Glass, G.V., McGaw, B., & Smith, M.L. (1981). Meta-analysis in social research. Newbury Park, CA: Sage Publications.
Graves, A.W. (1986). Effects of direct instruction and metacomprehension training on finding main ideas. Learning Disabilities Research, 1(2), 90-100.
Horton, S.V., Lovitt, T.C., Givens, A., & Nelson, R. (1989). Teaching social studies to high school students with academic handicaps in a mainstreamed setting: Effects of a computerized study guide. Journal of Learning Disabilities, 22, 102-107.
Jenkins, J.R., Larson, K., & Fleisher, L.A. (1983). Effects of oral reading error corrections on word recognition and reading comprehension. Learning Disability Quarterly, 6, 139-145.
Kavale, K.A. (1990). Variances and verities in learning disability interventions. In T.E. Scruggs & B.Y.L. Wong (Eds.), Intervention research in learning disabilities (pp. 3-33). New York: Springer-Verlag.
Kavale, K.A., & Glass, G.V. (1981). Meta-analysis and the integration of research in special education. Journal of Learning Disabilities, 9, 531-538.
Lenz, B.K., Bulgren, J., & Hudson, P. (1990). Content enhancement: A model for promoting the acquisition of content by individuals with learning disabilities. In T.E. Scruggs & B.Y.L. Wong (Eds.), Intervention research in learning disabilities (pp. 122-165). New York: Springer-Verlag.
Lloyd, J.W., Tankersley, M., & Talbott, E. (1994). Using single-subject research methodology to study learning disabilities. In S. Vaughn & C. Bos (Eds.), Research issues in learning disabilities: Theory, methodology, assessment, and ethics (pp. 163-177). New York: Springer-Verlag.
Mastropieri, M.A., & Scruggs, T.E. (1987). Effective instruction for special education. Boston: Little-Brown.
Mastropieri, M.A., & Scruggs, T.E. (1988). Increasing content area learning of learning disabled students: Research implementation. Learning Disabilities Research, 4, 17-25.
Neville, D.D., & Searls, E.F. (1991). A meta-analytic review of the effect of sentence combining on reading comprehension. Reading Research and Instruction, 31, 63-76.
Palinscar, A.S., & Brown, A.L. (1989). Classroom dialogues to promote self-regulation comprehension. In J. Brophy (Ed.), Advances in research on teaching (Vol. 1, pp. 35-71). Greenwich, CT: JAI Press.
Pany, D., & Jenkins, J.R. (1978). Learning word meanings: A comparison of instructional procedures and effects on measures of reading comprehension with learning disabled students. Learning Disability Quarterly, 1, 21-32.
Pany, D., Jenkins, J.R., & Schreck, J. (1982). Vocabulary instruction: Effects on word knowledge and reading comprehension. Learning Disability Quarterly, 5, 202-215.
Paris, S.G., Cross, D.R., & Lipson, M.Y. (1984). Informed strategies for learning: A program to improve children's reading awareness and comprehension. Journal of Educational Psychology, 76, 1239-1252.
Perfetti, C.A. (1985). Reading ability. New York: Oxford University Press.
Rosenthal, R. (1991). Meta-analytic procedures for social research. Newbury Park, CA: Sage Publications.
Ryan, E.B. (1981). Identifying and remediating failures in reading comprehension: Toward an instructional approach for poor comprehenders. In G.E. MacKinnon & T.G. Waller (Eds.), Reading research: Advances in theory and practice (Vol. 3, pp. 240-265). New York: Academic Press.
Sachs, A. (1983). The effects of three pre-reading activities on learning disabled students' reading comprehension. Learning Disability Quarterly, 6(3), 248-251.
Schumaker, J.B., Deshler, D.D., Alley, G.R., Warner, M.M., & Denton, P. (1982). Multipass: A learning strategy for improving reading comprehension. Learning Disability Quarterly, 5, 409-414.
Scruggs, T.E., & Mastropieri, M.A. (1989). Mnemonic instruction of learning disabled students: A field-based evaluation. Learning Disability Quarterly, 12, 119-125.
Searls, E.F., & Neville, D.D. (1988). An exploratory review of sentence-combining research related to reading. Journal of Research and Development in Education, 21, 1-15.
Smith, R.J., & Barrett, T.C. (1974). Teaching reading in the middle grades. Reading, MA: Addison-Wesley.
Stock, W.A. (1994). Systematic coding for research synthesis. In L.V. Hedges & H.M. Cooper (Eds.), The handbook of research synthesis (pp. 125-138). New York: Russell Sage Foundation.
Weisberg, R. (1988). 1980s: A change in the focus of reading comprehension research: A review of reading/learning disabilities research based on an interactive model of reading. Learning Disability Quarterly, 11, 149-159.
Williams, J.A. (1990). The use of schema in research on the problem solving of learning disabled adolescents. In T.E. Scruggs & B.Y.L. Wong (Eds.), Intervention research in learning disabilities (pp. 304-321). New York: Springer-Verlag.
Wong, B.Y.L., & Jones, W. (1982). Increasing metacomprehension in learning disabled and normally achieving students through self-questioning training. Learning Disability Quarterly, 5, 228-240.
Ysseldyke, J.E., Thurlow, M.L., O'Sullivan, P., & Christenson, S.L. (1989). Teaching structures and tasks in reading instruction for students with mild handicaps. Learning Disabilities Research, 4, 78-86.

FOOTNOTES

1. We did not try to obtain nonpublished work, but following the advice of an experienced meta-analyst (Kavale, personal communication, April 1992), we obtained as much as possible of the published work in reading comprehension for students with learning disabilities.

Requests for reprints should be addressed to: Elizabeth Talbott, Curry School of Education, University of Virginia, 405 Emmet Street, Charlottesville, VA 22903.
