
Page 1: SREE Annual Conference March 6, 2010

SREE ANNUAL CONFERENCE
MARCH 6, 2010

Using RCTs to determine the impact of reading interventions on struggling readers

Newark Public Schools

Jennifer Hamilton, Senior Study Director
[email protected]

Matthew Carr, Analyst
[email protected]

Page 2: SREE Annual Conference March 6, 2010

Overview of Presentation

Context – Striving Readers in Newark, NJ
Fidelity of implementation
  Adherence
  Exposure
Discussion
For more information…

Page 3: SREE Annual Conference March 6, 2010

Context – Newark, NJ

35% of children living in poverty (compared to 18% nationally)
Largest school district in the state of NJ
A 'district in need of improvement' for the last 4 years
State took over the district in 1995 (limited control given back in 2008)
Only ~50% of students in grades 6, 7, and 8 are proficient readers

Page 4: SREE Annual Conference March 6, 2010

Importance of Fidelity

Fidelity is the extent to which the intervention as implemented is faithful to the pre-stated model.

Little black dress is in; little black box is out
Internal validity – helps to explain failure
External validity – helps to make treatment more stable and replicable (treatment has to be well defined)
Helps ensure treatment is absent from the control condition

Page 5: SREE Annual Conference March 6, 2010

Components of Fidelity – Theory of Change

Adherence

Exposure

Page 6: SREE Annual Conference March 6, 2010

Establishing Fidelity (Adherence)

4 steps: (1) identify, (2) measure, (3) score, (4) analyze

Step 1: Identify critical components
  Adaptation issue

Step 2: Measure
  Multiple sources of data, range of methodologies
  Extant data (training receipt, class size, SRI, computer use)
  Classroom observations
  Practical considerations – $$$$
    Qualifications of data collection staff
    Number of points in time (cost)
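
A minimal sketch of how Steps 1 and 2 might be recorded, pairing each critical component with the data sources used to measure it. The component and source names are illustrative, drawn from the extant-data examples above.

```python
# Illustrative Step 1 -> Step 2 mapping: each critical component
# paired with the data sources used to measure it. Names are
# hypothetical, following the extant-data examples on this slide.
CRITICAL_COMPONENTS = {
    "training_receipt": ["district training logs"],
    "class_size": ["enrollment rosters"],
    "sri_administration": ["SRI score files"],
    "software_sessions": ["computer usage logs", "classroom observations"],
}

for component, sources in CRITICAL_COMPONENTS.items():
    print(f"{component}: measured via {', '.join(sources)}")
```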

Page 7: SREE Annual Conference March 6, 2010

Establishing Fidelity (Adherence)

Step 3: Score

Assign sub-scores
  e.g., number of sessions per week using instructional software
Combine to a single score
  Equal weighting (a scoring sketch follows the table below)

Fidelity   % of classrooms
High       18.2%
Adequate   36.4%
Low        18.2%
Very Low   27.3%
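
A minimal sketch of this scoring step, assuming each component sub-score is scaled 0-1 and combined with equal weighting. The category cutoffs are hypothetical; the deck reports the resulting distribution but not the thresholds behind it.

```python
# Equal-weighted scoring (Step 3) with hypothetical category cutoffs.

def adherence_score(sub_scores):
    """Equal-weighted combination of component sub-scores (each 0-1)."""
    return sum(sub_scores) / len(sub_scores)

def adherence_category(score, cutoffs=(0.9, 0.75, 0.6)):
    """Bin a single adherence score into the four fidelity categories."""
    high, adequate, low = cutoffs
    if score >= high:
        return "High"
    if score >= adequate:
        return "Adequate"
    if score >= low:
        return "Low"
    return "Very Low"

# Example: three sub-scores, e.g. software sessions per week,
# class size, SRI administration.
print(adherence_category(adherence_score([1.0, 0.8, 0.7])))  # Adequate
```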

Page 8: SREE Annual Conference March 6, 2010

Newark – Single Adherence Score

Year 1 = 88%
Year 2 = 82%
Year 3 = 89%

Page 9: SREE Annual Conference March 6, 2010

Establishing Fidelity (Adherence)

Step 4: Analysis

Descriptive
  But profoundly unsatisfying, given all the effort and expense
Generally, fidelity should not be used as a mediating variable
  Fidelity is usually related to the error term as well as the outcome
  The error term contains unmeasured factors, such as teacher quality/charisma and student engagement
Non-experimental/exploratory
  Fidelity as a predictor (with lots of covariates)
  Correlational
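
A sketch of the exploratory analysis just described: fidelity entered as a predictor alongside covariates in an OLS model. The file and column names are hypothetical; as the slide notes, the estimate is correlational, not causal.

```python
# Exploratory (non-experimental) sketch: fidelity as a predictor of
# reading outcomes alongside covariates. Because fidelity is
# correlated with unmeasured factors in the error term (teacher
# quality/charisma, student engagement), this is correlational only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("classrooms.csv")  # hypothetical classroom-level extract
model = smf.ols(
    "posttest_sri ~ fidelity_score + pretest_sri + class_size + pct_sped",
    data=df,
).fit()
print(model.summary())
```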

Page 10: SREE Annual Conference March 6, 2010

Newark – Descriptive Adherence Data

Page 11: SREE Annual Conference March 6, 2010

Exposure

You are here

Page 12: SREE Annual Conference March 6, 2010

Exposure

Student Receipt of Intervention – Components

Attrition
Attendance
No-Shows

Page 13: SREE Annual Conference March 6, 2010

Exposure – Attrition

WWC (2008) benchmarks for attrition tolerance

Newark:
  19.6% overall
  5.6% differential
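
A minimal sketch of the two attrition quantities the WWC (2008) benchmarks evaluate. The sample sizes are hypothetical, chosen so the output reproduces the Newark rates above; the resulting pair would then be checked against the WWC tolerance boundary.

```python
# Overall and differential attrition from randomized vs. analyzed
# counts. Sample sizes are hypothetical, chosen to reproduce the
# Newark figures (19.6% overall, 5.6% differential).

def attrition_rates(randomized_t, analyzed_t, randomized_c, analyzed_c):
    overall = 1 - (analyzed_t + analyzed_c) / (randomized_t + randomized_c)
    differential = abs(analyzed_t / randomized_t - analyzed_c / randomized_c)
    return overall, differential

overall, differential = attrition_rates(1000, 832, 1000, 776)
print(f"overall = {overall:.1%}, differential = {differential:.1%}")
```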

Page 14: SREE Annual Conference March 6, 2010

Exposure – Attendance

Number of unexcused absences by analytic group

            Group 1   Group 2   Group 3   Group 4   Group 5
Treatment   24.20     23.25     21.57     27.09     23.90
Control     24.15     23.30     21.40     27.12     23.80

Group 1 = 1 year of potential exposure (6, 7, 8 year 1; 6 year 2)
Group 2 = 1 year of potential exposure – 6th graders only (years 1, 2)
Group 3 = 2 years of potential exposure – 7th graders only (year 2)
Group 4 = 2 years of potential exposure – 8th graders only (year 2)
Group 5 = 2 years of potential exposure – 7th + 8th graders (7, 8 year 2)

No significant differences between Treatment and Control students (test sketched below)
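
A sketch of the comparison behind the "no significant differences" conclusion: a two-sample t-test on unexcused absences within an analytic group. The deck reports only group means, so the simulated counts below are placeholders centered on the Group 1 means.

```python
# Welch two-sample t-test on unexcused absences, treatment vs.
# control. Absence counts are simulated placeholders centered on the
# Group 1 means reported above (24.20 vs. 24.15).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
treatment = rng.poisson(lam=24.20, size=300)
control = rng.poisson(lam=24.15, size=300)

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # expect a non-significant p
```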

Page 15: SREE Annual Conference March 6, 2010

Exposure – No-Shows

Intention to Treat (ITT) vs. Treatment on the Treated (TOT)
  Removing T students who didn't receive T would bias the data
  But keeping them in underestimates effects
  Issue of real-world implementation vs. ideal implementation

Policymakers want to know TOT; researchers need to report ITT

Solution – the Bloom Adjustment

Page 16: SREE Annual Conference March 6, 2010

The Bloom Adjustment

Adjusts the effect of an intervention upwards by the treatment-group no-show rate (γ):

  AllSubjectEffect (AS) = γ · NoShowEffect + (1 − γ) · TreatSubjectEffect (TS)

Assuming the effect per no-show is zero, then:

  AS = γ · 0 + (1 − γ) · TS
  AS = (1 − γ) · TS

Therefore:

  TS = AS / (1 − γ)
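
A minimal implementation of the adjustment above; the spot-check values come from the Striving Readers table two slides ahead.

```python
# Bloom (1984) no-show adjustment: recover the treatment-on-the-
# treated effect (TS) from the intention-to-treat effect (AS) and the
# treatment-group no-show rate (gamma), assuming no-shows experience
# zero effect.

def bloom_adjust(itt_effect, no_show_rate):
    return itt_effect / (1 - no_show_rate)

# Spot-checks against the Striving Readers table (slide 18):
print(round(bloom_adjust(0.34, 0.09), 2))  # 0.37  (Male, 2 yrs 7th)
print(round(bloom_adjust(0.47, 0.08), 2))  # 0.51  (Hispanic, 2 yrs 8th)
```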

Page 17: SREE Annual Conference March 6, 2010

Example: Striving Readers

Student sample divided into 5 analytic groups

                                     Year 1 (06-07)    Year 2 (07-08)
                                     6    7    8       6    7    8
(1) 1 year of exposure (n=1,772)     X    X    X       X
(2) 1 year of exposure (n=904)       X                 X
(3) 2 years of exposure (n=444)                             X
(4) 2 years of exposure (n=373)                                  X
(5) 2 years of exposure (n=817)                             X    X

Page 18: SREE Annual Conference March 6, 2010

Striving Readers Example

ITT effect sizes compared to Bloom-adjusted (year 2)

Subgroup                        Test Domain     No-Show Rate   ITT Effect Size   Bloom Effect Size
Male (2 yrs 7th)                Vocabulary      9%             0.34              0.37
Male (2 yrs 7+8)                Vocabulary      8%             0.23              0.25
Hispanic (2 yrs 8th)            Language arts   8%             0.47              0.51
Special Education (2 yrs 7+8)   Comprehension   7%             0.24              0.25
Special Education (2 yrs 7th)   Comprehension   5%             0.37              0.39

Page 19: SREE Annual Conference March 6, 2010

Review

Adherence = receipt of materials + accurate delivery
  4 steps: identify, measure, score, analyze

Receipt (Exposure)
  Attrition
  Attendance
  No-Shows – Bloom Adjustment

Page 20: SREE Annual Conference March 6, 2010

For more information…

Bloom, H. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8, 225-246.

Durlak, J.A., & DuPre, E.P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

Hill, L.G., Maucione, K., & Hood, B.K. (2007). A focused approach to assessing program fidelity. Prevention Science, 8, 25-34.

Mowbray, C., Holter, M., Teague, G., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315-340.

What Works Clearinghouse. (2008). WWC Procedures and Standards Handbook. Available online at http://ies.gov/ncee/wwc/references

Page 21: SREE Annual Conference March 6, 2010

On the Web

Department of Elementary and Secondary Education Striving Readers webpage

http://www.ed.gov/programs/strivingreaders/index.html