A Cluster Randomized Control Field Trial of the ABRACADABRA Web-based Reading Technology: Replication and Extension of Basic Findings
Digital Literacy for Preschoolers Conference, McGill University, June 2015
http://abralite.concordia.ca
Student Module: Instruction
* Choose by skill type or by story
* Four main skills: Alphabetics, Fluency, Comprehension, and Writing
* 32 activities
* Digital stories: 17 stories + 15 students' stories
Built-in Scaffolding and Support
* Consistent models of strategies
* Visual and audio support

Skill Development
* Choose a specific skill: letter sounds, reading, understanding the story, or writing
* Multiple levels
Teacher Module
* Teacher Guide (see also: http://grover.concordia.ca/abracadabra/resources)
* Stories
* Printed Resources
* Technical Resources (FAQs)
* Prototype (BLTK) built in 1998
* ABRACADABRA built in 2002 (usability testing)
* Pilot study in 2004 (small effect sizes)
* Larger study, 2005-2006, Kindergarten/Grade 1 (Savage et al., 2009)
* Pan-Canadian external RCT, 2007-2009 (Savage et al., 2013)
* Australian studies, 2008-2010 (Wolgemuth et al., 2011, 2013, 2014)
* ABRA-ePEARL connection, 2010-2012
[Figure: CTOPP blending standard scores at pre-test and post-test for the Phoneme, Rime, and Control groups]
[Figure: GRADE listening comprehension (stanine) at pre-test and post-test for the Phoneme, Rime, and Control groups]
[Figure: GRADE reading comprehension scores at post-test for the Phoneme, Rime, and Control groups]
[Figure: Woodcock-Johnson fluency (raw scores) at pre-test, post-test, and follow-up for the Phoneme, Rime, and Control groups]
* Foci: to learn how teachers use ABRA technology in their ELA lessons, and to see whether ABRA has an impact on students' literacy development
* Pan-Canadian focus: participants from Alberta, Ontario, and Quebec
* Pre- and post-tests administered
* Randomized controlled trial in 76 classrooms (36 experimental and 36 control)
* Kindergarten and Grade 1 students
* Over 1,000 children participated
* 10-12 weeks of intervention

* Significant beneficial effects on children's:
  * letter/sound knowledge
  * word reading
  * phonological awareness
* A look at implementation suggests that teachers use ABRA only as a resource to teach phonics (consistent with the effect patterns above)
Table 1. Research on ABRACADABRA: best evidence on impacts

Reading Skill   k (# of comparisons)   Average Effect Size   Percentile Advantage
Alphabetics     21                     +0.396                15.39
Fluency         19                     +0.187                 7.42
Comprehension   11                     +0.340                13.31
Overall         51                     +0.306                12.02
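The percentile-advantage column follows from the average effect size via the standard normal conversion (Φ is the standard normal CDF). As a worked check on the Alphabetics row:

```latex
\text{Percentile advantage} = 100\,\bigl(\Phi(d)-0.5\bigr),
\qquad 100\,\bigl(\Phi(0.396)-0.5\bigr) \approx 100\,(0.654-0.5) \approx 15.4
```

The same conversion reproduces the other rows, e.g. Φ(0.306) ≈ 0.620 gives the overall 12.02-percentile advantage.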
* Aim: replicate the RCT of the teacher-delivered intervention to test effectiveness
* One whole (remote) school board (external validity)
* 107 kindergarten and 96 Grade 1 children in 24 classrooms
* 10-12 hours of teaching (with close training and support), monitored by the school board
* Treatment integrity and testing handled by the board

Classroom-level randomization means the analysis must also be run at the classroom level:
* HLM of the results (with pre-test as a covariate) revealed a significant main effect on letter-sound knowledge (p < .01), favoring ABRA
* This analysis is highly conservative
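A minimal sketch of the kind of classroom-level analysis described above, assuming a long-format dataset with hypothetical column names (the deck does not specify the exact model or software used):

```python
# Hypothetical sketch: mixed model with a random intercept per classroom,
# mirroring an HLM with pre-test as covariate. Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: posttest, pretest, condition ("ABRA"/"control"), classroom.
df = pd.read_csv("letter_sound_scores.csv")  # hypothetical file name

model = smf.mixedlm("posttest ~ pretest + condition",
                    data=df,
                    groups=df["classroom"])
result = model.fit()
print(result.summary())  # the condition coefficient tests the ABRA effect
```

Modeling the classroom as a random effect avoids treating clustered children as independent observations, which is why this analysis is conservative relative to a child-level test.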
* Effect size analyses (value-added d; one common formulation is sketched below):
  * Letter sounds: +.66
  * Phonological blending: +.52
  * Word reading: +.52
* The effectiveness trial was also an ecological validity trial: the study was accepted by teachers and officials
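One common way to compute a value-added (gain-score) d, offered here as an assumption since the deck does not give its exact formula, is the difference in pre-to-post gains between groups, scaled by the pooled standard deviation:

```latex
d_{\text{value-added}}
  = \frac{\bigl(\bar{X}^{T}_{\text{post}}-\bar{X}^{T}_{\text{pre}}\bigr)
        - \bigl(\bar{X}^{C}_{\text{post}}-\bar{X}^{C}_{\text{pre}}\bigr)}
         {SD_{\text{pooled}}}
```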
Four recent systematic reviews were chosen:
• Slavin, Cheung, Groff & Lake (2008)
• Slavin, Lake, Chambers, Cheung, & Davis (2009)
• Torgerson & Zhu (2003)
• Andrews, Freeman, Hou, McGinn, Robinson & Zhu (2007)

The reviews used comparable criteria:
• randomized or matched control groups
• study duration of at least 12 weeks
• valid achievement measures
• effect sizes, means, or mean gain scores

All found small to modest effect sizes and very little evidence of ICT effectiveness.
“… the effects of supplementary computer-assisted instruction were small.”
Slavin, Cheung, Groff & Lake (2008)
“…instructional process programs designed to change daily teaching practices have substantially greater research support than programs that focus on curriculum or technology alone. ”
Slavin, Lake, Chambers, Cheung, & Davis (2009)
“These data would suggest that there is little evidence to support the widespread use of ICT in literacy learning in English.”
Torgerson & Zhu (2003)
“… we are thus unable to make confident comparisons between the effectiveness of different ICTs on learning in English for 5- to 16-year-olds.”
Andrews, Freeman, Hou, McGinn, Robinson, & Zhu (2007)
The Archer et al. (2014) review reanalyzed 28 ICT effectiveness studies from the four systematic reviews that were deemed high quality (all studies were available). Two variables were examined:
1. The reported quality of the training and support that teachers received for implementing the ICT innovations
2. The reported quality and fidelity of teachers' implementation of the ICT innovations
Two coding instruments were applied to each study:

Training and Support (8 questions):
• description of the implementation training
• details of the intervention implementation
• overall impression of the training section

Implementation Fidelity (11 questions):
• implementation fidelity process (were procedures followed up to ensure correct implementation?)
• implementation fidelity measurement tool
• results
• overall impression of the process and the measuring tools

Coding scheme (both instruments):
0 = not present
1 = mentioned but no information
2 = mentioned with limited detail
3 = mentioned with enough detail to roughly replicate
[Figure: Number of studies at each code (0-3) for Training Support and for Implementation Fidelity; 53.8% of studies scored 0 on Training Support and 69.2% scored 0 on Implementation Fidelity]

Inter-rater reliability (N = 28):
• Training Support: kappa = .88, p < .001
• Implementation Fidelity: kappa = .65, p < .001
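A minimal sketch of how agreement statistics like these are computed, assuming the two coders' 0-3 ratings for the 28 studies are available as lists (the values below are illustrative only; scikit-learn's kappa does not return a p-value, so the reported p < .001 would come from a separate significance test):

```python
# Hypothetical sketch: Cohen's kappa between two coders' 0-3 ratings.
from sklearn.metrics import cohen_kappa_score

coder_a = [0, 0, 2, 3, 1, 0, 2, 0]  # illustrative values, not the real data
coder_b = [0, 1, 2, 3, 1, 0, 2, 0]

print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```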
• Calculations used a main dataset
• The dataset reflected how the review papers structured their analyses (e.g., if a meta-analysis reported distinct effect sizes for several studies, several scores were used here), giving n = 38 effect sizes
• 16 teacher-led, 21 research-student-led, and 1 parent-led intervention
[Figure: Mean effect size by Training/Support code (Score 0-3) and by Implementation Fidelity code]
Training Support
IV: training support; DV: effect size
F(3, 34) = 3.77, p = .019, ηp² = .249
Contrast analysis:
• Code 1 vs. 0: p = .74
• Code 2 vs. 0: p = .007
• Code 3 vs. 0: p = .30

Implementation Fidelity
IV: implementation fidelity; DV: effect size
F(2, 35) = .89, p = .42, ηp² = .048
Contrast analysis:
• Code 1 vs. 0: p = .21
• Code 2 vs. 0: p = .80
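A minimal sketch of these two analyses, assuming a hypothetical data frame with one row per effect size (n = 38) and its 0-3 code; with treatment (dummy) coding, each level-k coefficient is the planned contrast of code k against code 0:

```python
# Hypothetical sketch: one-way ANOVA of effect size on the 0-3 code,
# plus code-k-vs.-code-0 contrasts via dummy coding. Column names assumed.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("effect_sizes.csv")  # assumed columns: es, train_support

fit = smf.ols("es ~ C(train_support)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # omnibus test, as in F(3, 34) above
print(fit.summary())  # C(train_support)[T.k] terms: code k vs. code 0
```

Partial eta squared can then be read off the ANOVA table as ηp² = SS_effect / (SS_effect + SS_error).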
Reported training and support has a large influence on technology effect sizes for literacy outcomes, explaining 26% of the variance. For the n = 5 most highly rated studies, ES ≈ 1; for the rest, ES ≈ 0. Reported treatment integrity shows no such effect.
Why no effect for 'Score 3' studies?
All 'Score 3' studies come from the Campuzano et al. analysis of 10 technology products, in which training and support were provided by the 'vendors' of the commercial products (as reported by teachers).
A single generic training-and-support paragraph covers all 10 programs, despite wide variation (2-18 hours of training, and over 50% of teachers receiving no additional support or training), and it is not tied to specific effect sizes.
The 'implementation science' of technology interventions adds a fresh perspective to the (otherwise) pessimistic findings of existing meta-analyses:
• There are grounds for optimism in well-trained and well-supported trials, but the database of well-supported interventions here is small
• Some recent studies show similar effects (Chambers et al., 2008; Ecalle et al., 2009; Savage et al., 2012; Wolgemuth et al., 2012)
• We have not considered the absolute quality of the technology here: Grant et al. (2012) found few current technologies that conformed to best evidence
• The results need to be confirmed in a formal meta-analysis