
Page 1:

Training in Experimental Design: Developing scalable and adaptive computer-based science instruction

Mari Strand Cary, David Klahr, Stephanie Siler, Cressida Magaro, Junlei Li
Carnegie Mellon University & University of Pittsburgh

TED

Page 2:

Overview of the TED project

Curriculum: Experimental design, evaluation, and interpretation

Students: 5th–8th graders

Schools: 6 inner-city
– 4 low-SES & challenging classroom environments
– 2 mid-to-high SES

End goal: Computer-based adaptive tutor
– 1 student : 1 computer in the classroom environment
– Provides individualized, adaptive instruction
– Supplements (does not replace!) the teacher

Page 3:

What do we mean by “Experimental design?”

CVS: Control of Variables Strategy

1. Simple procedure for designing unconfounded experiments

(Vary one thing at a time)

2. Conceptual basis for making valid inferences from data

(Isolating the causal path)

Page 4:

CVS and Ramps

Test whether the ramp surface affects the distance that a ball travels.

Variable       Ramp 1    Ramp 2 (confounded)    Ramp 2 (unconfounded)
Surface        Smooth    Rough                  Rough
Track length   Short     Long                   Short
Height         High      Low                    High
Ball           Golf      Rubber                 Golf
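To make the comparison concrete, here is a minimal sketch (not part of the original slides) that encodes each ramp setup as a Python dictionary and applies the CVS rule: a design is unconfounded only when the tested variable is the sole variable that differs between the two setups. The function and variable names are illustrative.

```python
# Minimal sketch: check whether a two-ramp comparison is a fair (unconfounded)
# test of one target variable. Setups mirror the slide's table.

def differing_variables(setup_a, setup_b):
    """Return the set of variables whose values differ between the two setups."""
    return {var for var in setup_a if setup_a[var] != setup_b[var]}

def is_unconfounded(setup_a, setup_b, target):
    """A design is unconfounded if the target variable is the ONLY one that differs."""
    return differing_variables(setup_a, setup_b) == {target}

ramp_1 = {"surface": "smooth", "track length": "short", "height": "high", "ball": "golf"}
confounded_ramp_2 = {"surface": "rough", "track length": "long", "height": "low", "ball": "rubber"}
unconfounded_ramp_2 = {"surface": "rough", "track length": "short", "height": "high", "ball": "golf"}

print(is_unconfounded(ramp_1, confounded_ramp_2, "surface"))    # False: everything varies
print(is_unconfounded(ramp_1, unconfounded_ramp_2, "surface"))  # True: only the surface varies
```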

Page 5:

Why do we need to teach CVS?

• Core topic in science instruction
  – State standards
  – High-stakes assessments
  – Science component of NCLB

• Has real-world applications
  – Essential to evaluating product claims and news reports

• Students do not always learn CVS "on their own" (low-SES students in particular)

Page 6:

What do students do wrong?

Common errors:
• Vary everything
• Hold the target variable constant and vary the other variables
• Partially confounded
• Nothing varied (identical)

Their justifications:
• "I don't know"
• "You told me to test x!"
• Describe their set-up
• Want to see if x happens
• Want to see if this set-up is better than that set-up
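The error categories above lend themselves to a simple diagnostic check. The sketch below is a hypothetical illustration of how a tutor might label a student's two-condition design using those categories; the function and its inputs are assumptions, not part of the TED tutor.

```python
# Hypothetical sketch: label a student's two-condition design with one of the
# error categories listed above. Names and structure are illustrative only.

def classify_design(setup_a, setup_b, target):
    differing = {var for var in setup_a if setup_a[var] != setup_b[var]}
    if not differing:
        return "nothing varied (identical)"
    if differing == set(setup_a):
        return "vary everything"
    if target not in differing:
        return "hold target variable constant and vary other variables"
    if differing == {target}:
        return "unconfounded (correct CVS design)"
    return "partially confounded"

# Example: the student varied height but kept the tested variable (surface) the same.
student_design = ({"surface": "smooth", "height": "high", "ball": "golf"},
                  {"surface": "smooth", "height": "low", "ball": "golf"})
print(classify_design(*student_design, target="surface"))
# -> hold target variable constant and vary other variables
```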

Page 7:

Why do they take these approaches?

• By accident
  – misread the question
  – working carelessly

• Are led astray
  – by the salience of the physical apparatus (e.g., ramps)
  – don't understand written representations (e.g., tables)

• On purpose
  – different goals (e.g., "engineering")
  – misconceptions of experimental logic
  – think other variable(s) don't matter

• Just guessing

Page 8:

What’s the best way to teach CVS?

• As a society (educators, researchers, and legislators), we don’t know

• Our research team knows of one effective way…

Page 9:

Our basic CVS instruction:

• Students design experiments

• Students answer questions

• Instructor provides explicit instruction about CVS

• One domain

• Short instructional period

Page 10:

Effective in the lab and in classrooms of high SES and achievement levels

• One-on-one: Chen & Klahr (1999); Klahr & Nigam (2004), Strand Cary & Klahr (in preparation)

• Full class: Toth, Klahr & Chen (2000)

• Physical and virtual materials: Triona & Klahr (2003)

Page 11:

Would it work for lower-achieving students in low-SES schools?

Page 12:

Effective in low-achievement classrooms (Li, Klahr & Jabbour, 2006)

• Raises item scores above national norms

• Enables students to "catch up" with untrained peers from high-SES schools

• BUT, repeated and varied forms of instruction are required for generalized CVS understanding
  – Many days
  – Multiple domains

Page 13:

Thus, our starting point:

Brief, focused CVS instruction is differentially efficient and effective for different student populations, settings, and transfer tasks.

We want to reach ALL students!

To improve our instruction for the entire student population, we must engage in modification & individualization

Page 14:

A computer tutor could facilitate differentiated instruction

• Computer-based instruction
  – Individualized & self-paced
  – Provides instruction, practice, and feedback

• Teacher freed to provide coaching as needed

Page 15:

How are we building our tutor?

4 development phases

&

Iterative design process

Page 16:

4 development phases:

1. Information gathering
   • What are the novice models students hold, and how can we address them?

2. Refining the basic instruction and “going virtual”

3. Building a computer tutor with a few “paths”

4. Building an adaptive computer tutor with a “web” of paths

Page 17:

An evolving CVS computer tutor

Version 1
– Instructional mode: Class (teacher)
– Flexibility: Inflexible
– Stimuli: Physical apparatus, overhead transparencies
– Instructional components (domain): Procedural & conceptual (Ramps)
– Feedback: Discussion

Version 2
– Instructional mode: Class (teacher)
– Flexibility: Limited flexibility (differentiation points)
– Stimuli: Simulations, computer interface
– Instructional components (domain): Prerequisite skills (Auto sales), Procedural (Study habits), Conceptual (Ramps)
– Feedback: Discussion, paper exchange, researchers

Version 3
– Instructional mode: Class (teacher) & individual (computer)
– Flexibility: Flexible (multiple paths)
– Stimuli: Simulations, computer interface
– Instructional components (domain): TBD
– Feedback: Discussion, computer, researchers

Version 4
– Instructional mode: Class (teacher) & individual (computer)
– Flexibility: Adaptive ("web" of paths)
– Stimuli: Simulations, computer interface
– Instructional components (domain): TBD
– Feedback: TBD

Page 18:

Our iterative design process:

• Version n
• Pilot testing
• One-on-one human tutoring
• Classroom validation study (+ pre, post, and formative assessments)
• Delayed post assessment

Each cycle serves to improve the current version, inform the next version, and compare it against the previous version.

Page 19:

What are we learning from each version that will help us design the final, adaptive tutor?

VERSION 1 (Completed)

• Database of student biases, misconceptions, errors & areas of difficulty

• Inventory of successful tutoring approaches

• familiar domains

• instruction in prerequisite skills

• step-by-step approach

• Student-friendly terminology, definitions, and phrasing

• Requiring explicit articulation by student

Page 20:

What are we learning from each version that will help us design the final, adaptive tutor?

VERSION 2 (Ongoing)

Information regarding:

• classwide implementation of successful tutoring approaches

• feasibility of multiple domains

• effect of emphasizing domain-generality

• interface usability

• worksheet usability

Page 21:

What are we learning from each version that will help us design the final, adaptive tutor?

VERSION 3 (being developed)

Information regarding:

• individual tutor usability and pitfalls

• comparative efficacy of set learning paths

• efficacy of immediate computer feedback

Page 22:

The adaptive tutor will include:

• Pre-testing and ongoing monitoring of student knowledge

• Self-paced instruction

• Diverse topics matching student’s interests

• An interactive and engaging interface

• Teacher-controlled and/or computer-controlled levels of difficulty

• Level of scaffolding, feedback, and help aligned with student’s needs

• Computerized assessments

• Logging capability
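As a rough illustration only (the thresholds, class names, and rules below are assumptions, not the TED tutor's actual design), "ongoing monitoring" can be as simple as a running mastery estimate that selects how much scaffolding the next activity provides:

```python
# Hypothetical sketch of adaptive scaffolding: a running estimate of CVS mastery
# drives how much help the next item presents. Thresholds and names are assumptions.

from dataclasses import dataclass, field

@dataclass
class StudentModel:
    history: list = field(default_factory=list)  # 1 = correct design, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.history.append(1 if correct else 0)

    def mastery(self, window: int = 5) -> float:
        recent = self.history[-window:] or [0]
        return sum(recent) / len(recent)

def next_scaffolding(model: StudentModel) -> str:
    m = model.mastery()
    if m < 0.4:
        return "worked example with step-by-step prompts"
    if m < 0.8:
        return "design task with hints and immediate feedback"
    return "independent design task in a new domain (transfer)"

student = StudentModel()
for outcome in [False, False, True, True, True]:
    student.record(outcome)
print(next_scaffolding(student))  # "design task with hints and immediate feedback"
```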

Page 23:

Beyond our classroom instruction…

• Where on the contextual / abstract continuum should this type of instruction be focused? When?

• Single vs. multiple domains?

• Static pictures vs. simulations vs. tabular representations

• Best mix of explicit instruction, exploration, help, feedback, etc.

Page 24:

Questions? Comments?

[email protected]

[email protected]

Many thanks to the Institute of Education Sciences for supporting our work

Page 25:

Page 26:

V1 learning examples:

VERSION 1

• Database of student biases, misconceptions & areas of difficulty

• Inventory of successful tutoring approaches

• familiar domains

• instruction in prerequisite skills

• step-by-step approach

• Student-friendly terminology, definitions, and phrasing

• Requiring explicit articulation of understanding and reasoning

Ignore the data or Biased by expectations

Create “best” outcome or Most dramatic difference

Learn about all variables at once

Pets, Sports drinks, Cars, Study habits, Running races

Variable vs. Value

Experiment

Result vs. Conclusion

Read carefully, Identify question, Identify variables…

Good vs. Fair vs. Informative vs. True

“Variable” = something that can change

Table format

Remembering the target variable

Drawing conclusions based on the experiment

Page 27:

What IS an “intelligent tutor?”

• Computer-based instructional system
• Contains an artificial intelligence component that:
  – Encodes cognitive objectives of the instruction
  – Tracks students' state of knowledge
  – Compares student performance to expert performance
  – Tailors multiple features of instruction to the student

(Anderson, Boyle, Corbett, & Lewis, 1990; Anderson, Conrad, & Corbett, 1989; Corbett & Anderson, 1995; Greeno, 1976; Klahr & Carver, 1988)
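The "compare student performance to expert performance" idea can be illustrated with a toy model-tracing step. The sketch below is only an illustration of the general approach described in the citations above, not the TED tutor's implementation; all names are hypothetical.

```python
# Toy illustration of model tracing: compare a student's design against what an
# expert CVS model expects for each variable, then choose feedback. Hypothetical names.

def expert_expectation_met(setup_a, setup_b, target, variable):
    """Expert model: the target differs across conditions; every other variable matches."""
    if variable == target:
        return setup_a[variable] != setup_b[variable]
    return setup_a[variable] == setup_b[variable]

def trace_step(setup_a, setup_b, target):
    feedback = []
    for variable in setup_a:
        if not expert_expectation_met(setup_a, setup_b, target, variable):
            if variable == target:
                feedback.append(f"Make '{target}' different across the two conditions.")
            else:
                feedback.append(f"Keep '{variable}' the same in both conditions.")
    return feedback or [f"This is a fair test of {target}."]

print(trace_step({"surface": "smooth", "ball": "golf"},
                 {"surface": "smooth", "ball": "rubber"}, target="surface"))
# -> ["Make 'surface' different across the two conditions.",
#     "Keep 'ball' the same in both conditions."]
```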

Page 28:

Ramp apparatus

Page 29:

CVS and Ramps

A completely confounded test for determining the effect of ramp surface on the distance that a ball travels.

Variable       Ramp 1 (A)   Ramp 2 (B)
Surface        Smooth       Rough
Track length   Short        Long
Height         High         Low
Ball           Golf         Rubber

Page 30:

Classroom CVS with urban 5th & 6th graders

[Chart: % Correct after CVS Training (Ramps, 2 days) and after CVS Probe-based retraining (Pendulum, 2 days); y-axis 0%–100%]

(Klahr, Li & Jabbour, 2006)

Page 31:

“Low” training vs. “high” comparison group

Training group (5th/6th grade, low-achieving school)

Comparison group (5th–8th grade, high-achieving school)

Page 32:

Every version includes:

• Stand-alone, detailed lesson plan with visual aids
• Examples of experimental designs (good and bad)
• Assessments (formative and summative)
• Students designing experiments
• Asks students to explain, justify, and infer
• Feedback

Page 33:

Increasing complexity and adaptiveness

• Physical apparatus → Virtual simulations

• Full class → Full class & individual computer use

• Inflexible → Individually-adaptive & self-paced

• One domain → Multiple domains

Page 34:

Why SES differences?

• Found them in our previous studies

• Classroom environment

• Reading comprehension

• Experience with this type of thinking (expectations, appropriate challenge and/or scaffolding, amount of practice)

Page 35:

What if later versions are less effective than earlier versions?

• "Stop the presses!"

• Look for obvious reasons

• Examine lesson components individually

• Consider what is missing

Page 36:

“Prerequisites”

• "Science mindset"
• Problem decomposition
  – Vocabulary!
  – Identify and understand the question
  – Identify key variables
  – Notice and complete component steps
• Analogical reasoning
• Reading & listening carefully

Page 37:

“Procedures”

Test one variable at a time

1. Make the values for the variable you’re testing be DIFFERENT across groups.

2. Make the values for the variables you’re not testing be the SAME across groups.
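These two rules translate directly into a construction step a tutor (or a student) could follow. The sketch below is a hypothetical illustration, with made-up function and value names, of building a fair test from any starting setup:

```python
# Hypothetical sketch: apply the two procedural rules to build a fair test of
# `target` from a base setup. Value choices here are illustrative only.

def make_fair_test(base_setup, target, alternative_value):
    """Rule 1: give the tested variable a DIFFERENT value in the second condition.
    Rule 2: keep every other variable the SAME across conditions."""
    condition_a = dict(base_setup)
    condition_b = dict(base_setup)            # same values for all untested variables
    condition_b[target] = alternative_value   # different value only for the tested variable
    return condition_a, condition_b

a, b = make_fair_test({"surface": "smooth", "height": "high", "ball": "golf"},
                      target="surface", alternative_value="rough")
print(a)  # {'surface': 'smooth', 'height': 'high', 'ball': 'golf'}
print(b)  # {'surface': 'rough', 'height': 'high', 'ball': 'golf'}
```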

Page 38:

“Concepts”

• You need to use different values for the variable you’re testing in order to know what effect those different values have.

• You need to use the same value for all the other variables (hold all the other variables constant; "control" the other variables) so that they can't cause a difference in the outcome.

• If you use CVS, you can know that only the variable you’re testing is causing the outcome/result/effect.