January 10, 2012
Gathering Feedback for Teaching
Moderator
Circe Stumbo, President, West Wind Education Policy, Inc.,
and consultant to CCSSO/SCEE
Welcome
Janice Poda, CCSSO Initiative Director, Education Workforce
Purposes

To provide SCEE members with an overview of the MET project, inform them of the latest findings, and provide a forum for discussing implications for the educator workforce.
Objectives

Participants will:
Become familiar with the MET project’s goals and how the study was designed and conducted
Hear about the latest research findings
Learn about the implications for important policy issues that influence teacher and leader evaluation systems
Be able to ask questions about these findings and the implications
Presenter
Steve Cantrell, Senior Program Officer (Research & Data)
Bill & Melinda Gates Foundation
Gathering Feedback for Teaching
Combining High-Quality Observations with Student Surveys and Achievement Gains
The Measures of Effective Teaching Project
Two school years: 2009–10 and 2010–11
More than 100,000 students
Grades 4–8: ELA and Math
High school: ELA I, Algebra I, and Biology
Participating Teachers
MET Logical Sequence
[Diagram: the study’s logical sequence of questions about the measures: Are they reliable? Do they fairly reflect the teacher? Do they predict student outcomes? Can they be combined into an Effective Teaching Index and reported through a Teaching Effectiveness Dashboard for research use? Do they remain stable under pressure? Can they be communicated effectively? Do they ultimately improve effectiveness?]
The MET project is unique…

…in its scale:
• 3,000 teachers
• 22,500 observation scores (7,500 lesson videos x 3 scores)
• 900+ trained observers
• 44,500 students completing surveys and supplemental assessments

…in the number of indicators tested:
• 5 observation instruments
• Student surveys: Tripod (Ron Ferguson)
• Value-added on state tests

…and in the number of student outcomes studied:
• Gains on state math and ELA tests
• Gains on supplemental tests (BAM & SAT9 OE)
• Student-reported outcomes (effort and enjoyment in class)
Observation Score Distributions: FFT
Observation Score Distributions: PLATO Prime, CLASS, and MQI Lite
Observation Score Distributions: UTOP
CLASS Score Distribution

[Figure, built up across three consecutive slides: the distribution of CLASS scores on the instrument’s 1–7 scale, with an annotated difference of 0.68 points.]
Four Steps to High-Quality Classroom Observations
Step 1: Define Expectations
Framework for Teaching (Danielson)
Unsatisfactory: Yes/no questions posed in rapid succession; teacher asks all questions; same few students participate.
Basic: Some questions ask for student explanations; uneven attempts to engage all students.
Proficient: Most questions ask for explanation; discussion develops and the teacher steps aside; all students participate.
Advanced: All questions high quality; students initiate some questions; students engage other students.
Step 2: Ensure Accuracy of Observers
Step 3: Monitor Reliability
Multiple Observations Lead to Higher Reliability
Notes: The numbers inside each circle are estimates of the percentage of total variance in FFT observation scores attributable to consistent aspects of teachers’ practice when one to four lessons were observed, each by a different observer. The total area of each circle represents the total variance in scores. These estimates are based on trained observers, with no prior exposure to the teachers’ students, watching digital videos. Reliabilities will differ in practice. See the research paper, Table 11, for reliabilities of other instruments.
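To make the note concrete: because lesson-to-lesson and rater noise averages out while the stable teacher component does not, reliability rises with the number of lessons scored. A minimal sketch, with made-up variance shares rather than the MET project’s estimates:

# Illustrative only: reliability of an average of n observation scores,
# from a simple variance decomposition (stable teacher practice vs. noise).
# The variance shares below are invented, not the MET project's estimates.

def reliability(n_obs: int, teacher_var: float, noise_var: float) -> float:
    """Share of variance in an n-lesson average attributable to the teacher."""
    return teacher_var / (teacher_var + noise_var / n_obs)

# Suppose 35% of single-lesson score variance is stable teacher practice
# and 65% is lesson-to-lesson and rater noise.
teacher_var, noise_var = 0.35, 0.65
for n in range(1, 5):
    print(f"{n} lesson(s): reliability = {reliability(n, teacher_var, noise_var):.2f}")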
Step 4: Verify Alignment with Outcomes
Teachers with Higher Observation Scores Had Students Who Learned More
We Compare Using Three Criteria:

Predictive power: Which measure could most accurately identify teachers likely to have large gains when working with another group of students? (A simulated sketch of this criterion follows below.)
Reliability: Which measures were most stable from section to section or year to year for a given teacher?
Potential for diagnostic insight: Which have the potential to help a teacher see areas of practice needing improvement? (We’ve not tested this yet.)
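The predictive-power criterion can be sketched with simulated data; everything below is an illustrative assumption, not the study’s data or code. A stable teacher effect drives both a measure taken in one course section and student gains in another, and the cross-section correlation shows how well the first predicts the second.

# Simulated sketch of "predictive power": correlate a teacher's measure
# from one course section with the same teacher's gains in a different
# section. All data are randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 500
true_effect = rng.normal(size=n_teachers)                 # stable teacher effect
measure_sec1 = true_effect + rng.normal(size=n_teachers)  # noisy measure, section 1
gains_sec2 = true_effect + rng.normal(size=n_teachers)    # noisy gains, section 2

r = np.corrcoef(measure_sec1, gains_sec2)[0, 1]
print(f"cross-section predictive correlation: {r:.2f}")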
Measures have different strengths … and weaknesses

Measure           Predictive power   Reliability   Diagnostic insight
Value-added       H                  M             L
Student survey    M                  H             H
Observation       L                  M             H

(H = high, M = medium, L = low)
Student Feedback Is Related to Achievement Gains

Student survey items with the strongest relationship to middle school math gains:
1. Students in this class treat the teacher with respect
2. My classmates behave the way my teacher wants them to
3. Our class stays busy and doesn’t waste time
4. In this class, we learn a lot every day
5. In this class, we learn to correct our mistakes

Student survey items with the weakest relationship to middle school math gains:
38. I have learned a lot this year about [the state test]
39. Getting ready for [the state test] takes a lot of time in our class
Note: Sorted by absolute value of correlation with student achievement gains. Drawn from “Learning about Teaching: Initial Findings from the Measures of Effective Teaching Project”. For a list of Tripod survey questions, see Appendix Table 1 in the Research Report.
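The sorting described in the note is simple to express in code. This sketch uses placeholder item names and simulated responses, not the actual Tripod items or MET data:

# Rank survey items by the absolute value of their correlation with
# achievement gains, strongest first. Items and data are placeholders.
import numpy as np

def rank_items(item_responses, gains):
    """Return (item, correlation) pairs sorted by |correlation|, descending."""
    corrs = {item: np.corrcoef(x, gains)[0, 1] for item, x in item_responses.items()}
    return sorted(corrs.items(), key=lambda kv: abs(kv[1]), reverse=True)

rng = np.random.default_rng(1)
gains = rng.normal(size=200)
items = {
    "class treats teacher with respect": 0.4 * gains + rng.normal(size=200),
    "test prep takes a lot of class time": rng.normal(size=200),
}
for item, r in rank_items(items, gains):
    print(f"{r:+.2f}  {item}")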
Combining Observations with Other Measures Improved Predictive Power

Combining Measures Improved Reliability as well as Predictive Power
Note: For the equally weighted combination, we assigned a weight of .33 to each of the three measures. The criterion weights were chosen to maximize ability to predict a teacher’s value-added with other students. The next MET report will explore different weighting schemes.
The Reliability and Predictive Power of Measures of Teaching

[Figure: difference in math value-added (top 25% vs. bottom 25% of teachers) plotted against reliability, comparing FFT alone, Tripod alone, value-added alone, and the combined measure with equal and with criterion weights.]

Note: Table 16 of the research report. Reliability based on one course section and two observations.
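The two weighting schemes described in the earlier note can be illustrated with a small simulation; the data and coefficients are assumptions, not the project’s. Equal weights average the three measures, while criterion weights are fit by least squares against the prediction target (a teacher’s value-added with other students):

# Equal-weights vs. criterion-weights composites on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
obs, survey, va = rng.normal(size=(3, n))   # three standardized measures
# Invented "true" relationship to value-added with other students:
target = 0.2 * obs + 0.3 * survey + 0.5 * va + rng.normal(scale=0.5, size=n)

X = np.column_stack([obs, survey, va])
equal = X @ np.array([1 / 3, 1 / 3, 1 / 3])       # equal weights
w, *_ = np.linalg.lstsq(X, target, rcond=None)    # criterion weights
criterion = X @ w

for name, composite in [("equal weights", equal), ("criterion weights", criterion)]:
    print(f"{name}: correlation with target = {np.corrcoef(composite, target)[0, 1]:.2f}")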
Compared to What?

Compared to MA degrees and years of experience, the combined measure identifies larger differences:
… on state tests,
… on low-stakes assessments,
… and on student-reported outcomes.
The MET project reporting timeline:
1. Student Perceptions (December 2010)
2. Classroom Observations (January 2012)
3. Weighting (mid-2012)
• Rationale for different weighting schemes
• Consequences for predictive power and reliability
4. Final report using random assignment (mid-2012)
• Do “value-added” estimates control adequately for student characteristics?
• Do they predict outcomes following random assignment?
MET project reports available at www.metproject.org
Validation Engine
System picks observation rubric and trains raters
Raters score MET videos of instruction
Software provides analysis of:
• Rater consistency
• Rubric’s relation to student learning
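One way a rater-consistency analysis might look, as a minimal sketch; the metric and the scores are illustrative assumptions, not the validation engine’s actual output:

# Compare two raters' scores on the same videos: exact agreement and
# agreement within one scale point. Scores are hypothetical.
import numpy as np

rater_a = np.array([2, 3, 3, 1, 4, 2, 3, 4])   # scores on 8 shared videos
rater_b = np.array([2, 3, 2, 1, 4, 3, 3, 4])

exact = np.mean(rater_a == rater_b)
within_one = np.mean(np.abs(rater_a - rater_b) <= 1)
print(f"exact agreement: {exact:.0%}, within one point: {within_one:.0%}")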
Ensuring Reliable Observations
• Train: video examples for anchor points
• Test: rater certification (don’t pass, don’t rate)
• Calibrate: periodic tuning (out of tune, don’t rate)
• Refine: adjustments to the observation framework based on data
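The “don’t pass, don’t rate” gate can be sketched as a simple threshold rule; the threshold and scores below are assumptions, since the deck does not specify the certification criteria:

# Certify a rater only if their scores on pre-scored anchor videos stay
# close, on average, to the master scores. Threshold is an assumption.

def certified(rater_scores, master_scores, max_mean_gap=0.5):
    """Pass if the mean absolute gap from the master scores is small enough."""
    gaps = [abs(r - m) for r, m in zip(rater_scores, master_scores)]
    return sum(gaps) / len(gaps) <= max_mean_gap

print(certified([3, 2, 4, 3], [3, 3, 4, 3]))   # True: mean gap 0.25
print(certified([1, 2, 4, 1], [3, 3, 4, 3]))   # False: mean gap 1.25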
What the Participants Said…

The MET project is ultimately a research project. Nonetheless, participants frequently tell us they have grown professionally as a result of their involvement. Below is a sampling of comments we received.

From teachers:
• “The videotaping is what really drew me in; I wanted to see not only what I’m doing but what my students are doing. I thought I had a pretty good grasp of what I was doing as a teacher, but it is eye opening … I honestly felt like this is one of the best things that I have ever done to help me grow professionally. And my kids really benefited from it, so it was very exciting.”
• “With the videos, you get to see yourself in a different way. Actually you never really get to see yourself until you see a video of yourself. I changed immediately certain things that I did that I didn’t like.”
• “I realized I learned more about who I actually was as a teacher by looking at the video. I learned that some of the things I thought I was great at, I was not so great at after all.”
• “Even the things I did well, I thought, OK, that’s pretty good; why do I do that, and where could I put that to make it go farther? So it was a two-way road: seeing what you do well, and seeing the things that have become habits that you don’t even think about anymore.”

From raters:
• “Being a rater has been a positive experience for me. I find myself ‘watching’ my own teaching more and am more aware of the things I should be doing more of in my classroom.”
• “I have to say that, as a teacher, even the training has helped me refine my work in the classroom. How wonderful!”
• “I have loved observing teachers, reflecting on my own teaching and that of the teachers teaching in my school.”
Research Partners

Our primary collaborators include:
• Mark Atkinson, Teachscape
• Nancy Caldwell, Westat
• Ron Ferguson, Harvard University
• Drew Gitomer, Educational Testing Service
• Eric Hirsch, New Teacher Center
• Dan McCaffrey, RAND
• Roy Pea, Stanford University
• Geoffrey Phelps, Educational Testing Service
• Rob Ramsdell, Cambridge Education
• Doug Staiger, Dartmouth College

Other key contributors include:
• Joan Auchter, National Board for Professional Teaching Standards
• Charlotte Danielson, The Danielson Group
• Pam Grossman, Stanford University
• Bridget Hamre, University of Virginia
• Heather Hill, Harvard University
• Sabrina Laine, American Institutes for Research
• Catherine McClellan, Clowder Consulting
• Denis Newman, Empirical Education
• Raymond Pecheone, Stanford University
• Robert Pianta, University of Virginia
• Morgan Polikoff, University of Southern California
• Steve Raudenbush, University of Chicago
• John Winn, National Math and Science Initiative
Questions and Answers
Upcoming Webinars
Special webinar on ESEA Flexibility, Principle 3: Wednesday, January 11, 2:00–3:00 p.m. EST

Regular monthly webinars: the second Tuesday of each month, 2:00–3:00 or 3:30 p.m. EDT
Thank you