




Research-Practice Partnerships and ESSA: A Learning Agenda for the Coming Decade

William R. Penuel

Caitlin C. Farrell

University of Colorado Boulder

Book chapter

DRAFT: 8/11/16


Research-Practice Partnerships and ESSA: A Learning Agenda for the Coming Decade

For more than a decade, the federal government has promoted policies to encourage

greater use of research evidence to inform educational reform efforts. The new federal law, the

Every Student Succeeds Act (ESSA), reflects a continued commitment among policymakers to

this idea. ESSA puts a much greater responsibility on local decision makers for knowing about,

using, and even developing evidence. To succeed, local leaders will need support to meet these

expectations. As in all other reform efforts, making use of research will require the collaborative engagement of leaders and educators, as well as researchers. Productive evidence use is an interactive process, one that is most likely to occur when there are sustained opportunities for interaction between researchers and educators.

The premise of this chapter is that research-practice partnerships (RPPs) are essential supports for implementing federal policies like ESSA and can serve as the foundation for a new infrastructure for relating research and practice.

Traditional research and development imagines a one-way path from research to practice.

Partnerships are a two-way endeavor, in which practice informs the questions researchers ask,

making research more relevant. In partnerships, researchers and educators work together to

search for and test solutions to problems of practice, blending ideas and evidence from research with the

wisdom of practice. Partnerships are an infrastructure for turning the insight that reform is a

social process into a systematic design for collaborative improvement that leverages the expertise

and passion of both researchers and educators.

Below, we argue that research-practice partnerships that include educators from

schools, districts, and out-of-school organizations are positioned to play an important role in

supporting educators to incorporate evidence in reform efforts. Using ESSA as a context, we lay

out four roles partnerships may play in supporting the provisions of ESSA regarding the use of


evidence, and we conclude with a set of questions for researchers, practitioners, and

policymakers that can increase the field’s capacity to engage in successful partnership work.

The Federal Role in Promoting the Use of Evidence in Educational Reform

Over the past two decades, we have seen an ongoing conversation at the federal level around

how research evidence should inform local practice. In the early 2000s, the focus was on how

practitioners could use findings from large-scale experimental studies, namely randomized controlled trials (RCTs), the “gold standard” of educational research.1 The 2001 federal policy No Child

Left Behind (NCLB) included over 100 references to “scientifically based research,” with

expectations that programs funded under the law draw on such evaluation evidence in school

improvement decisions.2 There was a parallel investment from the Department of Education in

the What Works Clearinghouse, with its goal of improving dissemination of research evidence

from RCTs.

Recently, the debate has shifted to focus on a broader approach to understanding and

scaling “what works” from education research. The Bush and Obama Administrations invested

in tiered evidence grant-making initiatives, like the Investing in Innovation (i3) Program. In a

tiered-evidence design, programs with more rigorous evidence of impact are eligible for the most

funding, while programs with less rigorous or emerging evidence are still eligible for smaller

grants. The reauthorization of the federal Elementary and Secondary Education Act is the most recent example of the push for evidence in federal education policy, one that recognizes a

role for different tiers of evidence for selecting and implementing programs and practices.

Implementation of policies like ESSA that include provisions for research evidence will require not only technical solutions (e.g., additional administrative guidance) but also attention

to the social conditions that support such efforts. Research in this volume and elsewhere points to


the critical role of social relationships in improvement efforts.3 Policy implementation is a set of

interactive, social processes, occurring as actors collectively make sense of, co-construct, and respond to policy. Therefore, we must consider the social infrastructure required to relate research and

practice in new ways. We turn to research-practice partnerships, structures that are rich in

relationships and commitment to solving substantial problems of education.

The Promise of Research-Practice Partnerships

RPPs are collaborative efforts between practitioners and researchers, focused on solving

critical problems facing educational leaders through research efforts.4 Rather than “translating”

research done by others and “disseminating” it to practitioners, research partnerships engage in

joint research and development activities with educators. Research partnerships seek to identify

problems of practice, co-design solutions with practitioners, implement the design, and then

study this process and its results. RPPs share common strands of DNA. First, RPPs are long

term. Instead of being focused on a single study, researchers and educational leaders in RPPs

sustain their work across multiple projects. Second, RPPs organize their work around a problem

of practice, instead of leading with gaps in existing theory or research. Third, these partnerships

are mutualistic; all involved jointly negotiate, and hold authority over, the lines of work. This

approach stands in contrast to traditional research studies where the researchers often play a

central role in setting the research agenda. Fourth, RPPs develop and employ strategies to foster

partnerships. These strategies may include carefully designed rules, roles, routines, protocols, or

“ways of doing business” that structure RPP engagement. Finally, the partnerships involve

original analysis of data, in which participants gather data and conduct analyses to

provide insight into communities’ pressing questions. This analysis can be anchored by and

advance educational theory.


Outside of this shared DNA, RPPs can look quite different from one another in terms of

organization and strategy.5 In research alliance models, RPPs engage around analyses of

implementation of district policies, where the researchers share findings with educational decision makers and work with them to develop solutions (e.g., the University of Chicago Consortium on

Chicago School Research). In design research partnerships, co-design work plays an even greater

role, with researchers and district leaders co-developing and testing strategies or tools for

improving teaching and learning system-wide.6 Still other RPPs organize as networks of schools,

districts, or other institutions, such as afterschool or informal organizations, and engage in

continuous improvement research to work on problems of practice.7

Not all partnerships may neatly fit into this typology. A partnership may engage in

different activities based on the goals of their work together. For instance, a partnership could

engage in activities that are more typical of research alliances, like integrating multiple datasets or performing independent analyses of district administrative data. The same group could also be involved in design work as they co-design and test strategies or tools for addressing

identified needs, a feature of design research partnerships. With recent investments by local funders and by federal agencies like the Institute of Education Sciences (IES) and the National Science

Foundation, RPPs of all forms and types have multiplied.8

One claim made about research-practice partnerships is that they are an effective

mechanism for supporting research use. For instance, the theory of action for IES’ researcher-

practitioner partnership program names improved research use as a central goal.9 Recent

evidence from studies of research use gives some support to this claim. Evaluation utilization

scholars find that participation in the research process—a key component of work in some kinds

of RPPs—is important for utilization.10 Several studies suggest participation in partnerships is


associated with greater access to research, another important contributing factor for research

use.11

A key reason why RPPs may support research use is that they create opportunities for the

interactive social processes involved, including persuasion, negotiation, and sensemaking.12

Research findings do not speak for themselves. Instead, engagement with research requires

leaders to make sense of findings, discuss their relevance to the current district context, and

design policies or programs in that particular context in light of other financial, political, or

temporal constraints.13 Within this interactive space, researchers and practitioners can sift

through the range of evidence-based programs and strategies available to select programs that

may work for a particular issue.14 Partnerships can create spaces to adapt these programs via

collaborative design efforts to fit the local context, or to iterate on the design of a program or strategy to tailor it to new, unfolding needs, a key theme emphasized in this volume. Finally, conducting local

evaluations of these programs within a partnership may lead to results that are seen as more

timely, credible, and central to district leaders’ needs, making it more likely that leaders will act

upon the findings in their decision making. Research-practice partnerships are poised to play an

important role in supporting district leaders’ use of research, a key task of local leaders under the

new Every Student Succeeds Act (ESSA).

Key Provisions Regarding Evidence Use in ESSA

In December 2015, Congress passed the Every Student Succeeds Act (ESSA), the long-

awaited reauthorization of the Elementary and Secondary Education Act (ESEA). ESSA represents

a significant devolution of decision-making authority from the federal government to state and

local agencies. With this new authority comes the explicit expectation that local policymakers

are “evidence-based” in their decision making.


ESSA, for the first time in ESEA history, defines the four levels of evidence that

constitute an “evidence-based” program, intervention, or activity (Section 8002). These four tiers

draw on those in the Investing in Innovation grant program (i3) – a stimulus program that

supported large-scale development of innovations. The evidence tiers are based on the strength

of the research base:

● Strong evidence: has at least one well-designed and implemented experimental study,

meaning a randomized controlled trial, that shows a positive impact;

● Moderate evidence: has at least one well-designed and implemented quasi-experimental

study, e.g., a regression discontinuity analysis, that shows a positive impact;

● Promising evidence: has at least one well-designed and implemented correlational study

that controls for selection bias and shows a positive impact; and

● Research-based rationale: has a body of evidence from research and evaluation to

support the claim that the strategy or intervention is likely to improve student outcomes.

ESSA uses the term “evidence” or “evidence-based” more than 80 times. This language is

most notable in the regulations for the large formula and competitive grant programs. For Title I

funds, for instance, school districts are to develop improvement plans for low-performing

schools that include “evidence-based” interventions, programs, or activities that meet the first

three tiers of evidence. Further, seven of the authorized competitive grant programs (e.g.,

Literacy Education for All, Results for the Nation, Section 2221) similarly restrict “evidence-

based” activities or interventions to the top three tiers of evidence. In these grant competitions,

the U.S. Department of Education will give priority to applications with evidence-based

approaches, presumably by awarding more points to proposals with strong evidence than to those

proposals with promising evidence. Other ESSA formula grant programs, including Title II

(Preparing, Training, and Recruiting High Quality Teachers, Principals, and Other School

Leaders) and Title IV, Part A (Student Support and Academic Enrichment Grants) encourage

state and local school districts to invest in “evidence-based” interventions or programs, including


in afterschool and summer initiatives supported through the 21st Century Community Learning

Centers program. Here, activities may qualify via the fourth tier, a demonstration “of a rationale

based on high-quality research findings or positive evaluation” that also “includes ongoing

efforts to examine the effects of such activity, strategy, or intervention” (Section 8002).

ESSA also includes a number of provisions that further signal a federal commitment to

building the evidence base in education. The law authorizes the Education Innovation and

Research (EIR) Grants program (Section 4611). Similar to the Investing in Innovation (i3)

program, this program will provide tiered grants to support the testing and replication of new

strategies or programs and will require grantees to independently evaluate the effect of their

grant-funded activities, adding to the research base. ESSA also allows the U.S. Department of

Education to set aside portions of Title I and Title III funds for program evaluations (Section

8601). Table 1, below, highlights key provisions in ESSA that have a role for research evidence.

[INSERT TABLE 1]

The rollback of federal authority, combined with a commitment to interventions and

strategies supported by evidence, creates serious new demands on state and local leaders. ESSA

requires that state and local leaders identify and select programs or interventions that meet the

standards of evidence. In cases where there is no research that meets the top tiers, the work

of using research evidence will occur in the space of design, adaptation, and improvement. Local

decision makers will need to determine whether and how these strategies will work in their

context and adapt as needed. Based on what we know from implementation research more

generally, and the specific case of i3 grants more recently, it is likely that these evidence-based

programs or interventions will need to be iteratively refined to meet the needs of particular

communities.15 ESSA also requires that local policymakers and practitioners become partners in

contributing to the evidence base, through evaluation of the strategies in the fourth tier of

evidence (“research-based rationale”) or EIR grant efforts. In sum, ESSA puts a much greater


responsibility on local decision makers for knowing about, using, and even developing evidence,

and local leaders will need support to meet these expectations.

Proposed Roles for RPPs Under ESSA

Research-practice partnerships are positioned to play important roles in helping to realize ESSA's vision for evidence-based policymaking. Where they exist, partnerships can help leaders in identifying and selecting evidence-based programs;

adapting evidence-based programs through collaborative design; iteratively refining evidence-

based programs through improvement science strategies; and conducting local evaluations of

evidence-based programs.

Identifying and Selecting Evidence-Based Programs

One role that RPPs can play is to help leaders identify and select evidence-based programs

that are appropriate for their context and address the needs of their schools and districts.

Under Title I, for instance, local educators will need to identify those programs that meet the top

three evidence tiers in their action plans for low-performing schools. Identifying an evidence-

based program is not a simple matter of looking up evidence reports from a site like the What

Works Clearinghouse or Campbell Collaboration and selecting a program with strong evidence

of impact on student learning. Nor is it as simple as accepting a claim from a publisher that the

program or curriculum includes “strong evidence” of its efficacy. Identifying evidence-based

programs requires knowing where to look for programs that have a strong evidence base that

matches the needs of a particular program, school, district, or community. Intervention reports

from the What Works Clearinghouse, summaries of evidence related to specific programs,

typically provide little guidance on the resources required to implement programs or on the


processes that educators might need to undertake to select programs based on a careful balancing

of needs and resources.

A good example of a research-practice partnership model that helps with the identification

and selection of programs is the Communities That Care (CTC) model created by the Social

Development Research Group at the University of Washington. In this model, researchers work

with multi-agency collaboratives to assess needs and then select and implement evidence-based

programs in primary prevention for adolescents. The model aims to build a culture of evidence-

based decision-making and commitment to primary prevention across a community. Although

this example comes from outside education, schools are integral implementation partners, and

the model shows the benefits of partnerships for helping identify and select evidence-based

programs. Researchers who tested the CTC model have found that leaders in communities that formed partnerships with researchers to implement CTC were more likely to devote resources to primary prevention than leaders in comparison communities.16 They have also documented positive impacts on youth, including lower levels

of alcohol and cigarette use and fewer delinquent behaviors compared to youth in control

communities.17

Developing Evidence-Based Programs through Collaborative Design

A few competitive grant programs in ESSA call for the development of evidence-based

programs. For example, the Supporting Effective Educator Development program, under Section 2243,

names the creation of a comprehensive center that would “identify or develop evidence-based

professional development” targeting teachers of students at risk in literacy. Developing new

programs rooted in evidence is a labor-intensive process that takes time and extensive

collaboration. It is also a tall order for districts and states to do on their own or researchers to do


without significant input from educators regarding the design of programs that are feasible to

implement in schools and districts.

Some partnerships use an approach in which interventions are developed collaboratively with practitioners under more realistic conditions of implementation from the start. This is the

strategy of design-based research, first developed in the learning sciences in the early 1990s.18

Design-based research involves the specification of a theory of how best to support learning toward particular goals, a theory that is then tested in the crucible of classrooms.19 It aims to produce not only

usable innovations but also knowledge and practical principles for guiding the design of future

innovations.20

A good example of design-based research within a partnership is a multi-year project

conducted collaboratively by researchers from the Strategic Education Research Partnership

(SERP) and the Minority Student Achievement Network (MSAN), a network of eight districts

focused on closing achievement gaps.21 The challenge that MSAN members gave to SERP

researchers was to design an intervention to close gaps in outcomes in algebra without isolating

students of color for intervention. The district leaders also emphasized that the program should

fit easily within teachers’ existing routines and that the program should value teachers’ expertise

and contributions in the classroom.

The team drew upon research about the importance of worked examples to create an initial

version of the intervention. A worked example is a clear, step-by-step demonstration of how to

solve a particular problem that illustrates a class of problem-types in a discipline.22 There is

strong evidence that worked examples can help develop problem-solving strategies, yet they are

not integrated into most mathematics texts.23 A collaborative design team of researchers and

teachers co-developed a set of assignments with worked examples that targeted difficult


mathematics concepts. Teachers piloted the assignments, and the team made revisions based on teacher input. Subsequent iterations resulted in further refinements. The team

concluded its efforts, which involved several years of development and testing, with an

experimental study. The study examined the impact of the assignments with worked examples on

conceptual and procedural knowledge of algebra. The researchers found a significant main effect

of the treatment, with greater gains made by low-achieving students.24
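To make the worked-example format concrete, the following is a minimal, hypothetical sketch of what a single worked-example item in algebra might look like. It is offered only to illustrate the general format defined above (a clear, step-by-step solution, here paired with a practice prompt for illustration); it is not drawn from the actual study materials, and the problem and numbers are invented.

```latex
% Minimal, hypothetical worked-example item (illustrative only; not from the study's materials).
% Requires the amsmath package.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
&\textbf{Problem: } \text{solve } 3x + 5 = 20 \text{ for } x.\\
&\text{Step 1 (subtract 5 from both sides): } 3x + 5 - 5 = 20 - 5, \text{ so } 3x = 15.\\
&\text{Step 2 (divide both sides by 3): } \tfrac{3x}{3} = \tfrac{15}{3}, \text{ so } x = 5.\\
&\textbf{Check: } 3(5) + 5 = 15 + 5 = 20.\\
&\textbf{Your turn: } \text{explain why Step 1 keeps the equation balanced, then solve } 4x - 7 = 9.
\end{align*}
\end{document}
```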

This case illustrates the power of design-based research conducted within partnerships

to design usable, effective interventions. The iterative process of design research resulted not

only in successive refinements to the assignments to enhance their effectiveness, but the repeated

testing in a variety of classrooms also yielded improvements that aligned increasingly well with the

needs teachers and district leaders had expressed. A focused addition to practice yielded

significant gains in the long run.25

Iteratively Refining Evidence-Based Programs to Improve Reliability of Outcomes

To achieve better outcomes for students at scale, states and local agencies will need to

continuously refine their programs, activities, or interventions. For instance, findings from

evaluations supported under different provisions or as part of the Education Innovation and

Research (EIR) program could be leveraged to support continuous improvement efforts.

One strategy for iteratively refining a program is to employ methods of improvement

research within a research-practice partnership. Improvement research is widely used in

medicine to design and test strategies for improving practice using a Plan-Do-Study-Act (PDSA)

cycle.26 In these cycles, a partnership decides on a small change to be tested, defines the steps needed to test it, and determines the measures that will be used to gauge its success (Plan). Next,


the partnership carries out the plan (Do), analyzes the data collected to see if its predictions were

borne out (Study), and finally determines what changes need to be made for the next cycle (Act).

This approach to improvement research is being more fully developed in a range of

initiatives facilitated by the Carnegie Foundation for the Advancement of Teaching.27 One such

initiative involves a partnership among the Harrisonburg City Public Schools, the Motivation Research Institute, the Carnegie Foundation for the Advancement of Teaching, and the Raikes

Foundation. It has employed improvement research methods to test and refine a brief

intervention focused on student motivation that had previously proven successful in other

settings. The team was compelled by evidence from previous studies that brief social

psychological interventions that target students’ feelings and thoughts about school can lead to

large gains in achievement.28

Three motivation researchers worked closely with six teachers and two administrators on

the effort. They first worked to identify what they perceived to be top motivational challenges to

students. The group concluded that the most pressing challenge was “Students who do not

believe in themselves and give up at anything that is not quick and easy for them.” They used a

tool from improvement research called a “Key Driver Diagram” in order to name key leverage

points for addressing this problem. The diagram, constructed through both conversations with

partners and an analysis of the literature, highlighted four key drivers: Students believe they can

learn, students value learning, students feel that they belong in the context, and students use

effective learning strategies. The team decided to start with the first key driver, “Students believe

they can learn,” to select a brief intervention to adapt.

They chose an intervention focused on building a growth mindset among students that had

been developed by Carol Dweck’s team at Stanford.29 The intervention had been developed for a


different age group, and it lasted longer than the partnership was willing to support. The team

made three adaptations to the intervention: developing material to make it engaging for middle

schoolers with limited English proficiency, shortening the intervention’s length, and making it so

the intervention could be delivered on a handheld tablet.

They used a careful approach to scaling prescribed by improvement research. First, the

team tested the adapted intervention in one classroom with older students, mainly to establish

that the content was comprehensible to them. Next, they moved to a handful of fifth-grade

classrooms and addressed usability issues with the handheld devices through a PDSA cycle. In

the process, they learned a great deal from classroom testing that they used to refine the intervention, such as the need to add follow-up activities for students who finished tasks early and to develop

new material for when students saw the handheld application more than once. The testing also

resulted in refinements to the assessment. The team is now testing the intervention in a large-

scale study throughout the district.

This example shows the potential of improvement research in a research-practice

partnership to refine an existing evidence-based intervention. Through working in close

collaboration with teachers and district leaders in what might be called a multi-tiered partnership

because it involves educators working at different levels of a system,30 the team was able to work

systematically to ensure that the social psychological intervention they adapted could be reliably

implemented. The project also illustrates in a powerful way that even the shortest and most

straightforward-to-implement evidence-based programs can require significant adaptation when

introduced into a new context.

Conducting Local Evaluations of Programs to Build a Research Base


Under ESSA, there is a role for evaluation of federally funded activities, and some

research-practice partnerships are well positioned to serve an evaluative role. Research alliances

are a type of partnership that functions as an independent voice in a community, documenting the

implementation and effectiveness of local school district policies and programs.31 Here,

educators take primary responsibility for the design and implementation of policies and

programs, while researchers serve as evaluators of those policies and programs. The evaluations

are not undertaken as many studies are, however, where contractors have no ties to the

community. Rather, alliances are place-based and committed to an ongoing relationship with

partner school districts to address problems of practice and to inform the search for solutions to

those problems of practice.

What Do We Need to Learn Together? A Collaborative Learning Agenda

Though the examples above illustrate the potential for research-practice partnerships to

contribute to key aspects of ESSA implementation, we are a long way from being able to claim

that partnerships are a viable strategy for all educational settings. There is much for us to learn,

not just about ESSA but also about how best to form and sustain partnerships that can impact

local policies, practices, and student outcomes. Below, we outline an agenda (a learning agenda, rather than a research agenda) framed by questions that we need to explore in order to

increase the field’s capacity to engage in successful partnership work.

Making Sense of the Evidence Base

Under ESSA, educators in both formal and informal settings will have new roles as

research consumers. In some situations, this may involve selecting from programs that have

demonstrated success in an RCT setting. While the application of strong evidence will likely continue

to be privileged, it is also important to recognize the very limited body of knowledge that meets


the criteria for the top two tiers. Therefore, educational leaders will need support to access,

interpret, and make use of research in a range of ways. This set of tasks will include interpreting

the results of standalone RCTs or quasi-experimental studies, an area where educational leaders

may need support.32 Additionally, it will require an ability to assess evidence for claims that

something is “research-based” or to gauge the strength of the research base across studies.

Partnerships will need to develop answers to the question: How can partners help local leaders

find and interpret “bodies of evidence” relative to specific problems that go beyond single

studies?

Further, decision makers at the local level will need to determine whether a research-

based program, curriculum, tool, or strategy may work within their own setting given local

conditions. Beyond whether a program will “work” or not, educational leaders will need

information on how to implement those strategies given available resources, staffing, and

previous initiatives. This requires knowledge about the applicability of a program to one’s own

context, considering the generalizability of findings and evidence of replication.33 It will require

strategies for implementation and plans for adapting the program to meet local conditions in

order to achieve the desired student outcomes. Partnerships may be well positioned to focus on this

demand. We need to know more about how RPPs can iterate on evidence-based programs to improve the reliability of implementation and the equity of outcomes.

Developing Evidence-Based Programs

Another key question to address is, How can collaborative design processes best support

the development of evidence-based programs? Traditional research and development puts

researchers in control of the design process, but in RPPs, educators and researchers design

collaboratively. In real educational contexts, moreover, iteration must be based on both emerging


evidence of impact and on changing district environments.34 Co-design in such environments

must wrestle with multiple tensions across organizational structures, such as district units or

partnering institutions, and between central offices and schools.35 At present, we do not have a

range of well-tested models to draw upon for organizing collaborative design of evidence-based

programs that partnerships can use.

For instance, Title II provisions call on local leaders to “identify and develop” evidence-

based programs for professional development for teachers, principals, and other school leaders.

Although researchers have conducted experiments to test a number of different programs over

the past decade, only a few have been found to be effective, and fewer still have been

sustained. Recently, there have been calls to develop professional learning opportunities that are

better integrated into school and district infrastructures.36 Researchers working in partnerships

with districts could accomplish such a task.

Getting Better at Getting Better

Collaborative design is but one process that is important for developing evidence-based

programs. To meet the demands of practitioner timescales for rapid improvement, we need faster

ways to go from early-stage development to reliable impact at scale. Traditional research and

development cycles take too long to generate such evidence, and by the time it is generated, it

may no longer be useful for decision makers. One of the promising aspects of the Carnegie

Foundation’s development of improvement research methods for education is the speed and

systematicity with which evidence can be generated from studying an innovation in a few

classrooms and learning quickly to improve the reliability of outcomes and integrity of

implementation. In 90-day cycles, small tests of change can be undertaken and an aspect of an

intervention can be refined based on evidence collected using practical measures that are targeted


to studying a few outcomes. Over the course of 1-2 years, a comprehensive “change package”

can be developed that can then be tested with large numbers of classrooms.

To implement these kinds of methods in schools and districts, we need to know: How can

networked improvement communities support the development and improvement of specific

programs and practices? At present, few researchers and educators have the capacity to form

networked improvement communities (NICs). Researchers need to develop the facilitation skills

to bring them about, and districts, for their part, do not have mechanisms for building in time for

educators to take on new roles and responsibilities required of them. Further, in a NIC, there is

no clear delineation of who is a researcher and who is an educator. All participants work together as a network to carry out the work, but such a blurring of roles is counter-normative.37 To “get better

at getting better,” a chief aim of a NIC, we will need to learn much more about how best to take

on new roles in the midst of our respective organizations and their demands on us.

ESSA reflects a continued commitment among policymakers to the idea that research

evidence has an important role to play in supporting improvement efforts. RPPs can provide the

foundation for a new, collaborative infrastructure for relating research and practice. The place-

based nature of RPPs is a strength, not a weakness, because the work of adapting

programs to be effective in a new context requires collaboration with people who have a stake in

the outcomes and responsibility for implementation of reforms. It is the infrastructures of

partnerships and processes of adaptation that can travel from place to place, so that others can

learn from these intentional efforts to foster collaborative reform.


Table 1.

Evidence provisions in ESSA

ESSA Provision

Focus

Title VIII, Section 8002

• Defines the four levels of evidence that constitute an “evidence-based” activity, strategy, or intervention by a state, local school district, or individual school.

Title I, Section 1003

• Requires states to set aside a portion of their ESEA Title I, Part A funds for a range of activities to help school districts improve low-performing schools.

• Local school districts need to select “evidence-based” interventions that meet strong, moderate, or promising levels of evidence in their improvement plans.

Title II

• Focus is on improving the quality and effectiveness of teachers, principals, and other school leaders in preparation programs, professional learning initiatives, or recruitment efforts.

• Encourages states and local school districts to use their Title II funds on “evidence-based” activities (where there is available evidence).

Title IV, Section 4611

• Authorizes the Education Innovation and Research (EIR) Grants program, a federal evidence-based education innovation fund, similar to the existing i3 program

Title IV, Part A

• Student Support and Academic Enrichment Grants section encourages states and local school districts to invest their Title IV funds in “evidence-based” activities (where there is available evidence).

Competitive grant programs

• U.S. Department of Education will give priority to applications that demonstrate strong, moderate, or promising levels of evidence for the following grant programs:

o Sec. 2221, Literacy Education for All, Results for the Nation;

o Sec. 2242, Supporting Effective Educator Development;
o Sec. 2243, School Leader Recruitment and Support;
o Sec. 4502, Statewide Family Engagement Centers;
o Sec. 4624, Promise Neighborhoods;
o Sec. 4625, Full-Service Community Schools;
o Sec. 4644, Supporting High-Ability Learners and Learning.

Title VIII, Section 8601

• Allows U.S. Department of Education to dedicate funds towards ESSA program evaluations



1 Grover J. Whitehurst, "The Institute of Education Sciences: New Wine, New Bottles" (paper presented at the annual meeting of the American Educational Research Association, Chicago, IL, April 2003).
2 Meredith I. Honig and Cynthia E. Coburn, "Evidence-Based Decision-Making in School District Central Offices: Toward a Research Agenda," Educational Policy 22, no. 4 (2008).
3 Nienke M. Moolenaar and Alan J. Daly, "Social Networks in Education: Exploring the Social Side of the Reform Equation," American Journal of Education 119, no. 1 (2012).
4 Cynthia E. Coburn, William R. Penuel, and Kimberley E. Geil, "Research-Practice Partnerships: A Strategy for Leveraging Research for Educational Improvement in School Districts" (New York, NY: William T. Grant Foundation, 2013).
5 Ibid.
6 Paul A. Cobb et al., "Design Research with Educational Systems: Investigating and Supporting Improvements in the Quality of Mathematics Teaching and Learning at Scale," in Design-Based Implementation Research: Theories, Methods, and Exemplars, ed. William R. Penuel et al. (New York, NY: National Society for the Study of Education Yearbook, 2013).
7 Bronwyn Bevan et al., "Enriching and Expanding the Possibilities: Research-Practice Partnerships in Informal Science Education" (San Francisco, CA: Research + Practice Collaboratory, 2015).
8 William R. Penuel et al., "Conceptualizing Research-Practice Partnerships as Joint Work at Boundaries," Journal of Education for Students Placed at Risk 20 (2015).
9 Institute of Education Sciences, "Researcher-Practitioner Partnerships in Education Research," https://ies.ed.gov/funding/ncer_rfas/partnerships.asp.
10 Kelli Johnson et al., "Research on Evaluation Use: A Review of the Empirical Literature from 1986 to 2005," American Journal of Evaluation 30, no. 3 (2009).
11 William E. Bickel and William W. Cooley, "Decision-Oriented Educational Research in School Districts: The Role of Dissemination Processes," Studies in Educational Evaluation 11, no. 2 (1985).
12 Nabil Amara, Mathieu Ouimet, and Rejean Landry, "New Evidence on Instrumental, Conceptual, and Symbolic Utilization of University Research in Government Agencies," Science Communication 26, no. 1 (2004); Damien Contandriopoulos et al., "Knowledge Exchange Processes in Organizations and Policy Arenas: A Narrative Systematic Review of the Literature," The Milbank Quarterly 88, no. 4 (2010).
13 National Research Council, "Using Science as Evidence in Public Policy" (Washington, DC: Author, 2012).
14 Cynthia E. Coburn and Mary K. Stein, eds., Research and Practice in Education: Building Alliances, Bridging the Divide (Lanham, MD: Rowman & Littlefield, 2010).
15 Anthony S. Bryk et al., Learning to Improve: How America's Schools Can Get Better at Getting Better (Cambridge, MA: Harvard Education Press, 2015); Milbrey Wallin McLaughlin, "Implementation as Mutual Adaptation: Change in Classroom Organization," Teachers College Record 77, no. 3 (1976); Donald J. Peurach, "Educational Innovation and Problems of Improvement: Aligning Politics, Policies, and Practice," in National Center on Scaling Up Effective Schools Conference (Nashville, 2015).
16 Eric C. Brown et al., "Prevention Service System Transformation Using Communities That Care," Journal of Community Psychology 39, no. 2 (2011).


17 J. David Hawkins et al., "Early Effects of Communities That Care on Targeted Risks and Initiation of Delinquent Behavior and Substance Use," Journal of Adolescent Health 43, no. 1 (2008); J. David Hawkins et al., "Results of a Type 2 Translational Research Trial to Prevent Adolescent Drug Use and Delinquency: A Test of Communities That Care," Archives of Pediatrics and Adolescent Medicine 163, no. 9 (2009).
18 Ann L. Brown, "Design Experiments: Theoretical and Methodological Challenges in Creating Complex Interventions in Classroom Settings," The Journal of the Learning Sciences 2, no. 2 (1992).
19 Paul Cobb et al., "Design Experiments in Educational Research," Educational Researcher 32, no. 1 (2003).
20 Carl Bereiter, "Principled Practical Knowledge: Not a Bridge but a Ladder," The Journal of the Learning Sciences 23, no. 1 (2014); Daniel C. Edelson, "Design Research: What We Learn When We Engage in Design," ibid., 11 (2002).
21 Julie L. Booth et al., "Design-Based Research within the Constraints of Practice: AlgebraByExample," Journal of Education for Students Placed at Risk 20, no. 1-2 (2015).
22 Robert K. Atkinson et al., "Learning from Examples: Instructional Principles from the Worked Examples Research," Review of Educational Research 70 (2000): 181-214.
23 Harold E. Pashler et al., "Organizing Instruction and Study to Improve Student Learning," in NCER 2007-2004 (Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education, 2007).
24 Booth et al., "Design-Based Research within the Constraints of Practice: AlgebraByExample."
25 William R. Penuel, Jeremy Roschelle, and Nicole Shechtman, "The Whirl Co-Design Process: Participant Experiences," Research and Practice in Technology Enhanced Learning 2, no. 1 (2007).
26 Donald M. Berwick, "The Science of Improvement," The Journal of the American Medical Association 299, no. 10 (2008).
27 Bryk et al., Learning to Improve: How America's Schools Can Get Better at Getting Better.
28 David Yeager and Gregory M. Walton, "Social-Psychological Interventions in Education: They're Not Magic," Review of Educational Research 81, no. 2 (2011).
29 David Paunesku et al., "Mind-Set Interventions Are a Scalable Treatment for Academic Underachievement," Psychological Science 26, no. 6 (2015).
30 Sam Severance, Heather Leary, and Raymond Johnson, "Tensions in a Multi-Tiered Research Partnership," in Proceedings of the 11th International Conference of the Learning Sciences, ed. Joseph L. Polman et al. (Boulder, CO: ISLS, 2014).
31 Melissa Roderick, John Q. Easton, and Penny Bender Sebring, "The Consortium on Chicago School Research: A New Model for the Role of Research in Supporting Urban School Reform" (Chicago, IL: Consortium on Chicago School Research, 2009).
32 William R. Penuel et al., "Findings from a National Study on Research Use among School and District Leaders," Technical Report No. 1 (Boulder, CO: National Center for Research in Policy and Practice, 2016).
33 Matthew C. Makel and Jonathan A. Plucker, "Facts Are More Important Than Novelty: Replication in the Education Sciences," Educational Researcher 43, no. 6 (2014).
34 Joshua L. Glazer and Donald J. Peurach, "School Improvement Networks as a Strategy for Large-Scale Education Reform: The Role of Educational Environments," Educational Policy 27, no. 4 (2013).


35 Raymond Johnson et al., "Teachers, Tasks, and Tensions: Lessons from a Research-Practice Partnership," Journal of Mathematics Teacher Education 19, no. 2 (2016).
36 National Research Council, "Science Teachers' Learning: Enhancing Opportunities, Creating Supportive Contexts" (Washington, DC: National Academies Press, 2015).
37 Coburn, Penuel, and Geil, "Research-Practice Partnerships: A Strategy for Leveraging Research for Educational Improvement in School Districts."