
Indiana Paths to QUALITY™: Collaborative Evaluation of a New Child Care Quality Rating and Improvement System

James Elicker, Karen M. Ruprecht, Carolyn Langill, Joellen Lewsader, and Treshawn Anderson
Department of Human Development & Family Studies, Purdue University

Melanie Brizzi
Bureau of Child Care, Indiana Family and Social Services Administration

Research Findings: Developmental evaluation is a process in which researchers and program implementers communicate collaboratively to produce an evaluation that is attuned to critical program issues, provides useful data during program implementation, and results in rigorous methodology and findings informed by program conditions (M. Q. Patton, 1997). The implementation evaluation of Indiana's Paths to QUALITY™, a statewide quality rating and improvement system (QRIS), provides examples of this developmental evaluation process. Researchers and program leaders engaged in collaborative evaluation planning, QRIS standards validation, and review of formative data about the experiences of child care providers, parents, and children in the system. Frequent communication between evaluators and program implementers during the 4-year evaluation project resulted in (a) QRIS leaders having timely data that they used to fine-tune the program and (b) evaluators making needed adjustments in the research design and making more plausible interpretations of results. Examples of the collaborative evaluation process are given, with reflective comments provided by the state QRIS administrator. Practice or Policy: The collaborative strategies used in the implementation and evaluation of this state-level child care QRIS may be useful for other states or localities as they plan large-scale early care and education systems.

Child care quality rating and improvement systems (QRISs) are being implemented widely in the United States. So far 26 states have implemented these systems, and many more states have a QRIS in the planning or pilot phases (Tout et al., 2010). Although some of the early QRISs have been evaluated, there is still not much rigorous research on how statewide child care QRISs operate; whether they improve child care quality; or how they impact child care providers, families, and children (Elicker & Thornburg, 2011; Lugo-Gil et al., 2011). Despite this relative lack of research data, the acceptance and implementation of QRIS as a major public policy strategy has been rapid and far-reaching. Various combinations of federal, state, local, and private dollars fund pilot or statewide QRIS initiatives, and QRIS has received a strong endorsement from the federal government in the form of requirements for the Race to the Top–Early Learning Challenge quality enhancement grants. QRIS was recently described as having become in the past 10 years "a ubiquitous tool for standardizing all early care and education programs and systems building across sectors and funding streams" in the United States (Barnett & Goffin, 2012).

Correspondence regarding this article should be addressed to James Elicker, Department of Human Development & Family Studies, Purdue University, 1200 West State Street, West Lafayette, IN 47907. E-mail: [email protected]

Early Education and Development, 24: 42–62
Copyright © 2013 Taylor & Francis Group, LLC
ISSN: 1040-9289 print/1556-6935 online
DOI: 10.1080/10409289.2013.736127

The goals of QRIS vary from state to state, but most systems are designed to (a) inform parents about child care quality to guide their child care decisions, (b) offer incentives for providers to improve child care quality, and (c) support better developmental and school readiness outcomes for children as the result of receiving higher quality early education and care during their preschool years (National Child Care Information and Technical Assistance Center [NCCITAC], 2011). State QRIS program structures and procedures vary, but they generally include five basic functional components: (a) quality standards for each QRIS level, (b) a process for monitoring the standards, (c) a process for supporting quality improvements, (d) provision of financial incentives to child care providers, and (e) the dissemination of information to parents and other stakeholders about the quality levels (NCCITAC, 2009; Tout et al., 2010).

Research evaluation of QRIS is still in its formative stages. Although the federal government is making efforts to inform states about QRIS and coordinate research efforts, evaluation projects have been completed in only a few states and are currently under way or planned in others (Barnard, Smith, Fiene, & Swanson, 2006; Norris & Dunn, 2004; Norris, Dunn, & Eckert, 2003; Thornburg, Mayfield, Hawks, & Fuger, 2009; Tout et al., 2010; Tout, Zaslow, Halle, & Forry, 2009; Zellman, Perlman, Le, & Setodji, 2008). Colorado and Pennsylvania have completed QRIS evaluations and found that quality increases as child care providers move through the system. The evaluation of the Pennsylvania QRIS, Keystone STARS, found that child care quality increased with each level and that the system ratings were reliable indicators of child care quality (Barnard et al., 2006). The need for rigorous evaluations of QRIS was reinforced by evaluation findings from the Qualistar program in Colorado, one of the first states to conduct a comprehensive evaluation that included measures of both child care quality and child development outcomes (Zellman et al., 2008). This evaluation focused on validating Qualistar as a measure of child care quality and as a policy tool to improve quality. The researchers found evidence that there were improvements in child care quality over time, evidenced by increases in environmental rating scale quality scores, but they found few relationships between the Qualistar ratings and child development outcome measures. The Colorado evaluation helped to inform other QRIS evaluations by bringing attention to the importance of validating each QRIS component, singly and in combination, to inform the development of statewide systems (see also Zellman & Fiene, 2012).

Because QRIS is a relatively new public policy approach, and research on large-scale child care quality improvement programs is still limited, evaluation researchers engaged with QRIS are searching for effective ways to study these systems as they develop and provide useful data to inform state child care program leaders and the field of early childhood education and care (see, e.g., Zellman, Brandon, Boller, & Kreader, 2011). The QRIS project described in this article is an example of a research evaluation designed to inform policy at the state and national levels; assist program leaders in fine-tuning QRIS in its initial phases; and evaluate whether the stated goals are being met for child care providers, families, and children.

Various evaluation approaches and processes are used to assess a wide range of programs. Most evaluations can be broadly categorized as formative (i.e., what is working in the program, and what needs to be changed to reach initial goals and outcomes) or summative (i.e., the extent to which program goals and outcomes have been achieved; Patton, 1994, 2010). Recent examples of formative evaluation from the early childhood field include studies that examined how Early Head Start programs were implemented (Kisker, Paulsell, Love, & Raikes, 2002) and evaluations conducted on home-visiting models in Early Head Start (Paulsell, Mekos, Del Grosso, Rowand, & Banghart, 2006). Examples of summative evaluation include the evaluation of the Early Reading First initiative, which examined the impact of the program on the language development and emergent literacy skills of preschool children (Jackson et al., 2007), and the focus on outcomes for children and families in the Early Head Start impact study (Love et al., 2002). Another approach to evaluation, called developmental evaluation, might be applied in both formative and summative studies. Developmental evaluation defines the role of the evaluator as a collaborator:

    The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, on-going process of continuous improvement, adaptation, and intentional change. The evaluator's primary function in the team is to elucidate team discussions with evaluative data and logic, and to facilitate data-based decision-making in the developmental process. (Patton, 1997, p. 105, emphasis added)

A developmental evaluation approach was used in the evaluation of the state QRIS reported in this article. Developmental evaluations are suited for initiatives that are complex and geared for significant change, that involve multiple stakeholders, and in which it is expected that the program will be modified over time (Gamble, 2008). A central feature of developmental evaluation is its ability to give program partners quick feedback, so they can use this information to further develop the program or initiative. Evaluation is thus seen as a part of the program planning process rather than an end result. Another important feature of this type of evaluation is the role of the evaluator. In developmental evaluations, the role of the evaluator encompasses more than collecting, analyzing, and reporting data. The evaluator is embedded within the program team and uses data to actively shape decision making (Gamble, 2008; Patton, 2010).

Consistent with this approach, a developmental evaluation process unfolded in this project, so that findings, as they emerged from the evaluation research, could inform the program managers, stakeholders, and professionals working directly with the child care providers and families, and the aims and experiences of program staff could inform the evaluators. This exchange was accomplished through frequent dialogue with all partners involved in the implementation of the QRIS. This ongoing dialogue proved to be an effective way to balance the need for robust research with program implementation goals and realities. Evaluation data were monitored closely by program implementers, so that when enough data had been collected, partners began discussing ways to improve the overall system and the specific training and technical assistance needs of child care providers.

This article describes the development of that collaborative process and how the partnership evolved, giving an overview of QRIS evaluation planning, implementation, and use of results over a 5-year period, highlighting key processes, decisions, and outcomes. A recurring theme is the importance of striking a balance between (a) doing rigorous research resulting in valid answers to key evaluation questions and (b) engaging in continuing collaborative dialogue with program implementers. To further illustrate the collaborative process, the state child care administrator, who is the director of Indiana's QRIS Paths to QUALITY™ (PTQ), provides her reflective comments, shown in italics.

OVERVIEW OF INDIANA’S QRIS

The Indiana QRIS, named Paths to QUALITY™ (PTQ), was created as a voluntary system to assist parents in identifying and selecting quality child care and to recognize child care providers for their efforts to achieve higher standards of quality beyond the minimum state licensing requirements. Providers who choose to enroll in PTQ receive a verification visit, are assessed, and, based on the quality standards verified, are placed in one of four quality levels. Providers receive annual reverification visits to determine whether they have maintained their current level or achieved a higher level. Providers may also request a new rating any time 6 months after their previous assessment.

PTQ was originally created in the late 1990s by a local group of community organizations and private funders in northeastern Indiana, centered in the city of Fort Wayne. The organizers were committed to improving child care quality in their community. They formed a Child Care and Early Education Partnership whose goal was to develop an awareness of the importance of high-quality early care and education for all children in their region. In 1996 this partnership, which included a family foundation and the local child care resource and referral agency, funded a community action plan to address the early care and education needs of residents. From 1996 to 1999 the PTQ program was created and implemented. In 2000, a wider regional PTQ program was implemented in that region of the state. By 2007, 60% of child care providers in the region had joined the system (Elicker, Langill, Ruprecht, & Kwon, 2007).

In this original regional PTQ program, standards for four quality levels were created for licensed child care centers, licensed family child care homes, unlicensed registered child care ministry centers, and unlicensed part-time early care and education programs. Although not legally required to be licensed by the state, registered child care ministry centers and part-time preschools were included because a large number of children received care in these facilities. Including them provided an avenue to bring them into a quality improvement system. Unlicensed registered child care ministries are a unique category of child care providers in Indiana. According to state law, registered child care ministries are defined as child care services provided as an extension of a group's religious mission and are considered nonprofit organizations exempt from state regulation because of their religious affiliation (Indiana State Law, 1992). Registered ministries do not have to comply with licensing rules for adult/child ratio, group size, and many health and safety standards. In 2005, another regional child care resource and referral agency at the opposite end of the state, in southwestern Indiana, implemented the PTQ system in close collaboration with the original designers in Fort Wayne. Within 2 years, 46% of the child care providers in the 11 counties served by this second agency had joined the system (Elicker et al., 2007).

Beginning in 2006, the Indiana Bureau of Child Care, a division of the Family and Social Services Administration, began discussions with leaders and stakeholders of the two regional PTQ systems with the goal of expanding the program statewide. Meetings were held that included stakeholders from the two originating regions, state agency representatives, and statewide service providers who could potentially participate in expanding PTQ statewide. The goal of this planning group was to develop a system that would have comparable quality standards for licensed child care centers, licensed family child care homes, and unlicensed registered child care ministry centers. Early in the planning process, Purdue University faculty were invited to join these discussions, to consult about the design of a comprehensive evaluation study for the new statewide PTQ system.

State Child Care Administrator: "A core group of partners who were heavily invested in child care quality improvement gathered together in numerous strategic planning sessions to determine the impact that each partner could have on the long-term success of the QRIS. This group focused on several key questions: Do providers have the supports necessary to make meaningful quality improvement? Are the standards evidence based, attainable, aligned with existing state and national standards, and consistent across provider types? The inclusion of the evaluation researchers at this early planning stage greatly enhanced and simplified the planning of the evaluation by clearly articulating the questions that needed to be answered by the evaluation, the specific role of each partner, and how each partner fit together to support PTQ."

PTQ LEVELS AND STANDARDS

The Indiana PTQ rating system has a "building block" organization (NCCITAC, 2011). Each PTQ quality level includes specific criteria that must be met for that level to be awarded. To qualify for each succeeding level, the provider must meet the standards for the new level and also maintain all standards at the levels below. An overview of the four quality levels, including a brief description of the required criteria at each level, is shown in Table 1 (Indiana Family and Social Services Administration, 2011).
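To make the building-block rule concrete, here is a minimal code sketch of the level computation it implies. Everything in it is a hypothetical illustration: the standard identifiers, the mapping, and the function are ours, not part of PTQ's actual rating tools.

```python
# Hypothetical sketch of the "building block" rule: the awarded level is the
# highest level L such that the standards for every level 1..L are fully met.
# Standard IDs below are invented for illustration only.

STANDARDS_BY_LEVEL = {
    1: {"health_safety_policies", "staff_orientation_30_days"},
    2: {"welcoming_environment", "age_appropriate_materials", "written_philosophy"},
    3: {"written_curriculum", "annual_program_evaluation", "professional_growth"},
    4: {"national_accreditation"},
}

def awarded_level(standards_met: set[str]) -> int:
    """Return the highest level whose standards, plus all lower levels'
    standards, are satisfied; 0 if Level 1 is not fully met."""
    level = 0
    for lvl in sorted(STANDARDS_BY_LEVEL):
        if STANDARDS_BY_LEVEL[lvl] <= standards_met:  # subset: all required met
            level = lvl
        else:
            break  # building-block rule: an unmet level cannot be skipped
    return level

# A provider meeting all Level 1 and 2 standards, plus only one Level 3
# standard, is rated Level 2.
met = STANDARDS_BY_LEVEL[1] | STANDARDS_BY_LEVEL[2] | {"written_curriculum"}
assert awarded_level(met) == 2
```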

The statewide PTQ standards were aligned with standards used in the original pilot regional programs, with a few exceptions. The unlicensed registered ministry and licensed family child care home standards were revised to align with the licensed child care center standards, making some quality criteria more stringent for those providers. Also, part-time preschool or child care programs were not included in the statewide system, because by Indiana law they are not required to be regulated by the state child care agency.

PLANNING THE RESEARCH COLLABORATION

Prevalidation of the State QRIS

The Purdue University research partner first became involved in Indiana's QRIS by conducting an evaluation of the southwestern regional pilot PTQ program. In 2005 the researchers were asked by the funder, a private foundation, to evaluate a comprehensive set of early learning initiatives being offered through the local child care resource and referral agency that replicated the original PTQ model. Evaluation questions were developed to help the local agency, the funder, and the state's early childhood stakeholders better understand why providers joined PTQ, how it was implemented, what the most beneficial aspects were, and what challenges child care providers found in moving up to higher quality levels. Results of this regional PTQ evaluation were used by local stakeholders but also influenced state-level decisions about expanding the QRIS statewide. A timeline summarizing the entire PTQ evaluation process is outlined in Table 2.

State Child Care Administrator: "The results of the evaluation of the regional pilot program had a large influence on the decision to take the PTQ system statewide. The evaluation had shown that the pilot program was replicable, providers were eager to participate, parents were using the ratings as a tool when choosing child care, and the system could effectively increase the quality of available child care. These early evaluations gave the State the necessary data to garner support from key stakeholders in moving forward with statewide implementation."

Early in the PTQ planning process, state child care leaders recognized the importance of aligning the PTQ quality standards with evidence-based practices and providing assurances to potential participants that the standards had been examined for validity. Therefore, in 2007 the state's Bureau of Child Care contracted with Purdue University to complete a scientific review of the proposed PTQ quality standards. The goal of this review was to evaluate the scientific validity of the proposed state QRIS standards. The review focused on three questions:

1. What were the results of the two state QRIS regional pilot programs?
2. Will the proposed QRIS standards result in increasing the quality of child care children receive?
3. Is the QRIS system as proposed likely to improve developmental outcomes for children?

The Purdue evaluation team conducted this validation study by reviewing published research articles, technical reports from previous QRIS evaluations in other states, and available evaluation results from the two existing regional PTQ pilot programs. The team reviewed each proposed standard in the QRIS, concluding that most of the standards had a basis in evidence in the research literature supporting child care quality. The evaluation team found that most elements in the proposed standards could be categorized into 10 evidence-based quality indicators: state regulation, teacher education/training, structural quality, process quality, assessment, provisions for children with special needs, program policies, director professional development, parent–teacher communication/involvement, and national accreditation (Elicker et al., 2007). This preliminary validity study supported the state's efforts to introduce the voluntary QRIS by showing that the proposed quality standards were grounded in research.

TABLE 1
Paths to QUALITY™ Levels and Sample Standards

Level 1: Health and safety requirements (state licensing requirements)
- Basic requirements for health and safety, consistent with state licensing rules, are met.
- Develop and implement basic health and safety policies and procedures.
- Staff members receive orientation within 30 days of being hired.

Level 2: Learning environment
- Provide an environment that is welcoming, nurturing, and safe for the physical, emotional, and social well-being of all children.
- Activities and materials reflect the age, interests, and abilities of all children. Provide for children's language and literacy skill development.
- Provide pertinent program information to families.
- Promote staff/assistant caregivers' development and training.
- Program has a written philosophy and goals for children.

Level 3: Planned curriculum
- A written curriculum and planned program for children reflects developmentally appropriate practice.
- Program evaluation is completed annually by parents and staff.
- Actively engage in program evaluation and have an action plan for improvement.
- Demonstrate professional growth of director and staff or lead caregiver and assistants in excess of licensing requirements.
- Facilitate family and staff input into the program.
- Program has been in operation for a minimum of 1 year, or lead caregiver has at least 12 months of experience as a child care provider in a licensed or Bureau of Child Care nationally recognized accredited child care setting.

Level 4: National accreditation
- Accreditation is achieved through the National Association for the Education of Young Children, the National Association of Family Child Care, the Council on Accreditation, or the Association for Christian Schools International.
- Professional development and involvement continues, including mentoring other directors/providers.

State Child Care Administrator: "It is essential that the sometimes challenging changes that PTQ requires from child care programs are likely to improve the quality of care for children. By sharing the Purdue validity report with families, providers, and community leaders, the Bureau was able to show that the standards had been researched and reflected evidence-based best practices for children in child care. The validity report was shared in a variety of settings, including community open house events, partner rollout meetings, study sessions of the legislative Committee on Child Care, and PTQ introductory sessions for providers. The fact that the standards had been validated by a respected university in this way generated substantial critical support for PTQ."

TABLE 2
Timeline of PTQ Evaluation Activities

- Purdue researchers conduct an evaluation of the PTQ regional pilot program: 2006–2008
- Researchers are invited by the state child care administrator to develop an evaluation proposal for a statewide PTQ: December 2006
- Researchers conduct a prevalidation study of proposed statewide PTQ standards: March–July 2007
- Collaborative evaluation planning between researchers and the PTQ committee: July 2007–January 2008
- PTQ evaluation steering committee formed: January 2008
- State funding begins for PTQ in the two pilot program regions: January 2008
- Evaluation data collection begins in the two pilot PTQ regions: September 2008
- Phased evaluation data collection in all four PTQ regions: September 2008–July 2011
- Final PTQ evaluation report submitted to the state: September 2011
- Researchers participate in 11 stakeholder meetings to disseminate evaluation findings at the local level: November–December 2011

Note. PTQ = Paths to QUALITY™.

KEY DECISIONS: EVALUATION QUESTIONS

Once the preliminary standards validation study was completed and the statewide PTQ program was preparing to launch, the Purdue University researchers began working with the Bureau of Child Care to develop a comprehensive implementation evaluation. Early in the development of the plan, the researchers met with state officials to discuss the most important evaluation questions for the implementation phase of PTQ and how key processes, outputs, and outcomes of the QRIS should be assessed. Whereas the researchers had expertise in child care quality and child development measurement, state officials had program expertise and clear ideas about information they needed from the evaluation. A primary goal of the state leaders was to increase child care quality statewide. However, other important elements and processes of the QRIS needed to be assessed, such as what aspects of the QRIS child care providers found beneficial, what motivated providers to participate, what obstacles providers found in moving to higher QRIS levels, overall provider satisfaction with PTQ, and also parent perspectives, so that program decision makers could understand how the new system was working in the field.

In order to broaden the evaluation planning process beyond the Purdue research team and managers within the state Bureau of Child Care, a state evaluation advisory group was formed to provide input from organizations and individuals that would be affected by the implementation of PTQ. The purpose of the advisory committee was to provide input on evaluation questions and methodology, to develop ideas about how to promote the evaluation research to stakeholders, and to ensure broad participation and a representative evaluation sample. Representatives from the Bureau of Child Care; the state child care resource and referral agencies; the state chapter of the National Association for the Education of Young Children; United Way; and child care providers representing center-based care, family child care, and registered child care ministries participated in the committee. A QRIS program logic model was developed with this committee to represent the proposed quality improvement process and to guide the formulation of evaluation questions—modeled after the logic model developed for the Colorado Qualistar evaluation (Zellman et al., 2008, 2011). This logic model was discussed in the advisory committee to determine whether the PTQ program theory of change, from initial program inputs to outcomes, was adequately represented from the perspectives of all participants. The logic model helped the steering committee arrive at a consensus about the most important evaluation questions, given the program goals and processes, and was also used throughout the PTQ implementation phase to illustrate how the program was intended to work (see Figure 1).

The advisory committee also helped to refine specific evaluation questions. Researchers and PTQ program partners agreed that it was critical that the evaluation address the goals of the program: (a) improving child care quality, (b) recognizing providers for their achievements, (c) providing a tool for parents to use to select child care, and (d) supporting better development for children. These goals became four distinct areas of the evaluation. Ultimately the evaluation questions addressed these aspects of the QRIS implementation, including child care providers' experiences, quality levels, children's development, and parent perspectives. Specifically, the following seven questions drove the evaluation:

1. Are child care providers of all types entering the voluntary PTQ system? Do providers understand the system?
2. What are the incentives for providers to enroll? What are the barriers?
3. Do child care providers move to higher PTQ levels after enrolling in the system?
4. Are providers aware of available training/technical assistance resources to help them increase PTQ levels, and do they use them? Does training/technical assistance help providers advance in PTQ level?
5. When providers attain higher PTQ levels, does this result in higher quality care as assessed using research-validated measures?
6. Are children who are placed with providers with higher PTQ levels developing more optimally than children placed with providers with lower PTQ levels?
7. Are parents of Indiana infants, toddlers, and preschool children aware of, and do they understand, the PTQ system? Does the PTQ system affect parents' child care decisions?

FIGURE 1 Simplified logic model for Paths to QUALITY™ (PTQ) quality rating and improvement system.

State Child Care Administrator: "The goals of PTQ determined the scope of the evaluation. To make a meaningful, lasting difference in the quality of child care, many factors had to be considered. It is not sufficient to have standards for high-quality care. Child care programs must have confidence in the system itself, and they must believe in the validity of the standards and understand how to successfully implement them. Adequate supports for professional development, technical assistance, supplies, and equipment must be available to ensure that providers feel the standards are attainable and that program changes are sustainable. It was essential that the evaluation include questions to help understand what motivates providers to join PTQ and what keeps them motivated to implement continual quality improvements. Additionally, PTQ was designed to be an easy-to-understand tool for parents and communities. It was critical to the ongoing success of the system to understand how families and communities are viewing and understanding the system. We needed to confirm that PTQ consumer awareness information was understandable and reaching the target audience."

KEY DECISIONS: DEVELOPING MEASURES

In addition to providing input for the research questions, the evaluation advisory committee reviewed specific design features and measures that the PTQ researchers proposed. The researchers proposed a descriptive, correlational design to examine the associations between PTQ ratings, global quality, and participant outcomes. Although such a nonexperimental design would not allow for causal conclusions about the effectiveness of the PTQ intervention, it was more practical for implementation evaluation of a large public program in which the goal was primarily descriptive and focused on how the program was functioning in its early stages.

The researchers proposed validated measures when appropriate and created new surveys when necessary to measure these constructs. All evaluation instruments were used in preliminary field tests to ensure that they were appropriate for Indiana contexts. This report focuses primarily on the evaluation results as they were discussed and used in the collaborative process. Other results will be presented in future articles, so only selected measures are described here. (A detailed description of all measures used in the PTQ evaluation is available in Langill, Elicker, Ruprecht, Kwon, & Guenin, 2009.)

Measures: Child Care Quality

Classroom quality was measured using the Early Childhood Environment Rating Scale–Revised (Harms, Clifford, & Cryer, 1998) in center-based preschool classrooms, the Infant Toddler Environment Rating Scale–Revised (Harms, Cryer, & Clifford, 2003) in center-based infant classrooms, and the Family Child Care Environment Rating Scale–Revised (Harms, Cryer, & Clifford, 2007) in family child care homes. The decision to use these North Carolina Environment Rating Scales (ERS) to validate the PTQ levels was purposeful and resulted from advisory committee discussions. Many of the PTQ standards written by the original community group reflected items in the ERS. For example, if a provider wanted to move from Level 1 to Level 2, certain environmental features had to be present, such as interest centers, cozy areas, and so on, reflected in the minimal to good categories of the ERS. PTQ program partners also had experience using ERS measures to drive technical assistance to child care providers and felt comfortable with these tools as measures of child care quality. Moreover, at the time the PTQ evaluation was planned, the ERS were the only widely used, research-validated quality measures available that could be used with infants, toddlers, and preschoolers in centers and family child care. The inclusion of the Infant Toddler Environment Rating Scale–Revised for classrooms serving children younger than 3 years of age was efficacious, because PTQ includes quality standards that specifically address infant/toddler care, and previous national research has found that infant/toddler care tends to be lower in quality than preschool care (e.g., National Institute of Child Health and Human Development Early Child Care Research Network, 1997).

Measures: Child Care Providers’ Perceptions of PTQ

Self-administered surveys were created for providers in the evaluation sample to assess their qualifications and their attitudes and experiences with PTQ. Based on discussions with PTQ partners about key information to track with providers, a similar but shorter follow-up telephone survey was also created. The advisory board made suggestions about shortening the surveys to keep providers' interest but deferred to the researchers for final decisions about the survey design.

Measures: Parents’ Perceptions of PTQ

PTQ partners were interested in parents' perceptions of PTQ—both parents who were using the observed PTQ child care programs and parents in the general public. Similar surveys were used with these two groups of parents. After the parents in the general public were surveyed for the first time, a statewide marketing campaign was launched for PTQ. PTQ partners wanted to know whether this campaign was effective, and they requested a second round of interviews with a random sample of Indiana parents. In light of the public awareness campaign, the researchers decided with the campaign manager and other program partners to include some additional survey questions that would assess the awareness and impact of specific public awareness methods.

Measures: Children’s Development

It was important to PTQ program managers and partners from the beginning that child outcomes be assessed as part of the evaluation. The constructs that were most important to program partners were children's cognitive, language, and social-emotional development. There was much discussion about the pros and cons of assessing outcomes for a newly launched statewide quality improvement program. Evaluation researchers generally do not recommend testing impacts on child outcomes until a program has fully matured, and even then rigorous experimental research designs are recommended (see Elicker & Thornburg, 2011). It was agreed that child data would be collected but primarily for descriptive purposes rather than to evaluate the effectiveness of PTQ. Researchers field tested age-appropriate and reliable measures and created a child assessment protocol that could be completed efficiently within the child care settings. The advisory board relied on the expertise of the researchers and approved the researchers' recommendations for this child development measurement strategy.

COLLABORATION IN CENTRAL DATA SYSTEM DEVELOPMENT

An important aspect of the PTQ system development and evaluation research was the planning and design of a QRIS central data system. The Purdue University researchers were involved in the planning of the state's data system at the request of the state program managers. Because PTQ was new, a data system was needed to track and monitor the progress of child care providers. The PTQ quality raters and the training/technical assistance mentors needed a way to directly enter data to show how providers had been assessed, the types and amounts of training provided, and how providers were progressing toward the next QRIS level.

The university researchers participated in the data system planning committee to ensure that data needed for the evaluation research would be gathered and retained. This collaboration also resulted in the researchers having direct unlimited access to the PTQ central data system, enabling them to use the database to search for state-gathered information about participating child care providers. These data proved essential for sample stratification and the evaluation of participant recruitment. One significant contribution that resulted from the researchers' input was that the system was designed to preserve longitudinal, historical records of providers' PTQ participation. These longitudinal data from the central system (e.g., the amount of technical support received by each PTQ provider, the time required for providers to advance to each level) will be used to generate evaluation findings in future reports.
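The article does not publish the central data system's actual design, but the longitudinal, append-only record keeping it describes might be sketched as follows. The record types and the days_to_reach helper are hypothetical, intended only to show why retaining historical events, rather than overwriting a provider's current status, makes time-to-advance analyses possible.

```python
# Illustrative sketch (not the actual PTQ database schema) of longitudinal
# provider records: every rating and technical-assistance event is appended
# and retained, so advancement timing can be computed later.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RatingEvent:
    assessed_on: date
    level: int  # PTQ level awarded at this visit (1-4)

@dataclass
class TechnicalAssistanceEvent:
    delivered_on: date
    hours: float

@dataclass
class ProviderRecord:
    provider_id: str
    provider_type: str  # e.g., "licensed center", "family child care home"
    ratings: list[RatingEvent] = field(default_factory=list)
    ta_events: list[TechnicalAssistanceEvent] = field(default_factory=list)

    def days_to_reach(self, target_level: int) -> int | None:
        """Days from the first rating to the first rating at or above
        target_level; None if the provider never reached it."""
        if not self.ratings:
            return None
        history = sorted(self.ratings, key=lambda r: r.assessed_on)
        for r in history:
            if r.level >= target_level:
                return (r.assessed_on - history[0].assessed_on).days
        return None
```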

State Child Care Administrator: "The Bureau of Child Care relied heavily on the expertise of the evaluation steering committee in choosing the measures for this evaluation. Due to the close collaboration between the State and the researchers, the goals of the evaluation were clearly understood by all and the most appropriate measures for answering the research questions were selected. Implementing PTQ statewide was an enormous policy decision that required significant buy-in on the front end as well as ongoing monitoring for impact, effectiveness, and program integrity. We needed a data system that would capture the depth and breadth of information that would allow us to make future data-driven policy and budget decisions. The collaboration with the research team on the central data system design equipped the Bureau to think more broadly about the desired data elements and build a system that will allow ongoing evaluation of key facets of PTQ."

KEY DECISIONS: DATA COLLECTION

Recruiting the Sample

Implementing the statewide QRIS evaluation required considerable planning with program leaders and training of the evaluation data collection team. As PTQ was launched in four successive regional waves over the course of 1 year, the program managers needed early data on the implementation and responses of participants in each region. Therefore, the evaluation sample was stratified into 11 subsamples, each representing a region served by one of the local child care resource and referral agencies that was the primary source of provider training and technical assistance and local information for parents. Because collecting data at each PTQ level for each type of provider was central to the research questions, the sample was stratified by provider type and PTQ level. Finally, the sample was planned to include approximately equal numbers of children younger than 3 and children 3 to 5 years of age.
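As a rough illustration of this stratification (region by provider type by PTQ level), a selection procedure might look like the sketch below. The function and its quota parameter are hypothetical; the evaluation's actual procedure and quotas are not specified at this level of detail.

```python
# Minimal sketch of stratified random selection: group providers into strata
# by (region, provider type, PTQ level), then draw up to a fixed quota at
# random from each stratum.

import random
from collections import defaultdict

def stratified_sample(providers, quota_per_stratum, seed=42):
    """providers: iterable of dicts with 'region', 'type', and 'level' keys."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in providers:
        strata[(p["region"], p["type"], p["level"])].append(p)
    sample = []
    for members in strata.values():
        rng.shuffle(members)
        # If a stratum is short (e.g., few Level 4 ministries), take what exists.
        sample.extend(members[:quota_per_stratum])
    return sample
```

A scheme like this also makes the shortfall problem described in the next sections visible: when a stratum holds fewer providers than its quota, equal representation across levels simply cannot be achieved.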

Although sampling was well planned, the implementation of the evaluation study depended on the team's ability to recruit a voluntary sample of the three types of child care providers in each service delivery area, at each quality level of PTQ. Throughout the evaluation, researchers met regularly with program implementers, including child care resource and referral mentors, Indiana Association for the Education of Young Children (IAEYC) quality advisors, and state child care licensing consultants, to confer about garnering child care providers' support for the evaluation research. Because the evaluation team had no direct interaction with large numbers of providers prior to sample recruitment, it was essential that program partners who did have regular contact understand and encourage providers to participate in the evaluation research if they were randomly selected. Fliers and talking points about the evaluation were developed for organizations and individuals involved in promoting PTQ, informing parents about the evaluation, and recruiting and supporting providers as they joined the system.

The university researchers made decisions regarding the procedure for sample selection in line with the state's need to include all three types of regulated providers. In order to ensure adequate representation at each of the four QRIS quality levels, equal numbers of providers were selected at each QRIS level when possible. Sample selection and data collection began in each region approximately 1 year after the start of PTQ funding to allow enough time for significant numbers of providers to enroll and receive quality ratings. This 1-year period was established based on enrollment projections by state leaders.

Initially, one infant/toddler classroom and one preschool classroom were to be randomly selected for participation in the evaluation in each selected licensed child care center and unlicensed registered child care ministry center. Licensed family child care homes were also randomly selected. Up to two children and their parents were randomly selected in each classroom or family home to participate in the child assessment and parent interviews, to equal 1,040 child and parent assessments. It was planned that the research team would complete the assessments in four waves over a 2-year period.

Adjusting the Sample

Mid-course in the evaluation, it became necessary to adjust the sample size for two reasons. First, although providers enrolled in significant numbers in the first 2 years—more than 80% of all licensed child care centers and more than 50% of all licensed family child care homes—movement to higher PTQ levels went more slowly than predicted in some regions. Therefore, recruitment of equal numbers of providers at each PTQ quality level was not possible. Second, unlicensed registered child care ministry centers were not enrolling in PTQ at the rate anticipated. The ministry programs, according to PTQ field staff, had significant difficulties meeting the minimal requirements needed to enroll in the program, which were similar to the requirements for licensed centers. As a result, the projected sample quotas for types of providers and PTQ quality levels were not met in each of the regions. Ongoing discussions with state PTQ leaders allowed the researchers to make adjustments in the sampling plan during the data collection process. The new strategy called for sampling reduced numbers of registered child care ministries in each region while still allowing for a statistically adequate sample of available providers statewide. Ultimately, the number of child care providers included in the evaluation sample was 276, including 95 licensed child care centers (135 classrooms), 168 licensed family child care homes, and 12 unlicensed registered child care ministries (14 classrooms). These numbers represent 27% of licensed child care centers, 11% of family child care homes, and 20% of unlicensed registered child care ministries that had enrolled in PTQ by the end of data collection in September 2011. In all, 557 children and 450 parents from the three types of providers participated in the evaluation. Furthermore, in response to the low enrollment rate data for unlicensed registered child care ministries during the first year of PTQ, the state and local child care agencies undertook several new funded projects to encourage more ministries to enroll and to support them in meeting the Level 1 and Level 2 quality standards.

Launching PTQ

Early in the planning process, the state PTQ managers made the decision to roll out the program to different areas of the state in four waves, over the course of 1 year, based upon their perceived readiness to implement the statewide model and in an effort to address implementation issues regionally as they arose. The original two pilot regions were the first to implement the statewide PTQ, because it was believed that there would be wide acceptance of the now-familiar program in these regions.

Data collection by the PTQ evaluation team began 9 months to 1 year after the child care resource and referral agency in each region was funded to enroll and support providers. This allowed time for each region to start up, enroll providers of all types, and assess each provider for initial placement in a PTQ quality level in sufficient numbers to make evaluation data collection feasible. Data collection was estimated to take about 7 months in each of the four waves. Preliminary evaluation reports from each region (11 regional reports in total) were issued to the state Bureau of Child Care 2 months following each data collection period. The purpose of the regional reports was to give the state Bureau of Child Care an early preview of how PTQ was rolling out in each region. Each report included preliminary results on regional child care quality scores, child development outcomes, and provider and parent survey responses.

Six months after the first phase of data collection for each region concluded, a randomized telephone survey of parents of preschool children, again stratified by the 11 regional districts in the state, was conducted using a random-digit dialing procedure with all available households with children younger than 6. The purpose of this survey was to assess general public awareness, understanding, and use of PTQ. The Purdue researchers conducted follow-up telephone surveys with each regional sample of child care providers 6 months after the assessment visit to determine whether they had advanced in PTQ level and to explore current motivations, incentives, and obstacles in relation to PTQ. Following this additional data collection in each state region, a second regional evaluation report was issued to the state leaders.


State Child Care Administrator: "Our collaborative partnership with the research team allowed the Bureau of Child Care to make timely decisions in several key areas. For example, during rollout the Bureau of Child Care knew that the participation of registered ministries would be a challenge; however, the difficulties that the evaluation team had in recruiting registered ministry participants highlighted the full extent of these difficulties. The Bureau of Child Care responded with a number of initiatives designed to provide increased technical assistance and supports to these programs. The collaborative approach to the evaluation also allowed for improved decision making in regard to evaluation timing and sampling based on the realities of system implementation. The State was able to provide a program viewpoint to these issues, while the evaluation team ensured that the integrity of the data collection was not jeopardized."

KEY DECISIONS: SHARING AND USING EVALUATION RESULTS

In traditional evaluations, once the evaluation research begins, communication often ends between the evaluator and the program management until the final evaluation report is issued. However, in the PTQ evaluation, communication and collaboration intensified over the course of data collection. At first, the principal investigator and other members of the core evaluation team participated in monthly telephone calls with managing staff from the Bureau of Child Care. These phone meetings served to update everyone on the progress of the evaluation research and important program events that might affect the evaluation research. In the last 2 years of the evaluation project, the Purdue evaluation team also actively participated in periodic face-to-face program partner meetings, which included all organizations actively involved in the operation of PTQ. Program partners included (a) the state child care resource and referral network, which provided enrollment and mentoring for providers and public information about PTQ for families; (b) the IAEYC affiliate, which provided mentoring and scholarship support to providers striving to reach Level 3 and Level 4; (c) the company engaged to provide the PTQ quality ratings; (d) licensing consultants from the Bureau of Child Care; and (e) the local United Way, which was launching its own initiative to help raise child care quality in a specific region of the state. The decision to foster open communication among the researchers, the state PTQ managers, and program partners allowed for discussion of preliminary evaluation findings, problem solving about obstacles encountered in the evaluation, and discussion of program improvements that could be made on the basis of early data. Although program evaluation can sometimes be viewed as judgmental or punitive, these collaborative discussions evolved as a process that helped all involved to understand how the implementation of PTQ was proceeding.

The decision to provide interim evaluation reports at the conclusion of each phase of data collection for each of the 11 PTQ service regions in the state proved valuable. For example, early results from parent surveys suggested that the level of awareness of PTQ was relatively low (19%), even among parents whose children were placed with PTQ providers. Also, the surveys showed that among parents who were aware of PTQ, almost all had learned about the program directly from their own child care provider. The random public parent survey also had an impact on early program planning. The first statewide general public survey showed that 12% of parents in Indiana had heard of PTQ. Again, the parents in the general public who knew about PTQ reported that their primary source of information about PTQ was their own child care provider. As a result of this finding, the state developed a specific marketing package for child care providers to use with parents, including pamphlets, yard signs, and talking points. A second general public survey was then planned to determine whether this marketing campaign was effective. This second statewide survey, conducted 1 year after the first survey, came about because of these ongoing communications with the state program leaders.

Another benefit of the interim evaluation reports and collaborative discussions with program leaders and partners was that they allowed the research team to discuss preliminary findings and to use program partners' assistance in interpreting research results. For example, one early finding was that scores on the Environmental Rating Scales, Personal Care subscale (including health and safety items, such as diapering, hand washing, etc.) were quite low across all PTQ levels and provider types. Discussion with program leaders and program partners revealed that this was an area that mentors, quality advisors, and licensing consultants did not directly assess or address in their visits. As a result, the state child care resource and referral agency developed a new online training module to help mentors and advisors address issues of health, safety, and personal care with providers.

Another finding that emerged from the preliminary data analyses was that although global

environmental quality was generally positively correlated with PTQ quality ratings for licensed

child care centers and licensed family child care homes, quality scores were variable, and the

differences in scores from Level 1 to Level 4 were greatest for family child care providers

(see Table 3). This was not totally unexpected—family child care providers were expected to

have a wider range of global quality before PTQ implementation, and as smaller operations, they

may be able to make quality changes more effectively than centers.

TABLE 3
Environment Rating Scale Global Quality Scores by Paths to QUALITY™ Level: Descriptive Statistics

Measure                                         n    Min    Max     M     SD
ECERS-R (centers, preschool classrooms)
  Level 1                                      19   1.70   5.47   3.85   0.85
  Level 2                                      30   2.40   5.14   4.05   0.66
  Level 3                                      18   2.88   5.85   4.35   0.66
  Level 4                                      23   2.93   5.74   4.62   0.70
ITERS-R (centers, infant–toddler classrooms)
  Level 1                                      14   2.00   5.08   3.74   0.95
  Level 2                                      18   2.32   4.81   3.87   0.66
  Level 3                                       8   3.19   5.06   4.03   0.52
  Level 4                                      17   3.26   5.54   4.43   0.65
FCCERS-R (family child care homes)
  Level 1                                      51   1.43   4.53   2.85   0.63
  Level 2                                      42   1.84   5.58   3.33   0.70
  Level 3                                      48   2.28   5.16   3.58   0.67
  Level 4                                      26   2.39   5.73   4.04   0.91

Note. ECERS-R = Early Childhood Environment Rating Scale–Revised; ITERS-R = Infant Toddler Environment
Rating Scale–Revised; FCCERS-R = Family Child Care Environment Rating Scale–Revised. Min and Max give
the range of observed scores.
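
For readers who want to reproduce this style of summary from their own observation data, the brief
pandas sketch below computes the same descriptive statistics. The file name and column names
(measure, ptq_level, ers_score) are hypothetical; no data file accompanies the article.

    import pandas as pd

    # Hypothetical long-format data: one row per observed classroom or home,
    # with the ERS instrument used, the provider's PTQ level, and the score.
    df = pd.read_csv("ers_scores.csv")  # columns: measure, ptq_level, ers_score

    summary = (
        df.groupby(["measure", "ptq_level"])["ers_score"]
          .agg(n="count", Min="min", Max="max", M="mean", SD="std")
          .round(2)
    )
    print(summary)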

However, these findings provoked discussion in the program partner meetings. Why did family child care providers

display the greatest quality contrasts? Was the level of variability in observed quality in all pro-

viders at each level unacceptably high? And why did licensed centers, which had higher mean scores
for global quality than licensed family child care homes, not reach higher levels of quality at
PTQ Level 4, for which they must be nationally accredited?

Program leaders used these discussions to devise marketing strategies emphasizing the dramatic

improvements in home-based child care in PTQ. The evaluation researchers came away from the

discussions with plans to analyze the data further to determine the degree of quality variability

and which specific items in the ERS were most responsible for the top-rated centers’ lower

quality scores. Subsequent study of the quality data resulted in a recent report that pinpointed

the lower quality indicators observed in Level 3 and Level 4 providers. These results are

currently being used by PTQ committees to review and possibly revise quality standards, rating

procedures, and plans for training and technical assistance to providers.
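
The article does not describe the authors' analytic code, but one plausible way to quantify
within-level variability and to flag the lowest scoring ERS items among top-rated providers is
sketched below in Python; the file and column names are hypothetical, invented for illustration.

    import pandas as pd

    # Hypothetical item-level data: one row per provider per ERS item.
    items = pd.read_csv("ers_items.csv")  # columns: provider_id, ptq_level, item, score

    # Within-level variability of overall quality (coefficient of variation).
    provider_means = (
        items.groupby(["ptq_level", "provider_id"])["score"].mean().reset_index()
    )
    cv = provider_means.groupby("ptq_level")["score"].agg(lambda s: s.std() / s.mean())
    print(cv)

    # ERS items with the lowest mean scores among Level 3 and Level 4 providers.
    top_levels = items[items["ptq_level"].isin([3, 4])]
    print(top_levels.groupby("item")["score"].mean().nsmallest(10))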

Evaluation results indicating a great deal of variance in quality among providers at the highest
PTQ levels made it clear that an in-depth examination of the PTQ rating process was needed. In
another follow-up project, the researchers are currently partnering with

administrators to examine the procedures and processes involved in PTQ ratings and technical

support. Researchers are observing PTQ rating visits and conducting focus groups with PTQ

raters, mentors, and quality advisors to collect more information about the PTQ system from

the program partner perspective. These observations and interviews, in conjunction with the PTQ
evaluation results, led to a further conclusion: an in-depth review of the PTQ quality standards
was warranted, as was a health and safety checklist that could be used during both technical
assistance and rating visits. Researchers are currently working with a committee

composed of PTQ administrators, mentors, quality advisors, and child care licensing personnel

to review PTQ standards and recommend changes.

State Child Care Administrator: "The preliminary reports were a great benefit to the State. They allowed us to be much more responsive to needs within the system. We didn't have to wait 2 years to realize that we were on the wrong path with a certain approach or that providers were in need of different supports in certain areas. We used the preliminary feedback and created additional trainings and supports for providers and tailored the consumer education approach to make it more effective. We were also able to use the preliminary data to show the success of the rollout and the initial investments. This helped the Bureau of Child Care develop additional support from other key stakeholders."

Dissemination of findings from the evaluation was not limited to the state program partners.

Throughout the evaluation, the researchers were able to discuss the evaluation plan and present

preliminary results to state professional groups, other states’ QRIS researchers, and federal offi-

cials, which promoted state and national awareness of PTQ. These public presentations helped

the state leverage additional partnerships to support quality improvement efforts via collabora-

tive research networks. The university evaluation team participated in the federal Quality Initia-

tives Research and Evaluation (INQUIRE) working group, part of the Child Care Policy

Research Consortium sponsored by the federal Office of Planning, Research and Evaluation,

and presented preliminary findings of the evaluation at national conferences during the course

of the evaluation.

DISSEMINATION OF EVALUATION FINDINGS

As described previously, throughout the evaluation process the researchers shared formative

reports with state leaders and key stakeholders. At the conclusion of the evaluation, the

university evaluation researchers, along with other key partners in PTQ, participated in a series

of statewide regional meetings at which the final evaluation results were shared with child care

providers and other stakeholders. The goal of these meetings was to give an overview of the

results of the first 2 years of PTQ, celebrate the documented successes of the program, and

discuss the areas in which PTQ fell short of its goals—all in the interest of stimulating public

input about future directions for PTQ. In addition, the university research team will soon author

a series of research policy briefs to summarize important evaluation findings and recommenda-

tions, highlighting specific topics of interest to child care providers and other stakeholders.

Based on discussions with PTQ program partners and other stakeholders, the briefs will focus

on PTQ quality improvement results, providers’ experiences, parents’ awareness and percep-

tions, and children’s access to PTQ providers and their developmental status.

CONCLUSIONS

The evaluation of PTQ was a multiyear project that adopted a developmental approach in which

university evaluators became a part of a team that supported the implementation of PTQ (Patton,

1997). Developmental evaluation by definition involves an ongoing process of continuous
improvement, similar in structure and purpose to many QRIS models, which are designed to support
continuous quality improvement in the child care field.

In any evaluation, particularly one that is large in scale, ambitious, and publicly funded, the
potential exists for conflict between the evaluator, who is charged with seeking answers

to a set of questions, and the program partners, who want answers to the evaluation questions but

also want a high rate of participation and want the project to be widely perceived as successful.

Using a university partner to conduct a rigorous evaluation carried certain risks for the PTQ

program planners and partners, particularly if this statewide voluntary program was not imple-

mented as fully as anticipated. There were questions about whether the intensive nature of the

research would result in providers not enrolling in PTQ, not moving up to higher PTQ levels,

or not being willing to participate in the evaluation. By including the university researchers early

on as a part of the planning team, evaluation became part of the program planning process.

Researchers were not viewed as outsiders, and evaluation was not viewed as negative, but rather

as a positive contribution to the implementation of PTQ. University research partners, the

Bureau of Child Care, and other program partners were able to work collaboratively to address

challenges that arose in the implementation and evaluation of PTQ.

Several factors were central to the success of the evaluation and also can be considered key

components of developmental evaluation. Frequent communication, both face to face and via

distance technology, ensured that the state had an evaluation that met its needs and that the

university researchers could deliver an evaluation based in sound methodology, producing valid

results. Planning and conducting an evaluation study within a large state system can be compli-

cated, but the evaluation implementers and researchers made intentional efforts to communicate

goals and strategies. Early and continuing involvement by the evaluation researchers in program

planning was another key factor. Although the evaluation objectives outlined at the beginning

did not change, processes were carefully adjusted in some cases to accommodate the needs of

the program partners. Sample sizes were decreased in order to address the low participation rates

of registered child care ministries and the time it was taking for providers to increase their PTQ

levels. The decision to alter the sampling plan was made jointly with the state partners, but only
after the state was assured that the change would not affect the rigor of the evaluation results. By including a

second, unplanned general parent survey, the university evaluation team was able to assist the

state and program partners in monitoring statewide marketing efforts over time.

Another factor in the success of the evaluation was the frequent reporting of evaluation results,

which became a focus of continued communication with state program partners. Although issuing

24 reports over 2 years required substantial effort on the part of the university researchers, this

became a way for program partners to keep informed about preliminary results as they were gen-

erated. Frequent reporting resulted in early program improvements. For example, after the third of

four waves of data collection was completed, representatives from the program partners met to

discuss preliminary findings. Researchers presented ERS scores by level and type of care as well

as descriptive information about the providers’ and parents’ experiences and child outcomes. Dur-

ing the meeting researchers and program partners were able to discuss an area of needed quality

improvement—providers’ health and safety procedures, as reflected in ERS scores. Program partners

brainstormed ideas with researchers about program improvement and began efforts to increase

awareness of health and safety procedures among providers, coupled with new methods of pro-

vider training. This shared in-depth examination of early evaluation results allowed PTQ partners

to be proactive about addressing early findings instead of reacting to one final evaluation report.

Consequently, many conversations in the midst of evaluation data collection allowed program

partners to garner support for program improvement and work toward altering the PTQ system,

with a greater focus on children’s health and safety.

A final factor contributing to the success of this evaluation and an underlying component of

developmental evaluation is the concept of relationship building between program partners and

the evaluators (Patton, 1997). The definition of developmental evaluation invokes the term team, but in the PTQ evaluation process, working relationships were built in which university partners

were used not only for QRIS evaluation but also in other consultative roles. These relationships

led to using the university team to provide training and technical assistance to program partners

in the PTQ system, such as improving the reliability of the PTQ quality ratings.

Discussions have also focused on how to integrate state data systems so that data collected in

the evaluation can be used with other statewide projects, such as mentoring, professional devel-

opment, and other PTQ advancement projects.

State Child Care Administrator: "The Bureau of Child Care has found that the value of the relationship with the research team extends well beyond the preliminary evaluation. The expertise and depth of understanding of PTQ by the university partners has been helpful in the enhancement of Indiana's professional development system, improving interrater reliability and determining the effectiveness of other quality improvement initiatives.

"Engaging in an evaluation of our QRIS at this depth could have been seen as a risk; however, it is much riskier to not evaluate the effectiveness of such important work. We must invest our limited funds in those initiatives that have the most value and positive impact for children. We must continually evaluate the effectiveness of even the most successful program and be willing to evolve as we learn more. The

developmental approach to the research process has made the evaluation the most valuable tool the Bureau of Child Care has to ensure the successful design, implementation, and continuation of PTQ."

Although a developmental approach was productive in this evaluation, it may not work for all

state QRIS projects. There are challenges in using this approach. For example, all program

partners—from university researchers to state administrators—need to be cognizant of what each

partner is able to bring to the process. The program partners need to clearly articulate what is

needed from the evaluation from their perspective. Is it data on child care quality, child out-

comes, provider perspectives, parent perspectives, or program partner perspectives? Researchers

need to be mindful of their role as a trusted source of systematic, unbiased data. In this case, we

found that stating from time to time both our need to remain unbiased and our willingness to

provide data and suggestions for program improvement was helpful and respected. Developmen-

tal evaluation takes more time than traditional outside-source evaluation. All parties need to be

aware of this aspect and plan for adequate time and funding for productive collaboration to

occur. As in all successful collaborative partnerships, researchers and program managers need

to make efforts to communicate clearly and often about their goals, needs, and important

constraints. If the conditions are right, a developmental approach to program evaluation and data

sharing can result in efficient and effective use of evaluation resources.

ACKNOWLEDGMENTS

This research was supported by a contract with the Indiana Family and Social Services

Administration.

REFERENCES

Barnard, W., Smith, W. E., Fiene, R., & Swanson, K. (2006). Evaluation of Pennsylvania’s Keystone STARS Quality Rating System in child care settings. Retrieved from http://www.pakeys.org/docs/Keystone%20STARS%20Evaluation.pdf

Barnett, W. S., & Goffin, S. G. (2012). Early Childhood Research Quarterly—special issue: call for papers, quality rat-

ing and improvement systems (QRIS) as change agents. Retrieved from www.journals.elsevier.com/early-childhood-

research-quarterly/call-for-papers/quality-rating-and-improvement-systems/

Elicker, J., Langill, C., Ruprecht, K., & Kwon, K. (2007). Paths to QUALITY: A child care quality rating and

improvement system for Indiana, What is the scientific basis? (Tech. Rep. No. 1). West Lafayette, IN: Purdue

University, Center for Families.

Elicker, J., & Thornburg, K. R. (2011). Evaluation of quality rating and improvement systems for early childhood programs and school-age care: Measuring children’s development (Research-to-Policy, Research-to-Practice Brief

OPRE 2011-11c). Washington, DC: U.S. Department of Health and Human Services, Administration for Children

and Families, Office of Planning, Research and Evaluation.

Gamble, J. (2008). A developmental evaluation primer. Retrieved from http://tamarackcommunity.ca/downloads/vc/

Developmental_Evaluation_Primer.pdf

Harms, T., Clifford, R., & Cryer, D. (1998). Early childhood environment rating scale–Revised edition. New York, NY:

Teachers College Press.

Harms, T., Cryer, D., & Clifford, R. (2003). Infant toddler environment rating scale–Revised edition. New York, NY:

Teachers College Press.

Harms, T., Cryer, D., & Clifford, R. (2007). Family child care environment rating scale–revised edition. New York, NY:

Teachers College Press.

Indiana Family and Social Services Administration. (2011). FSSA: Paths to quality home page. Retrieved from http://

www.in.gov/fssa/2554.htm

Indiana State Law. (1992). 470 IAC 3–4.5–1, Rule 4.5, Child Care Facilities; Registered Day Care Ministries. Retrieved from http://www.in.gov/fssa/files/Rule4.5.pdf

Jackson, R., McCoy, A., Pistorino, C., Wilkinson, A., Burghardt, J., Clark, M., . . . Swank, P. (2007). National evaluation

of early reading first: Final report. Washington, DC: U.S. Government Printing Office.

Kisker, E. E., Paulsell, D., Love, J. M., & Raikes, H. (2002). Pathways to quality and full implementation in Early Head Start programs. Retrieved from http://www.mathematica-mpr.com/publications/PDFs/pathwayfnl.pdf

Langill, C., Elicker, J., Ruprecht, K., Kwon, K., & Guenin, J. (2009). Paths to QUALITY—A child care quality rating

and improvement system for Indiana: Evaluation methods and measures (Tech. Rep. No. 2). West Lafayette, IN:

Purdue University, Center for Families.

Love, J. M., Kisker, E. E., Ross, C. M., Schochet, P. Z., Brooks-Gunn, J., Paulsell, D., Boller, K., Constantine, J., Vogel,

C., Fuligni, A. S., & Brady-Smith, C. (2002). Making a difference in the lives of infants and toddlers and their

families: The impacts of Early Head Start. Washington, DC: U.S. Department of Health and Human Services.

Lugo-Gil, J., Sattar, S., Ross, C., Boller, K., Tout, K., & Kirby, G. (2011). The quality rating and improvement system

(QRIS) evaluation toolkit (OPRE Report No. 2011-31). Washington, DC: U.S. Department of Health and Human

Services, Administration for Children and Families, Office of Planning, Research and Evaluation.

National Child Care Information and Technical Assistance Center. (2009). QRIS definition and statewide systems. Retrieved from http://nccic.acf.hhs.gov/print/resource/qris-definition-and-statewide-systems

National Child Care Information and Technical Assistance Center. (2011). QRIS goals and objectives. Retrieved from

http://nccic.acf.hhs.gov/print/pubs/goals-objectives.html

National Institute of Child Health and Human Development Early Child Care Research Network. (1997). Child care in

the first year of life. Merrill-Palmer Quarterly, 43, 340–360.

Norris, D. J., & Dunn, L. (2004, November). Reaching for the Stars: Family child care homes validation study final report.

Stillwater, OK: Oklahoma State University.

Norris, D. J., Dunn, L., & Eckert, L. (2003, November). Reaching for the Stars center validation study final report.

Stillwater, OK: Oklahoma State University.

Patton, M. Q. (1994). Developmental evaluation. American Journal of Evaluation, 15, 311–319. doi:10.1177/109821409401500312

Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New

York, NY: Guilford Press.

Paulsell, D., Mekos, D., Del Grosso, P., Rowand, C., & Banghart, P. (2006). Strategies for supporting quality in kith and

kin child care: Findings from the early head start enhanced home visiting pilot program evaluation final report.

Retrieved from http://mathematica-mpr.com/publications/PDFs/kithkinquality.pdf

Thornburg, K., Mayfield, W. A., Hawks, J. S., & Fuger, K. L. (2009). The Missouri quality rating system school readiness study. Retrieved from http://mucenter.missouri.edu/MOQRSreport.pdf

Tout, K., Starr, R., Soli, M., Moodie, S., Kirby, G., & Boller, K. (2010). The child care quality rating system (QRS)

assessment: Compendium of quality rating systems (OPRE Report). Washington, DC: U.S. Department of Health

and Human Services, Administration for Children and Families, Office of Planning, Research and Evaluation. Avail-

able at http://www.acf.hhs.gov/sites/default/files/opre/qrs_compendium_final.pdf

Tout, K., Zaslow, M., Halle, T., & Forry, N. (2009). Issues for the next decade of quality rating and improvement systems

(OPRE Issue Brief No. 3, Publication No. 2009-014). Washington, DC: U.S. Department of Health and Human

Services, Administration for Children and Families, Office of Planning, Research and Evaluation.

Zellman, G. L., Brandon, R. N., Boller, K., & Kreader, J. L. (2011). Effective evaluation of quality rating and improve-

ment systems for early care and education and school-age care (Research-to-Policy, Research-to-Practice Brief

OPRE 2011-11a). Washington, DC: U.S. Department of Health and Human Services, Administration for Children

and Families, Office of Planning, Research and Evaluation.

Zellman, G., & Fiene, R. (2012). Validation of quality rating and improvement systems for early care and education

and school-age care (Research-to-Policy, Research-to-Practice Brief OPRE 2012-29). Washington, DC: Office of

Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human

Services.

Zellman, G., Perlman, M., Le, V., & Setodji, C. M. (2008). Assessing the validity of the Qualistar early learning quality

rating improvement system as a tool for improving child-care quality. Santa Monica, CA: Rand Corporation.
