
Internal evaluation in a self-reflective organization: one nonprofit agency's model

Ann M. Minnett*

Salesmanship Club Youth and Family Centers, Inc., 106 East Tenth Street, Dallas, TX 75203, USA

Evaluation and Program Planning 22 (1999) 353–362

* Tel.: +1-214-941-9192; fax: +1-214-946-7140. E-mail address: [email protected] (A.M. Minnett)

A portion of this paper was presented at the annual meeting of the American Evaluation Association in San Diego, November 1997.

Abstract

Nonprofit agencies face the evaluation dilemma of how to adequately assess outcomes and build and evaluate the program from within using limited resources. This article presents an alternative evaluation/management model for nonprofits that calls for an expanded role for the internal evaluator. For the past 5 years, a trained internal evaluator has conducted ongoing evaluations of multifaceted service programs while simultaneously serving on the leadership team of a nonprofit agency. The self-reflective nature of the agency enhances evaluation use and organizational learning, but creates unique conditions under which to conduct evaluation studies. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Internal evaluation; Self-reflection; Organizational learning

1. Introduction

Internal evaluations provide learning organizations with invaluable information for program development when results are available to and fully understood by stakeholders. A major function of the internal evaluator is to facilitate that understanding, and in so doing, he or she may assume many roles within organizations (Love, 1991). Most internal evaluator roles are relegated to data collection, analysis and feedback in support of management's decision-making (Love, 1991; Mathison, 1991a). However, internal evaluators' roles could be expanded to include management because evaluators are well placed to serve in multiple capacities and are sufficiently sensitive to multiple perspectives at all levels of staff (Lyon, 1989). Clifford and Sherman (1983) encouraged evaluators to think of themselves as managers with experience in evaluation, basic research and planning, because those skills would make the evaluator appealing to the organization and would increase the likelihood of participating in program planning.

When internalized into the organization's operational systems, evaluation guides and models individual, group and organizational learning (Torres, Preskill & Piontek, 1996). Learning in organizations is the interaction of members' learning and the organization's capacity to nurture and facilitate learning and change. Learning occurs when members are given access to information and encouraged to reflect on their own values, beliefs, and assumptions at the same time that they process this information and act on their newly constructed knowledge. Levels of learning vary (Cousins & Earl, 1995a; Huber, 1991) according to the degree of self-reflection engaged in by the members. Argyris and Schön (1978) described three levels of learning in organizations. The lowest, single-loop learning, in which individuals change incrementally in reaction to stimuli, leads to instrumental learning but has little to do with self-reflection or organizational learning. Double-loop (generative) learning occurs when members articulate and reflect upon deeply held assumptions about their work; it can lead to shifts in organizational structure and practice, what Senge (1990) calls the organization's mental models or maps for its work. The highest level of organizational learning, labeled "deutero learning", involves the organization's capacity to learn how to learn. To achieve double-loop or deutero learning, it is critical that appropriate structures, mechanisms, and systems be developed that enable organization members to engage in critical reflection and dialogue, a shared reflection (Preskill & Torres, 1999; Wheatley, 1994).

The key linkage between individual and organizational learning occurs when evaluation findings are shared with members and they engage in a shared reflection about practice. Here, evaluation becomes part of the job itself, requiring ongoing reflection about performance and resulting in enlightenment (Owen & Lambert, 1995). Indeed, a learning organization renews itself through self-reflection. Evaluators are in a unique position to inform the mental model of leaders at any stage of the development of a given program (Stufflebeam, 1983) and, in doing so, influence decision-making and organizational learning. By enhancing the quality of decisions made within the organization, the internal evaluator helps build a culture of self-reflection (Torres et al., 1996).

Real organizational payoff occurs when participatory evaluation becomes fully integrated into organizational learning (Cousins & Earl, 1995b), because people in the organization will learn by getting involved in evaluation work themselves, and evaluation will be one of the major instruments of learning (Forss, Cracknell & Samset, 1994). The processes of evaluation support change in organizations by getting people to think empirically and teaching them to use data-based decision-making. One advantage of this process use of evaluation is its immense potential for organizational learning: it offers participants opportunities to be reflective and discriminating about their practice (Patton, 1997; Shulha & Cousins, 1997). Another advantage is that evaluators will likely become more involved in program planning and development (Bickman, 1994). And finally, evaluation can be woven into program practice because the two are planned simultaneously. Patton (1997) calls this developmental evaluation, in which the evaluator collaborates to conceptualize, design, and test new approaches in long-term processes of continual development.

Unfortunately, pressing needs within the nonprofit sector limit evaluation use. Funders are increasingly concerned about their grantees' accountability and are requiring additional evidence that programs do what they claim to do and that their services make a difference in clients' lives (e.g., Plantz, Greenway & Hendricks, 1997). Nonprofit agencies have finite resources, most of which are targeted for direct services, yet their directors face the evaluation dilemma of how to adequately assess outcomes and build and evaluate the program from within. The Independent Sector believes that regardless of limited resources, evaluation is always possible and each nonprofit agency must be concerned with "how to see that it gets done and how it can best fit within the organization's context" (Mayer, 1992). One way to see that evaluation gets done is to encourage program managers to assume the evaluator role in their organizations. A recent trend in evaluation treats evaluation as a leadership function of all managers and program directors (Patton, 1997), and certainly within smaller nonprofit agencies evaluation concerns typically fall to the executive director or program administrators. As nonprofits come under greater scrutiny for accountability for funding, we have seen evaluation increase in importance while the evaluator's role is assumed by practitioners and managers, persons who often lack the requisite technical skills and methodological and analytical expertise, not to mention the interest in such matters.

For the past 5 years, a trained internal evaluator (IE) has conducted ongoing evaluations of multifaceted service programs in a nonprofit agency that values and encourages self-reflection. The IE conducts evaluations in a fluid environment of interaction, feedback, and change while simultaneously serving on the agency's leadership team. This article is offered as an example of how evaluators can assume multiple roles and help resolve a pressing administrative need in the nonprofit sector.

2. The nonprofit agency: a self-reflective organization

The Salesmanship Club Youth and Family Centers, Inc. (SCYFC) provides a variety of comprehensive programs to children and families residing in the Dallas area. The oldest program is the Salesmanship Club Youth Camp, a year-round, 72-bed residential treatment facility for adolescents with behavioral and emotional disturbances (GAO, 1994; Loughmiller, 1965). The largest program, Outpatient Family Therapy, serves around 2500 Dallas County children and family members each year and also conducts a training institute for mental health professionals. The J. Erik Jonsson Community School serves impoverished children and families living in an inner-city neighborhood that is predominantly Hispanic. Social support services and parent education, skills training, citizenship, and ESL classes are also offered and shared among the three service programs. The Research and Evaluation Program (R&E) is separate from, yet equal to, the direct service programs in the organizational structure.

SCYFC's benefactor, The Salesmanship Club of Dallas, is a self-reflective organization that long ago recognized the importance of research-based program planning and decision making.[1] The membership conducts self-studies in which every member participates via questionnaire or focus group interview to determine long-range planning for the Salesmanship Club and the agency. For example, nearly 50 focus groups were held in the past year before the membership voted on (and passed) a $6 million capital campaign for the agency's expansion. Their thorough reflection on procedures and governance serves as a model and often challenges the agency to re-evaluate the goals and practices of the service programs.

Because we believe that self-reflection, based upon evaluation, leads to program improvement, evaluation is built into every service program, and that commitment is reinforced by the continual presence of research and evaluation projects and personnel. Self-reflection is woven into all agency processes as we continually question how to improve our work ("How does our work affect our clients? Are we providing the best level of service possible to our clients? If not, how can we improve?"). Every employee is hired with the knowledge that SCYFC values self-reflection and continuous program development as part of our mission, and all are invited to participate. The pervasive insistence on accountability, growth and development, and the emphasis on research and evaluation are powerful indicators of the value the agency places on self-reflection and learning.

3. Internal evaluation at SCYFC

The internal Research and Evaluation (R&E) Program has a full-time director (the IE), one or two graduate student assistants, an independent budget and its own programmatic mission. The R&E program was created in 1993 to support and facilitate SCYFC's service program development and accountability through grounded investigations in collaboration with service program experts. We also have a mission to disseminate what we have learned to colleagues and potential collaborators. The structure provides exclusive, independent status for R&E while simultaneously including R&E in the agency's operational organization. An External Evaluator (EE) has conducted long-term outcome evaluations of the residential treatment program since 1983 and of the community school since 1996.[2] The evaluators share technical expertise and occasionally consult with one another on methodological issues. Some data derived from internal evaluation studies are available to the EE as baselines for his outcome evaluations.

The Internal Evaluator functions as evaluator and administrator, with responsibilities that fall into three categories: designing and implementing evaluation studies and supervising others' research; participating in the agency's leadership team; and representing the agency as an administrator.

3.1. The process and role of evaluation

The IE is a participant observer who collaboratively engages in applied social research with practice-based experts. The evaluation process incorporates both deductive and inductive approaches and uses multiple methods, perspectives and measures for continuous program development. Our model is similar to focusing or evaluability assessment (Patton, 1997) in that the IE often guides program staff in their visioning process by keeping them grounded in clarifying goals, operationally defining practices and answering the questions "What would that look like?" or "How will you know when you're successful?". With the program mission, beliefs, values and goals in place, the working relationship between staff and R&E becomes more collaborative, and the evaluator assumes a greater role in the process (see Fig. 1).

Fig. 1. Research and evaluation process for continuous program development.

1. The Salesmanship Club of Dallas, a 75-year-old group of 500 business and civic leaders, operates the SCYFC as a charity. Yearly proceeds from the GTE Byron Nelson Classic, a PGA golf tournament, provide 93% of SCYFC's annual $4.5 million operational budget. Remaining funds are derived from client fees, The United Way, and foundation grants and awards.

2. J. Michael Coleman, Ph.D., is SCYFC's External Evaluator and is currently the Dean of Undergraduate Studies at The University of Texas at Dallas.


Typically, the evaluation loop begins when members question the effectiveness of a practice (Box 1) and believe it can be improved. The staff and IE collaborate (Box 2) to define the parameters of a study to evaluate the effectiveness of the practice. Next, the IE refines the design; that is, determines who will be involved, the methods and procedures for data collection and analysis, and the duration of the study (Box 3). After sharing the evaluation design with the staff to ensure that it melds with their practice and that it is doable, the program practice and evaluation study are implemented simultaneously (Box 4), a process that lasts anywhere from one week to one year, depending on the evaluation question. The data are then analyzed and synthesized by the IE (Box 5), who subsequently reports the results back to the staff (Box 6). When practitioners are satisfied with the results, practice remains unaltered (Box 7a) and the study ends. However, when results indicate that further refinements are needed (Box 7b), practitioners make adjustments based upon their experience and the evaluation results (Box 8a). We then implement identical methods and procedures to evaluate the altered practice. The loop continues until program experts are satisfied with results and adopt the practice (Box 7a) or until they decide to abandon the practice (Box 8b).
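In code-like terms, the loop is an iterate-until-adopt-or-abandon cycle. The Python sketch below is our illustration only: the function and parameter names are invented, and the comments map to the box numbers of Fig. 1.

```python
from typing import Callable

def evaluation_loop(question: str,
                    codesign: Callable[[str], dict],
                    run_study: Callable[[dict], dict],
                    staff_review: Callable[[dict], str],
                    adjust_practice: Callable[[dict], None],
                    max_cycles: int = 5) -> str:
    """A sketch of the continuous-program-development loop of Fig. 1."""
    design = codesign(question)          # Boxes 2-3: staff and IE define the study; IE refines it
    for _ in range(max_cycles):
        results = run_study(design)      # Boxes 4-5: practice and study run together; IE analyzes
        verdict = staff_review(results)  # Boxes 6-7: IE reports back; staff judge the results
        if verdict == "satisfied":
            return "adopt"               # Box 7a: practice remains unaltered; study ends
        if verdict == "abandon":
            return "abandon"             # Box 8b: practice dropped
        adjust_practice(results)         # Box 8a: staff adjust from experience plus results
        # identical methods and procedures are re-applied to the altered practice (next cycle)
    return "continue"                    # still iterating when this sketch stops
```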

The IE designs and implements evaluation studies for the community school to evaluate short-term outcomes and investigate practice. Comprehensive longitudinal studies are in place to track the cognitive and social development of each student and the status of their families across the course of their tenure at the school. Several process evaluations of practice in curriculum development, parent involvement and integrated team teaching are currently in progress. A series of studies of procedures in our residential program is also in progress, and we have recently created and implemented an evaluation of the short-term effects of family therapy services on clients' sense of self-efficacy.

Other evaluation-related functions include oversight of the agency's Institutional Review Board, dissemination of results not only to staff but to colleagues and potential collaborators, and retrieval of information from outside sources about exemplary practices and demographics that relate to our programs. Finally, the IE provides seminars and workshops to other nonprofit agencies about evaluation techniques and designs. The IE is not required to write grants or participate in fundraising activities, but would likely do so in other nonprofit settings.

3.2. The evaluation process within leadership

We believe that evaluation procedures should be built into program procedures and that evaluation findings should be used for the betterment of the programs. The IE uses a procedure similar to Sonnichsen's (1988, p. 142) Advocacy Evaluation, which holds that evaluators must view themselves as change agents and step away from the position of neutral observer to that of active participant in the decision-making process. His model suggests that the IE be actively involved with supervisors in the organizational process of discussion, approval, and implementation of recommendations. Contrary to those who believe that advocacy compromises the evaluator's objectivity (Scriven, 1997; Stake, 1997), the evaluator in Sonnichsen's view neither champions programs nor supports administrators, but acts as an "independent advocacy for needed change, arrived at through the conduct of an objective, valid, defensible evaluation of the program". Indeed, Sonnichsen believes that advocacy begins only after the completion of the evaluation study.

We agree. Involvement in program planning and development is a critical feature of the IE's role at SCYFC. We believe that objectivity and neutrality should guide the design and implementation of the evaluation study to the extent possible. In all candor, it would be very difficult, if not rather odd, for the IE to conduct long-term participant evaluations in this agency and retain neutrality. Siegel and Tuckel (1985) call that sort of behavior "dysfunctional" and argue that evaluation without advocating recommendations renders internal evaluation ineffectual.

The IE functions as a leader in the organization by advocating for evaluation results and providing an evaluator's perspective on issues. She brings evaluation findings to the planning table while other leaders bring their programmatic expertise and the practical limitations that they face in implementing their services. The leadership team (with the IE as a member) builds evaluation into program planning and implementation so that practitioners implement their programs while the evaluator conducts studies that are woven into the services. For example, family therapy clients provide periodic assessments of the status of the problems that brought them to therapy, and the information is accessible to the therapist (with the client's permission) for reflection. The IE then analyzes and synthesizes the data and provides the leadership team with thick descriptions along with quantitative results that are derived from sound studies. We believe that the IE's synthesis of the data is enhanced by her thorough knowledge of the issues and participants. As a result, the IE is positioned to facilitate evaluation use and follow-through, program refinement occurs routinely, and the process of reflection on practice is accorded importance. Issues of objectivity and neutrality do not pose a threat to this model because objectivity and neutrality guide the conduct of the study, while advocacy guides the IE's recommendations (Sonnichsen, 1988).

3.3. Leadership roles unrelated to evaluation

Organizationally, the IE serves as an ex officio consultant in Youth Camp planning meetings with regard to staff training and revisiting its organizational structure (mission, values, beliefs and goals). The same function exists in current Outpatient Family Therapy discussions of accountability as that program collaborates with other agencies to create a managed care network. The most concentrated participation occurs within the school, where the IE is a member of the school administration. Finally, the IE is a member of the agency's Leadership Council, representing the R&E Program but also helping to determine agency policy. For all three service programs and the agency as a whole, the IE is a program developer and decision-maker working collaboratively with others. The IE also represents the agency (as an administrator) to other groups in a variety of contexts, such as community development committees or nonprofit networks.

4. Issues that shape internal evaluation at SCYFC

4.1. Continuing relationships

Among the unique components of this model is the IE's continuing, multifaceted relationship with evaluation participants, who are at once subjects, stakeholders and collaborators over long periods of time. First, the longevity of the role alone means that the IE is accountable to colleagues for previous recommendations and therefore assumes some responsibility for outcomes. It follows that the IE has grown more invested in and committed to the organization's development. Put simply, the IE has a strong desire to see the organization have positive effects on clients.

Love (1991) describes the complementary, if not contentious, perspectives of program managers and evaluators and elaborates the stereotypes each usually holds of the other: data are ignored (in the IE's view), or real-world constraints prohibit action (in the manager's view). We believe that these differing perspectives are healthy in the self-reflective organization. Mathison (1991b) and others have written about the internal evaluator's vulnerability to co-optation, in which administrators exert pressure on the evaluator by relating rationales for behavior or background information for decisions they make in their practice, usually justifying why recommendations could not be followed. The realities of program implementation and political compromise do contribute to the IE/leader's understanding of the issues and synthesis of the data. Thus, the real danger for the IE is not co-optation by others, but a spontaneous loss of evaluation perspective (Browne & Wildavsky, 1983). It is unavoidable that the IE eventually becomes part of the culture and loses some perspective in this model, so it is especially critical that we capture multiple perspectives (Greene, 1997), including those of new staff or visitors who may question long-held agency beliefs or practices.

Because the R&E staff engage in long-term, collaborative relationships, we are constantly mindful of gaining and retaining the trust of individual staff at all levels. First, we rarely use the word "evaluation" with staff; they know of the program and our work simply as "research".[3] Evaluation has negative connotations for many and appears to give too much power to the IE, when we would prefer to engage in a collaborative work environment. Second, any information provided by staff or clients to an R&E study is treated as confidential by the researchers. In program development work, agency staff are the IE's clients, and everyone who contributes to the evaluation project (staff, families, children) is a study subject and is accorded rights (i.e., Joint Committee on Standards for Educational Evaluation, 1994). All results are reported on a group basis, and no individual is ever identified in written reports or in verbal communication to any stakeholder group, unless the IE judges that the individual is causing harm to another. Third, the IE does not conduct or participate in personnel evaluations other than for R&E staff. Fourth, we are respectful about the amount and type of information we request from individuals. In this setting, where most children, families and staff participate in R&E projects over long periods of time, we avoid requesting a great deal of information "just in case" we might need it later. Fifth, we collaborate with staff at all levels, in part so that they learn to trust the process and our work. More importantly, while the evaluation staff is trained in research and methodological design, we value the expertise of teachers, therapists, social workers, support personnel and administrators and rely on them to help us design and conduct effective, meaningful evaluations. Without trust, our entire collaborative model would fail and the benefits of evaluation use would disappear.

3. A later section of this article describes the necessity for the internal evaluator to speak the practitioner's language. We use the term research to meaningfully describe the collaborative process of refining practice and developing programs because it is helpful for practitioners to understand the work in that way. Therefore, we see no useful purpose in discussing the internal evaluation process with practitioners in evaluation terms, but rather focus on the process as a means to program development.
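The group-basis reporting rule described above can be pictured with a small aggregation sketch. It is illustrative only: the field names and the minimum cell size are our assumptions, not documented SCYFC policy.

```python
from collections import Counter

def group_report(responses: list[dict], group_key: str, min_cell: int = 5) -> dict:
    """Report results on a group basis only: count responses per group and
    suppress any group small enough to identify an individual. The min_cell
    threshold is an illustrative choice, not a documented SCYFC rule."""
    counts = Counter(r[group_key] for r in responses)
    return {g: (n if n >= min_cell else f"suppressed (n < {min_cell})")
            for g, n in counts.items()}

# Example: survey responses are reported per program, never per person.
rows = [{"program": "Youth Camp"}] * 12 + [{"program": "Community School"}] * 3
print(group_report(rows, "program"))
# {'Youth Camp': 12, 'Community School': 'suppressed (n < 5)'}
```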

We tripped along a rocky road to trust. For example, 4 years ago the school was engaged in extensive restructuring as its focus shifted from treatment of children with emotional and behavioral disturbances to a community school that emphasizes academic excellence for inner-city children. The IE designed a comprehensive study to evaluate the appropriateness of former practices with the new population and then enthusiastically presented the plan to the entire staff. Two teachers cried during that meeting, several more were visibly shaken, and a contingent of teachers later complained to administrators that they were unprepared for such intrusions into their practice. R&E in the school was derailed for an entire year. The evaluator used that time to engage teachers individually, supporting each in unrelated action research projects of their choosing in order to regain their trust and cooperation in the larger study (which we began the following year). On another occasion, we were required to initiate an outcome evaluation of family therapy services, which are notoriously difficult to evaluate (Seligman, 1995). After engaging in rather heated debates about the need for and merits of outcome evaluations, the IE and therapists came to an understanding that we would collaboratively design the most palatable study possible. As the pilot study unfolded, therapists suggested procedural changes, which we made. When the pilot data were presented to the group, they could see the value of the information and were more receptive to the design revisions and subsequent evaluation study.

Difficulties such as these forced the R&E program to be more collaborative and to respect participants' opinions and expertise (Cousins & Earl, 1995a; Greene, 1988; Preskill & Torres, 1999). We have found that this attitude engenders their trust and that they become more participative in our work. "Trusting the process" as we do would not be possible without the support of a very secure leadership team, which has opened the agency to constant, pervasive evaluation and self-reflection (Senge, 1996).

4.2. Communication roles and functions

The evaluator translates evaluation results into practical applications in three ways. First, the R&E team investigates, summarizes and presents exemplary practice from a variety of disciplines to our staff. Practitioners decide its value and then assimilate those findings into their practice to refine the program. While this appears as instrumental learning on the surface, this information sometimes precludes an evaluation study and self-reflection.

Second, and most often, the IE communicates results from internal studies to our self-reflective practitioners using the professional language that is most meaningful to them. For example, SCYFC's family therapists mull over decisions based on theoretical constructs, abstract concepts and empathic acceptance of clients. They eschew diagnostic labels and are most receptive to hearing results about clients' perspectives and how clients managed to change their own lives. On the other hand, community school teachers are grounded in sequential events, details and goals for learning. They are exceptionally interested in strategies that will help them improve practice in the classroom, such as integrating curriculum, including parents in their children's learning, and managing time. The action-oriented residential treatment team deals with a volatile population and constantly makes critical split-second decisions. They espouse a solution-focused approach to practice with children and families, and want direct, succinct information drafted in language that focuses on the solution rather than the problem. Any good evaluator should alter communication strategies to fit the audience's concerns (Torres et al., 1996). SCYFC's model simply enhances the quality of information provided by the IE because the IE is well grounded in the work and languages of the agency (Goering & Wasylenki, 1993).

The third way in which the IE functions as communicator in this multidisciplinary agency is by acting as translator between programs, usually at the time feedback is presented and always during program planning. The IE, who works with all programs and has leadership responsibilities, is able to provide feedback in the context of organizational culture and, in so doing, informs others about larger issues in the agency.

There are also opportunities to make smaller connections. Recently, the IE conducted telephone interviews of selected families who had dropped out of the admissions process for our residential treatment program. Over 1100 families express interest in the program each year, but the camp is able to serve about one-fifth that number. The remainder are referred elsewhere, find other placement, or simply withdraw from the process. We were interested in the reasons that families decide to discontinue the admissions process and what happens to them afterward. We found that some families dropped out of admissions because their child had a "scared straight" experience regarding the possibility of residential treatment. Their family problems subsequently lessened in severity, and they reported that they needed something less intrusive than residential treatment for their child. These families are now a source of referrals for our Outpatient Family Therapy program.

There is an additional component to communication that has no name and is not included in a job description. Self-reflective organizations benefit from a "discusser of the undiscussables" (Senge, 1997), someone who can ask hard questions. Mathison (1991a, p. 177) stated that internal evaluation can maintain its integrity only when evaluators see the big picture and dare to "pursue potentially unpopular issues". SCYFC's IE is perfectly positioned to pose questions that appear naïve to everyone but those for whom the issue is undiscussable. This role is permitted because the IE is both part of the system that is questioned and yet adequately set apart to be moderately objective.

The evaluator in this model acts as one of several liaisons/communicators to the Board and other funders. Occasionally this includes formal presentation of program outcomes, but most often it involves describing and explaining a complex set of programs and services to Board members and others. Our funders are business leaders who are comfortable with succinct, bottom-line bullet points about client status, organizational process, and the status of education and social services. The external evaluator creates annual reports of long-term follow-up evaluations of the residential treatment and school programs. The internal evaluator submits occasional reports about the residential program's development and annual reports of the Outpatient Family Therapy outcome evaluations, the school's continuous program development and the status of research and evaluation at SCYFC. The research and evaluation report addresses accountability issues for R&E and recounts the year's evaluative processes.

4.3. Evaluation studies and information are accorded importance

When our board of directors formed a separate R&E program and placed the IE in a leadership position, the agency "walked the walk" and reinforced to staff and others that R&E is crucial to the agency's success. To compensate for the scope of our work and the size of the R&E staff, we have established a community of researchers, similar to Preskill and Torres' (1999) community of inquirers, in which each person's expertise and perspective are valued and real effort is made to nurture their self-reflection (Minnett, 1998b, 1999). In the spirit of collaborative learning (Torres et al., 1996), we believe that evaluation research is everyone's responsibility, and the community of researchers allows for that expression. Indeed, individual staff are encouraged and recognized for their research efforts and participation. We believe that individuals who feel ownership in the research process are more likely to assume research responsibilities and understand the information. We have found that they are also more likely to make multiple uses of the data (see below) as well as become more self-reflective. The staff is also invited to make sense of the findings, which acknowledges their expertise and likewise enhances the accessibility of the data.

4.4. We ask meaningful questions

An internal evaluator is more likely than an external evaluator to know which questions are most appropriate and meaningful, and when to ask them (Lyon, 1989). The IE in our model has not only the advantage of an insider, but also the added perspective of viewing issues in terms of larger organizational systems and processes. This also allows the IE to provide for continuity of project development, where one research question builds upon another. And finally, the evaluator provides some evaluative continuity by coordinating the next set of questions that the leadership asks about the agency or that program directors ask about their programs.

An example of the value of continuity in evaluation for continuous program development has occurred in the school. The community school is developing a team-teaching model in which each self-contained classroom has two certified teachers who integrate all subjects throughout the day.

• Our first study identified two successful teams and examined in depth their perspectives on their work together (Minnett, Kaye, Bryant, Wetzel & Camacho, 1997). Teachers were interviewed about philosophies of education, strategies, training, attitudes about teaching, and working relationships with their partner and the administration.

• The second study used what we learned about teachers' views and observed how they interacted in the classroom (Minnett, 1998a). Each pair of teachers was observed repeatedly for one semester, across a variety of classroom activities, to see how the teachers communicated with one another and negotiated, how much time they spent with students, and how they divided responsibilities. Teachers were also observed as they planned.

• A third study examined qualities, characteristics and criteria for partnering teachers, using qualitative interviews of all school personnel.

• The fourth study compared what we had learned in-house with what others have done vis-à-vis teaming and integrated curriculum, and we concluded that our model of teaming was inappropriate for junior high grades and warranted change. New teaming practices were developed to acknowledge junior high teachers' expertise (they are not generalists, as are the elementary teachers on whom the model was patterned) as well as adolescents' developmental need for broader social affiliations.

• The next study investigated the effects of the new procedures in the 7th and 8th grades, and results showed continued difficulty with the model for this age group. As a result of our work and the agency's increasing interest in early intervention, grades 7 and 8 were dropped from the school at the close of the 1997–1998 school year and a new K–3 program was installed in the school.

• Our current study of teaming includes all teachers and administrators in creating a Teaming Profile, the operational definition of teaming (a data sketch follows this list). They have identified processes associated with teaching (scheduling, meeting special needs of students, etc.) and described the necessary skills, relationships and resources to accomplish each activity in a teaming environment.

• Once the Teaming Profile is complete, we will videotape indicators of good practice in teams and study the concurrent effects of teaming practice on student learning. The entire sequence has taken less than three years and has incorporated several small studies into the larger investigation of teaming. This work will contribute to making teaming more effective for everyone by applying the research process for continuing program development to refine the practice. And finally, the profile will form the basis of a new personnel evaluation system currently under development.
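A minimal sketch of how such a profile might be held as data, assuming only the shape described above (each teaching process mapped to the skills, relationships and resources it requires); the entries shown are invented placeholders, not SCYFC's actual profile.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileEntry:
    """What a team needs in order to accomplish one teaching process."""
    skills: list[str] = field(default_factory=list)
    relationships: list[str] = field(default_factory=list)
    resources: list[str] = field(default_factory=list)

# Placeholder content; only the process -> (skills, relationships, resources)
# structure comes from the article's description of the Teaming Profile.
teaming_profile: dict[str, ProfileEntry] = {
    "scheduling": ProfileEntry(
        skills=["joint planning"],
        relationships=["shared authority between partners"],
        resources=["a common planning period"],
    ),
    "meeting special needs of students": ProfileEntry(),  # to be filled in by staff
}
```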

Not only are questions more appropriate and perhaps more planful, as the example above demonstrates, they are also more creative, because we are able to adopt an inductive, generative approach to investigating practice. Social service programs are moving targets, and if evaluation data are to be meaningful, they must be current. Because R&E is part of program planning, our readiness to ask questions puts R&E into the mix at an early stage of planning, which is critical to the process of evaluation and program planning. Further, self-reflection is encouraged and study questions are actively solicited from all participants. We therefore benefit from multiple perspectives from the very beginning of the research and evaluation process.

4.5. Multiple uses for meaningful data

Our evaluation process often, but not always, results in greater utilization of the data. It stands to reason that data that are understood by practitioners and administrators are more likely to be utilized, and that having access to a researcher who understands the data increases the likelihood that program staff will understand the results. It is possible that the manner in which the data are framed will increase their use, and again, the evaluator in leadership will have greater insight into the potential uses of the data and can anticipate the need. More importantly, the IE and each participating staff member may make practical applications of the data in distinct ways. For example, for the past 5 years every person on the school staff has recorded their contacts with parents as one measure of parent involvement at school (e.g., Minnett, 1996). School personnel record who initiates the contact, how it is made (note, phone, incidental meeting, conference, participation in a school activity), and the tenor of the conversation (positive focus, problem focus or information exchange). Everyone contributes to the database, which provides a wealth of information about parent participation and the effects of certain practices on parent involvement, the intended uses for the data.

The IE originally provided feedback to teachers primarily as a courtesy, to acknowledge their efforts in this large, ongoing project. However, teachers revealed that they used the organized printout (by child, showing contacts in sequence, and the nature of the contact) to determine which families had been overlooked or to detect troubling patterns in school-initiated contacts with their students' families. The data have shaped their practice.
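The contact log lends itself to a small data-structure sketch. The record fields mirror the three dimensions recorded at the school (who initiated the contact, how it was made, and its tenor); the type and function names are ours, not the actual database layout.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class ParentContact:
    child: str
    when: date
    initiator: str  # who initiated: "staff" or "parent"
    mode: str       # note, phone, incidental meeting, conference, school activity
    tenor: str      # positive focus, problem focus, information exchange

def printout_by_child(contacts: list[ParentContact]) -> dict[str, list[str]]:
    """Rebuild the organized printout: per child, contacts in sequence,
    each line showing who initiated it, how, and its tenor."""
    by_child: dict[str, list[str]] = defaultdict(list)
    for c in sorted(contacts, key=lambda c: (c.child, c.when)):
        by_child[c.child].append(f"{c.when}: {c.initiator}-initiated {c.mode} ({c.tenor})")
    return dict(by_child)
```

An empty or sparse list for a child is exactly the "overlooked family" signal the teachers described, and a run of problem-focused, staff-initiated entries is the troubling pattern.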

Another use for data involves feeding client data back to practitioners. The IE always adheres to confidentiality when acquiring information from clients, but offers them the option of agreeing to release the information to their practitioner. Clients often agree to release this information, and it is surprising how important these data are to practitioners. We find that both positive and negative feedback enhance practitioners' reflections and are welcomed.
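The release mechanism amounts to a simple opt-in gate, sketched below with hypothetical names: only assessments a client has explicitly agreed to release reach the practitioner.

```python
from dataclasses import dataclass

@dataclass
class ClientFeedback:
    client_id: str
    assessment: str                # periodic status of the presenting problems
    release_to_practitioner: bool  # the client's explicit opt-in

def feedback_for_practitioner(records: list[ClientFeedback]) -> list[str]:
    """Pass along only what each client has agreed to release; everything
    else remains confidential with the R&E program."""
    return [r.assessment for r in records if r.release_to_practitioner]
```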

4.6. Evaluator skills

Internal evaluators working in this environment must have technical and methodological skills and should be able to assume a mentoring role with the community of researchers. Because we engage in participant evaluation, IEs conducting this type of work should be motivated to participate in organizational activities and have a tolerance for imperfection (Cousins & Earl, 1995b). Given our emphasis on collaboration, few skills are as essential as "people skills" (Ayers, 1987). Social skills allow internal evaluators to build connections and trust, especially with those who are skeptical of their presence. We have found those qualities essential in building trust among participants at the agency, gaining cooperation and buy-in from the staff, and communicating negative findings while maintaining long-term relationships with colleagues.

We are a social service agency that places a high value on respect for clients. It is therefore important that the IE engage clients, students, or family members in the same respectful manner espoused by the practitioners. That means knowing a little about the services and how clients access them, and being able to interact with clients informally.

Goering and Wasylenki (1993) argued that substantive program expertise is also an asset because it facilitates the evaluator's reconciliation of the host of program and organizational context variables that affect the use of evaluation information. Our IE had previous experience in educational settings but was less versed in mental health and community development issues. It was imperative for the IE to learn the practitioners' language, observe their work, and acknowledge the fact that she would conduct evaluations on their turf.

5. Conclusion

Our evaluative procedures lend themselves to nonprofit settings where employees may assume multiple roles. However, our example is unique because of the substantial support of our funders for R&E, their insistence upon continuing program development, and the pervasive nature of evaluation in our organization. Self-reflection is an isomorphic process permeating our agency through every organizational level and program. We are encouraged to explore alternatives and refine practice through research and evaluation, which further nourishes our self-reflection. This article is simply another example: an internal evaluator reflects on R&E and the unique opportunity to function in the dual role of internal evaluator and leader.

Acknowledgements

The author would like to thank the reviewers, who made substantial contributions to this article, and Richard Sonnichsen for his comments on an earlier draft.

References

Argyris, C., & Schön, D. (1978). Organizational learning. Reading, MA: Addison-Wesley.

Ayers, T. D. (1987). Stakeholders as partners in evaluation: a stakeholder-collaborative approach. Evaluation and Program Planning, 10, 263–271.

Bickman, L. (1994). An optimistic view of evaluation. Evaluation Practice, 15(3), 255–259.

Browne, A., & Wildavsky, A. (1983). Should evaluation become implementation? In: A. J. Love (Ed.), Developing effective internal evaluation (New Directions for Program Evaluation, vol. 20). San Francisco: Jossey-Bass.

Clifford, D. L., & Sherman, P. (1983). Internal evaluation: integrating program evaluation and management. In: A. J. Love (Ed.), Developing effective internal evaluation (New Directions for Program Evaluation, vol. 20). San Francisco, CA: Jossey-Bass.

Cousins, J. B., & Earl, L. M. (1995a). The case for participatory evaluation: theory, research, practice. In: J. B. Cousins, & L. M. Earl (Eds.), Participatory evaluation in education: studies in evaluation use and organizational learning. London: Falmer Press.

Cousins, J. B., & Earl, L. M. (1995b). Participatory evaluation in education: What do we know? Where do we go? In: J. B. Cousins, & L. M. Earl (Eds.), Participatory evaluation in education: studies in evaluation use and organizational learning. London: Falmer Press.

Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an organization to learn? Evaluation Review, 18(5), 574–591.

Goering, P. N., & Wasylenki, D. A. (1993). Promoting the utilization of outcome study results by assuming multiple roles within the organization. Evaluation and Program Planning, 16, 329–334.

GAO (General Accounting Office) (1994). Residential care: some high-risk youth benefit, but more study is needed. Report to the Chairman, Subcommittee on Oversight of Government Management, Committee on Governmental Affairs, US Senate.

Greene, J. C. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12(2), 91–116.

Greene, J. C. (1997). Evaluation as advocacy. Evaluation Practice, 18(1), 25–35.

Huber, G. P. (1991). Organizational learning: the contributing processes and the literatures. Organization Science, 2(1), 88–115.

Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards: how to assess evaluations of educational programs (2nd ed.). Thousand Oaks: Sage Publications.

Loughmiller, C. (1965). Wilderness road. Austin, TX: The Hogg Foundation for Mental Health, The University of Texas.

Love, A. J. (1991). Internal evaluation: building organizations from

within. Newbury Park: Sage Publications.

Lyon, E. (1989). In-house research: a consideration of roles and advantages. Evaluation and Program Planning, 12, 241–248.

Mathison, S. (1991a). Role conflicts for internal evaluators. Evaluation and Program Planning, 14, 173–179.

Mathison, S. (1991b). What do we know about internal evaluation? Evaluation and Program Planning, 14, 159–165.

Mayer, S. E. (1992). Common barriers to effectiveness in the independent sector. In: Proceedings of Independent Sector's Annual Meeting and Assembly of Members, Minneapolis, MN.

Minnett, A. M. (1996). Longitudinal study of teacher-parent contacts and parents' subsequent involvement at school. In: Proceedings of the Annual Meeting of the American Educational Research Association, New York.

Minnett, A. M. (1998a). Two-teacher teaming in elementary classrooms: a pilot study. National Association of Laboratory Schools Journal, XXII(1), 9–11.

Minnett, A. M. (1998b). Establish a community of researchers and transform the process of internal evaluation. In: Proceedings of the Annual Meeting of the American Evaluation Association, Chicago.

Minnett, A. M. (1999). Building a community of researchers to improve teaching practice. In: Proceedings of the Annual Meeting of the Association for Supervision and Curriculum Development, San Francisco.

Minnett, A. M., Kaye, B., Bryant, H., Wetzel, M., & Camacho, G. (1997). Two teachers in the classroom: process and outcomes of collaborative teaming. In: Proceedings of the Annual Meeting of the Texas Association for Supervision and Curriculum Development, Houston.

Owen, J. M., & Lambert, F. C. (1995). Roles for evaluators in learning organizations. Evaluation, 1(2), 237–250.

Patton, M. Q. (1997). Utilization-focused evaluation: the new century text (3rd ed.). Thousand Oaks: Sage Publications.

Plantz, M. C., Greenway, M. T., & Hendricks, M. (1997). Outcome measurement: showing results in the nonprofit sector. In: E. Chelimsky (Ed.), New Directions for Evaluation, vol. 75 (pp. 15–30). San Francisco: Jossey-Bass.

Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks: Sage Publications.

Scriven, M. (1997). Truth and objectivity in evaluation. In: E. Chelimsky, & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 477–500). Thousand Oaks: Sage Publications.

Seligman, M. E. P. (1995). The effectiveness of psychotherapy: the Consumer Reports study. American Psychologist, 50(12), 965–974.

Senge, P. M. (1990). The fifth discipline: the art and practice of the learning organization. New York: Doubleday Currency.

Senge, P. M. (1996). Leading learning organizations: the bold, the powerful, and the invisible. In: F. Hesselbein, M. Goldsmith, & R. Beckhard (Eds.), The leader of the future (pp. 41–57). San Francisco: Jossey-Bass.

Senge, P. (1997). Comments during satellite discussion of issues in education at the Annual Meeting of the Texas Association for the Supervision of Curriculum Development, Houston, TX.

Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: theory, research, and practice since 1986. Evaluation Practice, 18(3), 195–208.

Siegel, K., & Tuckel, P. (1985). The utilization of evaluation research: a case analysis. Evaluation Review, 9(3), 307–328.

Sonnichsen, R. C. (1988). Advocacy evaluation: a model for internal evaluation offices. Evaluation and Program Planning, 11, 141–148.

Stake, R. E. (1997). Advocacy in evaluation: a necessary evil? In: E. Chelimsky, & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 470–476). Thousand Oaks: Sage Publications.

Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In: G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: viewpoints on educational and human services evaluation. Boston, MA: Kluwer-Nijhoff.

Torres, R. T., Preskill, H. S., & Piontek, M. E. (1996). Evaluation strategies for communicating and reporting: enhancing learning in organizations. Thousand Oaks: Sage Publications.

Wheatley, M. J. (1994). Leadership and the new science. San Francisco: Berrett-Koehler.
