
The Historical Record

Papers in this section focus on evaluation from an historical perspective. They may analyze important turning points within the profession, provide commentary on historically significant evaluation works, or describe and analyze what promises to be a contemporary watershed event with important implications for the future of evaluation.

The Oral History of Evaluation Part II: The Professional Development of Lois-ellin Datta

THE ORAL HISTORY PROJECT TEAM1

In early 2002, Jean King, Mel Mark, Robin Miller, and Stacey Stockdill began a project to conduct oral history interviews with individuals who have made signal contributions to the program evaluation field and those well-placed observers who were present at and played a role in pivotal moments in the field. In developing this project, it is our goal to add to the important historical work conducted by others documenting the professional development of individuals who have influenced the way evaluation is understood and practiced today. We also hope to capture the spirit of the times in which these individuals’ ideas and interests were nurtured. In this, our second oral history interview, we spoke with Lois-ellin Datta. The interview was conducted by Robin Miller and Valerie Caracelli, with assistance from Aleise Matthews, Jules Marquart, and Irving Lazar.

Lois-ellin Datta received her Ph.D. in comparative and physiological psychology from Bryn Mawr College. Currently, she heads her own evaluation consulting firm, Datta Analysis, based in Hawaii. She served in many roles over the course of her distinguished career in government, including: Director of Program Evaluation in the Human Services Area (PEHSA) at the U.S. General Accounting Office’s Program Evaluation and Methodology Division; Director for Teaching, Learning and Assessment at the U.S. Department of Education’s National Institute of Education; National Director of Evaluation for Project Head Start and the Children’s Bureau; and Research Fellow at the National Institutes of Health. Her numerous contributions to the field of evaluation include serving as Editor-in-chief of New Directions for Evaluation. Her publications have significantly advanced thinking and practice in evaluation. Among the areas in which she has had great impact are case study methodology, evaluations in non-traditional settings, and mixed-method evaluation approaches.

1The Oral History Project Team consists of Robin Miller, Jean King, Melvin Mark, and Stacey Stockdill. This interview was edited by Robin Miller and Lois-ellin Datta, with the assistance of Melvin Mark and Valerie Caracelli.

Robin Miller • Department of Psychology (M/C 285), University of Illinois at Chicago, 1007 West Harrison Street, Chicago, IL 60607-7137, USA; Tel: (1) 312-413-2638; E-mail: [email protected].

American Journal of Evaluation, Vol. 25, No. 2, 2004, pp. 243–253. All rights of reproduction in any form reserved. ISSN: 1098-2140 © 2004 by American Evaluation Association. Published by Elsevier Inc. All rights reserved.

In the edited transcript of the interview that follows, Lois-ellin highlights the importance of colleagues in generating new insights, creating career-changing opportunities, and helping individuals to do their best work. She also underscores the many benefits of approaching life as an advanced learner.

INTERVIEW WITH LOIS-ELLIN DATTA

Robin: Lois-ellin, would you say a little bit about how you came to the point where you realized that you wanted to be a social scientist, that you had a passion for children and education? From where did those interests evolve?

Lois-ellin: If I hadn’t sat on my glasses on a train coming from Camp Te Ata in the New York area to Pittsburgh, Pennsylvania, where I was taking the final entrance test for Carnegie Mellon’s program for artists, art might have been the road taken. But, I did sit on my glasses and I couldn’t take the test and I couldn’t get into Carnegie Mellon in the artist program in fall 1949. The default option was to go to West Virginia University for a semester until I could take the test again. I had a marvelous time in all the arts courses, but was required to take a course in science. Psychology seemed the least noxious.

The professor was John Townsend, whose sense of drama makes even our own eloquent evaluators seem pale. In his first lecture, this tall, gaunt man walked in, asked how many people smoked—at that time almost everyone raised their hands—and then asked “How many of you can tell your own brand of cigarettes?” Almost everyone said they could. “And how do you think we could tell whether this assertion is true?” Dr. Townsend just happened to have packs of cigarettes in his coat pocket. “Should we try this out with you smoking?” After some discussion, leading to “Gotta cut the cork tips off,” he whipped out these huge scissors. So we did a blindfolded study of whether people could tell their own brand of cigarettes. I’d never been in a discussion quite as interesting. It was fabulous. It was exciting. I couldn’t wait until the next psychology class. I loved the laboratories, and a passion for research was ignited.

The same semester, I was privileged to take a course from Professor Jacob Saposnekow, an erudite, passionate man, who had us immersed in the issues of the McCarthy period, what was happening to Benjamin Lattimore, and the role that social science had played in the past, and could play in the future. His passion for social justice would light up the darkest skies, as would his courage at a time when many faculty members kept quiet. I never even took the second spring Carnegie exams.

Robin: You have a master’s degree in sociology and social psychology and a Ph.D. in comparative and physiological psychology. How did you come to work for the federal government and get into conducting evaluation after leaving Bryn Mawr?

Lois-ellin: One of my students at Bryn Mawr1 worked in the National Institutes of Health for Dr. Morris Parloff, who was doing research on the development of creativity. When a post-doctoral fellowship program was developed at NIH, the student thought that I would be an interesting person for Morris to have around, since I was working at the General Electric Missile and Space Vehicle Division on the development of creative scientists.

The context in the early ’60s was of concern in the United States that the Russians had a basketball in space before we did.2 There was a realization that we weren’t doing as well in the production of very creative scientists as we might be. So there was a policy push to understand the development of creativity, particularly scientific creativity. The purpose of the project at NIH in the Laboratory of Psychology was to understand the trajectory of development from pre-teens through at least early adulthood in careers during those important transitional periods, but particularly to understand the development of exceptionally creative scientists.

The research design was rather interesting. Exceptional creativity in young persons was defined as the top 40 winners of the Westinghouse Science Talent Search. The comparison group, since we didn’t have a control group, was the entrants in the top 10% and then in the top 25%. From these, we selected groups who were matched to the top 40 winners in terms of their performance on the test of scientific knowledge that was used. So scientific knowledge as assessed by this well-developed, standardized test was equal in the three groups, but their projects showed notable differences in scientific creativity.

Robin: How did you come to be interested in creativity?

Lois-ellin: After I finished my Ph.D. in comparative and physiological psychology with a specialty in invertebrate behavior including the earthworm, the cockroach, and the Bermuda land crab, I looked for a job in the same place that my husband was able to find employment. The closest match was with the General Electric Company Missile and Space Field Division working on the Apollo Project. They were interested in the promotion of scientific creativity in their advanced physics lab for the Apollo Project. The research had to do with the conditions under which exceptional scientific creativity could flourish in adults.

Robin: How did you make the transition from NIH work to your role in Head Start?

Lois-ellin: Two ways. At NIH my officemate next door was Dr. Earl Schaffer, who had done outstanding work in parent–child relationships as an influence on development, particularly mother–child relationships. Earl was not fond of computers. Part of my responsibility in the study of young geniuses was the analysis of the huge data set and I thought that computers were fascinating! I loved going down to this room full of refrigerators and seeing the little R2D2s doing their thing. Earl asked if I’d work on his data analysis, which I gladly did. Then he was tapped to be one of the advisory group members for this new program dealing with low-income kids, and would I be willing to do a little work with him on that?

So I organized a group of volunteers to do a study of Head Start in the Washington, DC area. The organizing paradigm was the value of diversified information about child development, which could be fed back to teachers and help them, if they wanted to, shape child development. Ann O’Keefe, a friend from Bryn Mawr, and I worked out the measures with Earl’s help, the help of others such as Nancy Bayley, who was also at NIH at the time, and of course, as this was participatory research, Head Start staff. We invented measures of psychosocial development for low-income kids and used what seemed like reasonable existing measures such as the Peabody Picture Vocabulary Test. The national Head Start office people were intrigued with it as an example of volunteer research on Head Start; teachers seemed to find the project interesting and helpful.

The second way in which the connection got made was that Ann’s doctoral dissertation advisor was Edith Grotberg at American University, a fine developmental psychologist. She also was the first National Director of Research at Head Start.

The year is now 1968. When the time came to get a National Director of Program Evaluation, Earl said “I think I know somebody who’s actually done evaluations of Head Start at the grassroots level” and Edith said “Yes, I know her, too.” I was invited to come to Head Start for an interview with Jules Richmond, Jule Sugarman, and Dick Orton.

Robin: Could you talk a little bit about the mandate to evaluate Head Start and characterize the climate in government at that time?


Lois-ellin: The Office of Economic Opportunity, like so many programs, was a compromise between different beliefs about the best way to promote national well-being and achieve social goals. Part of the compromise then, as it is now, was often “well, let’s try it out and let’s sort of see how it works.” That’s been the history of some of the major social policies, long before evaluation was capitalized.

The Head Start Program in winter/spring 1964 was thought of as a tiny experimental part of the Office of Economic Opportunity. People were overwhelmed with its popularity. The offices had been set up in a hotel that previously had been used by ladies of the night because it was cheaply available in Washington. So in all of these intriguingly decorated rooms, redolent with atmosphere, there were scores of volunteers processing applications from all over the country for summer Head Starts. The program obviously had face validity and demand validity. Since it was intended to be experimental, Head Start began with a distinguished advisory panel of researchers, funds for research, and funds for evaluation that we might call today “program improvement, process, formative.” In addition, the Office of Economic Opportunity, from the beginning, had a separate evaluation office, the “summative” arm.

Robin: One of the things that was really striking to me in doing research on you was the extent to which the early work in which you were involved really laid the foundation for mixed-method evaluations and set a very high standard for how they might be approached in the future. How did you come to recognize the importance of mixed-method approaches for understanding social programs?

Lois-ellin: One of the first things I did, after unpacking my coffee cup, was to realize how little I understood about what Head Start really was. With the permission of very kind people, I went on a little tour of Head Start programs in the United States. One program I particularly remember was in the Mississippi area, which was politically contentious. It was the site of the book “The Devil Has Slippery Shoes” about the politics of community action programs in the South and the enfranchisement of the low-income Black community. This, remember, was 1968, ’69.

We got into a car, we went down this dusty, winding road for a long time and came out in a clearing. There was shade from live oaks and a shack. In the shack was the whole Head Start community: the moms, the dads, the teachers, the teacher aides, all of the children. I spent a couple of days there and heard how much Head Start meant to the people who had never ever come to the table to be the decision makers sitting at the head of the table, to be the ones who ran the programs. I watched the children interact, listened to them, saw them learning—and learned myself about how rotten teeth were taken care of, vaccinations given, nutritious meals prepared. In the shack, in the clearing, was a real good program for the children and the community. Were improvements possible? Yes, but there was a sense of fundamental soundness. I went around the United States and had experiences like that. So, when I finally got back to Washington and got assigned to be the Head Start representative to the Westinghouse Ohio Study, I brought with me a sense that the story of what Head Start really meant to communities, and to the children and to families was an important part of evaluation.

Robin: What resources did you draw on to make sure that the kind of data that tell that story of the program were combined with the kinds of data that might have greater credibility to the people who had commissioned evaluation of this effort?

Lois-ellin: My role in the Westinghouse Ohio Evaluation was to try to make it as good as possible, including, where necessary, to scream loudly and stomp my feet, as the Head Start liaison to the project. I’d like to think I was helpful through the Head Start Research Advisory Committee in getting the full attention of people like Shep White and Don Campbell to some of the methodological issues in the design. I remember writing a 43-page, single-spaced critique of the report, which was fairly widely circulated.

The evaluations that were funded through Head Start involved a different set of opportunities. Head Start had two types of evaluations. The first was based on the notion that you get your best evaluations, and the field would move along most rapidly, if research and evaluation were integrated and benefited each other. Through a competitive process completed before I came on the scene, there were 14 Head Start Research and Evaluation Centers. The centers would get money to do basic research on child development and poverty. In exchange for doing this basic research, they would then form a consortium and design in a collaborative way a national evaluation for Head Start. They’d collect the data, the data would be sent to Washington, and we would analyze it and prepare the reports, disseminate the findings, and coordinate the next evaluative cycle. These centers included the Bank Street College of Education, the University of Chicago, Tulane University, UCLA, and the University of Hawaii.

Working with the members of the consortia was an opportunity to have almost an advanced graduate seminar in issues of meaningful data, meaningful data collection, interpretation, alternative methodologies. There were quite heated debates among the directors on these. If these were some of the best minds in the country, and they were, then two things were possible. One, you could inadvertently have blinders on about the righteousness of validity, the exclusive validity, and the superior qualities of your own methodology. Second, none of the approaches by themselves really could deal with legitimate issues that were raised by the others. So, meaningful evaluation in the Head Start context had to be a mixed-method evaluation.

Fortunately, in addition to the Centers, Head Start had funds for contracts. The evaluation contracts included a major assessment of the impact of Head Start on communities, led by Irving Lazar, who later directed the child development consortium and whose follow-up research on the pioneering, randomized-design intervention studies led to “As the Twig is Bent,” establishing the value of early education. And, one of the first awards I made was to Gary McDaniels and Laura Dittman, then of the University of Maryland, for a longitudinal ethnographic study of the development of individual children in Head Start. Another RFP led—intentionally, by design—to two independent contracts, using quite different approaches to analysis of the data collected through the Centers, to find out to what extent conclusions are robust when alternative analytic methods are used. Another contract was a longitudinal developmental study of children before they entered Head Start, following them through the program (or whatever other experiences they had), and into primary school. Another, the evaluation of a new television program involving a green frog and a tall yellow bird [i.e., Sesame Street].

Valerie: We were talking in the very beginning of the interview about research. How did you reframe [your thinking] to think in an evaluative way about these issues and how ultimately did the society [The Evaluation Research Society] form, the infrastructure that then became the support?

Lois-ellin: Through a lot of stumbling. My own set of blinders, my own basic mindset is that of an experimental psychologist. My inclination is to try to impose as closely as possible the canons of scientifically-based research on every question that comes along.

It was helpful to have at least three influences. The first influence was the Office of Economic Opportunity Evaluation Program under John Evans, who was one of the founding members of the Council for Applied Social Research and who believes that the best way to evaluate a program is through randomized, experimental designs. We have become good friends, colleagues on the U.S. Office of Education Joint Dissemination Review Panel, and on panels developing protocols for reviewing evidence of effective promising practices.


By 1968, Ed Suchman had published his book on program evaluation. On the Head Start Research and Evaluation Advisory Panel, among Urie Bronfenbrenner, Ed Ziegler, Boyd McCandless and the people who were more identified with research, there was Ed Suchman, explaining what he meant by evaluation. Marcia Guttentag, who left us at far too young an age, with a passion for social justice, was another influence. So, discourse, dialogue, mistakes, and a sense of what evaluation might be as distinct from research emerged.

By 1973, ’74 there were quite a few evaluators who agglutinated into three groups. One anchor point was concern with local practice, helping individual schools and individual health delivery systems improve, document their abilities. Many evaluators with these interests formed E-Net. At another anchor point were the evaluators who were doing the very large evaluations asking, “Does this program work?” “Is Head Start effective?” “Are housing vouchers effective?” “What’s right or wrong with Medicaid?” In the Council for Applied Social Research group, evaluators/researchers such as Peter Rossi and Clark Abt found a home. And then there were groups in the middle who would be working at the state level or interfacing with both of them.

The Council for Applied Social Research had its organizing meeting, called by Abt Associates, in I think 1975, with commissioned papers eventually published in a book and agreement that it would be a good thing to form an organization. Marcia Guttentag was there and Carol Weiss and I had been invited. It was noticed that, with some exception, the speakers were white males—that there wasn’t a lot of attention to diversity. Marcia felt that this would not do evaluation a great service and she didn’t see a way of making such concerns of high priority within CASR. That is, her vision was of the equal importance of the national level evaluation questions and of an inclusive organization, concerned with the social justice implications of evaluations. So she organized the Evaluation Research Society.

Marcia asked if I would be the first conference chair. In 1975 or 1976 we met at the Shoreham Hotel. Robert Rich was the local arrangements chair. Speakers included Don Campbell, Bob Boruch, Marcia Guttentag, Edmund Gordon, and Ed Ziegler, and the topics weren’t totally dissimilar to some of the topics we’ve heard today or these past wonderful days [at AEA 2003]. We had 200 people. Yes! We were ecstatic, awed, delighted. People seemed to enjoy themselves. There clearly was a clicking, a sense that we had challenges, we had problems, we could learn from each other, and we wanted to get together and have another party. The Evaluation Research Society had begun. Interestingly, both societies had parallel tracks in recognition. AEA’s Lazarsfeld award comes from the Council for Applied Social Research, and it is not insignificant that the emphasis was on evaluation theory. The Evaluation Research Society had the Myrdal Awards, which included awards for excellence in government, in practice, and in contributions to the field.

Robin: One of the things that I’m struck by is that during that period of time, you at some point as a psychologist made a transition to incorporate this other professional identity as an evaluator, where it is my impression that some people, such as Ed Ziegler and Urie Bronfenbrenner, did not maintain or adopt that identity in addition to the disciplinary identity that they brought into those early meetings. What attracted you so much to evaluation that your professional identity expanded? At what point did that happen for you?

Lois-ellin: Location may be destiny. Urie and Ed had their professional identity at Cornell University and at Yale. They were professors with wonderful graduate students. Shep White refers to the flying professors. Perhaps their identity was located in departments of psychology or departments of human ecology and that was their center and location, a marvelous point of influence. Mine happened to be in an old hotel in Washington, DC, eventually in government offices, not in the disciplinary departments of a university. Perhaps it was easier to make a transition from someone who wrote basic research articles on the development of potentially creative young scientists and, earlier, invertebrate behavior, to being someone at the intersection of national policies, with a profound appreciation of local change and local events and the way in which information potentially could be a useful bridge and where it could be the dark tower where it was very destructive.

Robin: Could you say a little bit more about what some of the difficulties were in having evaluation recognized and respected as distinct from some of the other established disciplines that use the same or similar tools?

Lois-ellin: I wonder if it’s helpful to make a distinction between the academic setting and the government settings. The General Accounting Office offers a form of accountability and that goes back to the 1850s, as does an Office of Statistics in the Department of Education. In government settings there is a tradition of having offices of accountability, offices of review, offices of monitoring, inspectors. Evaluation in the federal government was not particularly a hard sell.

It may be more complex in the academic setting. Perhaps there is a natural gravitation of people doing evaluations within a disciplinary area or field to find their primary professional identification within that field. There are, for example, many people in AERA who are not members of AEA because they find a greater compatibility among others studying education. The health area has some wonderful evaluators doing important evaluations that you rarely see at AEA. The field of labor. Ditto. The field of agriculture. Ditto. The field of experimental psychology. Ditto. The evaluation courses and fields tend to be infused in forestry, in agriculture, and pulling together can be more difficult. But I’m not an academic, so this view of the emergence of the discipline is seen from afar.

Valerie: Let’s take the next leg of the journey. After Head Start and the Ohio Westinghouse Study you did move to the Department of Education. How did the move come about and what types of activities were you engaged in?

Lois-ellin: Head Start and the Children’s Bureau merged in about 1970 and I merged with it very happily, working with an outstanding scholar, Charles Gershenson, who headed Children’s Bureau Research and whose institutional history went practically back to the 1912 White House Conference on Children. I loved having exposure to new ideas and a new set of researchers. But by about 1972, I had been working in that area for 5 years or so. I agree with Gary McDaniels’ theory of growth, change, and richness. Gary’s notion is that particularly in the government, you join an office and your plate’s filled with new challenges and you have a wonderful 2 years of trying out your solutions. You learn if this works or that doesn’t. Then you go for another round for 2 years and you’ve made some improvements. At the end of 5 years, you’ve given what you’ve got in terms of fresh ideas and it’s time to move on for the good of the program, for the good of yourself, unless you’re one of these people who have found their life calling.

The year is 1972. I was ready to move on. I’d done the best I knew how to for Head Start evaluation and Children’s Bureau Research. I didn’t know how to do anything better and it was time to let somebody else try it for a while. The National Institute of Education had just formed. Yes! Education was going to be reformed! At Head Start, we had asked Don Campbell if he would do a re-analysis of the whole body of Head Start data from the Follow-through Head Start Variation Study. Don couldn’t do it but he recommended Mike Smith. David Cohen and Mike formed the Huron Institute, got the contract, and did some fine re-analyses of the Head Start Follow-through Planned Variation Study. One of Mike’s colleagues at Harvard was Corinne (Cory) Rieder, who was working in the Department of Education as it was forming. When Cory was looking for someone to direct the Research and Evaluation Program in the Career Education Department of NIE, I thought, Sure, why not? Cory is brilliant. She understood policy. She understood research. I had done a little work in career development with the young scientists and so there I was at NIE, the Career Education Department. And, if I thought Head Start was political, the range of evaluative studies at NIE was broader and more politicized.

Robin: Were there key moments in your time there that shaped your perspective on the relationships among evaluation, social policy, and politics?

Lois-ellin: Remember Senator Mike Mansfield? He brought home the bacon. An air force base in Montana had been closed down. The Senator mandated that the air force base would be turned into a training program site. Families would be imported from Colorado and the lower 36, moved to Montana, away from the pernicious influences of their families and neighborhoods. They would be given counseling, training on how to manage budgets, career preparation, and then would go forth and become productive members of society. We in the Career Education Department had a 5 million dollar project in Montana (that’s $5 million in 1972 dollars, annually) which we were to evaluate and find effective.

That was a formative moment in my life, particularly formative since the Senator’s displeasure with our responsiveness had a negative impact on the budget and the situation of NIE. The only time I’ve flown on Air Force Two was en route to Montana, where for 4 hours going and coming back we learned in detail why repainting the houses on the base and spending time there making this project work was a top priority. Congress expressed itself as most unhappy with NIE’s understanding of educational research and the total NIE budget was cut. It was a profound lesson about the relation between some kinds of politics, some kinds of evaluation, and some kinds of studies.

But there were also opportunities to sense the relation between policy and evaluation in a more positive way. After a reorganization following the senatorial displeasure and resulting budget cuts, I found myself as the Director for Planning and Management in the unit of basic research on teaching, learning, and assessment, which was run by Sylvia Scribner, a marvelous ethnographer justifiably famous for her studies of functional literacy. Sylvia had selected me in part because she was intrigued by having a comparative and physiological psychologist on her staff. Those earthworms went a long way in my life.

The National Assessment of Educational Progress was part of the responsibility of the group in teaching, learning, and evaluation, as was the Northwestern laboratory that included Don Campbell, Bob Boruch, and Paul Wortman, and a treasure chest of evaluation ideas.3 Jeff Schiller, who had been the program officer for the Westinghouse Ohio Study and became a close friend, was a fount of incredibly creative ideas. He’d heard something about the adversarial or judicial evaluation model and thought we ought to try it out. The director of NIE at that time had connections with Barbara Jordan so we were able to get her as the trial officer, and had full support of the organization to go first-class all the way in trying out the approach. Being at NIE was like being a kid in a candy shop: exceptionally able researchers were attracted to working there and there was strong support for exploring alternative approaches to evaluation in the context of issues that were politically important.

While I was at NIE, President Reagan was elected. The year is 1981. The scene is the Office of NIE’s Director. We have in the scene the people employed at NIE under the previous administration, as it’s known, meeting for the first time with our new peerless leaders at NIE. One of the first questions out of the chute was, “Did you have anything to do with that terrible use of federal funds called ‘Freestyle,’ which purports . . . ”—the voice got louder and louder and I thought the man was going to have apoplexy—“which purports to encourage women to consider non-traditional careers? What right do you think you have to use federal funds to promote values?!”

Guilty as charged. When I was with Career Education, we funded “Freestyle,” a television program that went on national TV showing girls exploring non-traditional careers. It was a pretty effective program. The new director was also aware of the research grants program on women and mathematics.

Being a perceptive person, who picks up subtle cues, I had a sense this might not be the happiest administration in which I could be working. The relationship did not improve from this first interview. I was about to be reassigned to a galaxy far, far away from research, evaluation, or funds.

Fortunately for me, Eleanor Chelimsky (and Gary McDaniels) had faith that I could contribute something useful to the work she wanted to accomplish for evaluation as a whole, for the U.S. General Accounting Office, and for her new organization, the Program Evaluation and Methodology Division.

Valerie: You made many contributions at GAO. You were there for a while and made some methodological advances in prospective evaluation synthesis, but also in the mix of methods—the case study was given more depth of treatment than we’d seen before—and some very key studies. Lots of your experience at GAO contributed so much to evaluation.

Lois-ellin: Looking at each study to see what methods—new or old—might best answer the questions was Standard Operating Procedure for PEMD.

For example, Eleanor was keenly aware when I joined PEMD in 1982 that many of the GAO reports could be considered case studies: small N’s, qualitative, observational, not a lot of attention to sampling and other methodological issues. My first assignment at GAO was to write a guidebook on case study methodology. So, again, I went “on travel” to visit the other divisions hearing how they thought about their methodology and asking for examples of which they were proud—best case examples. I also drew a random sample of the GAO reports (except the straight accounting reports or the legal opinions) issued over the past 2 years and determined how many of them would be considered case studies.

Reading Robert Yin and other marvelous writers on case studies, talking to them, began to shape a framework that might help define the quality of a case study in a way that could be reasonably translatable into existing GAO terminology and typical purposes for conducting evaluations. I drew on examples from GAO that illustrated good practice where I possibly could, talked in more generalized ways about “here are some real methodological disasters” and the consequences of doing things very badly. The approach was tried out in a series of workshops for people from the different divisions to see where the language needed adjustment.

Then with great fear and trembling, the draft was sent to our evaluation community, some of whom liked it while a few others blasted it. Because PEMD’s vision required credibility in both the evaluation and GAO communities, the manuscript was torn up and I started all over again, being more attentive to citations and concept-bridges, and with a few more lessons learned.

The first “blue book” evaluation I did at GAO had to do with the deaf–blind centers. After the German measles epidemic, when several thousand children were born deaf and blind, Congress had authorized centers for their support, training, and other services. By 1983, Congress noted that the measles epidemic children were now in their 20s and 30s and the evaluative question was “Are these centers still needed?” and “We would like testimony in 6 weeks.” The methodology was two-fold: examining numbers and Center quality. The Centers for Disease Control had information on the numbers of deaf–blind children born each year. Although the measles epidemic was gone, there had been, horribly, sadly, other reasons why deaf–blind children were being born, and in relatively large numbers. Then we did a modest survey of users and their satisfaction, found administrative data showing that there hadn’t been problems and scandals in the administration of the units, and, with appropriate clarity on limitations, gave the testimony.

Robin: Would you talk a little bit about what it has been like to be a professional woman and mother and how you think about women in evaluation?

Lois-ellin: We all have got to be profoundly grateful to women like Carol Weiss, Marcia Guttentag, and others who, at a time when applied social research wasn’t necessarily as heterogeneous as it is today, were absolutely first rate and did marvelous work. Personally, I was never conscious of any distinction between being a woman and being a man in terms of professional acceptance. I was lucky to have professors who, while they were of the male persuasion, seemed happy to spend as much time as I wanted talking with me, giving me piles of books to read, and access to their laboratories.

The fact that I went to Bryn Mawr College for my Ph.D. was again partly chance. I’d planned on the University of Pittsburgh but a friend suggested I also send my resume to Bryn Mawr. At Bryn Mawr, there are no limits to what a woman is expected to do or can do or is encouraged to do. As President M. Carey Thomas famously said, “Our failures only marry.” You were never expected to do one or the other.

Most of all, I was blessed in being married at 20 to Padma Datta, who saw our lives together as a partnership. When we had babies, he diapered them, he fed them, walked the floor with them, shared their lives, and helped grow them up to become fine men. He went to work in the dark to be home after school; I made breakfast, read to them, got them off to school and came back in the dark. For my first year at Bryn Mawr (1955–1956), while Padma was finishing his dissertation at West Virginia, he was the “single parent” taking care of our older son.

Robin: Did those experiences as a working mom inform the way you thought about the programs that you evaluated or give you insights that other evaluators might not have brought to the table?

Lois-ellin: I don’t think more so than other evaluators. The concern for social justice was informed by understanding first hand what it’s like to be a woman and having to earn your living. If you like to eat, you have to work, and we were poor, struggling graduate students. And I have been economically “on my own” since I was 16. One has an understanding of what it’s like to deal with a family, who might not have a supporting partner, what it feels like to get up at 4 o’clock in the morning and come home at night. But perhaps no more so than my male colleagues who were dedicated to their families, cared about them, wanted to be participating spouses, and knew first-hand what it was like for a male to try to earn a living, to balance the needs of a career and to negotiate all the things you negotiate.

Valerie: You had two children—two boys—and you were balancing work and family life throughout with your work at NIE and GAO, and you had a supportive partnership marriage to get through that. I want to bring you to the present and future.

Lois-ellin: I think sometimes you reflect on who were the people who influenced you. Why did you take this path instead of that? For me, there are a lot of reasons. One is the political climate of the time, what’s happening in the policy space around us, what’s possible, what’s not possible, where are the opportunities. But a lot is in the people with whom one has been privileged to work. One of the many things for which I am deeply grateful is I’ve had so many opportunities to learn from the people who were my supervisors, like Cory, like Eleanor, like Charles. With a single exception, I have loved and learned from everyone with whom I’ve worked.

But even more than my supervisors, I’ve learned from the people who have been my colleagues and eventually were on my staff, fine evaluators such as Jeff Schiller, Charles Stalford, and Norman Gold. All the people who I worked with at NIE knew so much more than I did about so much. Virginia Richardson, who went on to become president of AERA, knew more about teachers and evaluation research than I ever did. Ramsey Selden knew more about chief state school officers, unions, and that aspect of politics. Jeff Schiller knew more about data analysis. Then, through GAO, I was incredibly privileged to learn from Roger Straw, Fritz Mulhauser, who’s a passionate advocate of ethnographies and evaluation, Patrick Grasso, Stephanie Shipman, Terry Hedrick, Val Caracelli, Martha Ann Carey, Leslie Cooksy and many others who today are an important part of AEA.

Being on the grants and contracts side of evaluation at Head Start and NIE was a magnificent opportunity to learn from the evaluators we were lucky enough to support, many of whom are past-Presidents of AEA and recipients of AEA’s distinguished awards.

This perhaps illustrates the importance of colleagues. It’s difficult to be the lone evaluator. How valuable it is to bring together people who care passionately about evaluation, who have diverse backgrounds, and who are empowered to talk freely and to do evaluations! One of the highlights of my last year at GAO was when Val, Leslie and other recent arrivals noted that PEMD was quantitative in orientation and asked if we could have a weekly seminar on qualitative methods. We’d meet in my office with our brown bag lunches, talk about qualitative methods in evaluation and where we might apply them in the evaluations that we did. I learned so much from these wonderful evaluators.

So it is important to look at the constellation of evaluation offices and their placements, and to be concerned about “lone evaluators” in the field. Politically, as AEA, we need to support the continuance of evaluation units and not let them get obliterated in different political climates. We need to create within senior government agencies, and to support within academia, strong, vibrant places where evaluators can get together. Sitting here at AEA 2003, I was sadly aware that the time may be coming soon when the founding fathers and mothers are not going to be around. Then, I look at the constellations that are in the sky now of the people who are running AEA and just feel we’re in great hands. Most of all, I look at the rising stars and they’re great. They’re filled with ideas. They’re willing to try almost anything. They’re willing to be skeptical. They ask marvelous questions. And this is good.

NOTES

1. Editor’s Note: Lois-ellin taught at Bryn Mawr College while studying there as a graduate student. Her first post-doctoral position was at General Electric.

2. Editor’s Note: Lois-ellin uses “basketball” here as a metaphor for a satellite, referencing competition between the United States and Soviet Union following the 1957 launch of the Soviet satellite, Sputnik.

3. Editor’s Note: To learn more about the Northwestern Training Program during this era, readers are referred to the historical record section of the American Journal of Evaluation, 24(2).