

AN IMPACT IDENTIFICATION METHOD FOR DEVELOPMENT PROGRAM EVALUATION

Thomas Dietz and Alicia Pfund

Development agencies have invested billions of dollars in projects intended to improve economic well-being and overall quality of life in the developing world.1 Some of these projects have been great successes, meeting most of their goals and generating few adverse effects. A few have caused more harm than good, while the majority have met some of their goals and also generated both beneficial and harmful effects that were not anticipated by development planners (Finsterbusch & Van Wicklin, 1988). What most projects have in common is that they cause social change. Social change in developing countries is a complicated process, and the links between a project and the changes it produces may not be obvious a priori. An understanding of why some development projects succeed while others fail is an important prerequisite to improving quality of life in the developing world.

There is a long tradition of applying evaluation research methods to development programs and projects to determine the success of those efforts, and in turn to build knowledge of what works under various conditions (Hoole, 1978). In this paper we will review the constraints under which most development project evaluations are conducted. The constraints (and potential benefits) that arise in development project evaluation are quite similar to those faced in social impact assessment. Thus methods developed for social impact identification are also useful in evaluating the full range of social impacts that result from development projects. We describe a technique that has proven successful in impact assessments, and has performed well in initial efforts to apply it to development project evaluation.2

CONSTRAINTS IN DEVELOPMENT PROJECT EVALUATION

Resource Limitations

In development projects, resources for evaluation usually are very limited. Development programs typically allocate a very small percentage of total funds to evaluation. To make matters worse, the cost of carrying out a development project evaluation is high because the agency conducting

An earlier version of this paper was presented at the 1986 Annual Meeting of the Midwest Sociological Society. The authors thank R. Bartlett, R. Frey, D. Gill, G. Wandesforde-Smith, D. Wernette, and R. Williams for comments, and the people throughout Latin America and the Caribbean who have served as experts in our studies. The views expressed here do not necessarily reflect those of the Inter-American Development Bank.


the evaluation usually will be headquartered far from the project, so travel and field costs will be high. And often in less developed nations, background data, such as those available in censuses, previous studies, official records, and other forms of secondary data, will be limited in quantity, coverage, historic depth, and in some cases quality, so the evaluation effort will require additional primary data collection. As a result of these constraints, methods to be used in development evaluations must be frugal of financial and technical resources.

Multidimensionality of Impacts

Even development projects with only a few specific goals are likely to cause a diversity of impacts. Evaluation designs that focus only on explicit project goals or on economic impacts will be misleading because the indirect and unanticipated impacts may be more significant than the intended impacts. Thus methods and research designs for development project evaluations must be capable of examining a diversity of impacts.

Cultural Barriers

We are skeptical of the reliability and validity of many traditional evaluation methods, such as surveys, when used in developing societies among people who are unfamiliar with them, do not trust the surveyor, and do not expect their responses to be kept confidential. In addition, surveys that are not based on a careful identification of sociocultural issues may measure variables that are not important in the local context and ignore other, more critical variables. So appropriate methods must be sensitive to local culture.

Diversity of Settings

Development projects are conducted in a diversity of local environments and pursue many goals through many strategies. Even a single project with a single goal and a single strategy for achieving that goal often will be implemented in different cultural, political and natural environments. Thus the tools employed should be flexible enough to be useful across settings.

Given these constraints, we believe emphasis should be placed on identifying all relevant impacts rather than on expending resources to quantify a few impacts. Without adequate identification, non-critical impacts may be quantified while critical impacts are ignored. In addition, with limited resources, an evaluation that emphasizes identification may produce useful results where a quantitative approach could not be implemented. Thus we turn our attention to methods for impact identification.


AN IMPACT IDENTIFICATION METHOD

The methodological requirements of impact identification in evaluations are quite similar to the requirements of social impact assessments (Cramer, Dietz, & Johnston, 1980; Dietz, 1987). In both situations, the methods used should be frugal; emphasize identification over quantification; be sensitive to environmental and sociocultural factors; and be able to consider a variety of impacts in a diversity of settings. Previous work on methods for conducting social impact assessments has shown that a structured group process, using panels of experts interacting through the nominal group technique, can meet these requirements (Cramer et al., 1980; Dietz, 1984).

The nominal group technique is designed to elicit information from a group of individuals meeting face-to-face. It structures the group process to enhance group creativity while minimizing many of the problems that occur in standard committee meetings. Several panels of experts are formed, each using a different definition of expertise. For example, some panels are composed of ordinary citizens whose assessment of the project is based on locally grounded experience, while others are made up of individuals with scholarly or professional expertise. The researcher integrates information obtained from the panels with other information to produce the impact assessment or evaluation.

Selection of Experts

A critical element of the method we propose is the use of multiple definitions of expertise. Clearly, one appropriate definition is a rather traditional one that considers as experts those individuals who have formal training in the problem being addressed by the development project, or who administer the project being evaluated or similar projects. We consider these individuals to be technical experts. They should be drawn from the ranks of academics, consultants, and government and development agency staff. But other kinds of expertise also should be included to ensure the evaluation draws on a diverse body of knowledge. A second logical group would be the development project staff, whom we consider to be operations experts. Whereas the agency officials in the technical experts panel have an overview of the project being evaluated and can compare it to other projects, the operations staff are engaged in the day-to-day functioning of the project and know its strengths and weaknesses from a personal perspective. A third obvious type of expertise is that of the individuals in the community in which the project is located, especially, but not exclusively, those individuals who are to be served by the project. This group we label community experts. They understand the local context into which the project must fit, and are acutely aware of the changes that have actually accompanied the project, rather than those presumed to accompany the


project. In practice, each of these types of experts may be further subdivided. For example, multi-purpose projects may have multiple clientele groups that differ in expectations, location, or other factors, and each such group could reasonably be defined as having a unique perspective and expertise.

The ideal number of panelists seems to be about seven. Five or fewer panelists provide too limited a set of opinions and experience, while more than a dozen panelists make the process burdensome for convener and participants. If more than a dozen experts in a particular category are available, we suggest dividing the pool of experts at random into several panels. Two rules should be followed in creating a panel. The first is that the members of the panel should share a common level of technical knowledge. This allows the "jargon" used within a panel to be relatively homogeneous, and reduces the danger that panelists will not be able to follow each other's arguments or will defer to one or a few individuals with perceived expertise or status. Thus each panel includes only one type of expert. Placing academics and senior administrators on the same panel yields fruitful discussion; placing academics and project clients on the same panel does not. The second rule is that, within a homogeneous technical vocabulary, panel members should be as diverse as possible. This provides for multiple perspectives and promotes active discussion within panel meetings.
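The panel-sizing rules above lend themselves to a simple sketch. The following Python function is our own illustration rather than part of the original method; the name form_panels and the exact splitting rule are hypothetical assumptions. It randomly divides a pool of same-type experts into panels of roughly seven, never fewer than five or more than a dozen.

```python
import random

def form_panels(experts, min_size=5, max_size=12, target=7):
    """Randomly partition a pool of same-type experts into panels.

    The pool should contain only one type of expert (e.g., only
    community experts) so each panel shares a common vocabulary.
    Sizes follow the paper's guidance: about seven members per panel,
    with a single panel used whenever the pool fits within a dozen.
    """
    if len(experts) < min_size:
        raise ValueError("too few experts to form a panel")
    pool = list(experts)
    random.shuffle(pool)              # random assignment across panels
    if len(pool) <= max_size:
        return [pool]                 # one panel is enough
    n_panels = -(-len(pool) // target)  # ceiling division toward ~7 each
    # Striding by n_panels yields panels whose sizes differ by at most one.
    return [pool[i::n_panels] for i in range(n_panels)]
```

For example, a pool of twenty technical experts would be split into three panels of sizes seven, seven, and six.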

Conduct of the Meetings

The panels are conducted using a six-step modification of the nominal group process (Delbecq, Van de Ven, & Gustafson, 1975). First, when the panel is convened, the purpose of the meeting and of the evaluation study is explained. It is important to clarify the possible effects of the study on the project being examined. Panelists may have false expectations about the amount of response that will follow from an evaluation, and it is important to dispel unrealistic hopes. Second, a question is posed to the panel, and the panel is asked to work silently and independently generating responses to the question. Each member of the panel is asked to write his or her responses on pads provided for that purpose. Our experience indicates that it is important to tell the community panelists that the writing is for their own use, and that the researchers do not expect to see what they write. This dispels panelists' concerns about their writing skills, handwriting, etc. The writing process allows group pressure to encourage creativity. An individual who has run out of possible answers to the question will look around the room and find most individuals still writing. This provides subtle encouragement to continue thinking about the question.

The exact question put to the panel depends on the project under study. With technical experts, a very direct question such as "What are the impacts


of the project?" is useful, but we have also had success with "What could be done to improve the project?" The advantage of the latter question is that it identifies problems and successes in the context of situations that are perceived to be changeable, rather than in the abstract. Thus the answers received are likely to be of considerable help in modifying the existing project or designing new projects.

The third step in the process is a round-robin listing of ideas. Each panelist is asked to provide one response to the question. Each response should be written by the convener on a large sheet of paper visible to all members of the panel. The convener does not make any comments regarding his or her opinion of the ideas expressed, but should encourage the group to continue. After each panelist has listed one idea, each is asked to provide a second, and so on until all ideas are listed. Panelists who have run out of ideas still should be queried to see if other responses have sparked new answers. Ideas should be copied as nearly verbatim as possible to preserve the integrity of each contribution. This has a subtle but very important effect: the group members feel that the convener is in charge of the meeting but not in control of what is being said. Thus the group does not feel they are being manipulated. This invariably encourages open and frank contributions by the panelists. No attempt should be made to summarize or combine ideas at this stage, nor is any discussion of ideas permitted until the listing of all ideas is completed.

Fourth, each idea on the list is briefly discussed in turn. During this discussion, the expert panel may reach a consensus that several of the items on the list should be combined. If there is a lack of consensus, it is best to err on the side of disaggregation. Fifth, after all items have been discussed, panelists are given index cards and asked to identify the most important, second most important, and third most important items on the list. These scores are collected, and a group priority ranking of each item is calculated and presented to the panel. The ranking is followed by open discussion of both the panel process and the results. If discussion indicates that some rankings may have changed, the ranking process can be repeated. When the evaluation is to include more than one question, a second question can be asked at this point, and the process repeated. The final step in the process is to make available to all panel members copies of the report produced as a result of the panel process. This is done to allow panelists to correct factual errors on the part of the researchers, and to inform panelists of the results of their efforts.
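The fifth step's group priority ranking can be sketched in code. The paper does not specify how the index-card choices are scored, so this Python sketch assumes a common 3/2/1 weighting (first choice worth three points, second two, third one); the function name group_priority is our own hypothetical label.

```python
from collections import defaultdict

def group_priority(ballots, points=(3, 2, 1)):
    """Aggregate panelists' top-three choices into a group ranking.

    Each ballot lists one panelist's most important, second most
    important, and third most important items, in that order. The
    3/2/1 point weighting is an assumption; the paper leaves the
    scoring rule unspecified. Returns (item, score) pairs sorted
    from highest to lowest total score.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        for rank, item in enumerate(ballot[:len(points)]):
            scores[item] += points[rank]
    # Highest total first; ties keep first-seen order (stable sort).
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

For instance, three ballots of ["water", "jobs", "roads"], ["jobs", "water", "clinics"], and ["water", "clinics", "jobs"] would rank water first with 8 points, then jobs with 6, clinics with 3, and roads with 1; the ordered list is then presented back to the panel for discussion and, if needed, a repeat of the ranking.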

We have used this procedure as part of an evaluation of health and nutrition projects funded by the Inter-American Development Bank. A variety of projects in Trinidad and Tobago, Costa Rica, Guatemala, and Colombia were evaluated (Dietz & Pfund, 1983; Inter-American Development Bank, 1985). In general the technique worked quite efficiently across


a wide range of program types, definitions of experts, and settings. The only problem occurred when members of a panel were illiterate, as was the case in a few community expert panels. Since the process depends on writing to aid panelists' memories, when a member of a panel cannot use this aid, the process is much less efficient. In such circumstances we assigned a literate aide to each panelist, but the problem seems more one of process than of writing per se. When a person cannot write, he or she tends to think aloud, and other group members comment on these statements. In such circumstances one of the major benefits of the structured group process, the independent generation of ideas, is converted into an unstructured discussion. From our experience to date, we conclude that the technique should not be used among the illiterate.

The procedure also was used in an evaluation of urban development projects by the Inter-American Development Bank. Because of the complex nature of these projects and the number of different components in each urban project, the expert panel method was used with members of a community served by a sites and services sub-project, the beneficiaries of a project to promote micro-enterprises, and the users of a community center, all in Colombia. In both this work and in the evaluation of the health and nutrition projects, most problems identified by panels revolved around inadequate sociocultural assessments at the time projects were planned. We should note that every time we held a panel meeting with community experts, we discussed the results with the agency in charge of the program. Although the goal of our work was to learn from the experience for future program planning, and not to correct the present situation, in several instances action was initiated to correct some of the problems identified by the community experts.

The major advantage of the technique is that it can be used in such a variety of settings, and that it provides a wealth of information at a cost far below what a conventional evaluation of even a single project using traditional methods would require. This is particularly important for large development agencies that must evaluate many projects operating in diverse countries, each with a unique sociocultural milieu. In many ways the use of expert panels is a compromise between the use of quantitative experimental or quasi-experimental designs and reliance on informal assessments or evaluations. Certainly a carefully conducted quantitative impact assessment or evaluation can provide information that is more precise than the expert panel technique described here. When resources are sufficient to conduct a more elaborate quantitative analysis, the expert panels are useful for identifying issues that are salient and worthy of more detailed treatment, serving as a "scoping" procedure.


DIRECTIONS FOR FURTHER DEVELOPMENT

The expert panel method has many advantages as a tool for impact assessment and development project evaluation, but it is subject to the criticism that the results it produces may not be valid in light of the threats to validity articulated in Campbell and Stanley (1963). The method presumes that experts, as defined within the process, have insights into the impacts of a development project, and that those insights are elicited by the group process. Our experience makes us confident that the technique does identify important impacts. But ultimately the only way in which the validity of the expert panel approach can be demonstrated is to compare it with other methods. Since experimental and quasi-experimental designs are intended to estimate the magnitude of impacts that have already been identified, while the expert panel approach is intended to identify impacts, direct comparisons are difficult. One cross-validation approach would be to use the expert panel method to identify impacts and then use quasi-experimental methods to estimate the magnitude of those impacts and thus confirm that the experts' views reflect reality.

Because we have focused on impact identification, which does not require a judgment regarding the overall merits of a program, we have not dealt with one of the most critical intellectual and practical problems in evaluation and impact assessment, the problem of valuation. Projects produce a diversity of impacts, each measured in its own units. Even when the magnitude of all impacts can be quantified, there is no fully justified method for commensurating the multiplicity of impacts. And without a common metric there is no way to sum impacts into an overall judgment of the merits of a project. In the context of social impact assessment, this problem has been addressed by several writers (Frey, 1986; Freeman, 1986; Freeman, Frey, & Quint, 1982; Dietz, 1987), but no consensus has emerged and further work is badly needed.

We have turned to the impact assessment literature for a method to conduct development program evaluation because we view impact assessment and evaluation as two stages in the policy process. Ideally, the policy process should be one of adaptive planning, in which an initial proposal is subjected to an impact assessment, modified, re-assessed, and so on until an acceptable proposal is reached (Bridger, 1986). Once implementation of the project begins, evaluation studies should be conducted on a regular basis to provide guidance. These evaluations will be informed by the impact assessments, while the results of the ongoing evaluations can provide an empirical base for the forecasting required in the impact assessment process. In turn, the evaluations of several projects provide generic guidance regarding general approaches to development which can guide the process of planning. At present, the processes of planning, impact assessment, implementation, and evaluation are rather disjointed, each linked to


the others only in a haphazard fashion in most agencies. We hope that in the future these processes are seen as subsets of a larger, more adaptive policy process (cf. Holling, 1978).

NOTES

1. We refer to all development activities as projects for simplicity. The discussion applies equally to projects, programs, and policies.

2. Wandesforde-Smith has noted that the use of terms like "success," "useful," and "perform well" in this context is somewhat ambiguous. Our operational definitions of success, utility, and good performance are that the methods we suggest can be implemented in the field without great difficulty, produce evaluations of projects that have face validity to individuals knowledgeable about the projects, and generate information that can be translated into recommendations perceived as useful by those who must make policy decisions regarding such projects.

REFERENCES

Bridger, G. (1986). Rapid project appraisal. Project Appraisal, 1, 263-265.

Campbell, D.T., & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cramer, J.C., Dietz, T., & Johnston, R. (1980). Social impact assessment of regional plans. Policy Sciences, 12, 61-82.

Delbecq, A.L., Van de Ven, A.H., & Gustafson, D.H. (1975). Group techniques for program planning. Glenview, IL: Scott, Foresman.

Dietz, T. (1984). Social impact assessment as a tool for rangelands management. In National Research Council (Ed.), Developing strategies for rangelands management. Boulder, CO: Westview.

Dietz, T. (1987). Theory and method in social impact assessment. Sociological Inquiry, 57, 54-69.

Dietz, T., & Pfund, A. (1983). An evaluation of the Trinidad and Tobago Health and Nutrition Project. Washington, DC: Inter-American Development Bank.

Finsterbusch, K., & Van Wicklin, W.A., III. (1988, in press). Unanticipated consequences of A.I.D. projects: Lessons from impact assessment for project planning. Policy Studies Journal.

Freeman, D.M. (1986). Value judgement and social impact assessment: Strategic alternatives and their problems. Paper presented at the Annual Meeting of the Midwest Sociological Society, Des Moines, IA.

Freeman, D.M., Frey, R.S., & Quint, J.M. (1982). Assessing resource management policies: A social well-being framework with a national level application. Environmental Impact Assessment Review, 3, 59-73.

Frey, R.S. (1986). Assessing the social impacts of alternative water management policies in southwestern Kansas. Paper presented at the Annual Meeting of the Midwest Sociological Society, Des Moines, IA.

Holling, C.S. (Ed.). (1978). Adaptive environmental assessment and management. NY: Wiley.

Hoole, F. (1978). Evaluation research and development activities. Beverly Hills, CA: Sage.

Inter-American Development Bank. (1985). Evaluation report on health and nutrition activities in the Inter-American Development Bank (RE-127). Washington, DC: Inter-American Development Bank.