
Meta Evaluation



Page 1: Meta Evaluation

Meta-evaluation

Prepared for the educational evaluation course by Mohsen Sharifirad

Course Instructor: Prof. M. Chizari

Page 2: Meta Evaluation

Note the variant forms “metaevaluation,” “meta evaluation,” and “meta-evaluation.”

A synonym for metaevaluation is “review evaluation.” Be careful when searching: the term returns many of the same results as “meta-analysis,” which is a different concept.

Meta-evaluation: Some Tips

Evaluations themselves need to be evaluated.

Page 3: Meta Evaluation

What is Meta-evaluation?

Michael Scriven (1991) described meta-evaluation in his Evaluation Thesaurus: “Meta-evaluation is the evaluation of evaluations—indirectly, the evaluation of evaluators—and represents an ethical as well as a scientific obligation when the welfare of others is involved” (p. 228).

He adds that meta-evaluation should be conducted by the evaluator as well as by an external entity.

Page 4: Meta Evaluation

Types of Meta-evaluation

Proactive meta-evaluation is designed to help evaluators before conducting an evaluation.

Concurrent meta-evaluation is designed to take place alongside an evaluation, rather than before or after it.

Retroactive meta-evaluation is designed to help audiences judge completed evaluations.

(Stufflebeam and Shinkfield, 2007; Carl E. Hanssen et al., 2008)

Page 5: Meta Evaluation

Types of Meta-evaluation

Proactive metaevaluations are needed to help evaluators focus, design, budget, contract, and carry out sound evaluations. Retroactive metaevaluations are required to help audiences judge completed evaluations.

In the evaluation literature, these two kinds of metaevaluation are labeled formative metaevaluation and summative metaevaluation, respectively.

(Stufflebeam and Shinkfield, 2007)

Page 6: Meta Evaluation

Meta-evaluation

The concurrent meta-evaluation differs from both formative and summative meta-evaluations (as defined earlier by Stufflebeam & Shinkfield, 2007) because concurrent meta-evaluation

(a) is conducted simultaneously with the development and implementation of a new evaluation method; (b) has both formative and summative components; (c) is comprehensive in nature; and (d) includes multiple, original data collection methods.

Page 7: Meta Evaluation

Meta-evaluation Standards

Patton (1997) suggested that questions to focus a meta-evaluation should include:

• Was the evaluation well done?

• Is it worth using?

• Did the evaluation meet professional standards and principles?

Guidance for conducting meta-evaluation using evaluation standards is found throughout the evaluation literature.

Page 8: Meta Evaluation

Meta-evaluation Standards

Scriven (1991) argued that meta-evaluation can be either formative or summative and can be aided through the use of checklists or standards such as The Program Evaluation Standards (The Joint Committee on Standards for Educational Evaluation, 1994).

Page 9: Meta Evaluation

Meta-evaluation

The Joint Committee on Standards for Educational Evaluation (1994) prescribed: “The evaluation itself should be formatively and summatively evaluated against these and other pertinent standards, so that its conduct is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.”

Page 10: Meta Evaluation

Meta-evaluation Example

Concurrent Meta-Evaluation: A Critique


Page 12: Meta Evaluation

Meta-evaluation Example

META-EVALUATION OF QUALITY AND COVERAGE OF USAID EVALUATIONS 2009-2012

EXECUTIVE SUMMARY

Definition: The U.S. Agency for International Development (USAID) is an independent federal agency of the United States that provides aid to citizens of foreign countries. Types of aid provided by USAID include disaster relief, technical assistance, poverty alleviation, and economic development.

Page 13: Meta Evaluation

Meta-evaluation Example

EXECUTIVE SUMMARY

Context and Purpose

This evaluation of evaluations, or meta-evaluation, was undertaken to assess the quality of USAID’s evaluation reports. The study builds on USAID’s practice of periodically examining evaluation quality to identify opportunities for improvement. It covers USAID evaluations completed between January 2009 and December 2012.

Page 14: Meta Evaluation

Meta-evaluation Example

Meta-Evaluation Questions

The meta-evaluation on which this volume reports systematically examined 340 randomly selected evaluations and gathered qualitative data from USAID staff and evaluators to address three questions:

Page 15: Meta Evaluation

Meta-evaluation Example

Meta-Evaluation Questions

1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

Page 16: Meta Evaluation

Meta-evaluation Example

Meta-Evaluation Methodology

The framework for this study recognizes that undertaking an evaluation involves a partnership between the client for an evaluation (USAID) and the evaluation team. Each party plays an important role in ensuring overall quality.

• Information on basic characteristics and quality aspects of 340 randomly selected USAID evaluation reports was a primary source for this study.

• Quality aspects of these evaluations were assessed using a 37-element checklist.

Page 17: Meta Evaluation

Meta-evaluation Example

Evaluation Quality Findings

Question 1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

Over the four years covered by the meta-evaluation, there were clear improvements in the quality of USAID evaluation reports. On 25 of 37 (68 percent) evaluation quality factors rated, evaluations completed in 2012 showed a positive net increase over 2009 evaluations in the number that met USAID quality standards on those factors.

Page 18: Meta Evaluation

Meta-evaluation Example

Question 1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

Ratings on several factors improved by more than 10 percentage points, including whether findings were supported by data from a range of methods, study limitations were identified, and clear distinctions were made between findings, conclusions, and recommendations. Improvements in evaluation quality factor ratings did not generally rise in a linear fashion, but instead fluctuated from year to year.


Page 19: Meta Evaluation

Meta-evaluation Example

Question 1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

Not all evaluation quality factor ratings improved over the study period. MSI, in addition to examining changes over time for the study sample as a whole, assessed changes between 2009 and 2012 on a regional basis, by sector, and for a subset of USAID Forward evaluations to which the Agency, after July 2011, paid special attention from a quality perspective. A t-test was used to compare USAID Forward evaluations with other evaluations; its results were not significant.
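A comparison like the t-test mentioned above can be sketched in a few lines. The summary does not say which t-test variant MSI used, so the Welch (unequal-variance) form below is only one plausible choice, and the scores are invented for illustration.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    n_a, n_b = len(sample_a), len(sample_b)
    # variance() is the sample (n-1) variance, as Welch's formula requires
    se = math.sqrt(variance(sample_a) / n_a + variance(sample_b) / n_b)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical overall-quality scores for USAID Forward vs. other evaluations
forward_scores = [6.1, 5.8, 6.4, 5.9, 6.2]
other_scores = [5.7, 6.0, 5.9, 6.1, 5.8]
print(round(welch_t(forward_scores, other_scores), 2))  # 1.41
```

With these made-up samples the statistic falls well below conventional critical values, i.e., no significant difference, which is the same qualitative outcome the study reports.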


Page 20: Meta Evaluation

Meta-evaluation Example

Question 2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

Four clusters of evaluation ratings were used to determine where USAID excels on evaluation quality and where improvements are warranted. Evaluation quality factors on which 80 percent or more USAID evaluations met USAID standards were coded as “good.” Of 37 evaluation quality factors examined, 24 percent merited the status designation “good.”


Page 21: Meta Evaluation

Meta-evaluation Example

Question 2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

Quality standards for which 50 percent to 79 percent of evaluations were rated positively were designated as “fair.” USAID performance was either “good” or “fair” on half of the factors rated. On the remaining evaluation quality factors, USAID performance was deemed “marginal” on 20 percent of those factors and “weak” on 32 percent.
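The four-level coding scheme can be expressed as a simple bucketing function. The “good” (at least 80 percent) and “fair” (50 to 79 percent) thresholds come from the text; the boundary between “marginal” and “weak” is not stated in the summary, so the 25 percent cut below is an assumption for illustration, as are the factor names and rates.

```python
def quality_band(pct_meeting_standard):
    """Code one evaluation quality factor by the share of reports meeting it.

    The "good" and "fair" thresholds are from the study; the 25% cut
    between "marginal" and "weak" is an illustrative assumption.
    """
    if pct_meeting_standard >= 80:
        return "good"
    if pct_meeting_standard >= 50:
        return "fair"
    if pct_meeting_standard >= 25:  # assumed boundary
        return "marginal"
    return "weak"

# Hypothetical compliance rates for a handful of the 37 rated factors
rates = {"findings_data_based": 85, "limitations_noted": 62,
         "methods_described": 40, "specialist_on_team": 20}
print({factor: quality_band(pct) for factor, pct in rates.items()})
```

Tallying the bands over all 37 factors would reproduce the kind of percentage breakdown reported above.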


Page 22: Meta Evaluation

Meta-evaluation Example

Question 2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

Among evaluation quality factors on which compliance was “weak,” MSI found that half addressed quality standards that had recently been introduced in USAID Evaluation Policy. Performance on these factors is likely to improve as familiarity with these new standards improves. Among factors rated weak, the most significant involve low levels of compliance with USAID’s requirement for the participation of an evaluation specialist on every evaluation team and its expectation that, wherever relevant, data on the results of USAID evaluations will be documented on a sex-disaggregated basis.


Page 23: Meta Evaluation

Meta-evaluation Example

Question 3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

On an overall evaluation quality “score” based on 11 of the meta-evaluation’s quality rating factors, USAID evaluations averaged 5.93 on a 10-point scale—with a mode of 7 points and a relatively normal distribution.
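Descriptive summaries like the mean and mode reported above are straightforward to compute; the sketch below uses invented 10-point scores, not the study’s data.

```python
from statistics import mean, mode

# Hypothetical 10-point overall quality scores for eight evaluation reports
scores = [7, 6, 5, 7, 4, 7, 6, 8]
print(mean(scores), mode(scores))  # 6.25 7
```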


Page 24: Meta Evaluation

Meta-evaluation Example

Question 3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

Statistical tests conducted using this overall score showed that USAID evaluations completed in 2012 were of significantly higher quality than those completed in 2009.


Page 25: Meta Evaluation

Meta-evaluation Example

Question 3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

MSI also found that evaluations reporting an evaluation specialist as a team member had higher overall quality scores than evaluations where an evaluation specialist was not reported to be involved. This finding was statistically significant at .05, .01, and .001 levels. Other comparisons were not found to be statistically significant.


Page 26: Meta Evaluation

Meta-evaluation Example

Conclusions

The overall picture of evaluation quality at USAID from this study is one of improvement over the study period, with strong gains emerging on key factors between 2010 and 2012.

The number of evaluations per year increased, and the quality of evaluation reports has improved. While this portrait is largely positive, the study also identified evaluation quality factors, or standards, that USAID evaluation reports do not yet meet.

Page 27: Meta Evaluation

Meta-evaluation Example

Conclusions

On several core evaluation quality standards—such as clear distinctions among evaluation findings, conclusions, and recommendations—performance was found to be below USAID standards. Other significant deficiencies included the small percentage of evaluations that indicated that an evaluation specialist was a member of the evaluation team, which USAID has required for the better part of a decade, and low ratings on the presence of sex-disaggregated data at all results levels—not simply for input-level activities. Low ratings were also found for several evaluation standards introduced in the 2011 Evaluation Policy, but this may simply reflect slow uptake or lack of awareness of the standards.

Page 28: Meta Evaluation

Meta-evaluation Example

Recommendations

The study offers three recommendations to improve evaluation reports in those areas that offer opportunities for improvement:

• Recommendation 1. Increase the percentage of USAID evaluations that have an evaluation specialist as a full-time team member with defined responsibilities for ensuring that USAID evaluation report standards are met, from roughly 20 percent as of 2012 to 80 percent or more.

Page 29: Meta Evaluation

Meta-evaluation Example

• Recommendation 2. Intervene with appropriate guidance, tools, and self-training materials to dramatically increase the effectiveness of existing USAID evaluation management and quality control processes.


Page 30: Meta Evaluation

Meta-evaluation Example

• Recommendation 3. As a special effort, in collaboration with USAID’s Office of Gender Equality and Women’s Empowerment, invest in the development of practitioner guidance materials specific to evaluation.

Of these three recommendations, the first is considered the most important for systematically raising the quality of evaluations across all sectors and regions. MSI’s second recommendation is intended to complement its first recommendation and encourage USAID to scale up evaluation management “good practices” already known within the Agency.
