



DOCUMENT RESUME

ED 343 513                                                          HE 025 380

TITLE           Accreditation, Assessment, and Institutional Effectiveness: Resource Papers for the COPA Task Force on Institutional Effectiveness.
INSTITUTION     Council on Postsecondary Accreditation, Washington, D.C.
SPONS AGENCY    Fund for the Improvement of Postsecondary Education (ED), Washington, DC.
PUB DATE        Jan 92
NOTE            75p.; For a related document, see HE 025 379.
AVAILABLE FROM  Council on Postsecondary Accreditation, One Dupont Circle, N.W., Suite 305, Washington, DC 20036 ($10.00).
PUB TYPE        Reports - Descriptive (141) -- Guides - Non-Classroom Use (055)
EDRS PRICE      MF01 Plus Postage. PC Not Available from EDRS.
DESCRIPTORS     *Accreditation (Institutions); *College Outcomes Assessment; Higher Education; *Institutional Evaluation; *Institutional Research; *Outcomes of Education; Postsecondary Education; Questionnaires; Research Methodology; *School Effectiveness

ABSTRACT
This resource document presents four papers that were prepared by members of the Task Force on Institutional Effectiveness of the Council on Postsecondary Accreditation concerning issues related to assessments of institutional effectiveness and the use of assessment in the process of institutional accreditation. The first paper, "Outcomes Assessment, Institutional Effectiveness, and Accreditation: A Conceptual Exploration" (Peter T. Ewell), deals with some of the basic principles and issues involved in the use of outcomes assessment in accreditation, explores some of the tensions and alternatives these create, and develops recommendations for resolution. The second paper, "Incorporating Assessment into the Practice of Accreditation: A Preliminary Report" (Ralph Wolff), reports the results of a survey of those responsible for assessment activities including their practices, their level of knowledge of assessment issues, and their perception of the effectiveness of their current outcomes assessment practices. The third paper, "Methods for Outcomes Assessment Related to Institutional Accreditation" (Thomas Hogan), reviews the various methods of outcomes assessment and discusses their advantages and limitations. The last paper (Trudy W. Banta) represents a selected 62-item bibliography of articles and documents about outcomes assessment in higher education. (GLR)

Reproductions supplied by EDRS are the best that can be made from the original document.




ACCREDITATION, ASSESSMENT AND INSTITUTIONAL EFFECTIVENESS

Resource Papers for the COPA Task Force on Institutional Effectiveness

Peter T. Ewell, Senior Associate, National Center for Higher Education Management Systems

Ralph A. Wolff, Associate Executive Director, Accrediting Commission for Senior Colleges and Universities, Western Association of Schools and Colleges

Thomas P. Hogan, Dean of the Graduate School, University of Scranton

Trudy W. Banta, Director, The Center for Assessment Research and Development, University of Tennessee-Knoxville

Council on Postsecondary Accreditation
Project on "Accreditation for Educational Effectiveness: Assessment Tools for Improvement"

Partially funded by a grant from the Fund for the Improvement of Postsecondary Education

January 1992


Introduction

The four papers included in this resource document were prepared by members of the Task Force on Institutional Effectiveness of the Council on Postsecondary Accreditation. The Task Force is one of three Task Forces integral to the Council's Project on "Accreditation for Institutional Effectiveness: Assessment Tools for Improvement," a project funded in part by the Fund for the Improvement of Postsecondary Education. The papers were prepared as background papers for the work of the Task Force. However, they are also relevant to the Project as a whole in that they provide background and suggestions for further study on issues related to assessment generally and the use of assessment in the process of institutional accreditation in particular.

At the first meeting of the Task Force in March of 1991 the chairperson, James Rogers, Executive Director of the Commission on Colleges of the Southern Association of Colleges and Schools, identified four areas growing out of the charge to the Task Force which required in-depth study as background for the work of the Task Force. The first of these areas involved exploring and conceptualizing more clearly the relation of outcomes assessment and analysis to institutional effectiveness and improvement in the context of accreditation. Peter T. Ewell of the National Center for Higher Education Management Systems agreed to develop a background paper on this topic and the result is the paper "Outcomes Assessment, Institutional Effectiveness, and Accreditation: A Conceptual Exploration." In the paper he deals not only with some of the basic principles and issues involved in the use of outcomes assessment in accreditation but also explores some of the tensions and alternatives these create and develops recommendations for resolution.

The second area involved consideration of the effectiveness in practice of assessment and assessment-related standards by institutional accrediting bodies. Ralph Wolff of the Accrediting Commission for Senior Colleges and Universities of the Western Association of Schools and Colleges undertook this assignment. As the basis for his evaluation, he developed a questionnaire which was sent to COPA-recognized institutional accrediting bodies to gather information on their practices, the level of knowledge of assessment issues of those primarily responsible for assessment activities, and their perception of the effectiveness of their current outcomes assessment practices. Dr. Wolff reports the results and their implications in "Incorporating Assessment Into the Practice of Accreditation: A Preliminary Report."

The need to develop some common understandings concerning methods appropriate to institutional assessment led to the identification of the third area for additional study. The result is the paper, "Methods for Outcomes Assessment Related to Institutional Accreditation," by Thomas Hogan, Dean of the Graduate School at the University of Scranton. In his paper, Dr. Hogan reviews the various methods of outcomes assessment and discusses their advantages and limitations. This paper will be of major value to institutions and programs as well as accrediting bodies.

Finally, the Task Force concluded that a selective basic annotated bibliography of books, articles and documents would be of great value to accrediting bodies and institutions. Trudy W. Banta, Director of the Center for Assessment Research and Development of the University of Tennessee-Knoxville, agreed to undertake this assignment. The bibliography is designed to provide those references that will give accrediting agency staff and those with whom they consult a comprehensive overview of the current status of the field of postsecondary student outcomes assessment and the assessment of institutional effectiveness. The paper produced by Dr. Banta, "Selected References on Outcomes Assessment in Higher Education: An Annotated Bibliography," serves that function extraordinarily well.

These papers greatly facilitated the discussions of the Task Force and the preparation of the "Report of the Task Force on Institutional Effectiveness." We believe that they also constitute a major contribution to the literature on assessment and that they will be of direct help to accrediting bodies and institutions in reviewing or developing their own assessment activities. On behalf of the Council on Postsecondary Accreditation, the Project and the Task Force, we express our thanks and appreciation to Drs. Ewell, Wolff, Hogan and Banta for their work in preparing these papers.

Marianne R. Phelps
Senior Associate
Project Director

Richard M. Millard
Project Coordinator


Table of Contents

Introduction

Outcomes Assessment, Institutional Effectiveness, and Accreditation: A Conceptual Exploration, Peter T. Ewell 1

Some Key Concepts and Their Relations 1

A Possible Framework for Discussion 2
Figure 1: A Conceptual Typology of Terms Relating to Institutional Outcomes, Effectiveness, and Accreditation 2
Outcomes Assessment as a Concept 4
Individual or Institutional Unit of Analysis 4
Indicators or Standards 4
Attained Outcome or Student Growth and Development 5
Making Judgments or Making Improvements 5
Institutional or Constituent Perspectives 6
Long-Term or Short-Term Perspectives 6

Institutional Effectiveness as a Concept 7
General Goal Attainment or Specific Institutional Attributes 7
Any Mission or a Specifically Educational Mission 7
Any Outcomes or Specific (Common) Outcomes 8

Institutional Accreditation as a Concept 9
Validating "Quality" or Validating the Quality-Assurance Process 9
Certifying Quality or Stimulating Improvement 10
Source of Legitimacy: Academic Community or Society and the Wider Public 10
Focus of Concern: The Institutional Community as a Whole or the Sum of the Disciplines and Subunits that Comprise it 10

Figure 2: An Evolving Role for Accreditation? 11

Some Operational Dilemmas 12

How to Avoid Assessing Everything 12

How to Arrive at Reasonable Expectations Regarding Outcomes Assessment in the Process Itself 13

How to Best Communicate the Results of Assessment 13

How to Get Institutions to Take the Process Seriously 14

How to Sustain Institutional Action and Attention to Assessment Over Time 15

How to Link Accreditation's Role in Outcomes Assessment with Those of Other Processes 15

Some Recommended Principles for Future Policy 16

Incorporating Assessment into the Practice of Accreditation: A Preliminary Report, Ralph A. Wolff 19

Design of the Survey 19


Survey Results 20
Respondents' Personal Knowledge and Experience with Assessment 20
Knowledge and Experience of Association Staff and Commission Members 20
Standards and Guidelines on Assessment 21
Team Training Workshops 21
Self Studies 22
Team Visits 22
Commission Actions 22
Personal Views of Respondents 23
Additional Support Desired 24
Analysis and Conclusions 24
Tabulated Responses 26

Methods for Outcomes Assessment Related to Institutional Accreditation, Thomas P. Hogan 37

Preface 37
Introduction 37
The First Key Question: Final Status or Value Added 39
The Second Key Question: Final Answers or Meaningful Processes 39
Program Effectiveness vs. Institutional Effectiveness 41
The Importance of Goals 41
A Note on Reliability and Validity 42
Methods for Assessing Outcomes 42
Tests of Developed Abilities, Knowledge, and Skills 43
Areas to be Covered 43
Sources of Development 43
Externally Developed Tests of General Education 44
Locally Developed Tests of General Education 45
Externally Developed Tests of Specific Areas 45
Locally Developed Tests of Specific Areas 45
A Special Note on Portfolios 45
General Strengths and Weaknesses of Tests 45
Strengths/Advantages of Externally Developed Tests 46
Weaknesses/Disadvantages of Externally Developed Tests 47
Strengths and Weaknesses of Locally Developed Tests 47
Surveys, Questionnaires 48
Sources of Development 49
Target Groups 49
Format 49
Content 50
Strengths/Advantages of Surveys 50
Weaknesses/Disadvantages of Surveys 52
Institutional Records 52
Strengths/Advantages of Institutional Records 53
Weaknesses/Disadvantages of Institutional Records 53


Concluding Note on Institutional Records 54
The Institutional Assessment Plan 54

Selected References on Outcomes Assessment in Higher Education: An Annotated Bibliography, Trudy W. Banta 57

Foreword 57
Assessment and Accreditation 57
Measurement Issues 58
State Policy Issues in Assessment 60
Assessment in the Major 61
Assessment in General Education 62
Classroom Assessment Techniques 63
Developing Goals for Assessment 63
Assessment of Student Development 63
Assessment in Community Colleges 64
Books/Collections/Review Articles on Assessment Topics 64
Assessment Bibliographies 67
Assessment Periodical 67


Outcomes Assessment, Institutional Effectiveness, and Accreditation: A Conceptual Exploration

Peter T. Ewell*

In the last three years, most institutional accrediting bodies have adopted a policy on the assessment of institutional outcomes. Some, like the Southern Association of Colleges and Schools (SACS) and the Western Association of Senior Colleges (WASC), have made "institutional effectiveness" a distinct standard for accreditation. Others, including the North Central Association (NCA), the Middle States Association, and the New England Association, have made the assessment of student and institutional outcomes a part of more general standards regarding the achievement of institutional purpose. In part, these actions reflect an original and historic interest in the quality of educational results on the part of the accrediting community. In part, however, they constitute a reaction to a set of recent events, particularly the rapid development of a national "assessment movement" and the growing insistence of government authorities (both state and federal) that colleges and universities become more accountable to the public for their educational products.

Because of this rapidly developing background, there has been little opportunity for members of the institutional accrediting community to develop from first principles a systematic conception of the proper role of assessment in the process of which they are a part. This paper represents an initial attempt to address this condition. It contains two main sections. The first presents a discussion of the relationships among key terms: "outcomes assessment," "institutional effectiveness," and "accreditation." A primary premise is that such terms currently have multiple meanings and it is difficult to move forward until these are systematically recognized and resolved. The second section presents a set of more specific operational dilemmas that typically arise whenever "assessment" and "accreditation" come together. This list is based on the emerging experience of both institutions and associations, and each is briefly discussed with respect to origins, specific manifestations, and suggested remedies. A final section proposes some initial recommendations for action on the part of institutional accrediting bodies, based on both prior discussions.

Some Key Concepts and Their Relations

A challenge facing any commentator on assessment is one of definition. Through popular usage, the term has come to mean so many different things in different contexts that clear communication is threatened. The same is true of such terms as "quality" and "institutional effectiveness." A first step in determining the proper role of assessment in institutional accreditation, therefore, is to explore more specifically exactly what we are talking about.

Two common difficulties here are distinguishing more clearly between results and processes, and better specifying the intended unit of analysis. "Institutional effectiveness," for instance, refers to a particular property of an institution, while "institutional assessment" refers to the means by which we determine that this particular property is present. Yet the primary center of gravity of many current discussions of "institutional effectiveness" in accreditation circles (particularly, of course, in the southeast) has been much more narrowly focused: what measures or procedures will provide acceptable evidence of student learning? Posing the problem in this way both shifts the concept's domain and its original unit of analysis. At the same time, the key terms in this relation themselves need further conceptual clarification. Given this challenge, this section first describes some of the overall conceptual difficulties associated with relating the three terms "assessment," "institutional effectiveness," and "accreditation"; it then goes on to explore some particular difficulties associated with each component concept.

1. A Possible Framework for Discussion. Figure 1 presents a first attempt to sort through some of this terminology. On one dimension, it distinguishes the particular properties of interest ("domains") from the process used to determine the degree to which such properties are present. Properties are chiefly results or conditions: attributes that apply to institutions, programs, or individuals. "Processes" consist of various forms of evidence-gathering used to ascertain or to certify that these results or conditions exist. The diagram's second dimension emphasizes the fact that this distinction can operate simultaneously at many levels and that considerable conceptual confusion can arise from the fact that similar terms are used to describe quite different things.

Figure 1
A Conceptual Typology of Terms Relating to Institutional Outcomes, Effectiveness, and Accreditation

Level of Conceptual Specificity      Conceptual Domain (A Result or Condition)    Process to Determine Current State of Domain
1. Overall "Quality"                 "Accreditable Standard of Quality"           Accreditation Process
2. Mission/Goal Attainment           "Institutional Effectiveness"                "Assessment (1)"
3. Educational (Student) Results     "Student Outcomes"                           "Assessment (2)"
4. Cognitive (Learning) Results      "Student Learning Outcomes"                  "Assessment (3)"

As the figure indicates, at least four such levels, arranged in a descending hierarchy of specificity, are usually present. The first refers to the "quality" of an institution as a whole, public validation of which is a primary object of accreditation. Embedded in the process of self-study and peer review, the precise content of this domain is appropriately broad and deliberately vague; typically, however, it embraces under a single conceptual umbrella such diverse aspects of a college or university as the appropriateness and clarity of its mission, the adequacy of its resources and underlying physical and organizational structures, and the results that it achieves. Mission or goal attainment, in contrast, is a far more circumscribed concept, noted explicitly as "institutional effectiveness" in SACS and WASC standards, and implied by other regional bodies through statements such as "the institution is achieving its purposes" (North Central Association Evaluative Criterion Three, 1988-90). While the unit of analysis here remains the institution as a whole, this level of specificity embraces outcomes to the virtual exclusion of resources and processes.

Moving to the third level of specificity (educational results) is critical, as it entails both further circumscription of the conceptual domain and an implied shift in the operational unit of analysis. Though they are a partial result of what institutions do, educational results actually happen to students. Investigating them will necessarily involve looking at individuals, though the purpose remains to aggregate obtained results to inform wider judgments about how well programs and institutions are functioning. Moving to the fourth and final level is then straightforward: only cognitive results of collegiate instruction are considered, but the operational assessment processes required are very similar.


A major current conceptual problem of assessment in the institutional accreditation process is that levels two through four are not well distinguished and it is not clear which is the primary focus of the process. Partly this is because educational results, and particularly student learning outcomes, represent the one clear common denominator among colleges and universities with otherwise vastly different missions and student clienteles. All institutions should therefore pay some attention to them in approaching accreditation. Partly it is because obtained results (and particularly student learning outcomes) have historically been neglected in accreditation, and are consequently now being particularly emphasized in reaction. The term "assessment," moreover, appears currently to be used almost interchangeably across all three levels. As noted in Figure 1, it is currently applied to the process of gathering evidence about institutional goal attainment, the process of documenting student outcomes of all kinds, and the process of determining precisely what students have learned.

Given this situation, one step toward clarifying the relationship between assessment and institutional accreditation might be to better emphasize in all communications and guidelines that level two constitutes accreditation's center of attention. "Institutional effectiveness" requires active demonstration that claimed results are being achieved, and this means that concrete evidence of goal achievement must be visibly included in the results of self-study. "Assessment (1)" therefore requires actual evidence to be presented during self-study, while "Assessment (2)" and "Assessment (3)" can be primarily presented and examined as processes. Consistent usage might then reinforce the notion that the institution remains the primary unit of analysis in institutional accreditation: all "assessment" should clearly support this common purpose regardless of its operational level of data collection. Consistent usage would also help to clarify the point that while educational results are indeed important as the common products of all college and university activity, they are not the only aspects of institutional goal-attainment that should be investigated. Evidence of attainment around other goals should also, where appropriate, be provided.

According consistent conceptual primacy to level two requires institutional accrediting bodies to make two deceptively simple further statements about what should happen in the accreditation process. First, accreditation guidelines should emphasize that goal attainment, or "institutional effectiveness," constitutes only a part of what's required under an "accreditable standard of quality." Impressive outcomes may be purchased in the short term at the price of deferred attention to the maintenance of key institutional assets, imperiling long-term viability. Or goals themselves, though attained, may be inappropriate or contradictory.

Second, such statements should make clear that when it occurs in conjunction with accreditation, student outcomes assessment must clearly focus upon establishing the institution's particular contribution to the outcomes achieved. Simply running a testing program for a variety of operational purposes is not enough. This point speaks to a particular operational confusion that appears to be currently troubling accreditation in the realm of assessment: confounding evidence-gathering procedures established to periodically monitor and improve instruction with those intended to certify student attainment (such as exit examinations) or to accord appropriate instructional treatments to individual students (such as placement examinations). Because institutions with the latter are "practicing assessment," it is sometimes asserted that they are "investigating effectiveness" though the results are never aggregated to the program or institutional level and are never used to actively guide instructional improvement. While programs such as these are surely important, their proper place in the accreditation process is in the analysis of an institution's curriculum and instructional practices, not in an assessment of its effectiveness.

2. Outcomes Assessment as a Concept. Reflecting its evolution in higher education generally, outcomes assessment in the accreditation community is a relatively recent concern. While most institutional accreditation standards before 1985 in some way referenced the achievement of institutional purposes, few in practice required institutions to provide explicit evidence of educational goal achievement. Since that time, in parallel with assessment's growing prominence in state and federal higher education policy, virtually all have adopted "assessment" requirements. But in many cases these requirements were adopted hastily. Accrediting bodies were in part concerned that the process of voluntary accreditation not be left behind in a political world that was increasingly emphasizing concrete results as the proof of "quality." In this press to adopt standards, however, important conceptual issues inherent in the concept of assessment remained obscure. To evolve an appropriate approach for the future, the institutional accreditation community must examine each of these issues systematically, and must make appropriate collective choices about the appropriate path to take.

Six major conceptual tensions embedded in assessment appear particularly relevant. Each describes a continuum of choices about how to articulate, recognize, and enforce assessment standards in the process of self-study. In some cases, appropriate choices for accreditation seem clear, while in others, the determination of the best location on the continuum will depend upon institutional mission and context. Each issue is listed below, together with a brief discussion of its implications for self-study:

• Individual or institutional unit of analysis? The term "assessment" is currently being applied to widely disparate units of analysis including states, institutions, programs, and individual students. As a result, an enormous range of practices within accreditation currently occur under a common label. Assessment practices submitted as evidence of compliance with SACS "institutional effectiveness" standards and NCA "achievement of institutional purpose" standards can vary from occasional large-scale, sample-based alumni surveys, through basic skills placement examinations regularly administered as a matter of policy to designated students, to isolated "classroom research" projects designed and executed by individual faculty members and academic departments for their own purposes. All such practices are exemplary and should be appropriately recognized. But not all constitute evidence of institutional effectiveness per se. Accreditation standards and practice, therefore, should primarily emphasize processes intended to provide evidence of institutional goal attainment. This is the level at which concrete evidence should be present and examined in self-study. Individual and classroom assessment, if present, should be noted and discussed as processes and included in the self-study's description of curriculum and academic practice.

• Indicators or standards? Partially as a result of confusions regarding unit of analysis, the use of assessment evidence in accreditation often entails a further difficulty: are stated outcomes intended as minimum standards or instead as indicative of broad mission attainment? In the first case, the focus of assessment must be to determine that all students in fact meet the standard, for example, in the form of a given level of assessed writing competence or a knowledge-based major field exit examination. In the second case, the focus of assessment is to provide evidence that the "typical" student or graduate embodies a set of characteristics that the institution seeks to produce more broadly, though particular individuals may vary considerably in the degrees to which these characteristics are present and the ways in which they are manifested. (The situation is similar to that of stated admissions requirements with respect to test scores. Minimum cutoffs may exist below which the institution will not accept an applicant, but the scores of admitted students may vary considerably around a stated average.) Accreditation may require both kinds of assessment, but in different places and for different purposes. Where an institution states minimum standards for learning outcomes (and there ought always to be some of these present), it has an obligation to demonstrate that it has in place an adequate mechanism to determine if outcomes that meet these standards are in fact being obtained, and must describe this mechanism as part of its educational program. For broader outcomes goals (for instance, those typically contained in a mission statement), what is needed in self-study is evidence that aggregate educational results are consistent with these goals. Evidence of this kind can appropriately be collected periodically and may show substantial variation across students; what's important for "institutional effectiveness" is what the central tendency shows.

• Attained outcome or student growth and development? A slightly different issue is the degree to which, through accreditation, institutions should be held accountable for certifying stated outcomes (regardless of the degree to which they may have actually helped to produce them) or for "developing talent" through their activities (regardless of the final levels of achievement reached by students at the end of their programs). This is an underlying issue in assessment itself, embodied in an ongoing debate about the appropriateness of "value-added" studies. Most discussion here occurs at the individual unit of analysis and at this level it indeed represents a legitimate analytic choice: should assessment provide evidence of whether students possess key skills at graduation or should it instead be focused upon determining the institution's contribution to the development of these skills? The answer to this question will depend entirely upon the purpose of assessment. As evidence of "institutional effectiveness," however, both are generally held to be important and the assessment process should reflect this. As part of self-study, therefore, institutions should be able to present acceptable evidence both that broad goals are in fact typically being attained and that the actions of the institution had something to do with them. This need not (and in most cases ought not) imply a "test-retest" design as is sometimes advocated. But it does require some consideration when discussing outcomes of both entering student abilities and of actual educational processes. To be really adequate, in fact, it requires some demonstration of the way educational processes connect to intended outcomes, both in concept and in fact.

• Making judgments or making improvements? Advocates of institutional assessment in higher education increasingly emphasize its contributions to instructional improvement. Engaging in the process itself, it is argued, can result in such major benefits as clearer goals and better common understanding of instructional activities across programs and institutions, often regardless of what is found. Partly as a result, institutional assessment practice is increasingly shifting toward locally-developed instruments and approaches in which such benefits can best be realized. Accountability-oriented processes, in contrast, tend to rest upon common standards and assessment procedures often embodied in a standardized examination. Because the process of institutional accreditation is simultaneously formative and summative, managing this dichotomy can pose particular difficulties, and ideally, evidence of both kinds should be required. "Achievement of purpose" demands that the institution build a convincing case that results are consistent with intentions. At the same time, the process of assessment should be one in which many members of the campus community are involved and, through such engagement, should be one through which the institution can demonstrably learn.

• Institutional or constituent perspectives? Assessment practitioners are also increasingly being forced to recognize that judgments about goal attainment may differ considerably depending upon where one sits. What a faculty thinks is important for a given graduate may or may not correspond to what an employer thinks is necessary or, indeed, to what a student actually wants. How much legitimacy should each of these perspectives be accorded in assessment, particularly when it is conducted in the context of accreditation? Appropriate resolutions may depend largely on the content of the institution's mission and its principal indicated constituencies. Institutions whose missions visibly embody a "service" ethic (for example, community colleges or regional public institutions) should arguably take special care to demonstrate that the perspectives of students and employers have been taken into account and to what degree these "customers" are satisfied with what they have received. Entering student goal studies, information about student and alumni satisfaction, and employer needs assessment should be particularly prominent in this respect. Because no sector of higher education exists in a vacuum, moreover, all institutions ought appropriately to recognize the existence of external clients and how well they are served, be they students, other postsecondary institutions, or the public at large.

• Long-term or short-term perspectives? Many of the most important outcomes of higher education, it is commonly claimed, take a long time to become manifest. Determining an appropriate time-frame for assessment is therefore often an issue. Institutions may claim, for example, that they should not be held directly accountable for "life-long learning" until sufficient time has elapsed for this phrase to become meaningful. On the other hand, they may contend that it is unfair to look at graduate abilities long after graduation because a variety of additional experiences may have mitigated or modified what was once learned in college. From the standpoint of accreditation, three points with respect to this issue seem clear. First, as in so many such issues, the institution's own goal statements must be taken as a guide. Phrases such as "job-entry skills" or "lifelong learning" in themselves suggest appropriate timeframes and the institution's assessment procedures should reflect them. Secondly, the timeframe chosen should reflect the primary intent of self-study as part of a process of institutional improvement. Very little of utility for current operations can generally be learned from an assessment of students who attended the institution long ago under very different contextual conditions. As a consequence, whatever the outcome, the timeframe chosen for its assessment should demonstrably result in information that is relevant to current circumstances. Finally, because instruction is a process that occurs over time, observations about student development over time are particularly salient. As a result, assessments occurring at more than one point or as a product of longitudinal studies may be especially useful.

Choosing an appropriate location on each of these six dimensions must precede discussions of assessment methodology in the context of institutional accreditation. In the past, both institutions and accrediting agencies have tended to move too quickly to technical issues when discussing assessment, and have invested too little time in determining what should be done about these more basic conceptual issues. All should be recognized more explicitly in regional accreditation guidelines and, where needed, in better directions to visiting teams, and in self-study workshops.

3. Institutional Effectiveness as a Concept. Like the concept of "assessment," the history and conceptual clarity of the term "institutional effectiveness" are murky. While the term is now explicitly recognized in the standards of at least two regional accreditors, and is implied by the guidelines of all others, it continues to mean quite different things to different people. In parallel with the notion of assessment, three particular conceptual trade-offs have a bearing on the process of institutional accreditation and must be specifically addressed:

• General goal attainment or specific institutional attributes? While goal attainment remains the center of gravity for accreditors when discussing the term "institutional effectiveness," it is important to remember that researchers in both higher education and in organizational theory identify at least three additional dimensions of the term, all of which are in some way relevant to the accreditation process. Managerial process views of effectiveness stress the organization's concern with efficiency through the use of such "best practice" managerial techniques as clear organizational structures, lines of communication, efficient allocation and use of resources, and availability and use of information. Concern for such processes pervades most self-study guidelines and, in the "institutional research" clause, is also present in the "institutional effectiveness" standards of SACS and WASC. Organizational climate views of effectiveness emphasize the supportive aspects of the institution's culture from the point of view of those who work there, and typically assess such elements as the perceived reward structure, overall employee satisfaction and morale, and loyalty to the institution. Though less cited in self-study guidelines, such considerations are often among the paramount concerns of visiting teams in the accreditation process. Finally, environmental adaptation views of effectiveness stress the organization's ability to acquire and maintain resources, and its vulnerability to environmental change; key elements of assessment here include analyses of "market niche," of particular patterns of community and constituency support, and the institution's ability to innovate. Again, such concerns pervade accreditation, embodied, for instance, in NCA's standard four, which states "the institution can continue to accomplish its purpose." Clearly, the process of institutional accreditation is concerned with all four of these things. Guidelines that explicitly reference "effectiveness," however, are for the most part focused only on the attainment of formally established goals. While studies of goal attainment, and particularly of educational results, should remain a central concern, care should be taken by accrediting bodies that broader meanings of effectiveness are not thereby obscured. Ideally, these broader meanings should permeate both the self-study process and the document that results from it.

• Any mission or a specifically educational mission? Centering the concept of "effectiveness" on institutional goal attainment, however, leaves unanswered the further question of whether any mission will do. Most colleges and universities are in the business of instruction, but they devote varying amounts of attention to this assignment in their mission and goals statements. If the attainment of stated goals is taken as the sole and literal measure of institutional effectiveness, institutions will argue that they should not be penalized for failing to provide evidence about what they consider to be low priority activities. Should assessments of effectiveness be limited solely to what an institution says is important, or should it also include an analysis of the goals themselves?

Accreditation processes provide two points of attack on this issue. First, most do include an analysis of goal statements with regard to appropriateness and clarity. Review standards that require an explicit analysis of the prominence, adequacy, and appropriateness of posed instructional goals might be especially emphasized here. Philosophically, of course, this begins to change the nature of the accreditation process from institutional validation toward standard setting, a shift which must be carefully considered. Behaviorally, however, the fact that all institutions will have some mention of instruction in their missions means that attention can be largely directed toward making these statements more visible and more explicit. Once this is accomplished, assessment follows naturally. On the other hand, institutions can be required to explicitly examine the results of instruction regardless of the content of their goal statements (in effect, the requirement of SACS standard three). Here, however, institutions may develop statements of intended instructional outcomes that are not actually present in their mission statements. While this approach has the virtue of not allowing the issue of instructional effectiveness to be ignored, the danger is that "assessment" may be seen as a separate process, unconnected to the remainder of the self-study. On balance, taking a more proactive stance regarding institutional goals and their appropriate contents seems the more appropriate for accreditation, though both approaches appear useful.

• Any outcomes or specific (common) outcomes? Similarly, because of their common engagement in instruction, should all colleges and universities be held accountable for the attainment of particular kinds of learning outcomes? With the recent development of several common state-level assessments of general education and increasing momentum for an assessment of collegiate-level "communication, critical thinking and problem-solving abilities" at the federal level, this is a far more controversial issue, and an increasingly insistent one as well. Most colleges and universities teach not only undergraduate programs, but also some form of general education. Regardless of explicit goals statements, should they be required, through accreditation, to assess the results of this activity according to a set of recognized common attributes (for instance, critical thinking or communications ability)? In parallel, some accrediting agencies (most notably WASC in the area of diversity) are requiring that institutions pay special attention to particular issues. Should this extend to the curriculum, and more explicitly to educational outcomes? In contrast to specialized or professional accreditation, where certifying the attainment of specific skills may be paramount, the best course for institutional accreditation is probably to address this topic through existing goal statements. It is unlikely, for instance, that any institution for which these are appropriate will avoid any mention of general education or intended generic outcomes of college in its mission statement. Guidelines should ensure that, when present, such statements are clear and are ultimately amenable to assessment. They might also require institutions to address specific kinds of generic outcomes, such as oral communications or knowledge of other cultures. But they should probably stop short of prescriptions of the exact levels of knowledge and skills that ought to be present.


All three of these issues are, of course, heavily centered on the instructional function of colleges and universities, particularly at the undergraduate level. This should not, however, be taken to mean that the domain of institutional effectiveness is limited to these areas. Many additional goal areas of higher education can and should be addressed under the rubric of institutional effectiveness including research, public service, and the development of particular values consistent with the institution's mission. A good point of departure in this regard, suggested by several commentators, is to examine and analyze the institution's "implied promises," as noted in such publications as the catalogue, recruitment view-books, publicity materials, and the like. Taken together, these materials often suggest an additional set of "thematic goals" in terms of which to structure evidence-gathering for demonstrating institutional effectiveness.

4. Institutional Accreditation as a Concept. The process of institutional accreditation has also embodied conceptual tensions from its outset. At its best, accreditation is a complex, multi-faceted process in pursuit of multiple purposes. It is simultaneously the academy's primary mechanism for quality assurance and one of its major potential avenues for self-improvement. It is therefore a process that must carefully balance internal and external forces in determining what is appropriate. Finally, it is a process in which evidence is both critically important and largely nonprescribed; in the tradition of self-study, it is equally unthinkable for the process to avoid investigating important areas of institutional functioning and for it to dictate what the contents or form of investigation should look like.

These characteristics of institutional accreditation are in marked contrast to apparently similar processes. Special or programmatic accreditation processes, though they also have dual purposes, are far more directed toward the task of assuring compliance with minimal standards. Government regulation of public higher education, though equally concerned with accountability and quality improvement, is of necessity far more focused on the achievement of public purposes and the determination of return on public investment. For both these processes, evidence of results is crucial in itself. For institutional accreditation, such evidence is more valuable for the larger insights which it can provide about how the institution as a whole is functioning and how it might function better.

As institutional associations increasingly engage in outcomes assessment, many of the most basic conceptual issues inherent in the accreditation process itself reappear in new guises. Among the most compelling of these are the following:

Validating "quality " or validating the quality-assurance process? The "bottom-line" ofinstitutional accreditation is summative: acommunity judgment regarding the overallintegrity and viability of a college or univer-sity as an educational enterprise. Yet itsprimary mechanism is not inspection but self-study; the role of peer review is to ensurethat the self-study processes used areundertaken fairly and appropriately, mid nottypically to render a set of wholly indepen-dent judgments. So long as the primarydomain of self-study remained resources andprocesses, the conceptual tension betweeithese two elements was muted. With out-comes as a focus, however, the issue is moresalient. At the very minimum, submitted evi-dence must be directly examined by outsid-ers, if only in comparison to stated goals. Atthe same time, the assessment processes usedmust be externally validated and, as mostregional accreditors are currently discover-ing, this is often a far more complicatedprocess from both a technical and an organi-zational standpoint than is typical in review-ing the results of more traditional self-study.


• Certifying quality or stimulating improvement? A somewhat similar issue is the question of the degree to which the accreditation process is intended to serve primarily as a means for determining quality or as a mechanism for improvement. Both are important and both are traditionally claimed as valuable results of self-study and peer review. Again, however, explicit consideration of outcomes stretches the ability of a single process to do both jobs. The assessment mechanisms that are best suited to demonstrating effectiveness are not always those that are the most helpful in the long run for program improvement. Nationally normed standardized examinations may provide greater external credibility and important comparative information, despite their common inability to provide useful guidance at the local level. Similarly, if the object is to provide a convincing demonstration of goal attainment, an institution may be better advised to invest its limited analytical resources in a couple of periodic, well-designed, methodologically sophisticated assessment efforts than in a much wider range of less elaborate but more frequent evaluative efforts. Finally, quite different methodologies may be required to demonstrate the attainment of defined standards than to show continuous improvement. Current accreditation statements about outcomes often do not provide sufficient guidance about which course to take: "evidence of achieving institutional purpose" suggests one set of choices while "creating a culture of evidence" may suggest one that is quite different.

• Source of legitimacy: academic community or society and the wider public? By its very nature, accreditation must play a mediating role between society and the academic community. On the one hand, it is owned and operated by higher education and, as preeminently peer review, it must largely reflect the standards and the values of the academy. On the other hand, it is a process whose message is directed outward and in which the public is traditionally asked to place its trust. As assessment mandates become pervasive, moreover, the latter role is increasingly shared with state and federal authorities. So long as resources and processes constituted the primary domain of accreditation, primary questions of efficiency and access were of interest to both parties. As results become equally paramount, however, judgments about both their appropriate content and their quality may legitimately differ. Current accreditation language for the most part leaves this choice up to the institution but, given current political circumstances, this may be a position that is increasingly untenable.

• Focus of concern: the institutional community as a whole or the sum of the disciplines and subunits that comprise it? Institutional accreditation is also one of the few processes in the academy that can force consideration of colleges and universities as complete entities. Special or professional accreditation tends to examine the functioning of units and programs in isolation and often, it is claimed, at the expense of overall institutional goals and priorities. State oversight through coordinating and governing boards is often similarly focused on subunit operations, particularly in its most common guise of program review and approval. Given this unique potential, should institutional accreditation be primarily concerned with examining the entire academic community and how it works? Again, the explicit introduction of assessment into the process helps make this choice explicit. Many institutions, particularly large research universities where departmental and disciplinary cultures are stronger than allegiance to the institution as a whole, regard evidence of overall "effectiveness" largely as the results of major field assessments, with little attention paid to what happens in general education or in the wider institutional environment. Current accreditation guidelines, with a few exceptions, do not address this issue explicitly: institutions are expected to produce assessment evidence at both the general and programmatic levels, following the logic of established goals. But given institutional accreditation's unique focus on the complete academic community, consideration might be given to strengthening this component of evaluation by requiring institutions to specifically attend to cross-departmental "joint products" in their assessments.

In all four areas, it appears, considering outcomes renders more explicit some conceptual issues that have always been present in the accreditation process. At the same time, considering outcomes may signal a changing role for the process itself. Figure 2 attempts to capture this evolution by contrasting the implications for appropriate evidence of two kinds of theoretical accreditation processes. The first, or "traditional" role has as its primary focus validating overall institutional quality from the standpoint of the academy. The second, or extended role adds the notion of validating the process of quality assurance operated by the institution, but from the perspective of the wider society. The underlying philosophical shift is subtle but important. From simply reflecting the values of an academic community, the process involves by implication an attempt to actively shape those values where they critically interact with society. At the same time, the critical underlying source of credibility for the process shifts from exclusive reliance on a set of implied but rigid standards of quality to include an additional set of pervasive institutional attitudes and activities visibly directed toward improvement.


Figure 2
An Evolving Role for Accreditation?

                                  Traditional Role                   Extended Role

PRIMARY FOCUS OF PROCESS:         Validating "Quality" from the      Validating "Quality Assurance"
                                  standpoint of the Academic         from the standpoint of
                                  Community                          Society

IMPLICATIONS FOR EVIDENCE:

  Content of Information:         Goals, resources and processes     Resources, processes and results
                                  (in relation to "industry          (in relation to "goal
                                  standards")                        achievement")

  Use of Information:             Judgmental                         Indicative

  Informational Perspective:      The Academy                        Constituents

  Informational Focus:            Minimum Standards                  Continuous Improvement

This shift has much in common with the recent "quality revolution" in business and industry, embodied in such organizational review processes as that associated with the Malcolm Baldrige National Quality Award, and it has profound implications for both the nature of evidence and its use in improvement. As noted in Figure 2, at least four such implications are relevant. First, the content of information moves from a comparison of institutional goals, resources, and processes with implied fixed standards to include a comparison of obtained results with established goals. Second, the primary role of evidence is to indicate progress rather than to certify attainment, and patterns of indicators are intended to be used in combination to suggest effectiveness rather than to establish piecemeal the degree to which individual standards are met. Third, provided evidence should indicate a concern with the institution's "customers" (students, employers, and community) as well as its own members. Finally, the primary case to be made is less the fact that the institution meets minimum fixed criteria than the fact that it has the capacity, the will, and the culture to continuously improve.

This evolving role for institutional accreditation is intriguing because, like industry's Baldrige Award, it is simultaneously outwardly directed and founded on peer review. Governmental assessment mandates, though in most cases intended to stimulate improvement, appear unusually vulnerable to political pressures that threaten to turn them exclusively into accountability processes emphasizing conformance to fixed standards (the evolution of Tennessee's performance funding program provides a case in point). Because of their high stakes, it is often hard for such processes to establish an ongoing "culture of evidence," despite their best intentions. By concentrating greater effort on validating an institution's quality assurance processes from a wider community standpoint, accreditation might take on a role that is both badly needed and for which it is ideally suited. No other process in the academy has currently claimed this potential "market niche." By reason of its history, culture, and primary attributes, institutional accreditation may well find its best future in this activity.

Some Operational Dilemmas

Regardless of the degree to which the conceptual evolution described in Figure 2 constitutes a real change in kind, the introduction of outcomes assessment into institutional accreditation raises significant operational dilemmas regarding how the process should be conducted. Some result directly from the new challenges posed by assessment; some are traditional difficulties that are particularly exacerbated by the presence of assessment. All, however, appear to be difficulties that are increasingly being faced in actual practice across a wide range of institutional accrediting associations and types of institutions.

Among the most persistent such issues are the following. Each is rooted in more basic conceptual ambiguities noted earlier, but each is also an operational problem worth discussing in its own right.

1. How to avoid assessing everything. If goal achievement becomes the primary mark of institutional effectiveness and, as a consequence, if assessment becomes the primary means for demonstrating quality, must an institution closely and formally examine every established goal and its implications? This problem is increasingly apparent in assessment generally, and is widely reported as an initial stumbling block by institutions attempting to meet SACS or WASC institutional effectiveness standards. Indeed, a clearly apparent negative externality of such requirements is a tendency for institutions to back off on important goals that they have no means of assessing. An equally compelling negative consequence is to spread extremely limited available resources for assessment across large numbers of goals, a strategy that may merely result in a lot of very bad evidence.

How can institutions remain true to the spirit of demonstrating effectiveness without falling into the common trap of "measuring everything that moves?" And how can the accreditation process help them to recognize and deal with this problem? One line of attack suggested by emerging experience is to appropriately postpone the discussion of assessment techniques until after a thorough analysis of goals is accomplished. Often, institutions have not thought through the process of systematizing and organizing their goals; many discover that they have large numbers of de facto institutional "goals" scattered across different documents and intended for different purposes. Self-study provides an important occasion to sort through these statements in search of the few key "thematic goals" that effectively describe the institution and its aspirations. Often, these are the areas in which the institution sees itself as special or distinctive, and which consequently should be singled out for analysis. An intriguing possibility here is to establish a more active early dialogue between institution and accrediting agency about goals. As a long-term prelude to self-study, for instance, an institution might be requested to work with agency staff and visiting team members to determine well in advance of the team visit a limited and mutually agreed-upon set of goals around which evidence of effectiveness will be organized in appropriate portions of the self-study.

2. How to arrive at reasonable expectations regarding outcomes assessment in the process itself. At present, the real place and consequences of outcomes assessment in the accreditation process seem murky to many institutional participants. After four years of operation, for instance, institutional reactions to the self-study process under SACS institutional effectiveness standards range from perceptions of "no real change" to reports that the new process in no way resembled the old. Partly this is a result of real variability, particularly in the manner in which individual visiting teams choose to recognize and emphasize assessment. In all likelihood, as experience with assessment develops, variability of this kind will diminish, but the phenomenon does point to some real needs for additional professional development on the part of visiting team members. Current measures to address this difficulty (for example, use of agency staff to cover the "assessment" role, or designation of a specially chosen team member to address institutional effectiveness) may be effective in the short run. But such practices run the risk of signalling to institutions that assessment is a separate, specialized activity, and may have precisely the opposite of their intended effect in creating a real "culture of evidence."

Similarly, both institutions and agencies must be deterred from slipping into assessment as a compliance exercise, a phenomenon best described by the commonly voiced institutional complaint: "just tell us what you want us to do and let us do it." Expectations for assessment must be clearly focused on the process itself, not on its particular manifestations or results. Again, this might be clarified at a far earlier point in the process than generally now occurs, perhaps in early dialogue among the team, agency staff, and institutional self-study committee. Again, one option here is to ensure early contact between the institution and visiting team to explicitly negotiate the goals and assessment processes which will serve as the focus for demonstrating "institutional effectiveness" in the self-study. At the same time, every effort should be directed toward strengthening the language of current guidelines regarding assessment to better emphasize indicators of good process and the use of results rather than the mere presence of technically sound assessment evidence. As noted, the Malcolm Baldrige National Quality Award criteria currently used to recognize exemplary practice in business and industry provide a suggestive model.

3. How to best communicate the results of assessment. Once assessment enters a process like accreditation, its results are automatically public. Regardless of the care with which it is collected, analyzed, and communicated, a particular piece of information can readily be removed from its appropriate context and be used for other purposes. As a consequence, institutions will be understandably wary of sharing bad news, even in a supportive, peer-oriented process. This problem, of course, is not new to accreditation and has been typical of any reported weakness or deficiency. But information about outcomes has the reputation of being far more publicly volatile than traditional information about resources or processes. Partly, of course, institutional fears in this regard are a matter of lack of experience, and there is little evidence from long-established state-based assessment mandates that institutional reputations suffer major damage from such disclosure. But partly it is a real issue, particularly in the area of student learning outcomes. Nor is the problem solved by concentrating the primary attention of accreditation on the adequacy of the assessment process alone rather than its results. As amply demonstrated by past state-level assessment experience, the only way that an institution can credibly demonstrate the presence of an adequate assessment process is to display a sample of its results.

One potential way to address this difficulty is to avoid putting assessment results all in one place. Reporting evidence of institutional effectiveness under a separate standard, despite its other advantages, can lead to reading this section of the self-study out of context. Similarly, institutions commonly report on such assessment activities as alumni surveys and student satisfaction studies in separate and self-contained parts of a self-study narrative, rather than weaving obtained results into appropriate areas throughout the document. At present, however, institutions receive little guidance about the desirability and benefits of consistently reporting assessment evidence in this fashion. In parallel, experience in state-based assessment processes convincingly suggests that all reported assessment results be accompanied by institutional proposals regarding their implications and what will be done. The admonition "beware the unaccompanied number" applies equally well in accreditation as in state-based accountability reporting. While clearly a key component of accreditation philosophy regarding assessment, and visibly present in agency self-study guidelines, the principle of using obtained results to make appropriate changes has yet to enter the institutional bloodstream in conducting self-study. It should therefore be made a more explicit focus of both team training and of evolving self-study workshops.

4. How to get institutions to take the process seriously. Again, this is an age-old problem for accreditation, particularly for institutions of large size or established reputation that feel they do not "need" the process. Partly it is a result of uncertain consequences: the role of outcomes in accreditation is sufficiently new that many institutions are unsure of the seriousness with which the agencies themselves will take the process. And indeed, accrediting agencies are faced with a major dilemma in this respect, shared by state agencies, because the sanctions at their disposal tend to be "all or nothing."

Obviously, the only way in which institutions can be induced to take assessment seriously is to convincingly demonstrate that there is something at stake for them in doing so. Several incentives are available, both positive and negative; in one way or another most are now being actively pursued by accrediting agencies. Given the growing salience of state and federal assessment initiatives, one positive incentive is to convince institutions that meeting accreditation requirements will allow them to develop adequate assessment procedures to meet this challenge gradually and in a far more benign environment. Another is to better communicate examples of how institutions have used assessment information to help garner reputation and resources. The major negative sanction practically available to accreditors is simply not to leave institutions alone on the topic if it is not adequately addressed. Interim visits and reports, follow-up questions, and similar mechanisms appear increasingly to be used in this respect, and institutions are induced to pay greater attention, if only to avoid this costly state of affairs. At bottom, however, the fact that institutional accreditation as a whole lacks intermediate sanctions is a major problem. Rather than the current "all or nothing" outcome, some form of graduated recognition of institutional merit through accreditation would help to address many of these difficulties, though it would undoubtedly also give rise to a host of others.


5. How to sustain institutional action and attention to assessment over time. Many have previously lamented the "episodic" quality of institutional accreditation. What makes it a particular problem in the context of outcomes assessment is that, to be meaningful, assessment processes require several years of evolutionary development and a significant change in institutional culture. They cannot be initiated or, more importantly, be reactivated at a moment's notice. Those evidence-gathering processes that are developed quickly are immediately apparent to outside reviewers in ways not true for enrollment or financial reporting.

One direct way to address this difficulty is to explicitly require and to look for multi-year assessment results in accreditation. In many cases, the kinds of data-gathering instruments and approaches used to collect outcomes information in connection with self-study are one-time, self-contained events, for example, major surveys or specially administered general education examinations. While not necessarily discouraging such approaches where appropriate, self-study guidelines might emphasize the provision of more regularly collected, less extensive efforts. Another point of attack, already mentioned, is to place greater emphasis on providing evidence of how outcomes information has been used to make improvements. This is not a process that can occur overnight, nor can it be improvised quickly to meet the needs of accreditation alone. Real utilization requires data to have been collected well in advance, and an administrative follow-up procedure in place to ensure that resulting improvement issues are kept salient. Required intermediate reports between self-studies and possible targeted follow-up visits or reports to address particular questions are other potential avenues for converting "episodic" assessment into a process more consistent with continuous improvement.

6. How to link accreditation's role in outcomes assessment with those of other processes. As noted throughout, outcomes assessment is becoming a pervasive process throughout postsecondary education. More than two-thirds of the states, for instance, now require institutions to engage in learning assessment processes substantially similar in kind to those evolving in institutional accreditation. How is assessment through accreditation to be kept distinctive in such an environment and what is its unique contribution? If no answer to this question is forthcoming, institutions will be far more interested in investing scarce energy and resources in complying with government-mandated procedures that have greater consequences.

A particular problem here is how to reconcile the seemingly similar assessment procedures required by different external authorities, often expressed in quite different languages. In substance, for instance, the kinds of evidence required to demonstrate student achievement by the guidelines of the State Council on Higher Education in Virginia (SCHEV) are not markedly different from SACS institutional effectiveness criteria. Yet many institutions have complained that they must engage in "duplicative" data generation and reporting. This suggests first that accrediting agency staff and visiting teams be specifically aware of the kinds of assessment activities and reporting required by applicable state governments. It also suggests that agency staffs maintain much closer liaison with state coordinating or governing board staffs than has typically been the case. Both accreditation and state authorities have an interest in keeping assessment meaningful and avoiding unnecessary duplications of process. At the same time, institutional accreditation may provide a unique contribution by emphasizing peer validation of each institution's process of quality improvement. Government authorities are particularly suited for (and are often explicitly charged with) directly establishing the adequacy of obtained results and their match with public purposes. By their very nature, they are less able to examine and stimulate vital ongoing quality-improvement processes. Through its existing strengths of self-study and peer review, institutional accreditation might beneficially concentrate its efforts on this important, but hard to sustain, external role in assessment.

Some Recommended Principles for Future Policy

The many conceptual gaps and unanswered questions occurring throughout this document bear witness to the fact that much learning about the appropriate linkages between assessment and institutional accreditation remains to be accomplished. Up to now, for a variety of justifiable reasons to be sure, the institutional accreditation community has been largely reactive to assessment. States and other government authorities have seized the "high ground" of external accountability with respect to the issue, and some have argued that they leave little for accreditation to do. Time is indeed short for this condition to be reversed, and certainly all regional agencies are actively responding to the challenge. But if it is to be meaningful or effective, accreditation's response must also be distinctive and useful. This paper provides no remedies. But it does suggest that the following summary principles be used for guidance in developing a consistent policy on assessment in the institutional accreditation process.

Assessment should not constitute a "component" of institutional accreditation, but should rather permeate the entire accreditation process. While guidelines and team training should emphasize the development of techniques for gathering and presenting evidence of institutional effectiveness as a part of self-study, the argument for effectiveness (and its accompanying evidence) should be visible throughout the process. Just as a successful college graduate should be a "lifelong learner," the presence and use of assessment should demonstrate that the institution embodies a "culture of evidence" that enables it to continuously examine and manage itself responsibly, both now and in the future.

The primary focus of assessment should be on the effectiveness of the institution as a whole. Institutional accreditation represents one of the few available opportunities to examine colleges and universities as functioning academic communities. While individual units, departments, and disciplines are all important in demonstrating effectiveness, the focus of assessment in self-study should chiefly be placed upon the attainment of institutional goals. Serious questions should be raised when an institution submits as evidence of its overall effectiveness a case founded only on the individual products of its constituent units or disciplines. Similar questions should be raised about cases that rest only on the fact that the institution currently has in place assessment mechanisms whose sole purpose is to certify individual student achievement.

A required focus of assessment should be placed upon student learning. Institutional effectiveness is a far more inclusive concept for colleges and universities than simply the attainment of claimed learning outcomes. But student learning is the single common goal of the academic enterprise. As a result, the use of assessment in institutional accreditation should visibly center on the presence, appropriateness, and ultimate attainment of specific student learning objectives. Consistent with the first principle, moreover, a concern with student learning should be a visible part of the institution's goals, and should permeate the self-study itself.

The primary use of assessment evidence in self-study should be to indicate progress, not to supply definitive judgments about quality. Assessment, like self-study itself, is often more valuable for the questions that the process raises than for the concrete results that it may yield. At the same time, no known assessment technique is sufficiently precise to provide definitive answers about institutional goal attainment. For both reasons, the process cannot appropriately be used to determine exactly where a given college or university stands against fixed standards of "quality." Its chief utility instead is to provide credible evidence that the institution is broadly able to attain the goals that it publicly proclaims. Following this logic, guidelines should emphasize the use of multiple indicators and measures in assessment, and should avoid using assessment results judgmentally as the sole basis for decision making.

The primary focus of assessment should be the degree to which the process is directed toward and results in continuous improvement. Assessment is most valuable in the institutional accreditation process when it concentrates on the manner in which obtained results are used. The objective should not be a one-time demonstration of goal attainment, intended solely to certify a given level of quality. Other organizations, particularly special and professional accreditation and public regulatory bodies, are both better suited to and more visible in this role. Assessment should rather be used to demonstrate that the institution is willing and equipped to examine itself on an ongoing basis, and is committed to using the results of such examination to actively improve quality on an ongoing basis.

Use of these principles to develop a coherent approach to assessment in the accreditation process might help better define an evolving role for institutional accreditation in a rapidly changing national context. By emphasizing accreditation's role in providing a responsible and visible peer review of the adequacy of each institution's mechanisms for quality assurance, the process may acquire a vital continuing role in the academic community. In increasingly uncertain times, it appears a role well worth exploring further.

*Peter T. Ewell is Senior Associate of the National Center for Higher Education Management Systems (NCHEMS).


Incorporating Assessment into the Practice of Accreditation:
A Preliminary Report

Ralph A. Wolff*

Design of the Survey

One of the responsibilities of the COPA Task Force on Institutional Effectiveness is to summarize "the current and prospective use of outcomes assessment and analysis by institutional accrediting bodies in 1) accreditation standards, 2) accreditation guidelines, and 3) such accreditation practices as (a) self study, (b) practices of visiting teams, and (c) deliberations and actions of accrediting bodies." COPA staff have surveyed all institutional accrediting bodies and have compiled their accrediting standards relating to assessment and institutional effectiveness. To further the accomplishment of this charge, the Task Force called for a survey of accrediting agencies to explore the effectiveness in practice of assessment-related standards by institutional accrediting bodies.

A survey was developed which focused on several broad areas: 1) the personal knowledge and experience of the staff member within the accrediting body primarily responsible for assessment-related issues; 2) the knowledge and experience of the accrediting association's staff and accrediting commission with assessment; 3) information on how assessment is defined by the accrediting body; 4) how assessment issues are incorporated into evaluation team training, institutional self studies and commission decision making; 5) personal views of the respondents on barriers and inducements to engage in assessment; and 6) areas of desired additional support.

The questionnaire was sent to the executive director of each of the fifteen institutional accrediting bodies recognized by COPA. The executive director was asked to have the staff member responsible for assessment within the association complete the questionnaire. Questions were first asked about the personal background and experience of the respondent, to understand whether assessment initiatives within accrediting associations were being guided by staff with considerable experience in the area. Further, information was gathered on the source of the staff member's experience: from readings, attendance at conferences, work with institutions on assessment, or participation in the development of the association's standards on assessment.

Respondents were then asked to evaluate the knowledge and experience of other staff members with assessment. These questions were asked to understand whether there was perceived to be a significant difference between the staff member assigned responsibility for assessment and other association staff, as well as to determine if special training needs existed for association staff.

Questions about the standards and guidelines of each of the accrediting associations were designed to go beyond the collection of assessment standards and criteria already undertaken by COPA staff. Questions were directed to learn how each association defined assessment and to determine if there was any significant variation in this definition. Questions were also asked to determine how recently standards on assessment were adopted or revised, and if the accrediting body followed up the adoption of such standards with special resource materials and workshops.

Sets of questions were asked about the application of the association's standards on assessment. They ranged from an evaluation of the overall importance of the assessment standards in relation to other standards applied by the accrediting body, to questions on how the assessment standards were addressed in team training, institutional self studies, and in the actual deliberative processes and decisions of the accrediting commissions.

In a section on personal views, respondents were asked to express opinions about the chief barriers to the greater implementation of assessment within the institutions and the respondent's own accrediting association. Opinions were also sought about the reasons both institutions and their accrediting body have given greater emphasis to assessment. Finally, respondents were asked what additional support they would like to receive in the area of assessment.

Overall, the survey was designed to elicit, at least from the perspective of the staff member responsible for assessment, views about the status of assessment within each institutional accrediting body.

Survey Results

The survey was distributed to all fifteen institutional accrediting bodies recognized by COPA. Responses were received from thirteen. A copy of the questionnaire with tabulated results is attached to this report. Where a quantitative response was requested, a Likert scale was used, with one (1) reflecting a low response and five (5) reflecting a high response.
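
The mean responses (MR) reported in the tabulations are simple weighted averages of these one-to-five ratings. As a purely illustrative sketch of that arithmetic (the function name and the example distribution below are assumptions chosen for demonstration, not data drawn from the survey), the computation looks roughly as follows:

    # Illustrative only: computing a mean response (MR) from a tabulated
    # Likert distribution on a 1 (low) to 5 (high) scale.
    # The example distribution is hypothetical, not survey data.

    def mean_response(distribution):
        """Return the mean rating, where 'distribution' maps each scale
        point (1-5) to the number of respondents who selected it."""
        total_points = sum(rating * count for rating, count in distribution.items())
        total_respondents = sum(distribution.values())
        return round(total_points / total_respondents, 2)

    example = {1: 0, 2: 1, 3: 5, 4: 5, 5: 2}   # thirteen hypothetical respondents
    print(mean_response(example))              # prints 3.62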

1. Respondent's Personal Knowledge and Experience With Assessment

At the outset, it should be indicated that there is no reference point to determine whether the responses to any question were good, in an absolute sense, or not. For example, at what level should we consider the perceived knowledge of staff about assessment as good? A mean score of 3 (out of 5), or 4? Thus, while the responses were revealing, there was no external standard against which to judge them.

All thirteen respondents rated their personal knowledge of assessment to be moderate to high, with a mean response of 3.6. Only one respondent rated his or her knowledge to be high, seven as moderately high, and five as moderate. It is not clear if the five who rated their background as moderate were being modest in light of the increasing complexity and depth of the field of assessment or if they felt that this was not a significant area of expertise.

Questions about the source of the respondents' background on assessment revealed considerable experience by the staff members responsible for this issue. All thirteen respondents indicated that their background with assessment consisted of moderate to extensive readings in the field, with a mean score of 3.8, and with eight of the thirteen indicating that they had read widely. Nine of the thirteen respondents indicated they had attended four or more conferences on assessment, which is significant since so much of the assessment movement has grown through the use of conferences. All but one of the respondents participated in the formulation of their association's standards on assessment, and presumably the one respondent who did not was employed after that association's standards were adopted. Ten of the respondents have had moderate to extensive experience with assessment activities at institutions, and several staff members indicated they have worked extensively with institutions in implementing assessment activities (mean score 3.3). All thirteen respondents have participated significantly in the assessment activities conducted by their own accrediting association (mean score 4.3).

2. Knowledge and Experience of Association Staff and Commission Members

Only twelve responded to this set of questions, because the thirteenth association did not have additional staff. Ratings by the twelve respondents of their fellow staff members' general knowledge of assessment were quite high, though not as high as their own. The mean score for fellow staff was 3.33, above a rating of moderate (3), whereas the mean score of the respondents' self-assessment of their knowledge was 3.6. There was a significant difference in the perceived depth of background of the other staff members, however. As to readings, respondents rated their fellow staff at a mean score of 3.0 (moderate), whereas their own readings were rated at 3.8. Similarly, other staff had attended fewer conferences and had participated to a much lesser extent in the formulation of their association's standards on assessment, a mean score of 3.0 compared to 4.6. Interestingly, other staff members' participation in assessment activities at institutions was rated higher than the respondents' own, a mean score of 3.5 compared to 3.3 for the respondents. Positively, there appears to be high participation of other staff in the assessment activities conducted by the association (mean score of 3.92).

The overall knowledge and experience of the respondents' accrediting commissions was rated slightly lower than that of their fellow staff based on the mean score (3.30 vs. 3.33). The modal distribution reflects a different picture, however. Six of the respondents rated their commissions as having moderate experience or less, and none rated their commission as high. This distribution reflects a judgment that the level of background and experience of commissions with assessment issues is not as well established as that of commission staffs.

3. Standards and Guidelines on Assessment

All but one of the responding accrediting agencies have modified their standards dealing with assessment in the last five years. One indicated that their standards have just been revised and will go into effect in 1992, and two agencies indicated that there have been two revisions in the past five years. Eleven of the twelve indicated that the revisions made the standards more rigorous, while the twelfth characterized the change as more "explicit."

Seventy-seven percent (77%) of the respondents (ten of thirteen) indicated that special resource materials have been developed to assist institutions in addressing assessment and accreditation expectations. One more indicated that such materials are in development. Similarly, 85% (eleven of thirteen) of the associations have sponsored special workshops on assessment for their institutions in the last five years.

Defining assessment proved to be more elusive. Of seven different areas encompassed by "assessment," all thirteen respondents agreed that their association's definition of assessment included "accomplishment of institutional goals and purposes." Twelve respondents agreed that assessment also included two other areas: evidence of program success and linkage to institutional planning efforts. Eleven respondents indicated that assessment included evidence of student learning, which seems consistent with the thrust of the national assessment movement, and ten indicated that assessment was to be linked to program review processes. The least agreement was found on assessment including "evidence of the effectiveness of faculty and student research" (four respondents), "evidence of building a multicultural community" (five respondents), and "evidence of the effectiveness of the continuing education program" (six respondents).

Notably, the importance given to the association's standards on assessment in relation to all standards was quite high, with a mean score of 3.85 and twelve respondents rating the importance to be moderate (3) or above.

Team Training Workshops

Seven (64%) of the eleven associations responding indicated that they always include assessment issues in their team training workshops, while one reported never doing so (mean score 4.18). Respondents were asked the proportion of time devoted to assessment in such workshops. Only six responded to this question, ranging from a high of 50% to a low of 10%, averaging 29%. In these workshops, all those responding indicate that assessment is discussed in the context of all accrediting standards.


Self Studies

Questions about how institutions treat assessment in their self studies yielded interesting results. The respondents were asked to distinguish among self studies that described assessment activities, those that analyzed data gathered through assessment, and those that identified improvements made from assessment activity. The mean scores of the first two questions did not differ significantly. Eleven of twelve respondents indicated that most institutions described assessment activities (mean score 3.58). A similar distribution of responses occurred about self studies analyzing data (mean score 3.5). Not surprisingly, the mean score for institutions identifying improvements in their self studies was significantly lower, at 2.91, and five respondents indicated that institutions rarely identified improvements from assessment activities in their self studies.

Respondents were asked to venture a personal opinion as to the percentage of institutions accredited by their association in which assessment was a serious enterprise. The responses reflected a surprising and widely disparate range. One respondent indicated all, another 90%, but explained that the institutions accredited were mostly public institutions. The lowest estimate was 25%. This seems at odds with perceptions of those familiar with institutional responses to state-mandated assessment that institutional responsiveness is quite limited. Perhaps the question was perceived differently by those responding.

Team Visits

Several questions were asked about how assessment is treated in the composition of the evaluation team and in the evaluation team report. Accrediting teams do not always have at least one team member with background and experience in assessment for comprehensive visits. Only one association always has a team member with an assessment background, one association never does, and another rarely has such a team member. Ten associations sometimes do (mean rating 3.23). This suggests that the application of the accrediting association's standards on assessment in the course of a site visit might be limited by the lack of at least one person on the team with considerable experience in the area. When asked if there is a discussion of assessment in team reports, the same agencies that never or rarely have a team member with an assessment background report that assessment is rarely or never discussed in the reports. The other agencies reported that team reports include discussions of assessment at a moderate level, slightly better than the rate at which team members with assessment backgrounds are included on teams (mean score 3.46). It might be assumed that even without an assessment team member, the team is aware of commission standards and includes some information about assessment in the reports. Still, it would appear that inclusion of assessment in team reports is not yet consistently practiced.

For visits other than comprehensive, assessment is included less frequently, which would be expected. Special visits typically focus on specific concerns identified by the accrediting commission or a previous team, and assessment would not be a focus of the visit unless it was one of the specific concerns. Four (4) associations reported that assessment is rarely or infrequently included in special team reports, and the mean response was 2.92.

On the other hand, assessment is the basis for major team recommendations, with nearly all associations reporting that major recommendations are sometimes included in team reports. Major recommendations are given serious consideration by institutions since institutional responses to such recommendations are typically reviewed by the subsequent visiting team. (Mean response 3.23.)

Commission Actions

Respondents were asked if their accrediting commission discusses assessment when reviewing team reports. Only one association indicated that assessment is always considered when action is taken following a comprehensive evaluation. Overwhelmingly, respondents indicated that assessment is discussed when it is raised by the evaluation team, but much less frequently is assessment raised by the accrediting commission on its own. A similar pattern emerges regarding whether accrediting commissions discuss assessment when taking action following other types of visits. It is important to note that special reports or visits involving assessment issues are now being required by most commissions, most frequently when raised by the evaluation team, but also occasionally by the commission on its own.

Interestingly, when asked what weight is given to assessment in making accrediting decisions, respondents reported a wide range of commission practices. Two respondents indicated the weight given to assessment is low, whereas ten respondents indicated that the weight given is moderate or better (mean response 3.25).

4. Personal Views of Respondents

Respondents were asked to identify the three chief barriers to the implementation of assessment within institutions. The most commonly cited barrier (ten respondents) is the lack both of knowledge within institutions of how to conduct assessment and of an understanding within the institution that assessment can lead to improvement. Other frequently identified barriers included: lack of commitment from the chief administrative leaders of the institution (four respondents), lack of financial resources (four respondents), faculty resistance (three respondents), and lack of adequate instruments or techniques of evaluation (three respondents).

Respondents were also asked to list three barriers to greater implementation of assessment within their own accrediting association. A wide range of responses was provided. The most frequently cited item (six respondents) was a lack of understanding of assessment, a response similar to that for the previous question dealing with institutional implementation of assessment. Other responses included: lack of commitment from association leadership, lack of resources committed to provide workshops, fear of negative results or what assessment might reveal, staff unwillingness to change, a lack of definition of what is acceptable assessment, and a concern about requiring institutions to invest heavily in assessment when financial resources are already so strained. One respondent indicated there are no barriers within that person's association, but it should be noted that this respondent also indicated that the association had not yet adopted formal standards on assessment nor developed resource materials on the topic.

Respondents were next asked what has led institutions served by their association to undertake assessment. Interestingly, the leading factors cited are largely pressures external to the institution, whether from the accrediting association itself (ten respondents), states (five respondents), or a broader notion of "public accountability" including pressure from the institution's constituencies and sponsoring denomination(s) (seven respondents). Seven respondents also indicated that institutions do engage in assessment to learn about themselves or to improve, but these comments were cited along with citations of external pressures.

Finally, respondents were asked their opinions about what led their own agency to incorporate assessment as part of the accrediting process. Three respondents emphasized that assessment is not new to accreditation, notwithstanding all of the larger public concerns. The most frequently cited factor was the new assessment requirement of the Department of Education (five respondents), followed by more general statements of public accountability (four respondents), COPA recognition requirements (three respondents), and a philosophical desire to focus less on inputs and resources (two respondents).


Additional Support Desired

Respondents were asked what additional types of support they would like to receive in the area of assessment. Eleven respondents indicated they would like to have workshops address specific assessment techniques; eight asked for a workshop on how assessment is used by other accrediting agencies; and seven requested suggested readings. In the open comment section, several respondents asked specifically for dialogues on important issues: how to train evaluators in the area of assessment, how to determine the acceptability of evaluation team findings in the area of assessment, and how to build better understanding and consent on the part of accrediting commissions so that assessment can be more effectively incorporated into the decision making process.

Analysis and Conclusions

What is clear from the survey responses is that the respondents generally take great pride in the ways in which their agencies incorporate assessment into the accreditation process. From the responses provided, several major findings appear warranted:

1. The accreditation staff responsible for assessment have considerable experience with assessment, by way of readings, conference attendance, some participation in assessment activities at institutions, and extensive participation in the formulation of their agencies' own standards on assessment. While it is not clear whether these staff would be the ones attending a COPA workshop on assessment, if they are, a workshop could run the risk of "preaching to the converted." Perhaps special attention will need to be paid to the constituency desired to attend COPA workshops, with the material then tailored to the level of experience of those attending.

2. The background and experience of other association staff is reasonably good, although other staff have less familiarity with assessment through readings and conference attendance. One issue that emerges is whether the COPA assessment project should target training materials at staff other than the person primarily responsible for assessment within the association. Indeed, it might be advisable to consider "training" the primary assessment staff person to lead workshops within their own associations for their fellow staff members and commission members.

3. Respondents rated the overall knowledge and experience with assessment of their accrediting commission to be moderate. Yet a lack of understanding of assessment by the commission was the most frequently cited barrier to further implementation of assessment within the accrediting body. Special attention should be paid to commissioner experience and understanding of assessment. Respondents also reported that discussions of assessment at the time accrediting decisions are made by the accrediting commission are most frequently raised only when an issue relating to assessment arises in the evaluation team report. This would seem to suggest that a fruitful area of further attention is how to improve the means by which assessment is incorporated in the actual decision making process.

4. Twelve of the thirteen accrediting associations responding to the survey indicated that they had modified their standards relating to assessment in the past five years, and these modifications have made the association's expectations more rigorous. This is an important statement about the responsiveness of the accrediting community to the issue and should be given wider visibility. Further, there has been a widespread effort on the part of accrediting associations to develop resource materials and special workshops for institutions on assessment issues. This is a new role for accrediting associations, both in terms of adopting standards on a leading issue somewhat ahead of institutional practice, and then supporting institutions to move more fully in this area. As a result, accrediting associations are moving away from an exclusively evaluative role and are consciously working to lead institutions toward better assessment practices. This shift in role is worthy of further discussion and comment.

5. There is some variation in the ways in which the accrediting associations surveyed define assessment. This is not surprising given how recently most associations have adopted standards on assessment and how many definitions there are for assessment nationally. Nonetheless, there is a need for institutions and team members to understand how their accrediting association defines assessment and expects such a definition to be applied. Further discussion would seem warranted within accrediting associations and between associations on how assessment is defined, and what must or may be included.

6. Responses to questions on the quality of self study reports dealing with assessment indicate that institutions are better able to describe assessment activity than to identify improvements which have come about as a result of assessment. Further attention should be paid to a) how different accrediting associations expect institutions to report and analyze assessment activity, and b) giving wider dissemination to examples of improvements brought about by assessment to assist in institutional understanding of how assessment can benefit the institution.

7. The requests for additional support from the respondents seem to indicate that the greatest need is a better understanding of specific assessment techniques. Based on open-ended comments, this would seem to mean there is an interest in knowing how specific techniques or methodologies can be used to improve quality at the particular range of institutions served by the accrediting associations surveyed.

8. Finally, there seems to be a need to address in workshops how teams should evaluate institutional assessment efforts as well as how commissions should review team findings about assessment.

A postscript: One respondent, in the open comment section of the survey, observed that "outcomes assessment is an over-reaction to process oriented accreditation. The pendulum has swung too far -- assessment is not a solution to measuring educational quality. . . . Hopefully, assessment will just fade away and become another tool in the accreditor's toolkit." All of the respondents indicated that when assessment is introduced in team training workshops, there are discussions about how to view assessment in the context of the overall evaluation process. No matter what we do, collectively and individually in our accrediting associations, it will always be important to remember that assessment is a means and not an end in and of itself.

*Ralph A. Wolff is the Associate Executive Director of the Accrediting Commission for Senior Colleges and Universities of the Western Association of Schools and Colleges.


TABULATED RESPONSES

Survey of Accreditation and Assessment
COPA Task Force on Institutional Effectiveness, July 1991

A. Personal Knowledge and Experience with Assessment

1. Please rate your own knowledge of assessment.
   Scale: 1 = Low, 3 = Moderate, 5 = High   (MR = Mean Response)
   Responses: 5, 7, 1   MR = 3.6

2. My background and experience with assessment consist of the following:

   a. Readings
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 5, 5, 3   MR = 3.8

   b. Attendance at conferences [circle number attended: 0, 1, 2, 3, 4, 5, 5+]
      Responses: 1, 1, 1, 5, 1, 3

   c. Participation in the formulation of my association's standards on assessment
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 2, 10   MR = 4.6

   d. Participation in assessment activities at institutions
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 3, 5, 3, 2   MR = 3.3

   e. Participation in assessment activities conducted by my association
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 3, 3, 7   MR = 4.3

   f. Other (please explain)


B. Knowledge and Experience of Association Staff and Commission Members

1. I would rate the general knowledge of other association staff about assessment as
   Scale: 1 = Low, 3 = Moderate, 5 = High
   Responses: 1, 7, 3, 1   MR = 3.33

2. The background and experience of other staff members consist of the following:

   a. Readings
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 4, 4, 4   MR = 3.00

   b. Attendance at conferences
      Scale: 1 = None, 3 = Some, 5 = Many
      Responses: 1, 5, 3, 3   MR = 2.66

   c. Participation in the formulation of my association's standards on assessment
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 1, 6, 3, 1   MR = 3.00

   d. Participation in assessment activities at institutions
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 1, 6, 3, 2   MR = 3.5

   e. Participation in assessment activities conducted by my association
      Scale: 1 = None, 3 = Moderate, 5 = Extensive
      Responses: 1, 3, 4, 4   MR = 3.92

f. Other (please explain)

3. I would rate the overall knowledge and experience of my accrediting commission with assessment to be
   Scale: 1 = Low, 3 = Moderate, 5 = High
   Responses: 3, 3, 7   MR = 3.30


C. Standards and Guidelines on Assessment

1. My accrediting association has modified its accrediting standards or criteria in the last five years to address assessment.
   Responses: 12 Yes, 1 No. If yes, specify date: 1987 (3), 1988, 1989 (3), 1990, 1992, 1984 & 1991, 1989 & 1991

2. If the answer to #1 above was yes, what was the net effect of these changes on those accrediting standards dealing with assessment?
   __ Became less rigorous
   __ Stayed about the same
   11 Became more rigorous
    1 Became more explicit
   Comments:

3. In the last five years, my accrediting association has developed special resource materials to help institutions address assessment and accreditation expectations.
   Responses: 10 Yes, 3 No. If yes, specify date(s): 1987 & 1989 (2), 1989, 1990 (4), 1990-91

4. My accrediting association has developed special workshops on assessment for institutions in the last five years.
   Responses: 11 Yes, 2 No. If yes, specify date(s): annually since 1988 (3), 19__ (3), 1991, April-May 1991 & annual meeting

5. Assessment in my accrediting association is specifically defined to include: [Check all that apply.]
   13 Accomplishment of institutional goals and purposes
   11 Evidence of student learning
    8 Evidence of teaching effectiveness
   12 Evidence of program success
    5 Evidence of building a multicultural community
    4 Evidence of the effectiveness of faculty and student research
    6 Evidence of the effectiveness of continuing education
   10 Linkage to program review processes
   12 Linkage to institutional planning efforts

6. The overall importance of my agency's standards on assessment in relation to all accrediting standards is
   Scale: 1 = Low, 3 = Moderate, 5 = High
   Responses: 1, 4, 4, 4   MR = 3.85


D. Team Training

1. Assessment is included as a part of team training workshops.

Responses 1 2 1 7
1 2 3 4 5

Not at all Sometimes Always

MR = 4.18

2. The proportion of time devoted to assessment in team training workshops is roughly ___%.  Responses: 10%, 15%, 20% (2), 30%, 50%, and "increased since 1990"

3. The focus of training about assessment is directed toward: [Check all that apply.]

 __ Standards on assessment
 __ Guidelines to interpret assessment standards
  1 Different types of assessment methods institutions might undertake
 11 How assessment should be viewed in the context of the total evaluation process

E. Applications of Assessment in the Accrediting Process

SELF STUDIES

1. In operational terms, self studies currently being submitted describe assessment activities.

Responses 1 3 4 4
1 2 3 4 5

Not at all Extensively

MR = 3.58

2. Self studies currently being submitted display analysis of data gathered by assessment activities.

Responses 1 6 3 2
1 2 3 4 5

Not at all Extensively

MR = 3.5

3. Self studies currently being submitted identify improvements made in the institution as a result of assessment.

Responses 5 4 2 1

1 2 3 4 5
Not at all        Extensively

MR = 2.91

4. In my personal opinion, assessment is a serious enterprise in ___% of the institutions my agency accredits.  Responses: 25%, 30% (2), 33%, 40%, 50%, 70% and growing, 80%, 90% (public institution), all




TEAM VISITS

1. At least one team member with background and experience in assessment is included on comprehensive evaluation teams.

Responses 1 1 6 4 1
1 2 3 4 5

Rarely Sometimes Always

MR = 3.23

2. A discussion of institutional assessment efforts is typically a part of comprehensive visiting team reports.

Responses 1 1 2 9
1 2 3 4 5
Rarely   Sometimes   Always
MR = 3.46

3. Assessment is included in team reports of other types of visits.

Responses 2 2 5 3 1
1 2 3 4 5

Rarely Sometimes Always

MR = 2.92

4. Major recommendations about assessment are included in team reports.

Responses 1 8 4
1 2 3 4 5

Rarely Sometimes Always

MR = 3.23

COMMISSION ACTIONS

1. My accrediting commission discusses assessment when it takes action following a comprehensive evaluation.

 __ Never
  7 Sometimes on its own
 10 When it is raised by the evaluation team
 __ Always

2. My accrediting commission discusses assessment on other types of reports.

 __ Never
  8 Sometimes on its own
  9 When it is raised by the evaluation team
 __ Always

3. My accrediting commission requires special reports or visits addressing assessment.

 __ Never
  4 Sometimes on its own
  _ When it is raised by the evaluation team
  4 Regularly




4. The weight given to assessment in the overall accrediting decision is

Responses 1 1 5 4 1

1 2 3 4 5
Low   Moderate   High

MR = 3.25

F. Personal Views

1. In my opinion, the three chief barriers to greater implementation of assessment within institutions are

Ignorance, lack of presidential and CAO commitment, faculty resistance

Lack of time and staff, lack of knowledge, lack of "from the top" support

An institution not understanding what it wants to do, and knowing how to do it

Legitimate concern about how the outcomes data collected might be comparatively analyzed and (mis-)used; distaste for standardization in a profession that prizes the autonomous teacher and institution; continued confidence in the belief that good inputs assure good outcomes.

Lack of a clearly focused appropriate mission statement, lack of explicitly stated measurable goals and objectives, and lack of local ownership of assessment as a critical element in demonstrating institutional effectiveness

Lack of in-house academic expertise in many theological schools, philosophical differences about appropriate outcomes of theological education, and typical small size of seminaries

Faculty resistance, finances, apathy

Lack of support from the President or Chancellor, faculty resistance to the concept, and lack of understanding/knowledge of the benefits to the institution.

Availability of trained evaluators, mix of subjectivity/objectivity inherent in evaluation process, methodologies that will capture adequate outcomes.

Lack of knowledge regarding assessment technicians, lack of assessment instruments in specialty fields, lack of resources.

Institutional culture, which views assessment with alarm; some institutional leaders are not yet sufficiently committed to provide necessary leadership; and lack of sophistication about evaluation.



Time, knowledge of process and understanding of benefits, and ability to apply results in planning/implementing.

The absence of financial resources to implement assessment programs.

2. The three chief barriers to greater implementation of assessment within my accrediting association are

Ignorance, fear of having to require institutions to spend more money, concern over having institutions do something they don't know how to do - being too prescriptive.

Meaninglessness of assessment in relation to accreditation, lack of understanding, it was forced on us by the government.

There are no barriers.

Staff time that can be devoted to institutional assessment is, perforce, limited; institutional assessment programs are idiosyncratic and, therefore, relatively hard to understand; Commissioners have only moderate amounts of assessment experience: there is no real even on our Board.

Institutions that traditionally focus on "describing" vs analyzing and implementing change based on data analysis. Lack of evaluation programs and limitation of feedback mechanisms. The Association's/Commission's lack of experience and expertise with assessment activities.

For the most part, I would be inclined to say they reflect the association counterpart of the three issues noted for schools above

We are presently implementing an assessment requirement that calls for all institutions to document student academic achievement.

The nebulous nature of assessment and making qualitative judgments about the planning and evaluation process, lack of definition of what is acceptable, given the variety of institutions which constitute the membership, and unfamiliar territory not easily evaluated.

Lack of adequate team training (compounded by turnover of team personnel), lack of more specific criteria, and lack of needed assessment instruments.

The apparent belief that 'someone' will develop the definitive system, which can then be adopted with a minimum of pain; competing economic imperatives which prevent some institutions from staffing appropriately; the flurry of demands for accountability from too many sources with too many agendas in addition to institutional improvement.

Desire of members not to face realities of shortcomings, fear of consequences of negative actions, lack of deeper understanding/appreciation of assessment benefits.

Staff unwillingness to change methods and reduce to writing past practices that would give evidence of assessment, the unwillingness to use information provided for program enhancement, and the resources necessary to conduct appropriate workshops.

3. The three chief factors leading institutions served by my association to undertake assessment are

The assessment initiative of our Commission, the national assessment movement, and the desire to learn more about themselves

Required by law, required by accrediting body, possible utility in perfecting institutional marketing effort

Commitment to institutional quality and improvement. State mandates (some institutions have not developed that commitment as of yet.)

Pressure from their respective states; pressure from consumers; felt need to address their own failures

Increased emphasis by regional accrediting agency to provide evidence of institutional effectiveness and de-emphasizing self studies which are merely descriptive in nature. Increased emphasis by regional accrediting agency to focus on "useful" self-studies, tied to strategic planning. Public demands for accountability.

Pressure from An or one of the regional associations, internal desire of the institution, and pressure from sponsoring denomination

State/government pressures, accreditation requirements, increased accountability

Requirement for gaining and maintaining membership, state requirements, funding.

Types of education lend themselves to assessment/outcomes, need to capture "essence of schools," ability to use evaluation results in working with schools in improving their effectiveness



Desire for self improvement, it is required by self-study process, and demands for accountability on the part of the constituency.

General and growing recognition that the burden of demonstrating effectiveness rests with the institution; defensive reasons exist, in addition to planning imperatives; the recognition that effective planning rests on systematic evaluation; steady pressure from a variety of sources: accreditation (we were first), state agencies, and categorical funding.

A sincere desire to improve, understanding benefits of assessment in planning changes, fear of punitive action such as being dropped.

Compliance with standards of membership and willingness to use interpretive guidelines and take steps necessary to be in compliance with standards of membership.

4. The three chief factors leading to assessment being used as part of the accrediting process are

Leadership of SACS, national and federal concerns, and staff leadership

Federal Recognition Criteria, COPA

Part of the definition of an accredited institution is its demonstrated capacity to accomplish its purposes and that it continues to do so. The institution's capacity to accomplish purposes is an indicator of institutional quality. The ability of the institution to plan and evaluate are valid indicators of institutional quality. Assessment has always been a part of accreditation.

Outcomes assessment provides evidence of congruence between stated goals and eventual outcomes; like all of American higher education, accrediting bodies are moving toward standardized expectation; there is a growing consensus on relatively reliable tests and measures

Clarification and strengthening of references to assessment in accreditation standards. Expectation by Commission for institutions to provide evidence of institutional effectiveness as a larger matter than simply learning outcomes. Requests through the Accreditation Handbook, 1988 edition, for "Analysis and Appraisal" and "Evidence of Effectiveness."

Need to guide schools toward better internal decision-making processes, and general concerns in accrediting community.

We have long had assessment; it is not new! There is new emphasis because of: federal requirements, COPA, and greater need for accountability.



Less emphasis on resources and more focus on the use of resources to further the goals of the institution, more emphasis on a clearly defined purpose and articulation of goals, enable review of the non-traditional programs.

Types of education lend themselves to assessment/outcomes, need to capture "essence of schools," ability to use evaluation results in working with schools in improving their effectiveness.

Public demands for accountability, requirements of DE/COPA, and increased use of non-traditional delivery systems.

Institutional improvement rests on a knowledge base, effective planning requires assessment, accreditation decisions require more than fuzzy opinions about institutions.

Desire to create standards which result in improvement, new information which shows how to apply assessment techniques in school reviews and evaluations, new criteria and demands made by public agencies and USDE regulations for accreditors.

Institutions have always been involved in assessment at the technical and career level and it is now a matter of reducing past activities to writing, and the recognition that the assessment of institutional effectiveness can lead to future program enhancement.

5. What types of additional support would you like to receive in the area of assessment?

 __ Suggested readings
 __ Workshop on how assessment is used by different accrediting agencies
 __ Workshop on specific assessment techniques
 __ Other (please list below)

What has worked well - models of g.e. review, for example. A survey of good practices.

Outcomes assessment is an over-reaction to process-oriented accreditation. The pendulum has swung too far. Assessment is not a solution to measuring educational quality. In most cases, it's a phony exercise in dubious statistics. Hopefully, assessment will fade away, and become just another tool in the accreditor's

Opportunities for dialogue among those of us within the accreditation community.



Guide or framework for training and evaluating (for example: what evaluative criteria should be brought to bear? how? when?) outcomes assessment evaluators.

Please note, too, our Commission's extensive publications on assessment: Assessment Workbook, Spring 1991; Fall 1990 NCA Quarterly, "Sharpening the Focus on Assessment"; Fall 1991 NCA Quarterly will also be on assessment.

Workshop on how to review the acceptability of findings which result from evaluation.

Help in development of instrumentation appropriate to specialized institutions/programs.

The community of regional accrediting agencies should establish a common baseline of minimum expectations for institutional assessment. Literature available is diffuse, sometimes contradictory, and frequently too technology dependent. A well-crafted testimonial policy statement should be succinct, describing systems that are achievable with modest resources, comprehensible by non-researchers, and focused on matters of greatest importance to the teaching-learning enterprise.

Thank you for completing this survey. Please make any additional comments on the reverse side or attach a separate sheet.



Methods for Outcomes Assessment Related to Institutional Accreditation

Thomas P. Hogan*

Preface

This paper was prepared for the Council on Postsecondary Accreditation (COPA) "Outcomes Analysis Project," funded by the Fund for the Improvement of Postsecondary Education (FIPSE). More specifically, the paper was developed for the project's Task Force on Institutional Effectiveness (one of three task forces operative within the project). Among the several topics addressed by this task force, the paper attempts to respond to the call for a guide to the methods and instruments related to outcomes assessment, with an analysis of their strengths and weaknesses, all within the context of institutional effectiveness.

The paper begins by identifying a number of conceptual issues which may have significant influence on the selection of particular methods of assessment, then goes on to an overview of various methods or instruments, with commentary on their strengths and weaknesses. I have aimed the paper principally at those individuals who have considerable experience in higher education but not much experience in assessment and, hence, are feeling overwhelmed or confused by the rapidly developing demands for outcomes assessment; and at those individuals who may have considerable experience in assessment methodology but not in the context of higher education's concern with accreditation matters.

This paper is complemented in important ways by three other papers prepared by members of the Task Force on Institutional Effectiveness. Peter Ewell's "Outcomes Assessment, Institutional Effectiveness, and Accreditation: A Conceptual Exploration" provides an insightful analysis of the role of assessment in the accreditation process. Trudy Banta's "Selected References on Outcomes Assessment in Higher Education: An Annotated Bibliography" provides an excellent overview of the literature on the topics addressed by the Task Force and, in a way, serves as the references for all of the other papers. Ralph Wolff's "Incorporating Assessment into the Practice of Accreditation" provides results of a survey of members of the accrediting community on topics related to the concerns of the Task Force.

While I accept full responsibility for the contents of this paper, I would like to thank the other members of the Task Force on Institutional Effectiveness (Peter Ewell, Trudy Banta, Ralph Wolff, Dorothy Fenwick, James Rogers, the Task Force Chair, and Richard Millard, project coordinator) for helpful comments on and a fruitful discussion of an earlier draft of the paper.

Introduction

A real estate agent confidently tells a customer: "This house is located in the best school district in the area." A high school principal, beaming, reports to a PTA meeting: "Our graduates this year have gone on to some of the best universities in the country." A proud uncle tells his golfing partner: "My nephew was just accepted at St. Basil's College; I hear it's the best small college in the tri-state area."

Comments such as these are commonplace. But what can they possibly mean? What basis do people have for formulating these judgments? Or for accepting the judgments rendered by others in these apparently weighty matters? Perhaps more to the point: Can we identify any rational bases for making statements about the quality of educational institutions? If so, can we apply these bases in the formal method we have developed for determining the success of educational institutions: the accreditation process?

Historically, our judgments about the quality of educational institutions have been based principally on what are called inputs. A "good" institution was one that paid high salaries to its teachers, had a good physical plant, invested heavily in its library, provided up-to-date laboratories and computer facilities, and so on. An institution which had lower salaries, crowded classrooms, last year's computer models, etc. was thought to be second-rate.

Increasingly in recent years, people have become dissatisfied with, or at least suspicious of, this "input" approach to defining quality. The suspicions arose first, not at the post-secondary levels of education, but at the elementary and secondary levels with the advent of the "accountability movement." This movement had its origins in the latter half of the 1960's, which witnessed massive infusions of federal dollars into educational programs. Tied to these infusions were requirements that the programs be evaluated. In all too many cases, it was noted that the new funding made little or no difference in student learning. In fact, at a macro level, it was noted that, as spending increased, student achievement seemed to deteriorate. Total expenditures (local, state, and federal) continued to increase at a steady clip. But such indices as SAT scores, college-going rates, retention rates, literacy rates, etc. declined. In this milieu, attention turned to the outputs of education. The 1970's witnessed an outpouring of "school effectiveness" studies. These studies started by identifying schools which seemed to produce good outcomes (usually defined in terms of student learning as measured by tests), then attempted to determine the "input" or "process" characteristics of the schools. By the 1980's and continuing to the present, it has been an accepted fact at the elementary and secondary school levels that the quality of a school must be defined by its outcomes.

Postsecondary education has come late to this game, but it is tracing the same path as that traversed by lower levels of education. For example, the staple of the accreditation process at the postsecondary level has been an input model. Library resources, classroom space, faculty salaries, lab facilities, etc. have been the key subjects of study. The process has operated on the assumption that if the inputs are in place the outcomes will inevitably follow. That assumption has increasingly been called into question, just as it was in an earlier day at the pre-collegiate level. Hence, the postsecondary accreditation process, while clearly not abandoning steady attention to inputs, calls increasingly for outcomes assessment. (Specific references to outcomes assessment in accreditation materials are documented in another paper prepared for this COPA project; see Outcomes Assessment and Analysis: A Reference Document for Accrediting Bodies, June, 1991; available from the Council on Postsecondary Accreditation for $6.00.) Concern for outcomes assessment has clearly been one of the two or three major topics of discussion in post-secondary education in the past decade. However, the methodology appropriate for outcomes assessment is still somewhat unfamiliar territory for many people in higher education, including those involved in the accreditation process.

The principal objective of this paper is to identify various methods which might be useful for outcomes assessment, and to comment on the relative strengths and weaknesses of these methods, particularly in relation to their use for assessing institutional effectiveness. However, before moving directly to a treatment of these methods, we must treat a number of other issues. Our experience in working with various assessment methods (and, more especially, in working with various faculty, staff, and administrators in both educational institutions and accrediting bodies) tells us that the strengths and weaknesses of the methods can be discussed fruitfully only after some more general questions are confronted. Further, while many people think that the real problems of outcomes assessment are technical or methodological questions, in fact, the really difficult problems lie in the realm of these general questions. So, we proceed first to treat these general questions, then refer to them in the discussion of particular methods.

The First Key Question: Final Status or Value Added. The first key question to be answered, or choice to be made, before deciding about methods to be employed for assessing institutional effectiveness, is whether to concentrate on the final status of institutional results or the value added by the institution. The names for these two approaches are largely self-explanatory, but let us comment briefly on each.

In the final status approach, we are concerned only with the final outcomes evident for an institution's "products." (The "products" may be graduates, community services, faculty research, or whatever else is called for in the institution's catalog of intended outcomes.) It makes no difference what the starting point was. It makes no difference whether the institution contributed to the development of the final status. Perhaps more appropriately, it is simply assumed that the institution contributed, at least to a significant extent, to the final status.

In the value added approach, we are concerned primarily with the extent to which the outcomes have been influenced by the institution. Are graduates getting better jobs than they would have acquired without attendance at the institution? Has writing ability developed beyond what would have been expected from simple age-related maturation? Do a large percentage of an institution's graduates enter prestigious graduate schools because the institution trains its students exceptionally well, or just because it starts with a very bright group of freshmen?

We should make three observations about the choice between the final status and the value added approaches to outcomes assessment. First, nearly everyone agrees that the value added approach is the one that should be used. Second, despite this agreement, nearly all of the outcomes measures actually used by institutions are of the final status type.

Third, and most important for purposes of this paper, there are clear methodological implications involved in the approach that is adopted. Specifically, the methods of collecting, analyzing, and interpreting information are enormously more complex in the value added approach than in the final status approach. In the final status approach, we deal with just one measurement at a time, e.g. a test score, job placement rate, or index of satisfaction. In the value added approach, we need at least two measurements. The two measurements can occur in either one of two contexts. First, they can come from a "before-and-after" design, i.e. before and after association with the institution. Second, they may come in a "comparison" design, i.e. after association with the institution vs. after non-association with the institution (which in practice may mean association with some other institution or association with no institution at all). Ideally, both designs should be used in a coordinated manner, thus involving a total of four measurements.
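As a minimal illustration of the methodological contrast just described (the scores and function names below are invented for this sketch, not drawn from the paper), the final status approach summarizes a single set of measurements, while the value added approach requires differencing paired or compared measurements:

    def final_status(scores):
        """Final status: one measurement per graduate, summarized directly."""
        return sum(scores) / len(scores)

    def value_added_before_after(entry_scores, exit_scores):
        """Value added, before-and-after design: average gain from entry to exit."""
        gains = [post - pre for pre, post in zip(entry_scores, exit_scores)]
        return sum(gains) / len(gains)

    def value_added_comparison(attended_scores, comparison_scores):
        """Value added, comparison design: difference from a non-attending group."""
        return final_status(attended_scores) - final_status(comparison_scores)

    entry = [48, 52, 50, 55]          # invented entry scores for four students
    exit_scores = [60, 63, 58, 66]    # invented exit scores for the same students
    comparison = [51, 54, 49, 56]     # invented scores for a comparison group

    print(final_status(exit_scores))                        # 61.75
    print(value_added_before_after(entry, exit_scores))     # 10.5
    print(value_added_comparison(exit_scores, comparison))  # 9.25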

It is clear that the value added approach, admittedly the preferable one, presents significant challenges. In many ways, one is thrust back on the final status approach, with its attendant inferential uncertainties. Those interested in outcomes assessment related to institutional effectiveness need to be aware of the contrasts presented by these two approaches: both the questions they answer and the methods required to yield their answers.

The Second Key Question: Final Answers or Meaningful Processes. The second key question to be answered or choice to be made relates to what has emerged as an important philosophical, or perhaps more accurately, stylistic, approach. On the one hand, some maintain that it is possible to develop relatively definitive answers to the questions of whether desired outcomes are being achieved. And, the outcomes assessment should provide these final answers.

On the other hand, some maintain that it is too difficult to provide the final answers, but that does not mean that we should abandon the pursuit altogether. Rather, what is important, according to this position, is that we assure that meaningful processes are established to continually generate and examine information related to outcomes assessment.

Is this a distinction without a difference? That is, do the final-answers and meaningful-processes positions amount to the same thing in terms of accreditation concerns? By no means. There are at least two major differences in the practical consequences of these positions in terms of accreditation reviews.

First, there is a difference in who is involved and in how they are involved in outcomes assessment activities. In the final-answers approach, it makes little difference who is involved or how they are involved (except to insure the technical correctness of procedures, as per the second issue, considered below). The sole concern is with providing correct, valid answers. According to this position, it is probably best to have one or a few psychometric experts simply provide a package of answers to an institution about its outcomes. Then, an accrediting group can examine the package of answers.

In contrast, according to the meaningful-processes approach, we need to worry about broad involvement of the constituencies affected by the outcomes assessment, both with respect to selection of methods for assessment and interpretation of results. An accreditation review would be no more interested, perhaps even less interested, in the results of the assessment than in how the results were obtained and used. The review would treat, for example, what groups of faculty, staff, and other groups were involved in developing the assessment methodology; how widely the results of the assessment were disseminated and discussed; and similar issues.

The second major difference between the final-answers and meaningful-processes approaches to outcomes assessment relates to the technical characteristics of the methods employed. The technical characteristics to which we refer include such matters as reliability and validity of measures, sampling techniques (if sampling is employed), adequacy of normative data (if normative comparisons are involved), and other such matters.

In the final-answers approach, one needs to worry a great deal about such matters. It is critical that the institution bring expertise to bear on these matters. And, it is critical that members of an accrediting team or body have such methodological expertise.

In contrast, with the meaningful-processes approach, one need not worry overmuch about technical purity. Of course, one does not want to tolerate outright rubbish. But measures which provide some useful information are quite acceptable, provided the people involved in the assessment have a commitment to their use.

In this paper, we do not express a preference for one or the other of the two positions we have been discussing. At first blush, the final-answers approach has considerably more appeal. After all, if one asks whether an institution is achieving its objectives, presumably one would like an answer, a final answer to the question, and not simply find out who is involved in trying to answer the question, how they went about the task, etc. The meaningful-process approach seems solipsistic, a bottomless quagmire.

Despite the first-blush appeal of the final-answers approach, the experience of many people who have labored in this vineyard for a long time suggests that the meaningful-processes approach is the more fruitful one. The reasoning goes like this: Obtaining final, definitive answers to questions about outcomes is so difficult, so fraught with technical limitations, so long and arduous, that if you insist on final, clear answers, you are likely to despair; in frustration, you will give up the pursuit altogether. Further, since only the methodological experts were involved in the first place, no one much cares if the pursuit is abandoned. Better, so the reasoning goes, to concentrate on establishing the processes that will ensure continuing pursuit of answers to our questions about outcomes; and to involve the many people who have a stake in the answers and in the processes that can influence change in the answers.

It is an interesting contrast in positions. Those engaged in the accreditation business need to acknowledge the existence of these two different approaches and be ready to discuss them among themselves and with representatives of institutions served by accrediting agencies.

Program Effectiveness vs. Institutional Effectiveness. In this paper, we treat outcomes assessment related to institutional effectiveness. A companion paper is being written as part of the overall "COPA Outcomes Analysis Project" on outcomes assessment related to program effectiveness. What are the differences, if any, in the issues to be considered when treating programs vs. institutions? Is an institution's effectiveness defined as the simple arithmetic sum of the effectiveness of its various programs? If so, the problems of measuring institutional effectiveness are no more than the problems of measuring program effectiveness. To some extent, the answer to these questions depends on what is meant by "programs."

In most contexts, "programs" means academic programs designed by academic departments (faculty) or analogous bodies to train or develop students, particularly with respect to intellectual content or skills. In this context, institutional effectiveness (except for a few, very specialized institutions) goes well beyond the simple sum of measures for programs. Nearly all institutions specify desired outcomes beyond those related to their academic programs. Principal among such other outcomes are (a) development of non-academic outcomes for students such as values, attitudes, perspectives, etc., and (b) outcomes largely unrelated to students, such as community service, research productivity, etc. Institutional effectiveness, obviously, must be assessed for all such outcomes.

Of course, one could define "programs" very broadly, going well beyond the traditional definition of academic programs, to include all systematic efforts which an institution undertakes to accomplish its manifold goals. With this expanded definition, is there any difference between institutional effectiveness and the simple sum of the effectiveness of the institution's many programs? Yes, there is - or, at least, there may be.

It may be that the various programs sponsored by an institution interact in such a way that certain transcendent institutional goals are achieved (or fail to be achieved) beyond what can be accounted for by examining the successes and failures of individual programs. (If the statistically trained reader wonders if we use the term "interact" in the same sense in which it is used in analysis of variance, the answer is an unqualified "yes.") Hence, institutional effectiveness requires examination beyond simple tallying of the degree of effectiveness of all the separate programs of the institution.
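A toy numerical illustration of this point, with entirely invented figures, may help: if two programs each have a measurable effect on some institutional outcome, a joint (interaction) term can push the institution-level result above, or below, the simple sum of the separate program effects.

    baseline = 50                # outcome level with neither program in place (invented)
    writing_program_effect = 5   # effect of program A alone (invented)
    advising_program_effect = 3  # effect of program B alone (invented)
    interaction = 4              # extra effect appearing only when both operate (invented)

    additive_prediction = baseline + writing_program_effect + advising_program_effect
    observed_with_both = additive_prediction + interaction

    print(additive_prediction)   # 58: what a simple tally of program effects predicts
    print(observed_with_both)    # 62: what the institution as a whole actually shows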

The Importance of Goals. In the midst of much disagreement on many topics related to assessing the quality of educational institutions, there is one proposition on which everyone seems to agree: Quality, at least to the extent that it is defined by outcomes, cannot be evaluated or judged without reference to the mission, goals, or objectives of the institution. (There are some useful distinctions among the three terms, mission, goals, and objectives, in certain contexts, but for our purposes we use them more or less interchangeably.)




Agreement on the proposition that outcomes assessment must start with institutional goals is a function, at least to some extent, of the diversity of higher education as it is practiced in the United States and similar countries. While all or nearly all of these institutions have some goals in common, chiefly the development of intellectual capacities, communication skills, and career preparation, there are also noteworthy differences among the institutions.

Many efforts to build an outcomes assessment plan for an institution prematurely consider the assessment methods to be employed. In fact, the first step in the process must be a careful analysis of the institution's goals. Methods, measures, indices, and procedures are then developed for specific application to these goals. Hence, well developed assessment plans always eventuate in some type of goals-by-methods matrix, such as that represented below. The task of those responsible for developing an assessment plan then becomes a matter of judiciously identifying which cells will be active in the matrix, i.e., which methods are appropriate for which goals. This idea of a goals-by-methods matrix should be kept in mind as we treat the various methods which are the main topic of this paper.
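One way to make the idea concrete is to keep the matrix shown below as a simple mapping from each institutional goal to the assessment methods declared "active" for it. The sketch that follows is only illustrative; the goal and method names are invented, not taken from the paper.

    # Hypothetical goals-by-methods matrix: each institutional goal is mapped to
    # the assessment methods chosen ("active cells") for that goal.
    assessment_plan = {
        "general education outcomes": ["externally developed test battery", "portfolio review"],
        "major field knowledge": ["locally developed comprehensive exam"],
        "community service": ["service activity inventory"],
        "research productivity": ["publication and grant counts"],
    }

    # Report the active cells, goal by goal.
    for goal, methods in assessment_plan.items():
        print(goal + ": " + ", ".join(methods))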

                            Methods, Measures
Institutional Goals      A      B      C      etc.
    1
    2
    3
    4
    etc.

A Note on Reliability and Validity. It is inevitable, as it should be, that when issues of methods of assessment or measurements are discussed, the questions of the reliability and validity of the measures will be raised. In our treatment of methods in the next section, we will not be discussing these issues. They are enormously important matters. However, they must be treated with respect to very specific uses of particular measures, especially with respect to validity. Hence, at this point, we simply note the importance of raising questions about validity and reliability of measures in the context of an institution's individual assessment plan; but we do not treat these matters in detail in the following discussion of methods of assessment.

Methods for Assessing Outcomes

Considering that the main topic for this paper is "methods for outcomes assessment," the reader may, by now, be somewhat impatient to get on with the meat of the matter, which indeed we do take up in this section. However, we re-emphasize the need to reflect on the questions treated in earlier sections before treating particular methods and instruments, since directions set by the earlier considerations will influence the selection of methods and instruments.

As we begin the treatment of particular methods and instruments, two general observations should be made. First, the methods available for outcomes assessment are virtually infinite, being limited only by the types of outcomes specified for an institution and the imagination of those concerned with assessing the outcomes. There is no fixed list of methods. In what follows, we identify categories of the more widely used methods and types of instruments, but there is no suggestion that the list presented here is definitive and exhaustive.

Second, when presenting particular methods or types of instruments, we may occasionally refer to an example of a currently used instrument. But we have not attempted to produce a complete catalog of specific instruments, e.g. of all the currently used standardized tests aimed at assessing outcomes of general education programs. When we do refer to a particular example, it should not be inferred that this is an endorsement of the example as the best or most frequently used item within its general category.

Tests of Developed Abilities, Knowledge, and Skills

The first major category of methods for assessing outcomes for educational institutions consists of tests. The word "tests" can be construed broadly to include any systematic source of information about characteristics of individuals (whether those "individuals" be people, institutions, or other entities), in which case "tests" becomes virtually synonymous with the word "methods" as used in this paper. However, the more popular use of the term "tests" refers to such things as intelligence tests, final examinations, the GRE's, etc. It is in this more popular sense that we use the term here.

At the heart of every educational institution is the intent to develop in students certain areas of knowledge, abilities, and/or skills. Ordinarily, the most direct assessment of whether or not these outcomes have been achieved is provided by students' performance on some kind of test. On that proposition there is near universal agreement. The real difficulty comes in answering the following three questions:

1. Does there exist a test (or group of tests) which validly assesses the particular outcomes of interest to an institution?

2. If not, is it possible to construct such a test (or group of tests)?

3. Assuming some degree of positive answer to the first two questions, how does one specify the level of acceptable performance on the test(s)? (This is the classical problem of standard-setting which has received so much attention in the accountability movement.)

Let us try to provide some guidance in answering these questions as they relate to measuring institutional effectiveness. It will be convenient to organize the matter in terms of the chart shown below.

                                   Areas to be Covered
                             General Education    Specific Areas
Source of       External
Development     Local

Areas to be Covered. Within the broad category of knowledge, abilities, and skills, we distinguish first between the areas to be covered, and subdivide the areas into general education and specific areas. The specific areas are usually represented by content fields corresponding to majors or programs, such as psychology, history, marketing, auto mechanics, etc. Each of these specific areas, in turn, can be resolved into ever-increasing levels of detail.

For some institutions, the general education category is simply irrelevant; the institution has no goals that can be categorized as general education. Such might be the case for institutions with an entirely vocational orientation; it will also be true for major schools (e.g. graduate schools) within institutions which concentrate only on specific areas of knowledge or skill. On the other hand, nearly all four-year colleges and universities consider general education outcomes to be among their most important goals.

All educational institutions have goals related to specific areas of knowledge, abilities, or skills.

Sources of Development. The other basis for categorization of tests is in terms of the source of development for the tests. Some are externally developed, i.e. outside of the institution. External developers are typically of two types. First, there are "commercial" publishers (either profit or non-profit) whose business is, at least in part, the development of educational tests. Examples of such publishers are Educational Testing Service, the American College Test Program, the Psychological Corporation, Riverside Publishing (formerly part of Houghton Mifflin Co.), and McGraw-Hill/California Test Bureau. Second, there are professional associations which, as part of their service to their members, provide tests applicable to their particular areas of concern. Examples of such associations (there are hundreds of them) are the National League for Nursing, the American Chemical Society, and state Bar Associations.

The other source of development is local, i.e., people at the institution develop their own test. Such development may take place entirely within a single department; it may be done by an institution-wide committee; or it may be done as a combined effort of a department or committee and test experts in an educational research office at the institution.

We have not represented in our scheme for "sources of development" an intermediate category which might involve groups of institutions cooperating in development of a test. While this is certainly a theoretical possibility, the simple fact of the matter is that there is little of it taking place. (A notable exception is the development of the Area Concentration Achievement Tests, a FIPSE-funded project directed by Anthony Golden at Austin Peay State University.) People who are using tests to assess knowledge, abilities, and skills generally either use an externally developed test or develop one strictly within the confines of their own institution, although with some borrowing of experience from other institutions.

The strengths/advantages and weaknesses/disadvantages of tests as methods for assessing institutional outcomes are partly the same and partly different for each of the cells in the chart given above. Before beginning to analyze these strengths and weaknesses, let us comment briefly on the nature of each one of the cells.

Externally Developed Tests of General Education. Externally developed tests aimed at the general education outcomes of post-secondary education are a relatively new phenomenon. They have started to become available as the "assessment movement" has grown in higher education in just the past fifteen years or so. Whereas broad-based measures of school achievement have been available at the elementary and secondary levels since the 1930's and such measures have long been used in nearly all elementary and secondary schools, comparable measures at the post-secondary level are still few in number and infrequently used.

However, there are now available several examples of measures (what are often called "batteries" of tests) developed by professional test-makers which are aimed at the general education component of a college degree program. Perhaps the most widely referenced is the COMP (College Outcomes Measurement Project) Test, produced by American College Testing. Other examples include College BASE, produced by Riverside Publishing Company; the Academic Profile, produced by Educational Testing Service; and the Collegiate Assessment of Academic Proficiency (CAAP), produced by American College Testing.

One will sometimes hear reference to possible use of the Scholastic Aptitude Test (SAT) or the American College Test (ACT) with college seniors as a "post-test" following use of the same measures prior to admission as an attempt to assess growth in general education. But such a position is not accorded much credibility by measurement experts because these two widely used tests were specifically designed to be general predictors of success in college rather than as measures of the achievement of college outcomes. (One must admit, however, a considerable communality, from a strictly empirical viewpoint, in what is covered by tests designed for these differing purposes.)

Locally Developed Tests of General Education. There is, at present, a great deal of activity devoted to local development of tests related to general education goals. In some instances, institution-wide committees are attempting to develop "batteries" of tests along the lines of externally developed tests, but (presumably) more carefully attuned to local definitions of general education goals. In other instances, institutions are attempting to "embed" tests related to general education goals in existing course evaluation procedures, then aggregate results across courses.

Externally Developed Tests of Specific Areas. Externally developed tests of specific knowledge, abilities, and skills have been available for certain areas for many years. Examples include the advanced or subject tests of the Graduate Record Examinations, the Major Field Achievement Tests, both of the latter being published by Educational Testing Service, and the tests administered by professional organizations, such as the National League for Nursing (NLN) tests and CPA examinations. More such tests become available with each passing year.

Locally Developed Tests of Specific Areas. Locally developed tests of specific knowledge, abilities, and skills are legion. They include comprehensive, end-of-program examinations in every conceivable content area. They are typically developed by the faculty responsible for a particular program. They may look very much like an externally developed exam such as a GRE or they may be combinations of written tests and behavioral ratings of skill performance.

A Special Note on Portfolios. Much work is being carried out these days on the use of "portfolios" for assessment of student learning. The work is generally being conducted by local institutions; some of it is aimed at general education goals, especially with respect to writing and speaking skills, while in other instances it is aimed at major fields of study. Portfolio assessment offers some interesting possibilities, particularly in contrast to what are referred to as traditional paper-and-pencil tests.

In terms of the questions to be asked about this particular approach to assessment and analysis of its strengths and weaknesses, portfolio assessment is not generically different from other types of tests. That is, one must be concerned about questions of validity, reliability, efficiency, interpretability, and so forth, for this source of information about achievement in the same manner as considering these matters for other sources of information.

General Strengths and Weaknesses of Tests. As noted above, strengths and weaknesses of tests as assessments of outcomes must be analyzed, at least to some extent, separately for each cell in our scheme, especially with respect to the source of development. However, there are some generic strengths and weaknesses of tests which we can discuss first.

The major strength of tests as an outcome assessment is very obvious. At the heart of most educational enterprises is the development of knowledge, ability, and skill. A test, at least a well developed one, is the most direct way to "get at" such development. A test has a directness to it which is not present in most other methodologies for purposes of assessing development of knowledge, ability, and skill. There is good reason why tests play such a central role in most educational settings.

A second strength of tests is that they have fairly wide applicability, partly because of the way most instructional goals are conceptualized and the way instruction is delivered. The way we educate students usually lends itself to testing the outcomes. Educational critics sometimes lament this situation, but the simple fact is that the way in which much education takes place lends itself to what might be called traditional testing methodology.

A third strength of tests is that we now have a fairly well-developed set of principles, contained within the field of psychometrics, for determining, or at least discussing, the quality of measurements provided by tests. These principles are much more developed than they are for the other methods to be considered later in this paper.

Tests have three major weaknesses for purposes of outcomes assessment related to institutional effectiveness. First, tests are appropriate for only a few of an institution's goals, although these few goals are usually among the most important for the institution. This is not a fault of tests as such but a fault in our typical thinking about outcomes assessment. We alluded to this phenomenon earlier. When asked about assessing outcomes for an institution, a frequent initial response is: There isn't a test for all the things we are trying to do. Indeed, that may be true, and almost certainly is true. But there are tests for some of the goals, and other methods for other goals.

The second weakness of tests as a generic methodology is the fact that we usually cannot combine information across many different fields. This weakness is peculiar to assessment of institutional effectiveness as opposed to program effectiveness. The typical institution has many programs. Some of these programs may have well developed tests applicable to their particular outcomes; other programs may have poorly developed tests or no tests for their outcomes. Usually, there is no meaningful way to combine all of this information (and lack of information) to make statements about the institution as a whole. Even if every program had a "good test" for its outcomes, it is unlikely that the information could be combined in any meaningful way. One program may use the GRE's, another the NLN exams, another a locally developed paper-and-pencil test, still another a laboratory-based practicum as an exit measure. Each of these may provide a good measure for an individual program, but generalizing across the entire institution is difficult at best, demanding a level of inference that could only be described as ethereal.

A third generic criticism of tests relates to the relationship between test performance and actual practice or application, usually defined in a post-graduation time frame. People always wonder whether the knowledge displayed on a test will translate into proper application later on (or even whether the knowledge will be retained six months later).

Strengths/Advantages of Externally Developed Tests. With the understanding that the strengths and weaknesses discussed above apply to the following discussion, let us turn now to specific categories of tests. Externally developed tests have a number of strengths or advantages, most of which (but not all) apply to both tests of general education and tests of specific areas.

The first advantage of externally developed tests is the professional care taken in their preparation. Ordinarily, the developers of the tests have a high degree of expertise and have devoted a considerable amount of time and research to the test development process.

The second advantage of externally developed tests is that (usually) they have some type of norm, i.e. an external reference point for comparing local performance. This is a major advantage for "making sense" out of local findings.

The third advantage of externally developed tests (a fairly obvious one) is the ready availability of materials and services. If an institution decides that a particular test is appropriate for some use, it can usually order the materials and have them on hand in a matter of weeks. Further, such services as machine scoring and reporting, consulting, technical manuals, etc. are already in place.

A fourth advantage of externally developed tests is their credibility with a variety of audiences. Whether the credibility is justified or not is a separate matter. The fact is that many groups "pay attention" to GRE scores, passing rates on the CPA exams, etc.

A final characteristic of at least some externally developed exams, which should probably be listed as an advantage or strength, is that they serve as the definition of successful performance in the field. There is simply no way to avoid them, other than dropping out of the field. This characteristic applies in many "professional" fields such as nursing, physical therapy, and education.

Weaknesses/Disadvantages of Externally Developed Tests. The greatest disadvantage of externally developed tests is, quite simply, that they do not exist for many fields of knowledge or skill which represent important outcomes within an institution.

A second disadvantage of externally developed tests, really a milder form of the first disadvantage, is that the test may only partially cover the outcomes specified at the local institution. This is, of course, a matter of degree. And it is the source of much controversy when contemplating the use of an externally developed test. How much overlap does there need to be between locally specified outcomes and the content of an externally developed test before one declares that the "fit" is close enough?

A third disadvantage of externally developed tests, very much related to the latter point, is their relative inflexibility. In most cases, using such a test is an all-or-none affair. One cannot ordinarily eliminate some of the content and/or add locally developed content, although some externally developed tests give limited flexibility in this regard.

A fourth (potential) disadvantage of externally developed tests is entirely psychological in nature. It often afflicts faculty involved in specifying what measures are appropriate for their program. There is, sometimes, not a "sense of ownership" when an externally developed test is used. Because the test cannot, in most instances, be tailored to the local curriculum, even a little bit, there is a tendency to reject the whole notion of using an external test. Of course, not all faculty feel this way; some welcome the objectivity of an external measure. But the lack of "ownership" feeling is encountered sufficiently often to warrant mention here. A related concern is that "tests will drive the curriculum" or that faculty will "teach to the tests." These fears are mentioned more often with respect to externally developed tests, although they may be just as problematic for local tests.

A fifth disadvantage of externally developed tests is their cost. Because a great deal of expertise, research, and time has been devoted to their development, and their developers would like, at a minimum, not to lose money, the costs are naturally passed along to the users.

With respect to this disadvantage, two comments are worth making. First, where cost is a real consideration, the cost can be reduced by sampling, either by student within year, e.g., testing a 20% sample of the students, or by student cohorts across years, e.g., testing students every third year. For purposes of institutional assessment, such sampling schemes are perfectly acceptable. Second, if the costs of externally developed tests are compared with the true costs of local test development, it may actually be cheaper to use the external tests; but institutions rarely calculate the true costs of local development, being particularly adept at disregarding the huge amounts of faculty and staff time which may be devoted to the test development enterprise.
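To make the first comment concrete, the sketch below draws a simple random 20% sample of a cohort for testing. It is a minimal illustration only; the student identifiers, the sampling fraction, and the fixed seed are assumptions introduced for the example rather than anything specified in this paper.

```python
import random

def draw_test_sample(student_ids, fraction=0.20, seed=1992):
    """Select a simple random sample of students to sit for the external test.

    student_ids : identifiers for the cohort of interest (e.g., graduating seniors)
    fraction    : proportion of the cohort to test (0.20 gives a 20% sample)
    seed        : fixed seed so the sample can be reproduced or audited later
    """
    rng = random.Random(seed)
    sample_size = max(1, round(len(student_ids) * fraction))
    return rng.sample(student_ids, sample_size)

# Example: test a 20% sample of a hypothetical 500-student senior class.
seniors = [f"S{n:04d}" for n in range(1, 501)]
tested = draw_test_sample(seniors)
print(len(tested), "students selected for testing")
```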

Strengths and Weaknesses of Locally Developed Tests. For the most part, the strengths of locally developed tests are the flip side of the weaknesses of externally developed tests; and the weaknesses of locally developed tests are the flip side of the strengths of externally developed tests. And it is not difficult to see what the "flip sides" are. Let us treat the matter in summary fashion.

The strengths/advantages of locally developed tests are:

1. They are potentially available for all areas, dependent only on the local will to develop them.

2. They can be tailored exactly to fit the local definition of outcomes.

3. They are infinitely flexible. They can be modified from year to year or even for various sub-groups of students.

4. The sense of ownership can be very strong for locally developed tests. This is perhaps their major advantage. Rather incidentally, but very importantly, especially if one opts for the "process" approach to assessment, the engagement of local personnel in building a test usually leads to intense discussion of what the goals of education really are, how they can be achieved, what changes can be made to achieve them, and so on, all of which might be far more important for the overall quality of the institution than any results of the test.

5. It can be relatively inexpensive to develop tests locally (at least if one disregards the personnel costs involved).

The weaknesses/disadvantages of locally developed tests include:

1. Expertise in test development may be in short supply, with attendant deleterious consequences. It is hard to build a good test; it is easy to build a poor one. In particular, locally developed tests tend toward one of two extremes: either very factual and mundane or so esoteric as to be uninterpretable.

2. Locally developed tests almost never have any external reference points (norms). When confronted with the final results of the test, it is almost impossible to answer the question: Is this result good? It may be possible to have faculty, with a thorough knowledge of the local curriculum, establish some "criterion-referenced" benchmarks, but this is often a more difficult process than one first supposes it to be. Of course, after a locally developed test has been used more than once, it will have its own "local norm" which can be used as a reference point for future uses (a minimal sketch of computing such a local norm follows this list).

3. It is a long and arduous journey to develop a local test. From the point of deciding to develop a test to the point of having a usable test is often several years, and sometimes you never get there. Quite apart from developing the test itself, one must worry about procedures for scoring, summarizing results, printing the materials, handling inquiries about technical characteristics of the test, etc.

4. Locally developed tests rarely have the credibility with outside audiences (i.e., outside the faculty/staff who developed them) that external tests have.

5. The fifth item listed as a "strength" of externally developed tests (i.e., being the "coin of the realm" for certain agencies) does not really have a corresponding weakness for locally developed tests, except the perfectly obvious one that local tests do not serve this function.
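As noted in item 2, repeated administrations of a local test eventually supply a "local norm." The sketch below shows one simple way such a norm might be computed and used as a reference point; the scores and the standard-deviation comparison are illustrative assumptions rather than a prescribed procedure.

```python
from statistics import mean, stdev

def local_norm(prior_scores):
    """Summarize earlier administrations of a local test to serve as a reference point."""
    return {"mean": mean(prior_scores), "sd": stdev(prior_scores), "n": len(prior_scores)}

def compare_to_norm(new_mean, norm):
    """Express this year's average as a difference from the local norm, in SD units."""
    return (new_mean - norm["mean"]) / norm["sd"]

# Hypothetical scores pooled from the first two administrations of a local exam.
norm = local_norm([62, 71, 68, 74, 59, 66, 70, 73, 65, 69])
print(compare_to_norm(new_mean=72.5, norm=norm))  # a positive value means above the local norm
```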

Surveys, Questionnaires

The second major method employed for assessing educational outcomes is the very broad category of surveys and questionnaires. For economy, we will use the single term "survey" in this paper to represent this category. The category includes such a diffuse, flexible, and varied array of applications and examples that it is, at first, difficult to summarize. In particular, the content of surveys, that is, what you ask questions about, is limitless. Nonetheless, let us attempt to summarize the category with the following organizational scheme: by source of development, target group, format, and content orientation.

Sources of development. The sources of development for surveys are the same as for "tests," with one or two exceptions. First, there are a great variety of externally developed surveys. They are available from commercial publishers, an example being the Alumni Survey published by the American College Testing Program. They are also available from professional associations, for example the survey sent to graduates of rehabilitation programs by the Council on Rehabilitation Education when rehabilitation programs are being accredited. Several well-known surveys are also available from educational research organizations, which function, in this regard, partly like commercial publishers. Two examples in this category are the College Student Experiences Questionnaire (sometimes called the Pace survey, after its principal author, Robert Pace) produced by the Center for the Study of Evaluation at UCLA; and the Freshman Survey (sometimes called the Astin survey after its principal author, Alexander Astin) produced by the Cooperative Institutional Research Program, also at UCLA.

Then, of course, there are locally developed surveys, by the hundreds, even thousands. Every institution has a great variety of locally developed surveys, many of which relate, in one way or another, to outcomes assessment.

There are also a number of examples of groups of institutions cooperating in the development and use of surveys for outcomes assessment, for example for follow-up of graduates. Under sources of development for tests, it was noted that institutions don't seem to cooperate in developing tests; but they do cooperate in developing survey instruments.

Target Groups. Target groups for surveys are virtually limitless, but there are about a half-dozen typical target groups, including the following:

1. Students, further subdivided by level, e.g., pre-admission level, entry level, in-process (e.g., end of sophomore year), and exit level (approximately the time of graduation).

2. Graduates, further subdivided by number of years after graduation, e.g., after six months, one year, five years, ten years, etc.

3. Employers of graduates of the institution.

4. Faculty and staff of the institution, obviously susceptible to any one of several further subdivisions, e.g., humanities faculty, junior vs. senior faculty, academic vs. non-academic administrators, etc.

5. Community members, again obviously susceptible to a variety of subdivisions, e.g., community leaders, a random sample of area residents, etc.

6. Personnel at peer institutions.

Format. It should be mentioned, perhaps as part of the definition of what is meant by "surveys," that this method encompasses a number of possible formats. The most typical is a printed form, on which the respondent (the member of the target group) records answers to items. The items may be of the "check off" variety using rating scales, yes-no answers, etc., and/or of the "free response" variety in which the respondent writes out an answer to a question.

Since most people think of surveys as asking only about subjective feelings, special mention should be made here of the "behavior checklist" type of item which may be used in a survey. In this type of item, respondents are not asked their opinions about something but whether or not they have performed some action or how often they do it. For example, students may be asked how often they visited the library last week. Faculty may be asked how many students visited their office last week. Alumni may be asked how many professional associations they belong to.

We also include within the scope of surveys any type of interview. These may take place in person or by telephone. They may also occur within the context of a "focus group," which essentially involves interviewing a small group of persons all at once. Any of these formats for interviews may vary from highly structured, almost like a "check off" questionnaire, to quite unstructured affairs.

Content. It is difficult to characterize the possible content of surveys. This difficulty is precisely the strength of surveys. One can construct a survey, or find an existing one, to cover just about anything. However, we can list some of the types of items often encountered in surveys which are relevant to the question of outcomes assessment for institutional assessment, while re-emphasizing that one can cover almost anything with survey methodology.

Surveys of students often ask about three main topics. First, after some period of association with the institution, students are asked to rate their perceived growth with respect to knowledge, skills, or abilities. Second, students are queried about their attitudes, dispositions, and habits, for example with respect to matters related to life-long goals, racial and social attitudes, religious and civic practices, etc. Third, students are asked to rate various aspects of the institution, from the mundane (food service, parking facilities, etc.) to the sublime (overall satisfaction with instruction, opportunities for research, etc.).

Surveys of graduates cover much the same territory as surveys of students, with less emphasis on the more mundane characteristics of student life, which may have changed substantially (for better or worse) since the graduate was in attendance. Additionally, surveys of graduates inquire about current job status, the relationship between job and degree program, and involvement in community and professional activities.

Surveys of employers usually attempt to determine whether the employee who is a graduate of the institution possesses certain required skills and whether the employer is satisfied with the employee/graduate.

Surveys of community members usually relate to those institutional goals which are oriented to community service. Do community members perceive the institution as providing important community services, e.g., cultural events, assistance to local businesses, etc.?

Surveys of faculty and staff usually ask for ratings of the extent to which these employees see the institution as achieving its goals. Such surveys often add questions about employee satisfaction with various institutional services (much like the questions asked of students) and about satisfaction with working conditions, questions in these two latter categories being useful for purposes of human resources management but perhaps not of much relevance for outcomes assessment.

In this discussion of content orientation, special mention should be made of instruments which were originally designed to be personality tests but are occasionally used for purposes of outcomes assessment. Examples include the Omnibus Personality Inventory and the Allport-Vernon-Lindzey Study of Values. Although originally designed for purposes other than educational outcomes assessment, and although technically "tests," hence includable within the earlier section of this paper on various types of tests, these personality tests seem to fit best, functionally, in our treatment of surveys in this paper.

We emphasize again that the above listing simply provides examples of the many topics which can be covered by surveys and of the types of target groups for these surveys.

Strengths/Advantages of Surveys. As we begin the analysis of strengths and weaknesses of surveys for use in outcomes assessment, we should note that in terms of the contrast between externally developed and locally developed surveys, the strengths and weaknesses are almost identical to those discussed with respect to this issue under "tests" above. To summarize just briefly, strengths of the externally developed instrument (test or survey) are immediate availability of materials and services, quality of development, external reference points (norms), and credibility to outside audiences. The weaknesses tend to be lack of fit with local emphases and circumstances, political acceptance (ownership), and possibly costs.

There is one exception to this generalization. Externally developed surveys often, although not universally, allow for inclusion of locally developed questions (items) as a supplement to the main part of the survey. This tends not to be true for externally developed tests. Hence, the criticism that externally developed tests have little flexibility for adaptation to local emphases must be tempered, although not eliminated entirely, as applied to externally developed surveys. (The real difference here is not so much between surveys and tests as between the typical structure of surveys and tests, particularly as that structure serves as the basis for interpretation. Many surveys are interpreted on an item-by-item basis; thus adding items, which will also be interpreted on an item-by-item basis, is not problematic. On the other hand, many tests are interpreted in terms of scores derived from a summation of responses across items; and it is at least messy, often impossible, to insert extra items which will be summed into the total scores.)
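The parenthetical point lends itself to a small illustration. In the sketch below, survey items, including two locally added ones, are reported item by item, while a normed test's total score is computed only from its own fixed item set. All item names and responses are hypothetical.

```python
def item_percentages(responses):
    """Report each survey item separately: percent of respondents choosing each option."""
    report = {}
    for item, answers in responses.items():
        total = len(answers)
        report[item] = {option: round(100 * answers.count(option) / total, 1)
                        for option in sorted(set(answers))}
    return report

# External items plus locally added items: everything is interpreted item by item,
# so the local additions cause no interpretive problem.
survey = {
    "Q1_perceived_growth_in_writing": [4, 5, 3, 4, 4],
    "Q2_satisfaction_with_advising":  [2, 3, 3, 4, 2],
    "LOCAL1_used_tutoring_center":    ["yes", "no", "yes", "yes", "no"],
}
print(item_percentages(survey))

# A normed test, by contrast, sums a fixed set of items into one total score;
# inserting extra local items into that sum would invalidate the published norms.
fixed_test_items = [1, 0, 1, 1, 0, 1]      # right/wrong scoring on the external test only
total_score = sum(fixed_test_items)
```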

The overwhelming strengths of the survey as a method of assessing outcomes are its exceptional flexibility and near universal applicability. One can construct or find an existing survey to cover virtually any topic, for any type of target group. This makes the survey a particularly attractive alternative for "getting at" outcomes which do not seem to be susceptible to other types of assessments. One can always just ask students, faculty, community members, etc., whether they believe that some outcome has been achieved, so that at least some information is available on the matter.

Part of the flexibility of surveys relates to their administration. Ordinarily, the administration does not need to be tightly controlled. Surveys can be mailed so that respondents can complete them on their own. Or they can be administered in group settings, without worrying overmuch about security, cheating, etc. This is in contrast to cognitive tests, for which administration often needs to be tightly controlled.

Another advantage of the survey is that it can be relatively easy to construct. It certainly takes some degree of expertise to do a reasonable job of constructing a survey (or local, supplementary items to be used with an externally developed survey). But the degree of expertise required tends to be considerably less than what is normally required to construct a good cognitive test.

There is one advantage of surveys that is specially relevant to their use for institution-wide assessment as opposed to program assessment. Survey questions can be phrased in such a way that they apply to whatever program the respondent is associated with (for example, students in their various majors, faculty in their various departments), with responses summarized across all programs, i.e., across the entire institution. This contrasts with the difficulty of combining test information across various programs, as noted in the discussion of tests.

A final advantage of the survey, at least in most instances, is that persons who are not methodological experts usually feel relatively comfortable looking at survey data. Such data are usually reported in the form of percentages of responses to individual items, hence are fairly simple to handle. This is in contrast to reports on standardized tests such as the GRE or MCAT, where scores are given in "standard scores" or other modes unfamiliar to the non-expert. This is an important advantage for institution-wide assessment, which usually involves audiences (including accrediting teams and associations) which are diverse with respect to their interests and levels of methodological expertise.
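As one illustration of this kind of reporting, the sketch below tallies the percentage of respondents giving an "agree" rating to a single question, both institution-wide and broken out by program. The question wording, rating scale, and program names are hypothetical.

```python
from collections import defaultdict

def percent_agree(records, item, agree_values=(4, 5)):
    """Percent of respondents choosing an 'agree' rating, overall and by program."""
    overall, by_program = [], defaultdict(list)
    for record in records:
        agreed = record[item] in agree_values
        overall.append(agreed)
        by_program[record["program"]].append(agreed)
    pct = lambda flags: round(100 * sum(flags) / len(flags), 1)
    return pct(overall), {program: pct(flags) for program, flags in by_program.items()}

# Hypothetical responses: the same question wording works in any major,
# so results roll up naturally to an institution-wide percentage.
records = [
    {"program": "History",  "Q_growth_in_writing": 5},
    {"program": "History",  "Q_growth_in_writing": 3},
    {"program": "Nursing",  "Q_growth_in_writing": 4},
    {"program": "Nursing",  "Q_growth_in_writing": 4},
    {"program": "Business", "Q_growth_in_writing": 2},
]
print(percent_agree(records, "Q_growth_in_writing"))
```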

Weaknesses/Disadvantages of Surveys. There is one overwhelming weakness to the survey for purposes of use in outcomes assessment (or for just about any other purpose): the responses are generally subjective assessments. Thus, the student may say, on a survey, that he has grown tremendously in writing ability as a result of his college experience, when in fact he has gotten no better at all. A student may say, on a survey, that the university library is horrible, when in fact, according to a number of objective criteria, it is quite a good library. Graduates may say, on a survey, that they are very tolerant of persons from other races and cultures, when in fact any objective analysis of the graduates' behavior contradicts this report. A faculty member may say, on a survey, that pitifully few students are involved in research projects at the institution, when in fact the fifteen percent of students who are involved is a relatively high percentage for institutions of this type. And so it goes with other types of responses to surveys. They can provide us with a great deal of information, but we must always be asking about the correspondence between reality and the subjective responses given on the surveys.

The situation is, of course, not hopeless. Much research has been done on the validity of survey responses; and much of that research suggests that many surveys have some degree of validity. However, the correspondence between reality and survey responses is far from perfect and, worse, we are never quite sure when the correspondence might be nil. Hence, we must always be looking for ways in which to supplement survey data.

A final weakness of surveys is the flip side of the strength we identified above as ease of construction. While it is relatively easy to construct reasonably good surveys, it is also easy to construct some really horrible ones. And although most people recognize that some expertise is required to construct cognitive tests, there are too many people who believe that any neophyte can "throw together" a survey: a dangerous attitude. Care must be exercised in survey construction if meaningful results are to be obtained.

Institutional Records

The third major category of methods which may be used for the assessment of outcomes for institutions is that of institutional records. This is probably the most neglected of all types of methods for assessing outcomes. It seems to be a matter of not seeing the forest because of all the trees.

Every institution keeps a great variety of records. Nearly all of them are kept for some reason other than specifically for the purposes of assessing outcomes. Nevertheless, many of these records can, in fact, be used for outcomes assessment. Often, all that is required is a little imagination, plus finding out who keeps what records and getting their cooperation in using the records for something other than their primarily intended purpose. We provide here a number of examples of the use of institutional records for outcomes assessment, prefacing each example with reference to a possible institutional goal.

1. An institution which aims to serve minority students in its region can use information from its admissions database to answer many questions. How many "inquiries" have been generated from potential minority students? How many applications? How many actual first-time registrations? How have these figures changed over the years? (A brief sketch of this kind of query appears after this list.)

2. An institution which aims to have a strong liberal arts component in its bachelor's degree programs can analyze transcripts of graduating seniors to determine if, in fact, students have pursued programs which can be characterized as liberal arts in orientation. (It should be noted that in many instances, institutions have curricular regulations which seem to require a liberal arts orientation, but students find ingenious ways to circumvent the regulations.)

3. An institution which aims to encourage research on the part of its students can determine, from institutional records, such things as: (a) numbers of published papers or presentations at professional meetings which involve joint faculty-student authorship, (b) number of grant proposals which include money for student assistants, (c) patterns of library usage by students, e.g., number of books checked out.

4. An institution which aims to foster community involvement on the part of its students can track patterns over the years in the numbers of students volunteering for various community projects, number of student organizations involved, numbers of hours devoted to the projects, etc. An institution may very well have one office coordinating volunteer activities, and this office must have this type of information just to serve its coordinating function.

5. An institution which wants to encourage further study by its students can track the number going on to higher levels of education, e.g., to a four-year college from a community college, or to doctoral programs following a bachelor's degree program. Such information is often obtained as a by-product of job placement data collected by career service offices.

6. An institution which wishes to encourage participation in religious activities can note attendance patterns for chapel or mass, religious retreats, and other such functions.

7. An institution which purports to serve the local business community can report the number of businesses served through agencies such as a Small Business Development Center, a technology center, or other outreach offices.
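As mentioned in the first example above, a sketch of the kind of query an admissions database might support follows. The table layout, column names, and the use of SQLite are illustrative assumptions; an institution's actual student information system would differ.

```python
import sqlite3

# Hypothetical admissions table; the table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE admissions (
                    year INTEGER, minority INTEGER,
                    inquiry INTEGER, applied INTEGER, registered INTEGER)""")

# Count inquiries, applications, and first-time registrations from minority
# students, year by year, so that trends over time can be examined.
query = """
    SELECT year,
           SUM(inquiry)    AS inquiries,
           SUM(applied)    AS applications,
           SUM(registered) AS registrations
      FROM admissions
     WHERE minority = 1
     GROUP BY year
     ORDER BY year
"""
for row in conn.execute(query):
    print(row)
```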

Strengths/Advantages of Institutional Records. For purposes of use in outcomes assessment, institutional records have a number of very significant strengths or advantages. First, and perhaps foremost, the information has already been collected, hence a minimum of additional effort is required (although some effort is needed, as noted below). Second, and relatedly, the information has often been collected over a period of time, perhaps for many years, thus allowing for identification of trends. And it is likely that whoever collects the information will continue to do so in the future. Third, the information often has a directness and simplicity to it, a kind of "face validity," which is refreshing. Finally, by their nature, institutional records ordinarily do relate to the entire institution rather than to just a single program or small segment of the institutional community.

Weaknesses/Disadvantages of Institutional Records. Institutional records also have certain weaknesses or disadvantages with respect to their use for purposes of outcomes assessment. First, while the records do exist, it is often the case that someone whose principal concern is outcomes assessment must collate, summarize, analyze, and interpret the information contained in the records; the records do not often speak for themselves. Second, since the records are often created originally for some purposes other than those of primary concern in this paper, one often wishes that the records had been created and stored in a somewhat different manner: something which one can only lament after the fact. In relation to this point, it often happens that the person, group, or office interested in using the institutional records (which are under the control of some other person, group, or office) for purposes of outcomes assessment wishes to change the way the records are created or stored. Suggestions to do so, experience tells us, must be made with the utmost tact. The principal custodians of the records usually do not like "tampering" with their systems; they are usually suspicious of any "outsiders" who want to use their data, and suspicion can turn to outright hostility when the outsider not only wants to use the information but actually change how it is obtained.

A final weakness, perhaps better termed a caution, to be noted about the use of institutional records for outcomes assessment is that one must become thoroughly familiar with the way in which information is collected before relying on it heavily. The simplicity of the records can sometimes be deceptive. The operational definition of some count may be quite different from the common-sense understanding of the name given to it. For example, a report of "numbers of students attending a cultural event" may be based on an actual "turnstile" count, or it may be based on someone's "eyeball" estimate of the size of the crowd. "Number of applications from minority students" may mean number of application forms received (regardless of whether all necessary supporting materials are received), or it may mean number of complete application folders processed (with all necessary supporting documents). "Number of businesses served" may include even casual phone inquiries from any business in the area, or it may mean at least one formal meeting with the business followed up by some type of written report.

The offices which originally collect such information know the details of these matters, and they usually have good reasons for collecting the information in a particular manner. Persons who now want to use the information for purposes of institutional outcomes assessment must become thoroughly familiar with the way the information is collected and tailor conclusions accordingly. If the information is used in comparisons with other institutions, it is obviously important that all institutions involved are using the same operational definitions.

It is also important that persons involved in the accreditation enterprise be sensitive to this matter of the operational definitions used for institutional records. When such records are used to demonstrate institutional effectiveness in the attainment of some goal, it is sometimes necessary to do a little probing in order to understand exactly what the records are showing.

Concluding Note on Institutional Records. Institutional records are perhaps the most under-utilized of all methods for purposes of outcomes assessment. All institutions collect great amounts of information. Ordinarily, the information is collected by one office for its own purposes, without much thought about its use in some wider context of institutional assessment. Persons responsible for thinking about institutional assessment should devote some time to determining what information is routinely collected by various offices in the institution, then reflecting (or, perhaps better, brainstorming) about what parts of it might be used in an overall institutional assessment plan. Personnel involved in the accreditation process should be alert to look for institutional records which may reveal important information about achievement of institutional goals, even when the information is not presented as part of the institution's "assessment" information.

The Institutional Assessment Plan

Following some introductory comments on general issues which must be taken into account when considering methods of assessment, we have concentrated on identification of particular methods for outcomes assessment and their strengths and weaknesses, especially with respect to institutional effectiveness. In practice, all of these thoughts and information must be combined into some type of overall plan for the assessment of outcomes. Constructing such a plan is a large task, and it is a topic not treated in this paper. However, we want to conclude the paper by noting the importance of the overall plan. It is in this plan that all of the issues covered in this paper are treated "live." Most frequently the plan will incorporate some mixture of externally developed tests, locally developed tests, externally developed surveys, locally developed surveys, and certain institutional records. Some parts of the plan will be well developed, others poorly developed, at least for a time. Again, it is not the purpose of this paper to treat the construction and implementation of such plans in detail, but our consideration of methods would not be complete without reference to the fact that the methods are ultimately incorporated into some type of overall assessment plan.

*Thomas P. Hogan is Dean of the Graduate School at the University of Scranton.

Endnotes

1. Throughout this paper, we have avoided citations to the professional literature. However, an exception is made here to cite two works which relate directly to the methodological difficulties involved in using the value-added approach. The two works, both recently published and both being chapter-long appendices to books, provide excellent summaries of these problems and attempts to deal with them. The first is an appendix entitled "Statistical Analysis of Longitudinal Data" in Astin, A.W. (1991). Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: American Council on Education/Macmillan Publishing Company. The second is an appendix entitled "Methodological Issues in Assessing the Influence of College" in Pascarella, E.T. and Terenzini, P.T. (1991). How College Affects Students. San Francisco: Jossey-Bass Publishers.

2. In listing tests first in order of presentation here, we do not mean to imply that they have some type of qualitative primacy, i.e., that they are the best or most important types of assessment. The order of presentation is purely arbitrary. What is the best or most important type of assessment depends on the goal that is being addressed.


Selected References on Outcomes Assessment in Higher Education: An Annotated Bibliography

Trudy W. Banta*

Foreword

This bibliography on postsecondary outcomes assessment is by no means exhaustive. As the title of the work implies, the several individuals and groups who provided guidance for its development made conscious selections from the rather substantial body of literature that has grown up since 1985 around the topic of outcomes assessment in colleges and universities. These selections were made in response to the question, "What are the key references that will give accrediting agency staff and those with whom they consult a comprehensive overview of the current status of the field of postsecondary student outcomes assessment and assessment of institutional effectiveness?"

While the bibliography is subdivided into a dozen sections for ease of reference, in some cases the placement of a given work is somewhat arbitrary. Several of the citations could be placed in two or more categories, and the entries in the Books/Collections/Review Articles section contain material that belongs under several or all of the preceding sub-headings.

Though I must accept responsibility for the final decisions about materials to include or exclude, I would like to acknowledge the capable assistance I received in the process of developing the annotations from the following members of the staff at the Center for Assessment Research and Development: Margery Bensey, John Stuhl, Francine Reynolds, Gary Pike, and Ann-Marie Pitts.

Assessment and Accreditation

Commission on Colleges, Southern Association of Colleges and Schools. (1989). Resource manual on institutional effectiveness. Second edition. Decatur, GA: Author. This manual was designed to provide assistance in addressing the SACS institutional effectiveness criterion for those who conduct the self-study required for reaffirmation of institutional accreditation. The contents emphasize that self-study should be viewed not as an episodic event but as a continuous process aimed at improving the college or university. Assessment of institutional effectiveness is defined as a systematic comparison of institutional performance to institutional purpose.

Commission on Colleges, Southern Association of Colleges and Schools. (1991). Program for peer evaluators: Training manual. Decatur, GA: Author. This manual was developed to train members of accreditation visiting committees. The topics covered include: the peer review process, preparing for the visit, gathering evidence during the visit, applying specific accreditation guidelines in evaluating evidence, and preparing a report for the institution.

Commission on Institutions of Higher Education, North Central Association of Colleges and Schools. (1991). Assessment workbook. Chicago, IL: Author. This resource manual was developed for use in a series of regional seminars conducted in Spring 1991 by the North Central Association entitled "Assessing Student Academic Achievement in the Context of the Criteria for Accreditation." Topics in the manual include: implementing the NCA accreditation statement on assessment, developing an assessment program, and six case studies on institutional assessment of student academic achievement.


Folger, J.K., & Harris, J.W. (1989). Assessment in accreditation. Decatur, GA: Southern Association of Colleges and Schools. This book is designed to give direction for developing an institutional assessment system that will provide ongoing information about effectiveness. It is aimed more particularly at institutions responding to new accreditation requirements for systematic and ongoing assessment of results. Appendices identify potential assessment instruments and other resources.

Ewell, P.T., & Lisensky, R.P. (1988). Assessing institutional effectiveness: Redirecting the self-study process. Washington, D.C.: Consortium for the Advancement of Private Higher Education. This book uses the results of a national demonstration project on the linkage between institutional assessment and regional accreditation to develop chapters on institutional goal definition, assessment for institutional distinctiveness, assessing general education, and organizing for assessment. An appendix provides an "Analytical Table of Contents for Self-Study" and formats for departmental data collection in connection with self-study.

Measurement Issues

Adelman, Clifford (Ed.). (1988). Performance and judgment: Essays on principles and practice in the assessment of college student learning. Washington, D.C.: Office of Educational Research and Improvement. A collection of essays including: Assessing the generic outcomes of higher education, Baird, L.L.; Assessment of basic skills in mathematics, Appelbaum, M.I.; Assessing general education, Centra, J.; Assessment through the major, Appelbaum, M.I.; Assessing changes in student values, Grandy, J.; Computer-based testing: Contributions of new technology, Grandy, J.; States of art in the science of writing and other performance assessments, Dunbar, S.; and Using the assessment center method to measure life competencies, Byham, W.C.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, D.C.: American Psychological Association. The fifth edition of the American Psychological Association's Technical Recommendations for Psychological Tests and Diagnostic Techniques addresses major current uses of tests, technical issues related to social and legal concerns, and the needs of all participating in the testing process. The Standards comprise four components: Part I, Technical Standards for Test Construction and Evaluation; Part II, Professional Standards for Test Use; Part III, Standards for Particular Applications; and Part IV, Standards for Administration Procedures.

Anrig, G.R. (1986). A message for governors and state legislators: The minimum competency approach can be bad for the health of higher education. Unpublished address. Princeton, New Jersey: Educational Testing Service. The president of the Educational Testing Service argues that tests measuring minimum competency are unfit for use in assessing higher education. Instead of these minimum competency measures, the author urges faculty at each institution to identify types of knowledge and particular skills they intend students to acquire, then to develop instruments designed to assess these learned abilities.

Baird, L.L. (1988). A map of postsecondary assessment. Research in Higher Education, 28, 99-115. Students' knowledge and skills cannot be appropriately assessed in the absence of knowledge about how their development is influenced by other aspects of postsecondary education. These aspects are described in a "map," consisting of twenty points, which depicts the flow of students through institutions and experiences from precollege to adulthood. The map suggests where better assessments and models are needed; for example, for adult learners, graduate and professional education, and plans of college seniors.

Banta, T.W., & Pike, G.R. (1989). Methods for comparing outcomes assessment instruments. Research in Higher Education, 30, 455-469. A general process is outlined for faculty use in comparing the relative efficacy of college outcomes assessment instruments for gauging student progress toward goals considered important by the faculty. Analysis of two standardized general education exams, the ACT COMP and the ETS Academic Profile, illustrates the process.

Baratz-Snowden, J. (1990). The NBPTS begins its research and development program. Educational Researcher, 19(6), 19-24. The author, vice-president for assessment and research at the National Board for Professional Teaching Standards, discusses the board's guidelines for setting nationwide standards for assessing entry-level skills of teaching candidates.

Council for the Advancement of Standards (CAS) for Student Services/Development Programs. (1988). CAS standards and guidelines for student services/development programs: General-division level self-assessment guide. College Park: The University of Maryland, Office of Student Affairs. Self-assessment guides focus on seventeen areas of self-study, such as admissions, orientation, career planning and placement, and counseling services.

Mines, R.A. (1985). Measurement issues in evaluating student development programs. Journal of College Student Personnel, 26, 101-106. An introduction to psychometric issues related to developmental assessment and recommendations for needed instrument refinements are provided. In addition, a brief overview of instruments available to assess developmental stages and tasks is included.

Osterlind, S.J. (1989). Constructing test items. Boston: Kluwer Academic Publishers. The author addresses the issues of functions and characteristics of test items, item formats, methods for assessing quality of test items, and issues related to use of test items. Chapter titles include: What is constructing test items?; Definition, purpose, and characteristics of items; Determining the content for items: validity; Starting to write items: practical considerations; Style, editorial, and publication guidelines for items in the multiple-choice format; Style, editorial, and publication guidelines for items in other common formats; Judging the quality of test items: item analysis; and Ethical, legal considerations, and final remarks for item writers.

Pike, G.R., Phillippi, R.H., Banta, T.W., Bensey, M.W., Milbourne, C.C., & Columbus, P.J. (1991). Freshman-to-senior gains at the University of Tennessee, Knoxville. Knoxville: The University of Tennessee, Center for Assessment Research and Development. In 1989, a comprehensive study of student growth in college, as measured by freshman-to-senior gain scores on the College Outcome Measures Program (COMP) Objective Test, was undertaken at the University of Tennessee, Knoxville. Analyses of scores for 942 students revealed the following: 1) students with the highest gain scores had the lowest entering aptitude and achievement scores; 2) gain scores were not significantly related to measures of student involvement; 3) gain scores were not meaningfully related to coursework; and 4) serious problems existed with the reliability and validity of gain scores. Analysis of follow-up interviews with thirty students revealed that involvement and coursework were related to perceived growth and development.

Rogers, G. (1988). Validating college outcomes with institutionally developed instruments: Issues in maximizing contextual validity. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA. The author clarifies the significance of contextual validity in the creation and use of assessment instruments. Using examples from the Alverno College Office of Research and Evaluation, Rogers demonstrates the elements and strategies practiced in improving the contextual validity of Alverno's assessment instruments.

Wigdor, A.K., & Garner, W.R. (Eds.). (1982). Ability testing: Uses, consequences, and controversies. Part I. Washington: National Academy Press. This report is the work of the Committee on Ability Testing, functioning under the direction of the National Research Council. Its purposes are to describe the theory and practice of testing, to clarify the competing interests that bear upon the issue of testing, and to encourage improved, informed decision-making with regard to assessment. Part I presents discussions of testing issues, grouped in the following categories: Chapters 1-3 provide an overview of testing controversies, concept introduction, methods, terminology, a brief history of testing in the U.S., and legal requirements that have arisen around testing; Chapters 4-6 describe educational and employment uses of tests, common misuses, and recommendations for improved use; and Chapter 7 examines limitations of standardized tests and identifies contexts, i.e., cultural and societal, by which these limitations are affected.

State Policy Issues in Assessment

Aper, J.P., Culver, S.M., & Hinkle, D.E. (1990). Coming to terms with the accountability versus improvement debate in assessment. Higher Education, 20, 471-483. While many postsecondary institutions consider a mandate for outcomes assessment an accountability issue, state governments appear to be more concerned about improving higher education than about making summative judgments. These two perspectives frame a fundamental debate about the nature and direction of outcomes assessment.

Berdahl, R.D., Peterson, M.W., Studds, S., & Mets, L.A. (1987). The state's role and impact in improving quality in undergraduate education: A perspective and framework. College Park, MD: National Center for Postsecondary Governance and Finance. This study explores the role of state government in improving the quality of undergraduate education and the impact of related state actions on colleges. Following a review of the background for state interest in quality, state approaches to quality improvement are considered, including coordination strategies and impact on institutions and state higher education systems.

College Outcomes Evaluation Program Advisory Committee. (1987). Report to the New Jersey Board of Higher Education from the Advisory Committee to the College Outcomes Evaluation Program. Trenton, NJ: Department of Higher Education, Office of College Outcomes. The College Outcomes Evaluation Program (COEP) Advisory Committee was appointed in June 1985 by the New Jersey Board of Higher Education for the purpose of developing a comprehensive higher education assessment program for New Jersey. The report suggests why a statewide assessment program should be developed in New Jersey and which areas of undergraduate education should be assessed, namely, basic skills, general intellectual skills, modes of inquiry, appreciation of the human condition/ethical issues, and the major. Emphasis is given to the need for institutional assessment of the personal development of students.

El-Khawas, E. (1991). Campus trends, 1991. Higher Education Panel Report, No. 81. Washington, D.C.: American Council on Education. A survey-research program established by the American Council on Education, the Higher Education Panel reports findings to policy makers in government, in the associations, and in educational institutions across the nation. The 1991 report, eighth in the series, gives special attention to the financial circumstances facing American higher education, particularly the nature, extent, and early impact of budgetary cuts that have affected many colleges and universities.

Ewell, P.T. (1990). Assessment and the "new accountability": A challenge for higher education's leadership. Denver, CO: Education Commission of the States. A policy framework is proposed for understanding state-based assessment mandates in terms of an emerging new conception of accountability. The framework is based on nine state case studies conducted in 1987-90 by the Education Commission of the States. Results are summarized in terms of an "external" and an "internal" agenda for action for higher education to make better use of assessment processes in building the public case for higher education.

Ewell, P.T., Finney, J.E., & Lent, C. (1990). Filling in the mosaic: The emerging pattern of state-based assessment. AAHE Bulletin, 42, 3-7. Summary results of a national survey of state-based assessment mandates and initiatives are reported. Detailed reviews of each state's effort are provided in a companion document by C. Poulsen (1990) published by the Education Commission of the States (Denver, CO).

National Governors' Association. (1988). Results in education: State-level college assessment initiatives, 1987-1988: Results of a 50-state survey. Washington, D.C.: National Governors' Association. This report provides an overview of assessment in the fifty states based on responses to a survey conducted in the spring of 1988 for the purpose of determining what initiatives each state had undertaken to assess higher education outcomes.

Assessment in the Major

American Assembly of Collegiate Schools of Business. (1990). Report of the AACSB Task Force on Outcome Measurement. St. Louis, MO: American Assembly of Collegiate Schools of Business. Sections cover historical context, goal analysis, and examples of goal statements. References are included also.

Banta, T.W., & Schneider, J.A. (1988). Using faculty-developed exit examinations to evaluate academic programs. The Journal of Higher Education, 59, 69-83. The experience of faculty at the University of Tennessee, Knoxville in developing examinations in the major field for purposes of assessing and improving curriculum and instruction is described. Difficulties encountered by the faculty, as well as the benefits they realized from the process, are identified and discussed.


Fong, B. (1988). Old wineskins: The AAC external examiner project. Liberal Education, 74, 12-16. The author describes a FIPSE-sponsored project involving eighteen institutions divided into six geographically proximate clusters of similar institutions. Faculty in three disciplines within each cluster cooperated in preparing and administering written and oral examinations to one another's graduating seniors.

Stark, J.S., & Lowther, M.A. (1988). Strengthening the ties that bind: Integrating undergraduate liberal and professional study. Ann Arbor: The University of Michigan. This report of the professional preparation network identifies outcomes considered important by educators in eight undergraduate fields and suggests means of facilitating interdisciplinary communication.

Stone, H.L., & Meyer, T.C. (1989). Developing an ability-based assessment program in the continuum of medical education. Madison: University of Wisconsin Medical School. This is an instructional manual for developing programs which can go beyond measuring the acquisition of a knowledge base to identifying generic abilities that cross disciplinary lines but can be assessed with tasks unique to the disciplines. It contains twelve appendixes with sample criteria and rating forms.

Assessment in General Education

Alverno College Faculty. (1979). Assessment at Alverno College. Milwaukee: Alverno Productions. Alverno College faculty have defined eight general student outcomes that form the basis of their instructional program and their assessment activities: effective communication ability, analytical capability, problem-solving ability, valuing in decision-making, effective social interaction, effectiveness in individual/environmental relationships, responsible involvement in the contemporary world, and aesthetic responsiveness. Each of these outcomes has four levels at which students may attain certification through their coursework. This publication explains how the Alverno faculty has designed and developed its assessment system.

Banta, T.W. (1991). Contemporary approaches to assessing general education outcomes. Assessment Update, 3(2), 1-4. The author argues for a planning-implementing-assessing model for general education and illustrates each of these steps with current examples from institutional practice.

Forrest, A. (1986). Good practices in general education. Iowa City, Iowa: American College Testing Program. This loose-leaf collection provides examples from almost fifty institutions of good practice in stating goals for general education, setting curricular requirements, providing remedial support, orienting new students, designing courses, and writing examination questions.

Forrest, A. (1990). Time will tell: Portfolio-assisted assessment of general education. Washington, D.C.: American Association for Higher Education Assessment Forum. This reference serves as a guide for colleges in establishing or improving the use of individual student portfolios in evaluating the general education program. The experience of seven institutions making extensive use of portfolios is related. Five sets of decisions form the basis for this work's organization: developing a definition of general education, deciding what to include in a portfolio and when, determining how and by whom the portfolios are to be analyzed, building a cost-effective process, and getting started.


Gaff, J.G., & Davis, M.L. (1981). Student views of general education. Liberal Education, 67(2), 112-113. A survey administered as part of the Project on General Education Models at twelve colleges and universities reveals student opinions about general education. The results indicate that students value a broad general education, especially as self-knowledge and preparation for a career. Students prefer some free choice, active learning methods, concreteness, and integration in their studies. They want to acquire communication competences, master thinking skills, and become proficient in personal and interpersonal relationships.

Classroom Assessment Techniques

Cross, K.P., & Angelo, T.A. (1988). Classroom assessment techniques: A handbook for faculty. Ann Arbor, Michigan: National Center for Research to Improve Postsecondary Teaching and Learning. The authors synthesized research literature concerning the impact of college on students in order to create a better focus for self-assessment of college teaching. The handbook describes thirty classroom techniques for assessing students' academic skills, learning skills, knowledge, self-awareness, and reactions to teaching and courses. Each technique comes with instructions for use, recommendations for analyzing data, suggestions for adaptation, and a list of pros and cons.

Developing Goals for Assessment

Gardiner, L.F. (1989). Planning for assessment: Mission statements, goals, and objectives. Trenton: New Jersey Department of Higher Education. This 255-page report is a step-by-step guide to planning for assessment, reviewing mission statements, and setting goals, with tables of sample outcomes goals and objectives. A chapter on resources includes reference materials, organizations, institutions with outcome-based programs, and an annotated bibliography.

Stark, J.S., Shaw, K.M., & Lowther, M.A. (1989). Student goals for college and courses: A missing link in assessing and improving academic achievement. ASHE-ERIC Higher Education Report, No. 6. Washington, D.C.: Association for the Study of Higher Education. Getting students to take active responsibility for their own education may depend on whether or not what the students themselves hope to accomplish is taken into consideration. Helping students define and revise their own goals is a valid educational undertaking.

Assessment of Student Development

Hanson, G.R. (1989). The assessment of student outcomes: A review and critique of assessment instruments. Trenton, N.J.: New Jersey Department of Higher Education, College Outcomes Evaluation Program. This report, published to help faculty and administrators evaluate assessment instruments, provides an overview of assessment issues, strategies, instruments, and techniques. Hanson evaluates instruments that assess personal, career, moral, and cognitive development; student satisfaction; behavioral events; social attitudes; and learning styles.

Kozloff, J. (1987). A student-centered approach to accountability and assessment. Journal of College Student Personnel, 28(5), 419-424. Assessment of student outcomes has most often been described from the perspectives of faculty and administrators. The author maintains that the student perspective should also be considered. Procedures for developing a student-centered model are described.


Pace, C.R. (1990). The undergraduates: A report of their activities and progress in college in the 1980s. Los Angeles: University of California, Los Angeles, Center for the Study of Evaluation. Pace discusses five types of institutions, the six million undergraduates who attend them, the College Student Experiences Questionnaire that he developed to survey them, and the results of his survey of more than 25,000 students.

Assessment in Community Colleges

Bray, D., & Belcher, M.J. (1987). Issues in student assessment: New Directions for Community Colleges, No. 59. San Francisco: Jossey-Bass. This sourcebook contains 12 chapters by 15 different authors that deal with accountability issues and the political tensions they reflect; assessment practices, the use and misuse of testing, and emerging directions; and the impact of assessment, which includes issues of student access and opportunity, technological applications, expanded models for assessment, and increased linkages between high schools and colleges as a result of assessment information. The final challenge issued in the volume is to move assessment from political mandate to impetus for improving the curriculum and the quality of college teaching and learning.

Kreider, P.E., & Walleri, R.D. (1988). Seizing the agenda: Institutional effectiveness and student outcomes for community colleges. Community College Review, 1, 44-50. The authors urge community colleges to define their educational outcomes and use multi-dimensional approaches to assess their achievements. Examples of initiatives undertaken in California, Florida, and New Jersey are given.

League for Innovation in the Community College. (1990). Assessing institutional effectiveness in community colleges. Laguna Hills, California: League for Innovation in the Community College. The authors of this monograph encourage community colleges to build assessment programs around their stated missions. Chapters provide assessment guidelines for issues central to traditional community college missions, such as transfer, career preparation, basic skills development, continuing education, and access. Each mission is discussed in the context of clients, programs, assessing effectiveness, and assessing the mission itself. Two appendices discuss resources and instruments.

The National Alliance of Community and Technical Colleges. (1988). Institutional effectiveness indicators. This concise chart, showing major categories and specific indicators of institutional effectiveness in community and technical colleges, was developed by National Alliance members through an eighteen-month process.

Books/Collections/Review Articles on Assessment Topics

Astin, A.W. (1991). Assessment for excellence. New York: Macmillan Publishing Co. The author provides a detailed critique of traditional assessment policies and addresses the major issues related to assessment, including its underlying philosophy, classroom assessment, comprehensive assessment program design, statistical data analysis, and dissemination of results. A detailed explication of the input-environment-output (I-E-O) assessment model furnishes an organizing framework for the text.

Erwin, T.D. (1991). Assessing student learning and development: A guide to the principles, goals, and methods of determining college outcomes. San Francisco: Jossey-Bass Publishers, Inc. Intended as a place to start, this book is a practical guide for faculty, student affairs professionals, and administrators who are involved in assessing the effectiveness of their programs. The reference provides common terms, principles, issues, and selected literature from a variety of disciplines.

Ewell, P.T. (1988). Outcomes, assessment and academic improvement: In search of usable knowledge. In J.C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. IV, pp. 53-108). New York: Agathon Press. This chapter reviews the literature on assessment and undergraduate improvement, suggesting useful policy levers that can be employed by administrators at the institutional level. It contains a classification of issues and lessons from campus-based assessment programs reported in the literature through 1988.

Ewell, P.T. (1991). To capture the ineffable: New forms of assessment in higher education. In G. Grant (Ed.), Review of Research in Education, 17, 75-125. Washington, D.C.: American Educational Research Association. The author gives an historical and contextual account of assessment, concluding that an agenda of reform is in assessment's essential character. Ewell observes that assessment operates on two contradictory imperatives: academic improvement and external accountability. A taxonomy of approaches to assessment is offered, along with a critical review of current practice and recommendations for the future shape and direction of the field.

Farmer, D.W. (1988). Enhancing student learning: Emphasizing essential competencies in academic programs. Wilkes-Barre, Penn.: King's College Press. This monograph describes the implementation of an outcomes-oriented curriculum and course-embedded assessment model at King's College. Peter Ewell remarks in his foreword to the book, "If the integrity of the curriculum is maintained and its effectiveness demonstrated, external benefits will naturally follow." A measure of that integrity at King's College is for faculty to be able to tell students how the intended outcomes of the curriculum are related to the college's definition of an educated person. Five years of faculty development preceded discussion of curriculum changes, with the concept of assessment introduced only after faculty had accepted an outcomes-oriented curriculum. Excellence at King's College means measuring what actually happens to students while attending college and helping students transfer liberal learning skills across the curriculum.

Gray, P.J. (Ed.). (1989). Assessment goals and evaluation techniques. New Directions for Higher Education, No. 67. San Francisco: Jossey-Bass. This volume focuses on analysis of the current state of assessment, differences between assessment and evaluation, guidelines, challenging aspects, and uses for information.

Halpern, D. (Ed.). (1987). Student outcomes assessment: What institutions stand to gain. New Directions for Higher Education, No. 59. San Francisco: Jossey-Bass. Halpern suggests in the overview that institutions change priorities in order to focus on student learning. Chapters include campus-based assessment programs, inadequacy of traditional measures, public policy issues, and models of student outcomes assessment from Tennessee, California, Missouri, and New Jersey. In the final chapter Halpern notes eight factors that have been crucial to successful programs.

Hutchings, P., & Marchese, T. (1990). Watching assessment: Questions, stories, prospects. Change, 22, 12-38. The authors review the development of the assessment movement and consider how it is practiced. The assessment goals and programs of four campuses are described in detail as examples of how assessment is being carried out in diverse colleges and universities. Common characteristics, major themes, and recommendations for the future are offered.

Jacobi, M., Astin, A., & Ayala, F. (1987). College student outcomes assessment: A talent development perspective. Washington, D.C.: Association for the Study of Higher Education. This monograph describes several factors that contribute to useful outcomes assessment: a) assessment data explicate issues facing educational practitioners, b) assessment yields information about students' growth and development, c) longitudinal data describe students' educational experiences so that their effects can be evaluated, and d) results are analyzed and presented in a manner that facilitates their use by practitioners. Approaches to assessment that make these contributions are described.

Light, R.J. (1990). The Harvard Assessment Seminars: Explorations with students and faculty about teaching, learning, and student life. First Report. Cambridge, MA: Harvard University, Graduate School of Education and Kennedy School of Government. The Harvard Assessment Seminars were begun in order to encourage innovation in teaching, curriculum, and advising, and to evaluate the effectiveness of each innovation. Administrators, faculty, and students work together in small groups on long-term research projects, designing, implementing, and assessing innovations. The report summarizes findings, lists participants and references, and provides descriptive charts.

Nichols, J.O. (1991). A practitioner's handbook for institutional effectiveness and student outcomes assessment implementation. New York: Agathon Press. This book provides a general sequence of events for conducting outcomes assessment to evaluate institutional effectiveness. The model described by the author is intended to be adaptable to virtually all institutions and to require only modest levels of funding. The book also contains detailed reviews of literature and practice related to critical points in the author's model.

Pace, C.R. (1979). Measuring outcomes of college. San Francisco: Jossey-Bass. Pace summarizes fifty years of research related to students' achievements during college and after college and the impact of college on student growth. He provides evidence that college graduates do develop in ways that differ from the development of nongraduates.

Pascarella, E.T., & Terenzini, P.T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco: Jossey-Bass. Over 900 pages long, this volume contains nearly 200 pages of indexes and references, as well as a useful foreword and preface recommending ways to use the book. Introductory and summary chapters are provided for casual readers, while more invested readers may wish to read chapter summaries or entire chapters. Chapters cover issues such as theories and models of student change; development of attitudes, skills, morals, and values; psychosocial changes; educational attainment and career development; economic benefits and quality of life after college. A chapter on implications of the research for policy and practice concludes the book.

Seymour, D. (1991). On Q: Causing quality in higher education. New York: Macmillan. The author uses anecdotes from higher education and industry and quotations from interviews with chief academic officers from across the country to illustrate the principles of quality improvement. The book addresses such issues as defining quality, the differences between quality in industry and quality in higher education, the strategic implications of quality, communicating an emphasis on quality, the costs of quality, recruiting for quality, and the culture of quality in higher education.

Terenzini, P.T. (1989). Assessment with open eyes: Pitfalls in studying student outcomes. Journal of Higher Education, 60, 644-664. This article notes several purposes of assessment and analyzes issues such as involving administration and faculty, coordinating offices, determining political and practical effects, and calculating expenses. Also discussed are assets and limitations of different types of assessment measures and analyses of measures.

Wergin, J.F., & Braskamp, L.A. (Eds.). (1987). Evaluating administrative services and programs. New Directions for Institutional Research, No. 56. San Francisco: Jossey-Bass. This sourcebook suggests methods of assessing administrative and support programs as part of routine administrative practice.

Assessment Bibliographies

Center for Assessment Research and Development. (1990). Bibliography of assessment instruments: Information on selected educational outcomes measures. Knoxville: The University of Tennessee, Knoxville, Center for Assessment Research and Development. Assessment instrument reviews in this collection are grouped in the following categories: general education; basic skills; cognitive development; the major; values; and involvement, persistence, and satisfaction. Selected "Assessment Measures" columns from Assessment Update are also included.

Winthrop College Office of Assessment. (1990). The SCHEA Network annotated bibliography of higher education assessment literature: Selected references from 1970-1989. Winthrop, South Carolina: Winthrop College Office of Assessment. The South Carolina Higher Education Assessment Network (SCHEA) collection presents works pertinent to issues of assessment of student development and learning in higher education. The articles are compiled under the following headings: assessment: general issues and principles; theoretical and educational aspects of assessment; assessment in the states; assessment of college readiness skills; assessment of general education; assessment of majors/specialty areas; assessment of career preparation; assessment of personal growth and development; data analysis and interpretation; survey methods; minority students: assessment and educational issues; non-traditional students; and retention and attrition.

Assessment Periodical

Assessment Update: Progress, trends, and practices in higher education. San Francisco: Jossey-Bass. This bi-monthly newsletter is edited at the Center for Assessment Research and Development at the University of Tennessee, Knoxville. The 16-page format of each issue includes several feature articles by leaders in the field of assessment, shorter articles on assessment methods being tried on individual campuses, notes on current publications, and columns by Peter Ewell, Peter Gray, and Gary Pike on state developments in assessment, campus assessment programs, and assessment instruments, respectively.

*Trudy Banta is the Director of the Center for Assessment Research and Development at the University of Tennessee-Knoxville.

Council on Postsecondary Accreditation
One Dupont Circle, N.W., Suite 305
Washington, D.C. 20036
Telephone: 202/452-1433, Fax: 202/331-9571