
This chapter summarizes the diverse assessment methods and strategies for measuring student academic attainment and describes the key elements in one university's assessment implementation.

Assessing Student Learning in the Major Field of Study

J. Fredericks Volkwein

Assessing student attainment in the major field of study is increasingly important to employers and accrediting bodies alike. Construction and manufacturing firms do not want engineers who design faulty bridges and airplanes. Marketing firms want to hire students who understand the difference between a niche market and a global market. School districts want teachers who understand the different levels of Bloom's taxonomy in order to adapt their teaching techniques to student learning styles. Graduate schools want students who are knowledgeable in their field. Moreover, practitioners in many fields not only need a degree from an accredited program but also need to pass a licensure exam or undergo individual certification (as in accounting, architecture, civil engineering, law, medicine, nursing, social work, and teaching). This chapter outlines the foundations for assessing student learning in the major, summarizes the instruments and strategies used for assessing these outcomes, and discusses some important considerations in the process.

NEW DIRECTIONS FOR INSTITUTIONAL RESEARCH, Assessment Supplement 2009, Spring 2010 © Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com) • DOI: 10.1002/ir.333

ASSESSING STUDENT OUTCOMES: WHY, WHO, WHAT, HOW?

While identifying the separate assessment domains (basic skills, general education, attainment in the major field of study, and student personal development), we need to understand how fluid the boundaries are among them—how, in fact, each institution creates an educationally interdependent environment. For example, student attainment of basic reading, writing, and mathematics skills directly supports student performance in general education and in the major. The educational breadth of a general education program and the intellectual depth of the major are mutually reinforcing. Moreover, students must assume personal responsibility for their own growth in order to meet their responsibilities to the faculty. One's intellectual development cannot be easily separated from one's personal and social development, nor can liberal learning and disciplinary expertise be completely independent of each other.

Nevertheless, institutions usually choose different paths. Congruent with their separate missions, community college staff usually concentrate more heavily on basic skills assessment, faculty at private liberal arts colleges often devote greater attention to general education assessment, and faculty at research universities and in professional disciplines are much more interested in assessing student attainment in the major and graduate programs. The larger and more fragmented multiversities usually find it difficult to galvanize faculty efforts in basic skills, general education, and noncognitive development, but faculty in nearly all institutions can see the benefits of examining student attainment in the major concentration. This chapter focuses on assisting faculty with strategies for assessing student attainment in the major field of study.

Previous chapters in this volume have referred to some of the key questions for assessing student learning:

• What should students learn, and in what ways do we expect them to grow? These are questions about goals, and the answers require goal clarity.

• What do students learn, and in what ways do they actually grow? These are measurement questions, and the answers require measurable evidence of learning and growth.

• What should we do to facilitate and enhance student learning and growth? This question about educational improvement requires effective use of assessment results.

Thus, the goals question and the improvement question are both strongly linked to the challenges of measurement and implementation. We recognized this reality in Chapter Two, which discusses the institutional effectiveness model. Campus assessment programs need to speak to both the inspirational and the pragmatic: the simultaneous needs for internal improvement and external accountability. Moreover, these generic assessment questions drive research design and data collection:

• Do we meet the standards?

• How do we compare?

• Are we meeting our goals?

• Are we getting better?

• Are our efforts productive?


Methods for Assessing Student Attainment in the Major

At universities with well-developed strategies for assessment in the major, each department constructs an appropriate means for assessing student attainment using a broad range of possible assessment designs. In some departments where the numbers of majors are small, the faculty might focus on the achievement of all students. Other departments, perhaps with large numbers of majors, may decide to study representative groups of students. The aim of the assessment is not to magnify any particular student's success or failure but rather to judge, in whatever ways we can, students' abilities to construct knowledge for themselves and to assess the work of faculty in nurturing, enhancing, and enabling those abilities. Departments might combine some of the elements discussed in this chapter or might design a unique approach not considered here. The objectives are to examine what it is students are acquiring when they major in a particular discipline and to use that information to enhance the learning experience for future students. The assessment strategies listed here each have advantages and disadvantages, some of which are outlined in Chapter Six, Appendix B.

Comprehensive Exams. Comprehensive exams can be locally or commercially designed. When such exams are designed locally by the faculty, they have the advantage of being shaped to fit the department's curriculum. Departmental exams have the disadvantages of needing labor-intensive annual revision and local scoring by the faculty, and they lack a comparison group. Commercially designed standardized instruments provide scoring services and more reliable and valid comparison groups, but they may or may not fit the department's curriculum, and they are not useful in disciplines wishing to go beyond a multiple-choice format.

Some departments have attempted to use the Graduate Record Examination (GRE) score in the discipline as an assessment tool, even though it is generally not appropriate for this purpose. GRE scores have several deficiencies as assessment tools. First, the scores are relational and answer only the question, "How do you compare?" You do not know what a score of 600 means in terms of student achievement because you do not know how many questions were answered correctly and incorrectly. Second, the GRE uses the wrong comparison group: instead of comparing each GRE score against those of all college graduates, it places student scores in relation to a graduate school–bound population. Third, there are no GRE subfield scores within each major field, so you cannot tell whether student performance is congruent with the student's curriculum.

The ETS Major Field Exam was constructed in response to these weaknesses in the GRE. Its scores are normed on populations of graduating seniors, indicate the number answered correctly (nonrelational), and are reported by subfield, thus providing useful information for analyzing the curriculum.


Senior Thesis or Research Project. Such a requirement encourages students to use the tools of the discipline on a focused task as the culmination of their undergraduate academic experience. Under ideal conditions, each department or program uses the student's work to reflect on what students are achieving, with the aim of evaluating, and if necessary strengthening, the curriculum and experiences of students within that major.

Performance or Fieldwork Experience. Asking students to demonstrate in some practical or even public way the knowledge and skills they have acquired emphasizes the integration and application of the separate facets of the academic major. Such a requirement may be especially fitting in professional fields like social work and teaching, as well as in the performing arts. Examples include student recitals, art exhibitions, practice teaching, and supervised field experiences.

Capstone Course. This is usually a required senior course designed to integrate the study of the discipline. It often has a heavy research and writing component. Such courses offer ideal opportunities to assess student learning and strengthen the curriculum of the major. Often the course can contain some other form of assessment, such as a local or national exam, embedded within it.

Student Portfolio of Learning Experiences. In this mode of assessment, students systematically collect the work they have undertaken in their study of a discipline. They then write a self-examination of the material, demonstrating how they have constructed the discipline through their writing and thinking over two years of study. Afterward, faculty meet with students to go over the portfolio. The faculty can then use their own and the students' analyses of the portfolios, coupled with their perceptions of the student conferences, as a basis for conversation among faculty about the curriculum and practices within a discipline. In departments selecting this option, all faculty responsible for undergraduate education should be part of the process, but the plan becomes difficult to implement if each faculty member has to assess an overly large number of students.

Senior Essay and Interview. The faculty construct a series of questions that ask students to demonstrate their conceptual understanding of the discipline and reflect on the strengths and weaknesses of their programs. The students respond in writing and then meet with faculty to discuss their written statements. The faculty next meet to discuss the results of their conferences with students for the purpose of strengthening the major. Large departments may sample a cross-section of seniors rather than the entire population.


Other Suggestions. Academic departments and programs have other options as well:

• Course-embedded assessment, for example, a standardized examination, a classroom research project, or a comparison of a written assignment at the beginning of the course with one at the end

• Student self-assessment, for example, of a written exercise or after viewing a video of his or her performance or presentation

• Student peer assessment, which also builds a learning culture in the department

• Secondary reading of a student's regular course work by faculty colleagues (this requires an atmosphere of departmental collegiality)

• External examiners or readers, which puts the student and faculty member on the same learning team

• Using historical department data on student performance, placement, and course-taking patterns (these are discussed more thoroughly in Chapters Eight, Nine, and Ten)

• Faculty focus group discussions with students

• Surveys of students and alumni using self-reported measures

Table 7.1 summarizes possible methods for assessing student attainment in the major, depending on the purpose. The suggestions contained in the table should be treated as guides for departmental consideration rather than as definitive uses. A capstone course or departmental exam might be the most useful choice for assessing students' attainment of faculty-determined performance standards. For the purpose of meeting professional or national standards, a standardized test or external examiner may be employed. Comparing a student's performance to some external reference group almost always requires the use of a national exam or performance standard, but even graduate school and placement information can be used for comparative purposes. Measuring the attainment of educational goals requires student-centered strategies such as learning portfolios, focus groups, or locally designed exams. To answer the improvement question, a wide range of strategies using multiple sources of information can be used to examine trends over time.

Many professional organizations in the disciplines have developed exams, such as the Fundamentals of Engineering Exam, the Business Management Test, the American Chemical Society Exam, the ETS PRAXIS tests for beginning teachers, and the National Teacher Education tests in professional knowledge and in various specialties. Many states and specialized accrediting bodies require exam passage as a condition for professional licensure, especially in the medical and health professions, accounting, law, civil engineering, and architecture. In such cases, faculty usually shape the curriculum to maximize student performance. Other tests have been developed by national testing companies, such as the ETS Major Field Achievement Tests and GRE Subject Tests, the College Level Examination Program Subject Exams, and the ACT Proficiency Exams. In addition, to evaluate student attainment in the major, many institutions rely on locally designed exams, course-embedded assessments, performance or field experiences, capstone experiences, and self-reported mastery on dimensions that are locally constructed.

Table 7.1. Assessment in the Undergraduate Major Field of Study

Meeting standards: department comprehensive exam; capstone course with embedded assessment; senior thesis or research project; external examiners or readers; student performance or exhibit with expert judgments; practice teaching, internships, and fieldwork; proficiency or competency tests; certification exams

Comparing to others: national comprehensive exams (ETS Major Field, College Level Examination Program, ACT); discipline-based exams; peer assessments; program review by external experts; comparative graduate school and placement information

Measuring goal attainment: senior thesis and research project; student portfolio; student self-assessment; faculty or student focus groups; alumni surveys; exhibits, performances, internships, and field experiences; capstone course with embedded assessment; locally designed tests of competency or proficiency

Developing talent and program improvement: all of the above, especially the senior essay followed by faculty interview; faculty-student focus groups; course-embedded assessment; learning portfolio; alumni surveys; analysis of historical, archival, or transcript data; self-reported growth

Collecting and Sharing Data

Each assessment question or purpose can be addressed in a variety of appropriate ways. The question of whether the activity is relatively centralized and controlled by forces outside the department, or decentralized and controlled by the faculty, is perhaps less important than the usefulness of the assessment for enhancing student learning. Even nationally normed standardized tests can be used by faculty locally to reflect on the strengths and weaknesses of the department's program. And even the most locally centered form of student assessment usually yields results that over time can be aggregated to serve program evaluation and institutional accountability purposes.

There are many appropriate uses and efficiencies to be gained from relatively centralized data collection activities carried out in campus offices of institutional research. Institutional researchers often work with others on the campus to develop and maintain student and alumni information systems that can serve as rich assessment databases. Rather than developing their own survey research, database, and software expertise, departments can draw on these centralized databases in conducting their own assessment activities.

A healthy assessment culture is created when cooperative data sharing occurs between administrative offices and academic departments. In recent years, many campuses have downloaded from their centrally maintained student and alumni databases information that is helpful to student and program assessment by faculty. The cost to the institution of conducting separate alumni surveys in each department and maintaining independent databases is too high. By surveying several thousand alumni, campuses can distribute both aggregated and disaggregated summaries on a wide range of measures of alumni satisfaction, educational outcomes, intellectual growth, career development, and effectiveness of the undergraduate experience, both departmental and campuswide.
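The aggregated-versus-disaggregated distinction above can be sketched in a few lines of code. This is an illustrative sketch only: the survey fields, the 1-to-5 scales, and the department names are hypothetical, not drawn from any actual campus database.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical alumni survey records from a centrally maintained database:
# (department, satisfaction 1-5, self-reported growth 1-5)
records = [
    ("Biology", 4, 5), ("Biology", 3, 4),
    ("History", 5, 4), ("History", 4, 4),
    ("Nursing", 4, 3),
]

# Aggregated (campuswide) summary of one measure
overall_satisfaction = mean(r[1] for r in records)

# Disaggregated (departmental) summaries drawn from the same records
by_dept = defaultdict(list)
for dept, satisfaction, growth in records:
    by_dept[dept].append((satisfaction, growth))

dept_summaries = {
    dept: {
        "satisfaction": mean(s for s, _ in rows),
        "growth": mean(g for _, g in rows),
        "n": len(rows),
    }
    for dept, rows in by_dept.items()
}
```

The point of the design is that a single centrally collected survey feeds both views: the campuswide mean serves institutional accountability, while each department reads only its own summary, so no department needs to run its own survey or maintain its own database.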

A Case Study: How One University Responded

Over the past two decades, many universities have created blue-ribbon assessment committees, panels, task forces, and planning teams. Such efforts produce the most enduring results when there is both administrative support and faculty ownership. At one university, the assessment committee's report recommended that each academic department construct a means for collecting information about student attainment in the undergraduate major as an integral part of a comprehensive plan of departmental self-study and development. The committee viewed assessing student attainment in the major as part of larger curricular review, program evaluation, and development efforts by the faculty.

The assessment committee concluded that student grades in specific courses constitute necessary but not sufficient information about student attainment. Indeed, some faculty give grades based on students' reaching a standard of proficiency and knowledge; some grade on the curve; some grade student performance on the attainment of learning goals and objectives embedded in the course; and some grade on the basis of student effort and improvement. What was needed was a more comprehensive view of student achievement in each discipline. In this context, student grades, faculty teaching evaluations, periodic program reviews, alumni studies, and student test scores all constitute complementary ways of obtaining useful feedback in order to improve learning in the major.

In its plan, the assessment committee and the provost recommended a menu of departmental assessment choices and encouraged departments to design their own if they wished. Most departments submitted plans for assessment in the major, and after review and evaluation, some departments began implementation, while others clearly needed assistance in the form of assessment workshops.

Departmental proposals covered a wide range of assessment strategies, including standardized tests, senior research and writing projects, course-embedded examinations, capstone courses, student performances and fieldwork, and student portfolios and essays. For a more detailed list of the assessment strategies elected by each department, see Volkwein (2004). Tailored to suit the needs of each discipline, these efforts gave the university, and especially the faculty, a rich array of information for improving the learning experience of undergraduate students. One key to the program's success was the provost's demand that every department try at least one assessment method and see what it learned from it.

After a year or two of experience with one form of assessment, many departments explored another form, thus adding to the richness of the faculty's collective understanding about the effectiveness of the curriculum. One form of assessment evidence shed light on meeting standards, while another reflected student growth or program improvement. One strategy enabled peer comparisons, while another yielded evidence of goal attainment. Judgments of student learning and program effectiveness will be better and richer if they are based on multiple indicators and measures, and they will be less reliable if based on a single indicator or measure. A multidimensional approach yields a far more reliable image of strengths and weaknesses. For example, when a department sees congruence between the information it receives from graduating seniors' test results and from alumni survey results, it has more confidence in the strength of the findings.

Consistent with the assessment committee's recommendations, most campus efforts reflected a commitment to ongoing self-reflective development. Departmental plans, with few exceptions, reflected a willingness to begin assessment and learn from it—to integrate assessment into the department's self-reflective improvement. In a few cases, departments were skeptical and unwilling to undertake assessment in the absence of perfect measures and methodologies. Successful assessment programs on other campuses suggest that perfection is unattainable and that there is much to be learned from getting started, building on the familiar, and developing more informative and effective strategies based on experience over time.


Conclusion

An important ingredient in a successful assessment program is an attitude of cooperation and trust among faculty and between faculty and administrative staff (Terenzini, 1989). Faculty need to be trusted to use the information for the enhancement of student learning, and administrators need to be trusted to use the summary information to promote institutional effectiveness. Obviously assessment evidence has the power to change faculty behavior, but it should not be used for faculty evaluation. The goal of collecting assessment information should be to promote evidence-based thinking and new conversations about student learning and about the curricular experiences that promote it. Ideally, the faculty reward structure should be shaped to reward constructive changes rather than to punish poor performance.

References

Terenzini, P. T. "Assessment with Open Eyes: Pitfalls in Studying Student Outcomes." Journal of Higher Education, 1989, 60, 644–664.

Volkwein, J. F. "Assessing Student Learning in the Major: What's the Question?" In B. Keith (ed.), Contexts for Learning: Institutional Strategies for Managing Curricular Change Through Assessment. Stillwater, OK: New Forums Press, 2004.

Recommended Reading

Applebaum, M. I. "Assessment Through the Major." In C. Adelman (ed.), Performance and Judgment: Essays on the Principles and Practice in the Assessment of College Student Learning. Washington, DC: U.S. Department of Education, 1989.

Banta, T. W., Lund, J. P., Black, K. E., and Oblander, F. W. (eds.). "Assessing Student Achievement in the Major." In Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass, 1996.

Ratcliff, J. L. "What Can We Learn from Coursework Patterns About Improving the Undergraduate Curriculum?" In J. L. Ratcliff (ed.), Assessment and Curriculum. New Directions for Higher Education, no. 80. San Francisco: Jossey-Bass, 1992.

Suskie, L. Assessing Student Learning: A Common Sense Guide. (2nd ed.) San Francisco:Jossey-Bass, 2009.

Walvoord, B. E. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. San Francisco: Jossey-Bass, 2004.

J. FREDERICKS VOLKWEIN is emeritus professor of higher education at The Pennsylvania State University and a former director of the Center for the Study of Higher Education.
