Benchmarking learning outcomes of undergraduate business education

M. Ruhul Amin and Nafeez A. Amin
Department of Management, Bloomsburg University, Bloomsburg, Pennsylvania, USA

Benchmarking: An International Journal, Vol. 10 No. 6, 2003, pp. 538-558. © MCB UP Limited, 1463-5771. DOI 10.1108/14635770310505175

Keywords: Assessment, Competences, Students, Higher education

Abstract: This paper discusses the efficacy of a model of benchmarking toward best practices in learning assessment in higher education. It outlines the need for best practices in learning assessment and reports findings from a comprehensive model for assessing learning outcomes of an undergraduate business program. By developing competency expectations of students, the model led to multi-method data collection and analysis toward benchmarking learning outcomes. The findings demonstrate how benchmarking leads to continuous improvement of the instructional process and the curriculum. The authors claim that, in the absence of a generally accepted model, replicating this methodology may lead to best practices in assessing learning outcomes of an undergraduate business program.

Introduction
This paper discusses the efficacy of a model of benchmarking learning outcomes leading to best practices in higher education, based on a pilot program in north-east Pennsylvania. The paper also discusses the impetus for continuous improvement of curriculum in higher education, and presents the findings of a conceptual model for benchmarking learning outcomes in general education and in the functional areas of the business curriculum.

The movement for outcomes assessment of academic programs began during the late 1970s with Alverno College (of Milwaukee) and Northeast Missouri State University, in their search for academic excellence as a competitive advantage in the wake of declining enrollment (Gatson, 1991). During the mid-1980s, a few state governments and accrediting agencies considered mandating outcomes assessment (Ewell et al., 1990; Thrash, 1990) as a condition of public funding (Spangehl, 1987). The recognition of quality in the private sector through the Malcolm Baldrige award, ISO 9000, ISO 14000, and other QS and quality programs has positively influenced higher education: outcomes measures for educational programs are now required by all accreditation agencies, especially AACSB International, the premier accrediting body for colleges of business (AACSB, 1989). In response to increasing interest in the subject, educators focused on doing assessment right instead of merely following a mandate (Banta, 1986, 1988, 1996; Denney, 1994; Ewell, 1991; Ewell et al., 1990; Forrest and Steele, 1982). A number of model assessment tools and



models, such as the College Board's ACTCOM for outcomes assessment of general education (American College Training Inc., 1995), the Alverno model of assessment profile (Alverno College, 1985), and the Educational Testing Service's (ETS) subject tests came to the market. Numerous books, reports, and case studies came out during the 1990s (AAHE, 1996; Banta, 1996; Erwin, 1991; Hlebowitsh, 1995). A number of studies focused on best practices in learning assessment as well (Alstete, 1995; Cleary, 2001; Zairi, 1996). Most of these scholars conceptualized assessment and benchmarking in terms of the value added by the instructional process toward the learning competencies desired by the academic community of a given institution. The questions addressed by the initiative are the following: Is the instructional process achieving what it is supposed to achieve (i.e. the mission and program goals)? If so, how well is it doing (i.e. how much value is added), and compared to what (i.e. who are the peer group)? Is the current process effective as well as the best it could be (i.e. the search for best practice)? Alstete (1995) adequately defined the purposes of benchmarking in higher education. Of the two reasons why institutions should pursue benchmarking, one is to develop a better business process through re-engineering or continuous quality improvement; the other is to express the outcome of a process in terms of performance indicators, benchmarks, and best practices, moving toward true benchmarking. The above postulation runs parallel to the International Benchmarking Clearinghouse's (IBC) mission to help organizations find, adapt, and implement best practices. Cleary (2001), while describing institutional performance indicators as the outcome of benchmarking, asserted that once the methodology is designed and used effectively, the leaders of higher education can strengthen the institution's position in the level of student learning, academic offerings, and community partnerships. Bender and Schuh (2002, p. 1) summarize the assessment dilemma of higher education in their editorial notes in the following way: institutions of higher education need strategically developed indicators that provide "an honest assessment of how an institution is doing and where it is heading". However, as of now there is hardly any consensus as to the best practices for measuring student learning outcomes. Available case studies to date demonstrate the effectiveness of programs, but they hardly claim "true" benchmarking efforts. Recognizing the diversity of mission and curriculum across colleges, accreditation agencies have refrained from dictating a uniform system of assessment (AACSB, 1989; Middle States Association of Colleges and Schools, 1991; North Central Association of Colleges and Schools, 1993; Southern Association of Colleges and Schools, 1993; Wolf, 1990). In addition, a uniform system remains a tough sell on college campuses in the United States, because a large number of faculty members continue to believe that exams/grades and placement records already provide internal and external measures of learning outcomes (Ferren, 1993; Hutchings and Reuben, 1988).



Thus, given faculty resistance to and uneasiness with the process, in conjunction with the diversity of institutional missions and curricula, institutions of higher education are left alone to devise appropriate methods under the generally accepted principles of academic freedom. This paper is based on one institution's approach to assessment through benchmarking learning competence.

A comprehensive model of assessment of student competencies was developed during the summer of 1997. The model incorporated:

. testing of student competencies in the general education areas, specifically analytical, critical thinking, and communication skills; and

. testing of students in eight functional areas, namely accounting, economics, finance, management, marketing, quantitative business analysis, legal, and international issues.

A pre-test/post-test methodology with student demonstration of knowledge and skills in the above areas was adopted. In addition, a student self-assessment instrument geared to the assessment of communication, critical thinking, and analytical skills was utilized.
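Operationally, a pre-test/post-test design measures value added as each student's change in score between the two administrations, restricted to students who completed both tests. The sketch below is a minimal illustration of that bookkeeping; the student IDs and scores are hypothetical, not data from the study.

```python
# Value-added sketch for a pre-test/post-test design.
# Student IDs and scores are hypothetical illustrations.

def value_added(pre, post):
    """Per-student gains and the mean gain, restricted to students
    who completed both the pre-test and the post-test."""
    matched = sorted(set(pre) & set(post))          # drop attrition cases
    gains = {s: post[s] - pre[s] for s in matched}
    mean_gain = sum(gains.values()) / len(gains)
    return gains, mean_gain

pre = {"s01": 31, "s02": 35, "s03": 28, "s04": 40}
post = {"s01": 36, "s02": 38, "s04": 41}            # s03 lost to attrition

gains, mean_gain = value_added(pre, post)
print(gains)       # {'s01': 5, 's02': 3, 's04': 1}
print(mean_gain)   # 3.0
```

Matching on student identity before differencing is what makes the measure "value added" rather than a comparison of two different cohorts.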

The assessment model and methods
Any assessment method for benchmarking must have clear goals and measurable objectives. Goals and objectives stem from the institutional mission. The curriculum and the instructional processes should help students achieve these goals and objectives. Benchmarking student competencies or learning outcomes, therefore, should allow institutions to improve the curriculum and the pedagogy continuously. The undergraduate program requires students to complete 128 credits (120 credits from May 2003) in four years. About half of these credits (65) are to be taken in general education during the first two years of the program. The other half (63) are to be taken in the functional as well as specialized areas of business. As already indicated, the model consists of two parts. The first part relates to benchmarking the competencies of general education as approved by the College Curriculum Committee. This part of the curriculum (65 credits) covers critical thinking, analysis and synthesis, and communication. The second part of the degree curriculum (63 credits) relates to the expertise that College of Business students should attain in the functional areas of business.

A detailed framework of dimensions for the first part of the core competencies in general business education (which the faculty believes students in the college must possess) was adopted and administered. A conscious decision was made to select only certain elements of the dimensions that make up the areas of critical thinking, analysis and synthesis, and communication. Therefore, the scope of the project in this part was limited to these selected elements.



For the second part of the curriculum, a test of field competency or achievement in the functional areas of business was used. The Business Field Test developed by ETS was adopted for the purpose. The test results presented in this paper established a benchmark for the college as well as for six business departments, namely accounting, finance, management, marketing, computer and information science, and business and office systems.

A complex data collection method was adopted, involving testing students through an in-house test battery (pre-test, post-test) designed to measure student competencies; a student self-assessment; and an external test battery from ETS for the graduating seniors. In other words, a pre-test/post-test on the competencies of general education, a post-test only through the ETS test battery, and a student self-assessment for the functional business areas were administered and are reported in this paper. What follows are descriptions of the dimensions and elements of communication, critical thinking, and field competencies as conceptualized for assessment in this study. Because of the complexity of the concepts being assessed, only selected elements of the dimensions listed later were measured and incorporated in the research design.

Communication skills
It is assumed that communication skills pertain to:

. encoding (forming/organizing) and conveying information, ideas, feelings, attitudes, and experiences to others; and

. receiving and decoding (understanding) information, ideas, feelings, attitudes, and experiences.

Based on these assumptions, the following dimensions are proposed.

A. Cognitive preparation in encoding and decoding

(1) Reading comprehension.

(2) Listening comprehension.

(3) Ability to follow instruction.

(4) Vocabulary.

(5) Writing mechanics:
. grammar;
. punctuation; and
. composition.

(6) Organization of theme/concepts.

(7) Development of ideas/themes/concepts.

(8) Logic.

(9) Persuasion.



B. Oral conveyance/presentation skills

(1) Voice.

(2) Speed or rate of speaking.

(3) Use of visual aids.

(4) Eye contact.

(5) Comfort level.

(6) Style.

(7) Tone/inflection.

(8) Pronunciation.

(9) Language/dialect.

(10) Vocabulary.

(11) Fluency.

C. Self-assessment

(1) Interpersonal skills.

(2) Oral communication skills.

(3) Public speaking skills.

Assessment devices
All of the above dimensions may be measured with a Likert-type scale of 1-5 (1 = highly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = highly agree). As a protocol, the scale values were rotated (reversed) whenever this type of scale was used, to avoid student response bias; the exceptions were the items in the "C" self-assessment dimension and the first three items under category A (for which writing/reading tasks followed by multiple-choice questions and answers may be assigned). The following criteria of measurement may be associated with the 1-5 scale for some of the individual items.

(1) Clarity.

(2) Courtesy.

(3) Conciseness.

(4) Confidence.

(5) Correctness.

(6) Conversational tone.

For instance, item A7, development of theme/concept under category A, may be evaluated for clarity in the following way: a trained assessor reads a student essay, say, On becoming the CEO, and she/he evaluates the essay on a Likert-type 1-5 point scale (1 = highly disagree; 3 = neutral; 5 = highly agree).



Statements to be rated may reflect the six Cs listed earlier. Consider the following illustrative examples:

Instruction to the assessor: Please read the student essay and indicate your evaluation with a circle below each statement.

A7i. Concepts/themes used in the essay are clear.
Highly disagree 1 2 3 4 5 Highly agree

All items of self-assessment will contain a "current state", where the student rates himself/herself, as well as a perceived "should be" state, where the student rates where he/she perceives he/she should be, on the Likert-type rating scheme of 1-5. For instance, on a 1-5 scale (1 = highly disagree; 3 = neutral; 5 = highly agree), students may be asked to rate the following statements:

Instruction: The following statements reflect an individual's assessment of his/her communication skills. Please read these statements and rate your skills as candidly as possible with a circle below each statement.

Ci1a. I have effective interpersonal skills.
Highly disagree 1 2 3 4 5 Highly agree

Ci1b. My interpersonal skill is what it should be for a college graduate.
Highly agree 1 2 3 4 5 Highly disagree
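Each self-assessment item thus pairs a "current state" rating with a "should be" rating, and some items (such as Ci1b above) are printed with the scale reversed. A small sketch of recoding a reversed response back to the standard direction and computing the should-be-minus-current gap; the response values are hypothetical:

```python
# Recode a rotated (reversed) 1-5 Likert response and compute the gap
# between the "should be" and "current" ratings. Values are hypothetical.

def unrotate(value, reversed_scale=False):
    """Map a response to the standard direction
    (1 = highly disagree .. 5 = highly agree)."""
    return 6 - value if reversed_scale else value

def gap(current, should_be):
    """Positive gap: the student feels below where he/she should be."""
    return should_be - current

current = unrotate(3)                        # Ci1a, standard direction
should = unrotate(2, reversed_scale=True)    # Ci1b was printed reversed
print(gap(current, should))                  # 1
```

A positive gap flags a dimension where the student perceives a deficiency relative to where a graduate should be.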

Student thought processes
Student thought processes (STPs) were developed based on ideas in Bloom's taxonomy (Bloom, 1956). They were divided into three essential dimensions for convenience of assessment. They are:

(1) critical thinking;

(2) conceptual thinking; and

(3) quantitative reasoning.

As these dimensions are interrelated and overlapping, it is conceivable that one may merge all of them under a single critical thinking category. However, categorizing thought processes into these dimensions may add clarity to the concepts and, therefore, may enhance the level of accuracy in measurement and assessment.

Each of the above dimensions, i.e. critical thinking, conceptual thinking, and quantitative reasoning, consists of components and elements. Deductively, these dimensions constitute a hierarchy of concepts, as shown in Figure 1.

What follows are brief descriptions of the components and elements of each dimension of STP.

Critical thinking
Three components make up the dimension of critical thinking. They include Analysis, Integration and Synthesis, and Critical Attitude. While analysis and integration and synthesis are cognitive abilities, critical attitude refers to students' cognitive perspective and/or the preparedness of the student to learn.



The following illustrative elements may determine the individual student's ability to analyze:

A. Analysis may refer to the student’s ability to:

(1) Judge the logic of an argument in textual material.

(2) Find the assumptions of an argument.

(3) Interpret information (pictorial, graphic, and numeric).

(4) Identify and classify:
. component parts of a unified whole;
. structural elements;
. essential features; and
. essential facts.

(5) Elucidate and clarify:
. statements;
. concepts; and
. principles.

(6) Identify similarities and differences.

(7) Diagnose problems.

(8) Identify premises and alternative premises.

(9) Identify context:
. relate a statement to a context.

(10) Determine strength and weakness of an argument.

(11) Compare and contrast:
. ideas;
. concepts;
. constructs; and
. models.

Figure 1. Hierarchy of concepts



Assessments of the above elements are performed through assigned reading and/or review tasks with multiple-choice questions/answers. As indicated earlier, due to the complexity of these concepts, only selected elements will be assessed in this project.

The student's ability to integrate and synthesize may be measured with the following elements.

B. Integration and synthesis may refer to the student's ability to:

(1) Detect interrelationships among ideas.

(2) Demonstrate cause-effect relationship.

(3) Relate historical antecedents to subsequent or contemporary events.

(4) Comprehend interactions among:
. facts;
. concepts;
. events; and
. forces within a conceptual framework.

(5) Develop generalized expressions based on:
. ideas;
. elements;
. components; and
. dimensions.

(6) Detect, review, and summarize various positions on an issue.

(7) Organize unrelated facts and ideas into a framework.

(8) Introduce new ideas into a perspective of an existing framework.

(9) Extrapolate from known to unknown situations.

(10) Recognize connection(s) between theory and practice.

(11) Integrate new facts and ideas with personal experience.

(12) Detect and infer the author's:
. purpose;
. point of view;
. traits of thought; and
. feelings expressed in the work under study.

(13) Write abstracts of a composition.

Tasks for the above elements may consist of writing assignments, reading compositions, and answering multiple-choice questions. Once again, it may be noted here that only selected elements will be assessed in the pilot project.



C. Critical attitude refers to cognitive preparedness of the students to:

(1) Form judgment on logical validity and interpretation.

(2) Accept and accommodate uncommon:
. ideas;
. norms;
. values;
. thoughts; and
. cultures.

(3) Identify the effects of:
. motives;
. personal point of view;
. self-interest;
. biases or prejudices; and
. ethnocentrism.

(4) Take a side on a controversial issue and argue in support of a position.

Only a few of the above elements will be selected for assessment in the pilot project. While the complexity of measurement and the time constraints are recognized in implementing the pilot project, several of the selected elements above will be assessed on a rating scale of 1-5 (1 = highly disagree; 3 = neutral; 5 = highly agree) applied to a number of criterion statements reflecting the elements.

Conceptual thinking
There are two dimensions of conceptual thinking: (1) a process oriented to mental resourcefulness and creativity; and (2) deductive and inductive reasoning.

A. The attributes under mental resourcefulness and creativity may refer to the following:

(1) vision;

(2) imagination;

(3) speculation;

(4) theorizing;

(5) new ideas; and

(6) model building/extension.

The above attributes will be measured through assignments of essay writing and problem solving (requiring creativity and imagination). Trained assessors will evaluate these elements. Again, it must be pointed out that, due to the complexity of assessment, only a few of the above elements will be selected for assessment in the pilot project.



B. Deductive and inductive reasoning refers to the degree of maturation in making logical arguments. Such attributes may refer to:

(1) ability to deduce conclusion from major/minor premises;

(2) ability to make inferences and generalization; and

(3) ability to detect logical fallacies such as illicit generalization,determinism, stereotype, allness and the like.

Assessment of the above abilities may be done through reading assignments followed by multiple-choice questions and answers. For instance, a newspaper article or a paragraph from a selected essay may be chosen for element 1 above. Several conclusions in a multiple-choice format may be placed in the test for the student to select only the logical one.

Quantitative reasoning
Quantitative reasoning refers to the student's ability to solve problems through basic mathematical and statistical manipulations. Such ability may be reflected in the following activities:

. Solving applied numerical problems: combinations, common factor(s), calculating a mortgage rate, internal rate of return, simple probability.

. Making inferences based on graphical presentations, frequency distributions, trends, bar charts, histograms, etc.

. Interpreting basic applied math and statistical concepts such as the mean, standard deviation, variance, interest rate, rate of return, issues of sampling distributions, correlation coefficient, scattergram, etc.

Assessment of this category may follow the format of a standardized test such as the SAT or GMAT.

It was the intention of the authors to benchmark the competency of undergraduate business students before they entered their junior year. Therefore, 157 students were identified on the basis of credits in spring 2000. Those who had earned between 45 and 64 credits were selected through a routine computer run by the Office of Institutional Research. Of the 157 students, 78 participated and took a two-hour test designed for the purpose. However, due to attrition, only 57 students remained in the pool to take the post-test as graduating seniors in spring 2002, and only a handful of these students participated in the post-test.

The functional business competency test (the ETS test) has been administered three times since 1999 among the graduating seniors. Findings on these, as well as on the general education competencies, are presented below.

Findings
Table I shows the results of the pre-test on general education competencies by department.



The maximum possible total score on this test was 82. The subgroup sizes were small and disproportionate, causing greater variability (as indicated by the relative size of the standard deviations). The maximum possible score for quantitative reasoning was 40; for critical thinking, 18; for conceptual thinking, 9; and for communication skills, 15. However, the performance of the students was appreciably low in all dimensions, due to the level of difficulty of the test instrument.
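Benchmarks of the kind reported in Table I pair a department mean with a sample standard deviation. A minimal sketch of that computation using Python's statistics module; the score lists below are illustrative, not the study's raw data:

```python
import statistics

# Department benchmark as mean / sample standard deviation (Table I style).
# The score lists are illustrative, not the study's raw data.

def benchmark(scores):
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)   # sample SD (n - 1 denominator)
    return round(mean, 2), round(sd, 2)

dept_scores = {
    "Accounting": [33, 38, 34, 36, 35, 35],
    "Finance": [30, 42, 33, 36, 29, 38],
}
for dept, scores in dept_scores.items():
    print(dept, benchmark(scores))
```

With small subgroups like these, a single outlying score moves the standard deviation noticeably, which is the variability caveat noted above.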

While the data collected through the post-test (the administration of the same test to the graduating seniors who participated in the pre-test) are still being analyzed, the findings from one subgroup (management, n = 10) show the following: the average total score was 36.4 (standard deviation 5.58), compared to 33.29 (5.18) on the pre-test. Findings in other dimensions did not show significant increases or decreases. It is anticipated that other subgroups will have similar results. The findings of both the pre- and post-test will be used as initial benchmarks by the college and by the respective departments. The Curriculum Committee of the College, as well as the departmental Curriculum Committees, will discuss methods of intervention for continuous improvement of student competencies based on the above findings. As already indicated, benchmarking student competencies in the functional areas of the core was accomplished by administering the ETS test and a student self-assessment questionnaire.

Table I. Mean/standard deviation (pre-test, by department)

Department   Quantitative   Critical    Conceptual   Communication   Total
             reasoning      thinking    thinking     skills          score
Accounting   9.67/3.2       7.67/1.03   5.33/0.82    12.5/0.84       35.17/2.64
Finance      9.0/3.29       8.5/2.81    5.5/1.05     11.67/1.63      34.67/6.25
Management   7.67/2.55      7.36/2.31   5.5/0.86     12.80/1.42      33.29/5.24
MIS          7.80/2.05      7.00/2.55   5.0/0.71     12.60/0.89      32.40/3.21
Marketing    8.27/2.69      6.43/2.71   5.29/1.49    12.80/1.47      32.50/5.54

During spring 1998, all 150 seniors enrolled in the Business Policies course (the capstone course offered to graduating seniors) were invited to take part in this important activity. A total of 110 students took the test without any inducement save a letter from the dean and the encouragement of the faculty teaching the course. The test was repeated in spring 1999 (n = 79) and again in spring 2001 (n = 85). For convenience, only an overview of the test scores is presented here. The major findings from the ETS test are presented in the following pages. The ETS instrument was divided into eight sections: accounting, economics, management, quantitative business analysis, finance, marketing, legal and social environment, and international issues. Scores of students by college, by department, and in the eight functional areas are reported. It should be mentioned here that the test was administered on a Friday evening without any incentive for the students. The obvious question asked by many participants was "Is this test linked in any way to my grades at the University?". A clear and truthful answer took away any seriousness from the students. Many finished parts of the test before the allotted time expired, and many never reviewed their answers. This may have lowered the overall student scores, and our institutional and departmental rankings relative to the other (325 or more) schools taking the same test may have been adversely affected as a result.

Table II shows the average scores of the college as a whole and of its four individual degree programs (two degree programs, CIS and Office Systems, were excluded due to insufficient student participation). An average score of 157 (out of a maximum possible 200 points) places the College of Business at the 70th percentile. This means that 70 percent of the participating institutions had scores below 157. Table III shows the percentage of correct answers in the eight functional areas for students in the four departments. The eight functional areas are sequentially represented in Table III as follows: accounting, economics, management, quantitative business analysis, finance, marketing, legal and social environment, and international issues.
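The percentile reading used here (a score at the 70th percentile means 70 percent of participating institutions scored below it) can be sketched as follows; the cohort of institutional scores is hypothetical:

```python
# Percentile-rank sketch: the institution's percentile is the share of
# participating institutions whose mean score falls below its own.
# The cohort of institutional scores below is hypothetical.

def percentile_rank(score, cohort_scores):
    below = sum(1 for s in cohort_scores if s < score)
    return 100.0 * below / len(cohort_scores)

cohort = [142, 149, 151, 153, 155, 156, 156, 158, 163, 171]
print(percentile_rank(157, cohort))   # 70.0
```

Other percentile conventions exist (e.g. counting ties as half below); the one sketched matches the "scores below 157" phrasing in the text.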

Table II. ETS test mean scores

                        1998        1999          2001
College                 154.7       155.4         157
Accounting              160.3       158.9         162.2
Finance                 158.7       163.5         166.4
Management              150.9       150.9         154.3
Marketing               150.9       153.2         152.7
Institutional ranking   55 percent  58.8 percent  70 percent
n                       110         79            85

Table III. Percent correct scores by functional area (2001 data, n = 85)

Department   Acct.  Econ.  Mgt.   QBA    Fin.   Mkt.   Legal  Int.
Accounting   67.4   44.1   58.5   53.5   48.4   48.1   49.4   46.9
Management   47.6   39.6   60.9   49.8   31.6   53.8   40     48.7
Marketing    44.2   36.1   50.8   52.3   32.2   59.9   39     41.5
Finance      63.6   50.6   60.7   55.6   62.7   54     47.3   66.8

Based on the findings as displayed, students across disciplines showed relative weaknesses in finance, economics, and the legal/social environment relative to the other five functional areas. The consistency of the findings over the three iterations of the test led the Curriculum Committee to question the apparent relationship between the poor scores in economics and finance, as the former is a prerequisite of the latter in the course sequence the students in the college follow. Subsequently, a study of grades in the required finance course (Introduction to Corporate Finance) as a function of its prerequisite courses was undertaken. The regression analysis showed that grades in managerial accounting, followed by financial accounting, explained more of the variance in the grade distribution of Introduction to Corporate Finance than the required economics courses did. Among the economics courses, only grades in Microeconomics explained significant variance in the finance-course grades. This finding led to reform of course sequencing and revision of prerequisites. The faculty members teaching Business Law are also reviewing the curriculum and the instructional processes toward improving student competencies in this area.
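A regression of this kind can be sketched with a minimal one-predictor least-squares fit. The grade points below are hypothetical illustrations; the study's actual data and model specification (which used several prerequisite predictors) are not reproduced here.

```python
# One-predictor least-squares sketch: variance in finance-course grades
# explained by a prerequisite's grades. All grade points are hypothetical,
# and the study's actual multi-predictor model is not reproduced here.

def ols(x, y):
    """Slope, intercept, and R^2 of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

managerial_acct = [2.0, 2.5, 3.0, 3.5, 4.0]   # prerequisite grade points
corp_finance = [2.1, 2.4, 2.9, 3.4, 3.8]      # finance-course grade points
slope, intercept, r2 = ols(managerial_acct, corp_finance)
print(round(slope, 2), round(r2, 3))
```

The R^2 term is the "explained variance" the committee compared across prerequisites when deciding which courses truly predict finance performance.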

Findings: the student self-assessment
A self-assessment instrument with 42 items was administered among the 110 seniors at the same time they took the ETS test, to benchmark the outgoing competencies of general education. As a protocol, this was repeated in 1999 and 2001 as well. Only the findings from the 1998 self-assessment have so far been analyzed, and hence only these are reported in this paper. Besides a few demographic items such as gender, disciplinary major, and the quality point average/grade point average (QPA/GPA), the participants were asked to self-assess the dimensions of communication: interpersonal, oral, written, and public speaking skills. In addition, a few items on analysis and critical thinking skills were included in the instrument. These items also provided the value-added measurement of each of the dimensions mentioned earlier.

Tables IV and V are self-explanatory. It confirms that the sample of studentsdo represent the college. In addition, the sample was comprised of 52.7 percentfemales and 47.3 percent males. Thus, the gender distribution in the sample isalso a fair representation of the student population in the college. What followsare summary descriptions of findings in various areas of benchmarks. In

Major        Percent of students

Accounting   24.5
Finance      19.1
Management   20.9
Other         1.9
Marketing    26.4
OIS           2.7
MIS           4.5

Table IV. Demographic information: breakdown by majors (n = 110)

GPA       Percent of students

2.1-2.5   27.3
2.6-3.0   20.9
3.1-3.5   40.9
3.6-4.0   10.9

Table V. GPA or quality point distribution


In addition, departmental benchmarks were established and made available to the departments.

Communication skills. Since Likert-type scales were used, the values (answers) 1 and 2 were combined and reported as “disagreed”; and the values (answers) 4 and 5 were added together and reported as “agreed”. When needed (for reverse scales), values were recoded to fit the above format. What follows is a description of findings on each item of the self-assessment instrument.
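The recoding described above can be sketched as follows. This is a hypothetical illustration only; the study's actual instrument and scoring procedures are not published, so the function name and scale details are assumptions.

```python
def recode_likert(value, reverse=False):
    """Collapse a 1-5 Likert response into 'disagreed'/'neutral'/'agreed'.

    Reverse-scored items are flipped (1 <-> 5, 2 <-> 4) before collapsing,
    matching the paper's note that reverse scales were recoded to fit
    the same format.  (Assumes a 5-point scale, as implied by the text.)
    """
    if reverse:
        value = 6 - value  # flip the 1-5 scale
    if value <= 2:
        return "disagreed"
    if value >= 4:
        return "agreed"
    return "neutral"
```

For example, `recode_likert(5)` yields `"agreed"`, while a response of 1 on a reverse-scored item, `recode_likert(1, reverse=True)`, also yields `"agreed"`.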

Over 88 percent (88.2 percent) of the students indicated that they do have effective interpersonal communication skills. About 84 percent (83.6 percent) said that their interpersonal communication skills are, in fact, what they should be for a university graduate. Over 89 percent indicated that they have definitely made a significant improvement in their interpersonal skills during the last four semesters (i.e. junior and senior years). This is a benchmark of relative success, probably due to the social and interpersonal environment offered by the institution.

About 54 percent of the students reported that they have effective oral communication skills, versus 36.4 percent who believe that they do not have such skills. About 38 percent said that their oral communication skills are what they should be for a graduating senior, compared to 51.8 percent who feel that their skills are not what they should be. About 37.3 percent of the students indicated that they have made a significant improvement in their oral communication skills during the last four semesters. Compared to interpersonal skills, this dimension has been identified by the students as relatively weak. Discussion on this item should begin at the departmental and college level toward curriculum enrichment and higher student competency attainment.

About 74 percent of the participants believe that they do have effective public speaking skills. About the same number feel that their public speaking skills are what they should be for graduating seniors. About 80 percent of the seniors said that they have made significant improvements in their public speaking skills, especially during their last two years at the institution. The student assessment of this dimension is probably indicative of:

. the effectiveness of the curriculum in terms of classroom presentations; and

. the general education requirement of the public speaking course irrespective of the student majors.

A total of 39 percent of the respondents highly agreed that their ability to read and comprehend is excellent, compared to 40.9 percent who do not rate their ability as excellent. Only 38.2 percent feel that their reading comprehension level is where it should be for a graduating senior, whereas 42.7 percent said that it is not where it should be. About one-third (36.3 percent) indicated that


they have made significant improvement in their ability to read and comprehend, especially during the last four semesters. Over 39 percent said that they have not made significant improvement on this item. As candid as it seems from the above description, if this is the benchmark of outgoing quality, a serious discussion should ensue on this issue. Possible suggestions for improvement may include, among other things, networking with the public education systems that feed the college.

A total of 41 percent of the students indicate that they do not have a strong ability for listening comprehension, compared to 37 percent of them who feel that they do have such ability. A little over 39 percent indicated that their listening comprehension ability is not where it should be for a collegiate graduate, but 36.4 percent felt otherwise. About 40 percent of the students said that they did make significant improvements in their ability to listen and comprehend, especially throughout their last four semesters at the institution, whereas 33 percent indicated otherwise. Both the current assessment and the value-added item show relative weakness in this dimension. The imperative on this dimension, therefore, is the same as the above.

About 89 percent of the students said that they have a strong ability to follow oral instructions. About 79 percent said that their ability to follow oral/verbal instructions is what it should be for graduating seniors. About 73 percent of the respondents agreed that they have made significant improvement in their ability to follow oral instructions, especially during the last four semesters at the college.

About 88 percent of the students reported that they have a strong ability to follow written instructions. About the same portion of the students indicated that their ability in this regard is what it should be for a graduating senior. About 89 percent of the students reported that they have made significant improvement on this item during the last four semesters.

A total of 39 percent of the students feel that their writing skills are excellent, compared to 31 percent who reported that they do not have excellent writing skills. About 41 percent said that their writing skills are what they should be for a graduating senior, while 36.4 percent said the opposite. About 40 percent agreed that they have made significant improvement in their writing skills, especially throughout the last four years at BU, while 34.5 percent said the opposite. This dimension was again assessed to be a problem area for students. A good number of students will be graduating without the writing skills expected of a college graduate.

Analytical and critical thinking skills. Over 54 percent (54.6 percent) of the respondents said that it was easy for them to locate the hidden assumption(s) of an argument. About 57 percent said that their ability to find hidden assumption(s) of an argument is what it should be for a graduating senior. About 63 percent (62.7 percent) of the respondents feel that they have made a significant improvement in their ability to find hidden assumption(s),


especially during the last four semesters at school. On this item of analysis/critical thinking, the value-added component seems to be significant.

About 87 percent (87.2 percent) of the students reported that it is easy for them to determine the strengths and weaknesses of an argument. About 82 percent said that their ability to determine strengths and weaknesses of an argument is what it should be for a graduating senior. Over 76 percent (76.4 percent) of the students said that they have made significant improvements in their ability to determine strengths and weaknesses, especially during the last four semesters at the institution. Again, this is a significant attestation of the value-added component.

A total of 43 percent of the students reported that it is easy for them to compare and contrast ideas/concepts, while 41 percent reported the opposite. About 44 percent (43.7 percent) reported that their ability to compare and contrast ideas/concepts is what it should be for graduating seniors, and 41 percent said the opposite. About 39 percent of the students reported that they have made significant improvements in their abilities to compare and contrast ideas/concepts, especially during their last four semesters at school, and 36.3 percent said that they have not made an improvement during the last four semesters. This may be a troublesome finding and, therefore, should be explored through deliberation with the faculty.

About 86 percent (86.4 percent) of the students reported that it is easy for them to summarize ideas/concepts to a logical conclusion. About 85 percent (84.5 percent) said that their ability in this regard is where it should be for a graduating senior. About 81 percent of the graduating seniors said that they have made significant improvements in summarizing main points to a logical conclusion, especially during the last four semesters of their collegiate career. This is an important finding supportive of the value-added dimension of the junior and senior years.

About 76 percent of the students said that it is easy for them to interpret simple graphic information. About 78.2 percent said that their ability to interpret simple graphic information is what it should be for a graduating senior. About 71 percent said that they have made significant improvements in their abilities to interpret simple graphic information, especially during the last four semesters at the institution. This is also an important finding supportive of the value-added dimension of the junior and senior years.

Communication, critical thinking skills and the students’ GPA (QPA). The COMSKILL scale consists of seven communication variables with an alpha reliability coefficient of 0.703. A similar critical-analytic skills (CRITIC) index was created with four variables (with an alpha reliability of 0.712).
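For reference, the alpha reliability coefficient (Cronbach’s alpha) reported for these scales can be computed as sketched below. The data shown are made-up toy values for illustration; the study’s item-level responses are not published.

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (one score per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(rows[0])  # number of items in the scale

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Two perfectly consistent items yield the maximum alpha of 1.0
print(cronbach_alpha([(1, 1), (2, 2), (3, 3)]))  # -> 1.0
```

An alpha of about 0.7, as reported for both COMSKILL and CRITIC, is conventionally taken as acceptable internal consistency for an attitude scale.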

Taking the student’s reported GPA as the dependent variable, Pearson correlation and multiple regression analyses were performed.

As indicated in the correlation matrix, communication and critical-analytic skills are positively associated with each other (Table VI). The coefficient


r = 0.397 is significant (at the 0.000 level). It means (when CRITIC is treated as the dependent variable) that an increase in the student’s communication skills by one standard deviation will lead to a corresponding increase of about 40 percent of one standard deviation in the critical-analytic skills of the student. A similar interpretation with COMSKILL as the dependent variable may also be made.
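This interpretation follows from the standard result that, in a two-variable regression on standardized scores, the slope coefficient equals the Pearson correlation (the symbols below are conventional notation, not taken from the paper):

```latex
\hat{z}_{\mathrm{CRITIC}} = \beta \, z_{\mathrm{COMSKILL}},
\qquad
\beta = r \,\frac{s_{\mathrm{CRITIC}}}{s_{\mathrm{COMSKILL}}} = r = 0.397,
```

since standardized scores have unit standard deviation, so one standard deviation of COMSKILL corresponds to roughly 0.4 standard deviations of CRITIC.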

Neither CRITIC nor COMSKILL shows any significant association with the student GPA or QPA. The coefficients, though insignificant, show negative tendencies. In other words, the student assessment is that critical-analytical skills have little or no relevance to the attainment of overall QPA. In the case of communication skills, the nearly significant negative coefficient (r = -0.18) suggests that students who rate their communication skills lower tend to attain higher QPAs.

In the multiple regression equation, both COMSKILL and CRITIC are entered as independent variables to predict the student QPA (GPA). As the tables show, the two independent variables produce a multiple correlation coefficient R = 0.185, indicating some degree of association (negative, as indicated by the negative beta coefficients) among the variables. However, the adjusted R² of 0.016 implies that only a little over 1 percent of the variation in GPA or QPA is explained by the independent variables. In other words, no real association is indicated. This is further substantiated by the insignificant negative regression coefficients. This is a significant finding of the study, one that needs to be further investigated for serious curriculum improvement effort!
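The reported adjusted R² is consistent with the usual small-sample correction. A quick check, assuming n = 110 respondents and k = 2 predictors (COMSKILL and CRITIC), as the text implies:

```python
def adjusted_r2(r, n, k):
    """Adjusted R-squared from the multiple correlation R,
    sample size n, and number of predictors k."""
    r2 = r ** 2
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# R = 0.185 with n = 110 seniors and k = 2 predictors
print(round(adjusted_r2(0.185, 110, 2), 3))  # -> 0.016
```

The unadjusted R² here is only 0.185² ≈ 0.034, and the adjustment for sample size brings it down to the 0.016 reported in the paper.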

Conclusion
The pre-/post-test of general education competencies shows some value-added in terms of total scores of only one subgroup analyzed so far. The post-test of general education competency was affected by attrition and a lack of voluntary participation by the students. The findings of the Business Fields Test and the self-assessment of the seniors in the areas of communication and critical-analytic skill dimensions provide substantial baseline data for continuous improvement. The composite score on the ETS Business Fields Test places the College of Business at the 70th percentile ranking in the nation. This allows peer (or external) benchmarking of the institution. The findings on economics, finance, and legal and social environment fields as relatively weaker areas in terms of correct student responses in the test generated

            CRITIC    COMSKILL    GPA

COMSKILL     0.397*    1.00      -0.181
CRITIC       1.00      0.397*    -0.040
GPA         -0.040    -0.181      1.00

Note: * Correlation is significant at the 0.00 level (two-tailed)

Table VI. Correlation matrix (n = 110)


quality discussion across the College of Business toward a quality improvement initiative.

The self-assessment by the students revealed that the students are relatively weaker in the oral and written communication areas. A majority of students reported having strong interpersonal and public speaking skills.

The students indicated difficulties in certain items of the critical-analytical skill areas more so than others. Overall, a substantial portion of students indicated that they have made significant improvements during the last four semesters in most of the skill areas. Such findings support value-added outcomes of learning and instructional processes.

One important finding of the self-assessment, however, was the obvious lack of association between the student grades and the communication and critical-analytical skill dimensions. More students may report higher skills in these areas if curricula and pedagogy emphasize them and the grading of student performance is linked to the demonstration of these skills.

The findings on individual items by disciplines (not presented in this paper), by functional areas, and by communication and critical-analytical dimensions provide the initial benchmarks for the college. It seems that there is room for significant improvement on all of the items. Such improvements, however, should be accomplished through planned changes in the existing curriculum, following discussions and deliberations at the curriculum committees of the college as well as the departments.

This study experienced a serious challenge of voluntary student participation, as was evident in the post-test segment of the general education assessment. The lack of incentives for the students to take a three-hour-long test is also a challenge the project faces. In the first instance, our institution mandates participation for College of Business students. However, in the matter of incentives, no meaningful alternative has so far been worked out. As this paper demonstrates, the institution completed one full cycle of benchmarking toward an institutionalized program of benchmarking learning outcomes. The model thus provides an alternative for institutions of higher education to benchmark and continuously improve the curriculum and the instructional processes.

The literature on best practices emphasizes the evolving nature of the conceptual paradigm. The concept has both ontological and epistemological implications. On the ontology side, best practices must lead to improvements in the performance of an organization. The epistemology, that is, the processes, practices, and methodology, is the side that seems to be evolving. It is relative to situation, context, organizational mission, and organizational subsystems, but it is geared to the accepted ontology. To be precise, it is possible to develop best practices in enrollment management or in any of the major processes of the institution. In support of this assertion, the work of Helen Lund, who compiled a number of cases during 1985-1997 based on


experiences of benchmarking in UK higher education (Lund, 1998), could be cited.

Thus, the methodology leading to institutionalized assessment practice as described earlier in the pilot program followed the general protocols of benchmarking for best practices: a unique methodology leading toward improving learning outcomes of the undergraduate business program. Activities undertaken included a conceptual model identifying assessment criteria, information gathering, analysis, and comparison internally as well as externally in search of best practices. This unique methodology resulted in recommendations for process improvement through college-wide action. It therefore aims at recognition as a best practice for measuring learning outcomes of undergraduate business schools with similar missions. Replication of the study in another institution with a similar mission, and another full cycle of data gathering without logistical glitches in the same institution, should enable the establishment of best practices in learning assessment of an undergraduate program.

References

AACSB (1989), Report of the AACSB Taskforce on Outcome Measurement, St Louis, MO, p. 2.

AAHE (1996), “What research says about improving undergraduate education”, AAHE Bulletin, AAHE, Washington, DC.

Alverno College (1985), Assessment at Alverno College, Alverno Productions, Milwaukee, WI.

American College Testing, Inc. (1995), College Outcome Measures Program: Technical Report, American College Testing, Inc., Appendix C, Iowa City, IA.

Alstete, J.W. (1995), Benchmarking in Higher Education: Adapting Best Practices to Improve Quality, ERIC, George Washington University, Washington, DC.

Banta, T. (1986), “Comprehensive program evaluation at the University of Tennessee, Knoxville: institutional responses to changes in state funding policy”, in Banta, T. (Ed.), Performance Funding in Higher Education: A Critical Analysis of Tennessee’s Experience, NCHEMS, Boulder, CO, pp. 37-50.

Banta, T. (1988), “Assessment as an instrument of state funding policy”, in Banta, T. (Ed.), Implementing Outcomes Assessment: Promise and Perils, New Directions for Institutional Research, Vol. 59, pp. 81-94.

Banta, T.W. (1996), Assessment in Practice: Putting Principles to Work on College Campuses, Jossey-Bass, San Francisco, CA.

Bloom, B.S. (Ed.) (1956), Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook 1, Cognitive Domain, Longman, Green, New York, NY.

Cleary, T.S. (2001), Indicators of Quality: Planning for Higher Education, pp. 19-28.

Denney, L. (1994), “Institutionalizing assessment”, Assessment Update, Vol. 6 No. 2, pp. 8-9.

Erwin, T.D. (1991), Assessing Student Learning and Development, Jossey-Bass, San Francisco, CA.


Ewell, P.T. (1991), “To capture the ineffable: new forms of assessment in higher education”, Review of Research in Education, Vol. 17, pp. 75-125.

Ewell, P.T., Finney, J. and Lenth, C. (1990), “Filling in the mosaic: the emerging pattern of state-based assessment”, AAHE Bulletin, Vol. 58 No. 4, pp. 3-5.

Ferren, A. (1993), “Faculty resistance to assessment: a matter of priorities and perceptions”, Proceedings of Double Feature Conference: 8th AAHE Conference on Assessment in Higher Education, 1st AAHE Continuous Quality Improvement Conference, 9-12 June, Chicago, IL, pp. 1-5.

Forrest, A. and Steele, J.M. (1982), Defining and Measuring General Knowledge and Skills, ACT, Inc., Iowa City, IA.

Gatson, P. (1991), “Measuring the marigolds: literary studies and the opportunity of outcomes assessment”, Journal of the Midwest Modern Language Association, Vol. 24 No. 1, pp. 11-20.

Hlebowitsh, P.S. (1995), “A higher pragmatism: assessment in the college curriculum”, Peabody Journal of Education, Vol. 70, pp. 160-74.

Hutchings, P. and Reuben, E. (1988), “Faculty voices on assessment: expanding the conversation”, Change, pp. 48-55.

Lund, H. (1998), “Benchmarking in UK higher education”, in Schofield, A. (Ed.), Benchmarking in Higher Education: An International Review, Chapter 5, CHEMS, London, pp. 44-60.

Middle States Association of Colleges and Schools (1991), Framework for Outcomes Assessment, Philadelphia, PA, p. 35.

North Central Association of Colleges and Schools (1993), Criteria for Accreditation 1992-1993, Columbus, OH, p. 19.

Southern Association of Colleges and Schools (1993), Criteria for Accreditation 1992-1993, p. 2.

Spangehl, S. (1987), “The push to assess”, Change, pp. 35-9.

Thrash, P.A. (1990), “State agency expectations for assessment in the north central region”, NCA Quarterly, pp. 415-34.

Wolf, R.A. (1990), “Assessment in the Western Accrediting Commission for senior colleges and universities”, NCA Quarterly.

Zairi, M. (1996), Benchmarking for Best Practice: Continuous Learning Through Sustainable Innovation, Butterworth-Heinemann, London.

Further reading

Albone, K.R. (1996), “General education as a major: assessment strategies”, Assessment Update, Vol. 8 No. 1, p. 6.

American Productivity and Quality Center (1998), Assessing Learning Outcomes: Consortium Benchmarking Study, Best-in-Class Report, APQC, Houston, TX.

Astin, A.W. et al. (1992), Principles of Good Practice for Assessing Student Learning, AAHE, Washington, DC, pp. 2-3.

Braskamp, L.A. (1991), “Purposes, issues, and principles of assessment”, NCA Quarterly, pp. 420-4.

Ewell, P.T. (n.d.), “Identifying and improving core processes of undergraduate teaching and learning”, Proceedings of Double Feature Conference: 8th AAHE Conference on


Assessment in Higher Education, 1st AAHE Continuous Quality Improvement Conference,9-12 June, Chicago, IL, pp. 29-38.

Farquhar, R.H. (2002), “Higher education benchmarking in Canada and the United States”, in Schofield, A. (Ed.), Benchmarking in Higher Education: An International Review, CHEMS, London, pp. 21-34.

Harris, J. (1985), “Assessing outcomes in higher education”, in Adelman, C. (Ed.), Assessment in American Higher Education: Issues and Contexts, US Department of Education, Office of Education Research and Improvement, Washington, DC.
