
www.universities-journal.com

JOURNAL of the WORLD UNIVERSITIES FORUM

Volume 1, Number 3

Assessing Institutional Learner Outcomes

Fernando F. Padró and Marlene Hurley

JOURNAL OF THE WORLD UNIVERSITIES FORUM
http://www.universities-journal.com/

First published in 2008 in Melbourne, Australia by Common Ground Publishing Pty Ltd
www.CommonGroundPublishing.com.

© 2008 (individual papers), the author(s)
© 2008 (selection and editorial matter) Common Ground

Authors are responsible for the accuracy of citations, quotations, diagrams, tables and maps. All rights reserved. Apart from fair use for the purposes of study, research, criticism or review as permitted under the Copyright Act (Australia), no part of this work may be reproduced without written permission from the publisher. For permissions and other inquiries, please contact <[email protected]>.

ISSN: 1835-2030
Publisher Site: http://www.universities-journal.com/

JOURNAL OF THE WORLD UNIVERSITIES FORUM is a peer-refereed journal. Full papers submitted for publication are refereed by Associate Editors through anonymous referee processes.

Typeset in Common Ground Markup Language using CGCreator multichannel typesetting system http://www.CommonGroundSoftware.com.

Assessing Institutional Learner Outcomes
Fernando F. Padró, Monmouth University, New Jersey, UNITED STATES
Marlene Hurley, State University of New York, NY, UNITED STATES

Abstract: External reviews of higher education institutions focus on verifying or clarifying performance and results based on mission, criteria, or clearly stated or implied set(s) of standards. Moreover, there is increased interest in program-level accreditation to support and demonstrate enhanced quality; the review process at this level is also based on meeting internal and external criteria and/or standards. The Center for Psychology in Schools and Education (1997) recommends that effective learning should include multiple assessments for diagnostic, process, and outcome purposes. National and international associations advocate certain assessment strategies to ensure appropriate documentation of student performance; however, because these associations are not involved in the accreditation or audit process, their strategies may or may not be elements of the criteria used for review purposes. Organizational theory suggests that organizational behavior is based on previous reality generated from the stream of experience; previous experience argues for regulatory compliance for fear of sanctions ranging from conditions to refusal of reaffirmation or of initial approval. Indicators and metrics in this minimaxing regime prioritize data for performance reviews over accurately measuring student learning. Universities thus err on the side of caution, focusing more on traditional testing methodology instead of recognizing the usefulness of different assessment techniques as feedback that enhances student learning as well as documenting student learning itself. This paper discusses the tension between regulatory compliance and good design for student learning and its effect on assessment strategies. It also offers suggestions on how to mitigate some of these effects in order to balance institutional, faculty, and individual learner needs.

Keywords: Assessment Strategies, Learner Outcomes, Minimaxing Regime, Institutional Accreditation, Programmatic Accreditation, Quality Audit

Introduction

ACCREDITATION REPRESENTS A minimaxing regime for higher education, especially when uncertainty is prevalent in relationship to outcomes (see Anderson et al., 1981). The identification, measurement, and documentation of learner outcomes represents a redistribution of acceptable benefits, in this case from documenting efficient use and effective stewardship of institutional resources to defining institutional wellbeing through proxies of quality based on the impact graduating students have on the workplace and communities. Universities cannot afford infinite risk aversion in terms of avoiding the increasing demand for accountability and utility. The accreditation model created in the USA is based on the compromise which posits that peer-review based self-evaluation and self-policing works better than an external third-party formal audit or governmental agent review. Nevertheless, the practice of accreditation represents what Samuelson (1976) termed a complete and costly egalitarianism. As Savage (1951) postulated, accountability represents a strategy of choosing that alternative that minimizes the maximum regret based on the difference between the utility of alternatives under different circumstances.
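Savage's minimax-regret rule can be made concrete with a small numerical sketch. The payoff figures below are hypothetical, invented purely for illustration; the paper supplies no such numbers, and the alternative and state labels are assumptions of this sketch, not the authors'.

```python
# Illustrative minimax-regret calculation (after Savage, 1951).
# Rows: alternatives an institution might choose; columns: states of the world.
# All utility values are hypothetical.
payoffs = {
    "traditional_tests": [8, 5],      # [reviewers approve, reviewers object]
    "multiple_assessments": [9, 2],
}
states = range(2)

# Regret = best achievable payoff in that state minus the payoff obtained.
best_per_state = [max(p[s] for p in payoffs.values()) for s in states]
regret = {
    alt: [best_per_state[s] - p[s] for s in states]
    for alt, p in payoffs.items()
}

# Choose the alternative whose worst-case (maximum) regret is smallest.
max_regret = {alt: max(r) for alt, r in regret.items()}
choice = min(max_regret, key=max_regret.get)
print(choice, max_regret)
```

Under these assumed numbers the conservative option wins even though the alternative has the higher best-case utility, which mirrors the paper's point that a minimaxing regime pushes institutions toward traditional testing.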

Because of risk aversion in terms of failing to meet criteria or standards, universities err on the side of caution, thus interfering with creativity and innovation when it comes to assessing student learning and performance as well as documenting institutional quality (Padró & Hurley, 2005). This inherent caution reflects the Janusian aspect of higher education. On one hand, there is the environment that attempts to foster personal growth and development along with creativity and innovation, allowing what Csikszentmihalyi (1990) refers to as flow – when people are so involved in an activity that nothing else matters. On the other hand, there is the conservative, bureaucratic entity that responds slowly to a changing environment for a number of reasons, ranging from a need to carefully analyze issues and their solutions to a focus on regulatory compliance, financial concerns, and the desire to remain “players” in state and national politics. Faculty participation allows the institution to tap into built-in intellectual capital, but it does so at the cost of agility, based on time and the differentiation of perspective between vertical integration and horizontal representation (Padró, 2004). The difficulty, as Corson (1975) explained, is that while there is a preference to think of universities as a business enterprise, higher education institutions are held together by shared beliefs, attitudes, and values rather than by structure and authority.

Learner outcomes, as part of student assessment, are now linked to institutional improvement; therefore, institutions have to understand the internal and external impact of information emanating from student assessments (Peterson & Vaughan, 2002). Learning outcomes at the institutional level are “statements describing [a university’s intention] about what students should know, understand, and be able to do with their knowledge when they graduate” (Huba & Freed, 2000, pp. 9-10). When it comes to documenting institutional performance effectiveness, outcomes shift the emphasis from traditional institutional inputs and regulatory-compliance throughputs to outputs as captured by these measured results. Accreditation processes are adapting their review and decision-making processes to incorporate learner outcome criteria or standards. For example, at the institutional level in the USA, the Middle States Commission on Higher Education has created Standard 14 for expectations derived from student learning outcomes; the Higher Learning Commission has established a framework for evaluating learning experiences and their impact on students within Criterion 3 (Padró & Hurley, 2005). At the professions level, both the American Assembly of Collegiate Schools of Business (AACSB) and the National Council for Accreditation of Teacher Education (NCATE) are espousing learner outcomes assessments as yardsticks to determine program viability, and other professions and disciplines are poised to follow suit now or in the near future.

If there is an issue with learning outcomes, it is its linkage with national policy steering interests. In the USA, there is no better example of this than the U.S. Department of Education (2006) report entitled A test of leadership: Charting the future of higher education – A report of the Commission appointed by Secretary of Education Margaret Spellings, which links learning outcomes with employable skills, workforce development, and lifelong learning framed within the rubric of workforce development. If one looks at the series of white papers from which the Spellings Commission Report sprang forth, a political agenda comes through: one that desires universities to be more directly linked to national needs and one that focuses outcomes on easily measurable results (such as standardized testing), as imposed on primary and secondary schools in the USA as part of the No Child Left Behind Act of 2001 (Padró, 2007). There is an inherent contradiction between learning outcomes and a focus on high-stakes standardized testing because, as the Center for Psychology in Schools and Education (1997) recommends, effective learning should include multiple assessments for diagnostic, process, and outcome purposes. Focusing on standardized tests is easy because it provides a numerical reference to learning, and it makes the assessment process easier. However, because of the high probability of the reductionistic fallacy that comes with over-reliance on an instrument whose processes have many entropic variables, such as test anxiety and environmental context (Hurley & Padró, 2006), that impact results and the reliability of data collection, the results may not be useful or used for all intended purposes (formative assessment for instructor and program improvement feedback, actual student learning, and global perspectives on institutional performance by students and graduates while in school and later at work).

For the remainder of this paper, the focus will be on the issue of how learning outcomes assessments are constrained by policy outcomes. The discussion is based on findings from studies related to mathematics and science curricula for pre-service teachers, centering on assessment strategies that respond to national recommendations conflicting with national mandates.

Challenges in Formulating Institutional Learning Outcomes

Walvoord (2004) recommends, among other things, embedding assessments as a necessary step and doing so as part of strategic planning or the formation of new initiatives. Embedding learning outcomes becomes more challenging when it is done from a retrofitting or re-engineering perspective, or when the rationale is one of regulatory compliance or another external mandate imposed on an uncomprehending or disinterested faculty. The challenges stem from (1) having to figure out how to add to or adapt existing processes in a way that is meaningful and supportive of other existing activities and needs, (2) worrying about adding to existing workloads, (3) linking to appropriate campus recognition and rewards for embracing learner outcomes, (4) alignment with institutional and program-level accreditation requirements along with other less formally structured disciplinary recommendations, (5) generating and linking outcomes with existing program and class assessment formats, and (6) making learning outcomes a meaningful component of the learning experience students receive. These challenges are not so different from what other structural and programmatic changes face in a typical campus environment. If there is a difference, it is in making sure that faculty members are not loaded down with additional duties (especially junior faculty), and that faculty and students do not see learning outcomes merely as busywork detracting from learning and from what students get out of their experience attending courses within programs and the institution as a whole (Hunter & Csikszentmihalyi, 2003).


Developing a new program or reviewing an older program at the institutional level (such as general education) or within a college/school sometimes has to bridge the more traditional notion of goals and learner outcomes. When this is the case, the program recommendation matrix could look as follows:

Once decisions are made at the institutional or programmatic level, then the sample Huba & Freed (2000) matrix for assessment planning, monitoring, or reporting can be used for more specific follow-up. Notice how this grid takes into account and documents a continuous improvement feedback process by linking outcomes and results to changes needed based on performance as defined by the intended outcome. Particularly interesting is the last column, which identifies which key stakeholders have been informed. By doing this, any academic unit at any level clearly identifies the internal and/or external relationships deemed important enough for communication to occur, providing insight into the value the academic unit places on these relationships – a best practice in quality assurance advocated by quality specialists and certain accrediting or auditing organizations and agencies.

Consequently, at the course level, the documentation matrix should reflect most of these criteria with minor adaptations:

• Goals
• Learning outcomes
• Relevant experience (in-class and out-of-class tasks, artifacts, other assessments)
• Exceptions (explanation of procedural changes, if these occur, to document variation and applicability to summative tallies)
• Measures and milestones
• Results
• Accreditation/discipline criteria (based on rubrics)
• Changes made (formative and summative; a management-by-fact regime, i.e., decisions made by review of collected data)
• Stakeholders informed.

A word of caution: Even though these grids can be created manually, data collection and analysis should be done electronically whenever possible. A database system assists data collection and analysis by allowing designated academic staff (faculty, professional, clerical) to enter information into the system and, conversely, to have access to it. The database should be sufficiently sophisticated that the different data can be analyzed through proper statistical treatments based on the needs of the unit and academic staff to make documented, informed decisions. Relying on manually entered information in a spreadsheet may make it difficult to access reliable information on demand, because the information needed may require refined analysis not possible with a spreadsheet. Moreover, the inability to input and analyze data for decision-making purposes makes it very difficult to document how the continuous improvement loop is working, as well as failing to provide evidence of the basis for decisions made.
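As a minimal sketch of the kind of electronic system this paragraph argues for, the example below builds an in-memory SQLite table whose columns mirror the course-level documentation matrix. The schema, table name, and sample record are hypothetical illustrations, not drawn from the paper, and a production system would need persistence, access control, and richer analysis.

```python
import sqlite3

# Hypothetical schema mirroring the course-level documentation matrix.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE outcome_record (
        id INTEGER PRIMARY KEY,
        unit TEXT,                  -- course, program, or institution
        goal TEXT,
        learning_outcome TEXT,
        measure TEXT,               -- instrument or milestone
        result TEXT,
        change_made TEXT,           -- formative/summative action taken
        stakeholders_informed TEXT  -- who was told, closing the feedback loop
    )
""")

# A sample (invented) entry by a designated staff member.
conn.execute(
    "INSERT INTO outcome_record (unit, goal, learning_outcome, measure,"
    " result, change_made, stakeholders_informed) VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("BIO101", "Science literacy", "Design a testable hypothesis",
     "Lab report rubric", "78% met criterion", "Added peer-review step",
     "Program committee"),
)

# Unlike a manually kept spreadsheet, the records are queryable on demand.
rows = conn.execute("SELECT unit, result FROM outcome_record").fetchall()
print(rows)
```

The point of the sketch is only that a queryable store lets a unit document its continuous-improvement loop (outcome, result, change made, stakeholders informed) rather than reassembling evidence by hand at review time.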

Methodology of Studies Reviewed for this Paper

Two separate studies are reviewed to demonstrate the challenges external pressures bring to setting learner outcomes. The first study is about how “teacher work samples” (TWS) used in the preparation of science teachers encourage traditional examination results rather than fostering the multiple assessment techniques set forth by the National Science Education Standards (NSES) established by the National Research Council in 1996 (Padró & Hurley, 2005).

The NSES standards were reviewed to document the rationale for multiple assessments. At least 100 websites were found addressing some aspect of TWS. Each website was reviewed based on the need to find two types of information: (1) instructions for writing the “assessment plan” section of the TWS and (2) examples of TWS Assessment Plans for teaching science. Twenty-two websites contained enough instructions for writing the Assessment Plan portion of the unit (specifically the major pre-/post-assessment piece). The instructions were coded with “T” for traditional (pre-/post-tests), “N” for noncommittal (traditional and non-traditional assessments), and “P” for performance-based (non-traditional) assessment. A total of 10 TWS samples were found that had “acceptable” teaching science units for inclusion in the study.


The second study is really a set of studies examining recent primary literature on integrated mathematics and science methods (IMS) courses (Hurley & Padró, 2007). First, a search was conducted for primary studies reporting on integrations of mathematics and science in college and university methods courses from 1990 (representing the presence of national standards) forward. While the studies did not have to meet specific methodology criteria, they did have to: (1) be teaching mathematics and science methods courses in some integrated fashion; (2) report on the successes and challenges of their integrations; and (3) have pre-service teachers training for the elementary, middle school, or high school levels of instruction. Studies were disqualified if they addressed only professional development of teachers, were at the early childhood level, did little actual integration instruction in the courses, and/or integrated disciplines other than mathematics and science. Fourteen studies were identified.

Schools of education were located using the website www.univsource.com, which listed a total of 140 schools of education, public and private, for the nine states in 2004-2005. All school of education (SOE) websites and catalogs in the nine states represented by the 14 studies were searched for integrated mathematics and science methods courses. Of the 140 SOE identified, 52 schools were contacted via e-mail letters and asked a set of three questions: (1) What was the reasoning behind the offering of an integrated mathematics and science methods course? (2) What were its successes and challenges? (3) What is the future of the integrated course? These schools were contacted two to three times for responses; the return rate was 33 percent (17 schools), with eight integrated methods courses verified. The 14 studies were also analyzed for statements or inferences perceived as indicators of the possible continuation of their integrated methods courses.

Findings from the Teacher Work Sample (TWS) Study

“Science literacy implies the attainment of abilities to read, write, and discuss science with an understanding that can be applied across the natural world and to real-world contextual questions, problems, and issues” (Padró & Hurley, 2005, p. 18; also see Hurley, 1998). Students have to pose questions and hypotheses amenable to scientific investigation and experimentation, criticize plans for scientific investigations and experiments, write a report about an inquiry, and identify reliable sources of scientific information (Champagne, Kouba, & Hurley, 2000). The national assessment standards from the NSES moved the emphasis away from traditional testing toward multifaceted assessments because of these concerns:

If the principles in the assessment standards are followed, the information resulting from new modes of assessment applied locally can have common meaning and value in terms of the national standards, despite the use of different assessment procedures and instruments in different locales. This contrasts with the traditional view of educational measurement that allows for comparisons only when they are based on parallel forms of the same test (NRC, 1996, p. 78).

A review of the TWS’s found on the internet in the area of science education exemplifies how institutions tend to gravitate toward generic or traditional assessment mechanisms to make sure that they are able to demonstrate to external reviewers that their processes comply with externally derived or imposed criteria or standards. Only two out of the ten TWS’s meeting the review criteria utilized multiple assessment types; one of the two also included a traditional testing scheme. The typical practice, as seen in seven out of the ten TWS’s, was the use of traditional multiple-choice tests as the basis of a pre- and post-test assessment structure. Interestingly enough, samples of curriculum units from TWS’s showed more complete data about learning outcomes when the teacher candidates ignored the institution’s page limitations and provided more complete research, reporting, analyses, and evaluation.

It seems TWS’s support the “old” instructor-centered environment rather than the current view of the learner-centered environment. Twigg (2005) recommends that continuous assessment/feedback and mastery learning become one aspect of redesigning courses to promote and ensure quality. This not only promotes student responsibility in learning, but educational practice will likely improve when the learner becomes the primary focus. However, learner outcomes assessment is constructed within the limitations of traditional assessment frameworks, not only due to time constraints to get the lessons done and the required information covered, but because to do otherwise and consider the NSES standards could jeopardize the pre-service teacher – especially in a high-stakes testing environment that focuses on test taking rather than on multiple forms of assessment.

Findings from a Review of Integrated Math and Science (IMS) Methods Courses

A presence of integration through “theoretical models and empirical research related to integrated mathematics and science courses, projects, and programs for pre-service and in-service teachers has emerged in the last 12 years” (Berlin & Lee, 2005, p. 22). Such a presence in the days of NCLB may well indicate that value is still placed on the integration of mathematics and science. Yet, a review of the literature yields only 14 studies on IMS methods courses published from 1990 through 2005: nine were for elementary education, two for middle school education, two for high school education, and one covered middle and high school education. The studies show an overall positive feeling for IMS instruction, although there were some issues impacting effectiveness and the overall optimism for this curricular approach on the part of pre-service teachers. Eight of the studies indicated a willingness to continue with their research based on their findings, a partial mitigation for the low number of studies that fit the research criteria. Nevertheless, the numbers do imply that integrated courses represent a minority view of how to meet policy expectations vis-à-vis learning performance.

Another possible negative impact from NCLB is that a review of SOE in the nine states represented in the 14 studies identified only 52 out of 140 schools with integrated math and science methods courses in their catalogs. Responses from follow-ups with these institutions verified only eight of those courses.

A number of outcomes stood out in the reviewed studies that were reminiscent of much of the research from K-16 integrations on student achievement. For example, challenges included a recognition by the teachers of the great need that exists for extra time, disparities between the value of integration for math and for science, the need for strong content and pedagogical knowledge in order to integrate, the continued need for traditional methods courses in spite of integrated successes, and the recognition at the end of the many challenges that integration provides. Successes included deeper integration understandings, connections between theory and practice, and improved affective outcomes, content knowledge, and analytical skills; two studies indicated that pre-service teachers continued to believe in the value of integration in spite of its many challenges.

The standards-based curriculum reform of the 1990s called for integration, connections, and links between disciplines, and specifically between mathematics and science. What is noticeable in these studies is the unfortunate reality that when there are external demands such as accreditation or standards-focused institutional performance, educational institutions will take a conservative approach to secure approval and acceptance (Padró & Hurley, 2005). The learner-centered processes and concepts advocated by the standards and supported by the learning literature risk diminishment in the wake of requirements for high-stakes student assessment (Hurley & Padró, 2006). Most of these studies echo Czerniak et al.’s (1999, pp. 427-428) statement, “The pressure of state proficiency and standardized tests seems to be a limiting factor in implementing an integrated curriculum.”

Conclusion

What has been learned from the two studies is that the use of learner outcomes as an instrument to document the vitality of learning is hampered by the assessment climate currently in place. Walvoord (2004) suggests institutions build on classroom work and the potential use of portfolios and other means of generating student samples as elements of learner outcomes assessment. She also goes on to say that if standardized tests are going to be used, faculty members have to be willing to teach in a manner that helps students do well on the test. This does not mean “teaching to the test”; rather, it means providing the information in a manner that places the student in a position to succeed.

Assessing learner outcomes requires a knowledge of the institutional mission, buy-in from academic staff from all parts of the university, and a willingness to look at learning and scholarship from a multi-dimensional perspective. The challenge is to create a positive environment within what is a minimaxing regime where egalitarianism is defined by rising expectations of consumerism, vocationalism, and the commercialization of instruction and research. Learner outcomes have to be viewed in terms of pedagogical values first and then from the perspective of skills acquisition. Without faculty participation, learner outcomes will be more externally focused due to the rise of what Slaughter and Rhoades (2004) call academic capitalism and others call entrepreneurialism or corporatization. Institutional goals are linked to mission and outcomes created by strategic planning, state and accreditation oversight, and the resources the institution can muster from within and outside its walls. Learning outcomes should focus on the learner first and then the institution; however, the current accountability environment confuses one for the other due to the focus on consumer choice. Individual competition rather than social cooperation is expected to provide the value-added critical mass that leads to generating the social good (Williams, 2006).

Part of the problem institutions face in the use of learner outcomes assessment is the external point of view that there is either systemic failure of student performance or a lack of institutional capacity to provide students with the knowledge needed to succeed and become part of the global economy so that the person can become socially mobile. In either case, the prevailing idea is to overcome the limitations through forced-choice testing in order to ascertain that students have learned what the national standards imply they should learn. Higher education in the USA is being assessed in terms of performance, participation, completion, affordability, benefits, and learning (National Center for Public Policy and Higher Education, 2004), all based on what the individual student does. Meanwhile, in Europe, the restructuring of higher education based on the Bologna Process is focusing on structural change in order to better serve the consumer demands of students: standardization of degrees, setting qualifications as learner outcomes, credit transfers, quality assurance, cooperation/networking, and transparency. Note that learner outcomes are deemed to be a structural component that is arguably beyond quality assurance compliance; however, in both cases learner outcomes are used as a means to establish relevant indicators for different groups of students (Jacobs & Hundley, 2005). As Linkon (2005) determined, the caveat of learner outcomes assessment is the drive for accountability that is based increasingly on economic issues rather than on helping students become thoughtful citizens and lifelong learners.

ReferencesAnderson, B.F., Deane, D.H., Hammond, K.R., McClelland, G.H., & Shanteau, J.C. (1981). Concepts in judgement and

decision research: Definitions, sources, interrelations, comments. New York: Praeger.Berlin, D. F. & Lee, H. (2005). Integrating mathematics and science education: Historical analysis. School Science and

Mathematics, 105(1), 15-24.Center for Psychology in Schools and Education. (1997). Learner-centered psychological principles: A framework for school

redesign and reform. Washington, D.C.: American Psychological Association. (Retrieved July 10, 2004 from ht-tp://www.apa.org/ed/lcp.html)

Champagne, A. B., Kouba, V. L., & Hurley, M. (2000). Assessing inquiry. In J. Minstrell & E. H. van Zee (Eds.), Inquiringinto inquiry learning and teaching in science (pp. 447-470). Washington, DC: AAAS.

Corson, J.J. (1975) The governance of colleges and universities: Modernizing structure and processes. (Revised ed.). NewYork: McGraw-Hill Book Company.

Csikzentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row, Publishers.Czerniak, C.M.,Weber,W.B., Sandmann, A., &Ahern, J. (1999). A literature review of science andmathematics integration.

School Science and Mathematics, 99(8), 421-430.Huba, M.E., & Freed, J.E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to

learning. Boston: Allyn and Bacon.Hunter, J.P., & Csikzentmihalyi, M. (2003). The positive psychology of interested adolescents. Journal of Youth and Ad-

olescence, 32(1), 27-35.Hurley, M. M. (1998). Science literacy: Lessons from the first generation. Research Matters to the Science Teacher, 9801.

Retrieved from the NARST website: www.educ.sfu.ca/narstsite/publications/research.Hurley, M. M., & Padró, F. F. (2006). Test anxiety and high stakes testing: Pervasive, pernicious, punitive, and policy-

driven. International Journal of Learning, 13(1), 163-170.Hurley, M. M., & Padró, F. F. (2007). Integrated methods courses: A survey of practices in an adverse political climate.

(Paper presented at the Annual Meeting of the School Science and Mathematics Association, Indianapolis, IN,November 16, 2007)

Jacobs, F., & Hundley, S.P. (2005). Designing postsecondary education to meet future learning needs: Imperatives forplanning. Planning for Higher Education, 34(1), 12-18.

Linkon, S.L. (July-August 2005). How can assessment work for us? Academe, 91(4), 28-32.National Center for Public Policy and Higher Education. (2004). Measuring Up 2004: The national report card on higher

education. San José, CA:Author. (Retrieved January 15, 2006 from http://measuringup.highereducation.org/docs/na-tionalreport_ 2004.pdf)

National Research Council (1996). National science education standards. Washington, DC: National Academy Press.Padró, F.F. (2004). Vertical Integration v. Horizontal Representation: The clash of cultures in university environments and

how these impact institutional standards and their assessment of quality. Proceedings for the 7th "Toulon-Verona"Conference on Quality, September 2-4, 2004, 135-144. Toulon: University of Toulon-Var.

Padró, F.F. (2007). The key implication of the 2006 Spellings Commission Report: Higher education is a “knowledge industry”rather than a place of learning? International Journal of Learning, 14(5), 97-104.

Padró, F.F., & Hurley, M.M. (2005). An example of howmeeting standards biases institutions toward traditional assessmentmeasures: Pre-service science teacher education. In Kandlebinder, P. (Ed.),Making a difference: 2005 Evaluationsand Assessment Conference, 30 November- 1 December 2005 in Sydney, Australia, 16-23. Sydney: Institute forInteractive Media and Learning at the University of Technology.

Peterson, M.W., & Vaughan, D.S. (2002). Promoting academic improvement: organizational and administrative dynamicsthat support student assessment. In T.W. Banta (Ed.) & Associates Building a scholarship of assessment, 26-46.San Francisco: Jossey-Bass.

Samuelson, P. A. (1976). Optimal compacts for redistribution. In R. E. Grieson (Ed.), Public and urban economics: Essays in honor of William S. Vickrey, 179-190. Lexington, MA: Lexington Books.

Savage, L. J. (1951). The theory of statistical decision. Journal of the American Statistical Association, 46, 55-67.


Slaughter, S., & Rhoades, G. (2004). Academic capitalism and the new economy: Markets, states, and higher education. Baltimore, MD: Johns Hopkins University Press.

Twigg, C. A. (June 2005). Course redesign improves learning and reduces cost. The National Center for Public Policy and Higher Education Policy Alert. (Retrieved September 10, 2005 from http://www.highereducation.org/reports/pa_core)

U.S. Department of Education. (September 2006). A test of leadership: Charting the future of higher education – A report of the Commission appointed by Secretary of Education Margaret Spellings. Jessup, MD: Author. (Retrieved October 5, 2006 from http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf)

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass.

Williams, J. (2006). The pedagogy of debt. College Literature, 33(4), 155-169.

About the Authors

Dr. Fernando F. Padró
Fernando Padró specializes in quality assurance, higher education systems, and faculty governance. His research interests include institutional quality assurance at universities, the role of accreditation, and the organizational psychology of universities. He is a former Baldrige National Quality Award Examiner and a Project AQIP reviewer for The Higher Learning Commission.

Dr. Marlene Hurley
Dr. Hurley has more than 20 years of experience as a science educator at the secondary and university levels. She teaches interdisciplinary education studies and science to degree-seeking adult education students at the State University of New York, Empire State College. Dr. Hurley also conducts science education consultations and program evaluations for informal science institutions, in addition to her ongoing interdisciplinary and historical research.


