
Higher Education 41: 221–238, 2001. © 2001 Kluwer Academic Publishers. Printed in the Netherlands.


The reflective institution: Assuring and enhancing the quality of teaching and learning¹

JOHN BIGGS
Department of Psychology, The University of Hong Kong, Hong Kong (E-mail: [email protected])

Abstract. Three definitions of “quality” have entered the quality assurance (QA) debate: quality as value for money, quality as fit for the purpose of the institution, and quality as transforming. The first is pivotal for retrospective QA, which sees QA in terms of accountability and conforming to externally imposed standards. The last two are pivotal for prospective QA, which sees QA as maintaining and enhancing the quality of teaching and learning in the institution. In this paper, the reflective practitioner is taken as the model for prospective QA. Three stages are involved in institutional reflective practice: articulating an espoused theory of teaching, the quality model (QM); continually improving on current practice through quality enhancement (QE), in which staff development should play an important role; and making quality feasible (QF), by removing impediments to good teaching, which often arise through distorted priorities in institutional policy and procedures. These three stages, QM, QE, and QF, are essential ingredients in prospective QA.

Keywords: aligned teaching, criterion-referenced assessment, quality assurance, quality enhancement, quality feasibility, reflective practice, staff development

Assuring quality: In retrospect and in prospect

Assuring and enhancing the quality of teaching and learning in universities is currently of major concern. Harvey and Green (1993) distinguish three definitions of quality that are relevant to the issue of quality assurance (QA): as value for money, as fit for the espoused purpose, and as transforming.

1. Quality as value for money. A “quality” institution in this view is one that satisfies the demands of public accountability. It produces, for example, more graduates for fewer public dollars and more peer-reviewed publications per capita of academic staff, has more Ph.D.s on its staff, and has a strategic plan that signals high levels of self-funded activities.

2. Quality as fit for the purpose. The “purpose” is that of the institution. Universities have several purposes, with teaching and research the most important. My concern here is restricted to the purpose of getting students to learn effectively, and to accrediting that they have learned to publicly recognizable standards. The basic question then for QA is: Are our teaching programmes producing the results we say we want in terms of student learning?

3. Quality as transforming. Quality teaching transforms students’ perceptions of their world, and the way they go about applying their knowledge to real-world problems; it also transforms teachers’ conceptions of their role as teacher, and the culture of the institution itself.

QA may be retrospective or prospective, depending on the kind of quality to be assured. Retrospective QA looks back to what has already been done and makes a summative judgment against external standards. The agenda is managerial rather than academic, with accountability as a high priority; procedures are top-down and bureaucratic. This approach, widely used in the emerging universities in Australia, New Zealand, and the United Kingdom (Liston 1999), is, despite the rhetoric, not functionally concerned with the quality of teaching and learning, but with quantifying some of the presumed indicators of good teaching and good management, and coming to some kind of cost-benefit decision.

Prospective QA is concerned with assuring that teaching and learning fit the purpose of the institution now, and will continue to do so in the future. It also encourages the continuing upgrading and improvement of teaching through quality enhancement (QE). The aim is to establish a teaching system that meets these requirements.

The distinction between retrospective and prospective QA is fundamental. Retrospective QA derives from the Thatcher Government’s demands in the United Kingdom for accountability, the framework for which was provided by the Jarratt Committee (Jarratt 1985) and which, duly modified, is the approach most commonly used today, both within and beyond the UK. While the proponents of retrospective QA talk as if they are concerned with educational quality in the sense of “fit for the purpose”, the procedures adopted address “value for money”, and are frequently counter-productive for quality in the sense of providing rich teaching contexts and enhanced learning outcomes. Most performance indicators concentrate on administrative procedures, rather than on “the stuff of the academic enterprise. . . . It is not clear how these procedures . . . can have any bearing whatsoever on the quality of what universities are” (Goodlad 1995, p. 9). Likewise, Seymour (1993) points out that, because quality resides not in any one performance indicator but in the way the system as a whole works, individual indicators do not give a picture of the whole, which is what matters. For example, two key indices of a “quality” institution laid down by the Hong Kong University Grants Committee (2000) are “salary on graduating” and “employer satisfaction”. A recent book on quality assurance (Liston 1999) has two references to “teaching”: “as best practice” (less than one page), and “teaching qualifications” (the answer to poor teaching in universities, one paragraph).

Prospective QA is not concerned with quantifying aspects of the system, but with reviewing how well the whole institution works in achieving its mission, and how it may be improved. This is analogous to what an individual reflective practitioner does (Schön 1983). Like the practitioner, the institution must operate from an espoused theory of teaching, based on “the public scholarship of teaching” (Boyer 1990), and try to match practice to the theory. An individual teacher, for example, might ask:

1. What is my espoused theory of teaching?
2. Is my current practice in keeping with my theory? How can my theory help me teach more effectively?
3. What within myself or in my context is preventing me from teaching the way I should be?

An institution, or a system, should be asking the same questions of itself. These questions define three aspects of QA:

1. Quality Model (QM). The institution, like the individual, needs to make explicit the espoused theory that should be driving teaching-related decisions. The QM may be derived from constructivism (Biggs 1999), from phenomenography (Bowden and Marton 1998; Prosser and Trigwell 1999), or from any theory of learning that generates consistency and gives access to the considerable knowledge base on teaching from which the QM should be derived.

2. Quality Enhancement (QE). The institution needs not only to design its teaching delivery system in accordance with its espoused theory, but also to establish built-in mechanisms that allow it, like the individual reflective teacher, to continually review and improve current practice. New content knowledge, educational innovations, a changing student population, and changing conditions in the institution and in society all make such review necessary.

3. Quality Feasibility (QF). What can be done to remove the impediments to quality teaching? This is a question that institutions rarely ask, although individual expert teachers continually do. It is one thing to have a model for good teaching, but if there are institutional policies or structures that actually prevent the model from operating effectively, then they need to be removed.

In sum, quality is seen here as based on “quality as fit for the purpose”. A quality institution is one that has high-level aims that it intends to meet, that teaches accordingly, and that continually upgrades its practice in order to adapt to changing conditions, within resource limitations. I expand on these three aspects of QA below.


The quality model (QM): A generic theory of teaching

Teachers tend to hold different theories of teaching at various stages in their careers (Biggs 1999). These theories are built on two basic conceptions of teaching: teaching as transmitting knowledge, and teaching as facilitating learning (Prosser and Trigwell 1999). They postulate causes for variation in student learning outcomes that lay more or less responsibility on the teacher, and are ordered into three levels of increasing complexity.

Level 1. Focus: What the student is. Teachers using a Level 1 theory are struck by student differences, as most beginning teachers are. They see students as easily teachable, or not. They assume a teacher-centred, transmission model of teaching. The teacher is the guardian of knowledge, whose responsibility is to know the content well and to expound it clearly. It is then up to the student to attend lectures, to listen carefully, to take notes, to read the recommended readings, and so on. Differences in learning outcome occur because students differ in their ability, their motivation, their background, and so on. Thus, when teaching is not effective, it is seen as the students’ fault. Level 1 theory does not promote reflection, whereby the teacher asks the key generative question that all expert practitioners ask: “Is my present practice the best way of doing this?”

Level 2. Focus: What the teacher does. The Level 2 theory is also based on transmission, but of complex knowledge structures, which require skill in presenting to students, so that learning outcomes are now seen as more a function of how skillful the teacher is. Level 2 theory emphasizes what the teacher does: forward planning, good management skills, an armoury of teaching competencies, ability to use IT, and so on. Retrospective QA uses Level 2 theorising when it talks about teaching competencies and distinguished teacher awards (see below), as if focusing on what teachers do is in itself an index of student learning. In Level 2, means become ends.

Level 3. Focus: What the student does. Level 3 theory focuses not on teachers, but on teaching that leads to learning. Expert teaching in this sense certainly includes mastery of teaching techniques, but unless the appropriate learning takes place, it is an empty display. Tyler, fifty years ago, said that learning “takes place through the active behavior of the student: it is what he does that he learns, not what the teacher does” (Tyler 1949, p. 63). Likewise Shuell:

If students are to learn desired outcomes in a reasonably effective manner, then the teacher’s fundamental task is to get students to engage in learning activities that are likely to result in their achieving those outcomes (Shuell 1986, p. 429).

This last statement seems common sense, even bland, but the implications for the design of effective teaching are profound. First, we have to specify what the “desired outcomes” are, so that it is clear from the outset what students have to learn, and at what level of skill or understanding. Statements that such and such topics will be “covered” say only what students are to learn, not what level of understanding we want of them. Unless we stipulate the latter, teaching and assessment are left dangling. However, this does not preclude unexpected outcomes. Stipulating levels of understanding with verbs such as “hypothesize”, “reflect”, “generate”, and so on leaves the system open-ended. The challenge is to define what we want students to learn in ways that generate thinking about teaching and assessing.

Second, we need to arrange teaching/learning activities (TLAs) so that students are encouraged to do those things that make it likely the desired outcomes will be attained. Teaching methods should thus be aimed at the level of understanding required. “To cover” is satisfied by “mention in lectures”, which only requires students to listen and take notes. “To demonstrate by solving unseen problems”, on the other hand, requires that teaching not only present students with the requisite knowledge, but stretch their understanding of that knowledge with challenging situations.

Finally, we need to assess whether the outcomes have been attained, at varying levels of acceptability that are reflected in the grading system. The task of assessment is to confirm the levels of understanding that individual students have reached.

Level 3 thinking takes on board the full implications of the obvious: it is the students who do the learning. The teacher’s job is then to support students by aligning teaching methods, assessment tasks, and classroom climate to the skills and kinds of understanding that we want them to acquire. To do this effectively requires a theory of learning that is so articulated with the content that we can specify an appropriate framework for operationalizing teaching aims and objectives, and for making decisions about rich learning activities and assessment tasks.

The QM provides the framework for doing this. It has two aspects: a generic framework, which can be used institution-wide, and the specific decisions that framework generates for the particular content topics to be taught. For example, the SOLO taxonomy (Biggs and Collis 1982) is a generic framework for classifying learning outcomes, but its application to criterion-referenced assessment will differ according to the subject matter being taught (Biggs 1999).


In aligned teaching, where all components support each other, students are “trapped” into engaging in the appropriate learning activities, or, as Cowan (1998) puts it, teaching is “the purposeful creation of situations from which motivated learners should not be able to escape without learning or developing” (p. 112). A lack of alignment somewhere in the system allows students to escape with inadequate learning.

Aligned teaching is not a “method” as such, but a way of categorising a variety of methods. Problem-based learning (PBL) in professional education is an example. The aim of all professional education is to get students to solve professional problems, but in the traditional approach students are taught the knowledge they will probably need to solve a range of problems (and a lot of knowledge they will rarely if ever need), with some practicum experience. They are supposed to make the connection between knowledge and action themselves. In PBL, the teaching method is to present students with problems to solve, whereupon they learn how to acquire the knowledge and skills necessary to solve them, while the assessment is, inter alia, how well they solve them. The students are indeed “trapped”. They are faced with a case or professional problem that it is their responsibility to solve, and while they have their fellow students, teachers, and other resources to help them work out how this may be done, they are not let off the hook until they have done so. Another example of aligned teaching is the learning portfolio. The students are given the course objectives and the grading criteria, and are required to put examples of their learning in a portfolio and to justify their selection. This requirement prompts them to negotiate the teaching/learning activities that will provide them with their examples (Biggs 1999).

Any criterion-referenced assessment seeks to align objectives or aims and assessment tasks, but a fully aligned system additionally tunes the teaching methods, climate, institutional procedures, and so on, to the objectives. Thus, instead of holding teaching constant as lecture plus tutorial, methods are varied to suit what is to be learned. The principle of alignment is not in itself new. Lectures in many subjects are typically augmented with laboratories, field trips, practica, and the like, while the folk-lore of teaching has long held that good teachers practice what they preach. What is new is: (a) conceptualising aligned teaching within learning theory, in this case constructivism, hence “constructive alignment” (Biggs 1996a); (b) generating the beginnings of a technology for operationalizing it; and (c) suggesting that it be applied rigorously at the institutional level.

An aligned system in itself declares the quality of learning, because the grades state precisely how effective student learning has been from subject to subject, and monitoring from year to year informs as to the maintenance of standards. This seems to provide a rather more direct and visible assurance of quality than filling in forms, convening committees, and holding audits and accountability exercises. The next step is to ensure that quality is continually upgraded to meet changing conditions.

Quality enhancement: Improving learning and teaching

Quality enhancement (QE) is about getting teachers to teach better, which is what staff development is all about. While in the UK the rhetoric of the Quality Assurance Agency for Higher Education is encouraging (see website: http://www.qaa.ac.uk), staff development, which could actualize that rhetoric, is being minimized in many universities, not only in the UK but also in Australia and New Zealand. Liston’s (1999) recent book on quality assurance, for example, devotes less than two pages to staff development.

Typically, staff development is undertaken in workshops run by the staff development centre (these have various names; here I use the generic “SDC”), and it is usually left to individual teachers to decide whether or not to attend. This is the fundamental problem facing SDCs: the focus is on individual teachers, not on teaching. Staff development should focus on teaching within the whole institution, not on those individuals who present themselves at voluntary workshops, who are usually the good teachers anyway.

The role of staff development in QE is obvious, but the practical problems are enormous. The ideal situation is one in which the following conditions are satisfied:

1. The SDCs, like the institution as a whole, operate from the QM, a Level 3 espoused theory of learning and teaching. Too often SDCs are seen from a Level 2 theory as places providing tips for teachers, or as remedial clinics for poor or beginning teachers. Most recently, they are being replaced by training in educational technology, in the confused belief that if teachers are using IT then they must be teaching properly for the new millennium.

2. SDCs are involved in creating a learning environment throughout the institution. All central decisions that bear upon teaching and learning should involve the experts in teaching and learning. This might be thought to be self-evident, but it rarely happens.

3. Because the actual decisions on programmes, courses, and content are made in departments, SDCs should have a formal relationship with each teaching department. In this way, decisions relating to the setting up, design, and administration of courses and programmes can draw on the knowledge base of teaching within the chosen QM framework. Operating at the departmental level, with the course as target, means that the problem of the reluctant under-performing teacher is drastically redefined. Teaching is now the focus, not individual teachers.


If that is the ideal, it is rarely realized in practice. In large established universities, teaching has traditionally been organized within departments, and they guard that responsibility jealously. Many resist the notion that there are generic skills in teaching and assessing that apply whatever the content being taught. Some schools and faculties have set up their own SDCs: Medical Education, for example, has become a field in its own right, with its own internationally reviewed journals and knowledge base.

In the United States, SDCs in the sense discussed here are not common. In small institutions that can even be an advantage, in that a teaching-oriented administration can realistically set out to create a teaching environment based on Level 3 theory. Such a case is Alverno College, Milwaukee, where all policies and procedures are dedicated to optimal teaching and learning, and a clear and precise criterion-referenced system is in place across the whole institution. Here, the whole degree programme is expected to have measurable impacts on student learning, over and above passing individual courses (Alverno College Faculty 1994; Mentowski 2000).

Whether or not there is an SDC, the locus and the impetus for reflective practice must lie in the structure that services courses, usually the department. The task is to set up an articulated teaching system containing QE procedures. The assessment system is the key, because in it we make clear what students are to learn and, following from that, how best they might learn it (the teaching/learning context) and the evidence for their having learned it (the assessment tasks).

An appropriate assessment system requires:

1. A statement of the desired outcomes of learning, allowing for unexpected outcomes, expressed as target qualities and levels of performance, not as an accumulation of “marks” or percentages (see below). Where these outcome statements are expressed in various categories of acceptability (A, B, C, D, F), they become the framework for the grading system.

2. The teacher’s task is to “fill” the grading categories (e.g. A, or “High Distinction”) as appropriate for the subject taught. The departmental task is to have a general framework for grading, so that the A’s one teacher awards are as equivalent as can be, in the sought-for qualities, to those another teacher awards, allowing for subject differences.

3. Knowledge-sharing about teaching methods and assessment tasks. The default methods of teaching and assessing in many universities comprise lecture + tutorial, and course-work assignment + final exam. Alternatives that achieve better alignment should be explored, by pooling colleagues’ ideas and by consulting the SDC. A genuine sharing of problems and solutions through the lenses of the QM can lift the game of the whole department.


4. A review system. Once the grading categories are fixed, deviations from expectations need to be spotted, remedies proposed, and any changes monitored. A useful device for getting teachers to be self-critical is a peer review system, where colleagues sharing the same QM sit in on each other’s classes. Out of this can come ideas for action research, a methodology designed precisely to generate and evaluate in-context innovations (Elliott 1991). As a result of engaging in action research, teachers change their conceptions of teaching and teach more effectively (Kember 2000). It is important to keep track with data that reflect change, such as student feedback, samples of student learning outcomes, staff reports, performance statistics, and so on, which are kept in departmental archives.

5. Other. Departments are the best locus for arranging student feedback on teaching, not faculties or the central administration. Such questionnaires should be criterion-referenced and designed to be as informative as possible, both for the department and for the individual teacher (see below). However, students can be involved in QE in other ways too. Quality, after all, is to be found in their learning outcomes. In a criterion-referenced system this will be evidenced in the grade distributions, but finer detail would be desirable. Students could be interviewed about the quality of their learning experiences, and be asked to submit what they think are their best performances, which can also be placed in departmental archives.

The SDC, if there is one, can act as a resource and “critical friend” in any or all of the above.

In sum, QE cannot be left to the sense of responsibility or to the priorities of individual teachers. The institution must provide the incentives and support structures for teachers to enhance their teaching and, most importantly, must involve individuals in QE processes through their normal departmental teaching.

Quality feasibility: What impedes quality teaching?

I have tried to base the views expressed in the following section on research where possible, but with some topics I have had to rely on many years of experience, mostly in the Australian and Hong Kong university sectors.

QE will not be feasible if the institution is not purged of those factors that inhibit quality learning. This is the issue of quality feasibility (QF).


Assuring or diminishing quality? Some marginal procedures

Several common QA procedures may, in my experience, be two-edged.

1. External examiners

The traditional role of the external examiner in the British system is a time-honoured means of ensuring that similar standards operate across institutions. It is important to bring outside perspectives and contacts to bear, and to feel confident that one’s own standards are comparable to those elsewhere.

However, sometimes examiners are asked to comment on the assessments alone. Submitting examination papers and student performances to someone who does not – and who cannot – totally comprehend the context of teaching can distort the assessment process, particularly if the examiner requires the examination questions to be changed well into the teaching of the courses concerned, as happens. Acceding to such requests may well destroy alignment, yet the pressure to do so is considerable in institutions where the examiner’s comments are seen and discussed outside the department concerned. Innovative assessment practices are discouraged, and assessment tasks restricted to those that can easily be understood out of context.

The positive values of external examiners can be achieved in other ways. Replace the word “examiner” with “consultant” and you have it: an outside advisor who can visit the department and give all the advice and help that an external examiner can give, without the distortions created when examiners perceive their brief as simply adjudicating the assessment of student products.

2. Validation panels

Accrediting and approving courses by external validation panels is a common QA procedure that has obvious value where staff are required to deliver new courses in directions in which they may have had little experience. In such cases, course accreditation provides useful scaffolding to ensure minimal standards. It can, however, block innovative teaching.

One danger is that validation panels can exert strong pressure to include more and more content. Each panel member thinks his or her own specialism must be given “adequate” (= intensive) treatment. Committees tend to resolve matters by including the lot, to the detriment of the students’ learning. The same effect can be achieved when the course director anticipates such pressures by overloading the curriculum from the start. Inevitably, the courses designed are those thought most likely to be approved, so course teams err on the conservative side: “Let’s get the validation over first!” Being innovative, for example by using the unique strengths of teaching staff, may be perceived as too risky. Yet the advantage of the university is precisely that it can strengthen teaching with the insights and knowledge of scholars at the cutting edge of their discipline.

Once the course has been approved, it may well turn out that the curriculum is unsuitable, or that the student intake changes, or that very recent research, post-validation, changes the perspective of a course. The danger is that while some minor changes can be dealt with immediately, major changes are either not allowed, because they were not in the validated documents, or have to go through more committees. Changing an already validated course can be very difficult, and some administrators discourage the attempt to do so.

In sum, then, the QA mechanisms of external examiners and course validation not only discourage improvisation, which is the hallmark of the expert teacher (Borko and Livingstone 1989), but also discourage QE itself, which actually requires that change be enacted after reflective review.

3. Distinguished teacher awards

DTAs have obvious merits. It is good to reward people for doing an outstanding job. Unfortunately, the message to undistinguished teachers easily becomes that distinguished teachers are born, not made: they are a rare species, against whom ordinary teachers cannot be expected to compete.

In short, DTAs encourage a Level 2 theory of teacher-as-performer, even though distinguished teachers themselves tend to operate from Level 3, as reflective practitioners, ever striving to teach more effectively (Dunkin and Precians 1992). Reward the excellent teachers by all means, but if we want quality teaching at an institutional level, the focus should not be on the teacher, but on teaching. Stigler and Hiebert (1999), in analyzing videotapes of classroom teaching in three different countries, found that each culture developed its own "script" for teaching, and that what determined high-level learning outcomes was the script, not the particular actor delivering it. Giving Oscars to the actors is not likely to improve their scripts. It is thus unfortunate that one performance indicator in use for quality teaching is whether or not a DTA system is in place (e.g. the Hong Kong University Grants Committee 2000); such indicators can send entirely the wrong message.

4. Student feedback questionnaires (SFQ)

Many institutions have mandatory SFQs as summative evaluations at the end of each course, using standard questions across all courses, where the lecture is assumed to be the norm. Ratings then vary according to students' own conceptions of teaching, and penalise teachers using other methods (Kember and Wong 2000). Thus, a teacher using PBL would score low on "The lecturer is organized in presenting to the class . . .". When low ratings have a high cost, in terms of promotion or contract renewal, teachers are obviously discouraged from innovating. SFQs too emphasize the actor, not the script. They measure charisma, not teaching effectiveness in terms of improved student learning (Ware and Williams 1975). Used formatively, however, SFQs make eminent sense where questions are tailored to specific courses, on aspects the teacher wants feedback on, which is why the department should control SFQs (see above).

In short, some common QA procedures have the opposite effect to that intended. Those discussed above either belong to retrospective QA, emphasizing conformity to imposed standards, or encourage a Level 2 view of teaching.

While these procedures are no doubt well meaning, even if their effects are ambiguous, other institutional aspects are unequivocally negative.

A quantitative mind-set

Two broad sets of assumptions are used when people think about learning and teaching: quantitative and qualitative (Cole 1990). Quantitative assumptions reduce complex issues to units that can be handled independently, rather than as part of a larger interactive system; the curriculum becomes a collection of competencies, basic skills, facts, procedures and so on. The greatest effect is on assessment.

The fundamental problem is the misapplication of the measurement model of assessment, which was developed by psychologists to study individual differences on fixed characteristics such as intelligence, abilities, personality traits, and so on (Taylor 1994). The model is designed to allow accurate measurements of individuals, so that they can be compared with each other or with population norms. The results of the measurement must therefore be expressed along a numerical scale. This model is not designed to assess attainment, but is widely used for that purpose, with deleterious effects on student learning. I have dealt with this issue elsewhere (Biggs 1999), so here I shall be brief.

There are three areas where the quantitative mind-set can create problems: the assessment process, reporting results, and backwash effects on teaching and learning.

The assessment process. In quantitatively driven assessment, learning is reduced to a number along a scale, either as a subjective and arbitrary rating, or by counting knowledge units as correct or incorrect. Subsequently these figures are averaged. However, students don't learn "marks", they learn structures, concepts, theories, narratives, procedures, or performances of understanding, which are meaningful only when considered as an integrated whole. Analytic quantitative assessments are useful formatively, but when used summatively they miss the essential integrity of significant learning. Then when results are averaged, students with a high average score over most of the course can pass despite having failed a sub-section of it. This makes no educational sense at all. If a topic or task is important enough to be in the curriculum, it should be passed at some minimal level of understanding.

Reporting the results. When teachers are required to report results using a continuous scale such as percentages, it is easier, and usual, to mark quantitatively in the first place, although it is possible to grade qualitatively and then to convert into a continuous scale, and thus preserve criterion-referencing (Biggs 1999). When results are expected to "show a good spread" and to lie along a predetermined curve, however, criterion-referencing becomes operationally impossible. Grading on the curve cannot be justified on educational grounds; it occurs because it is convenient (see below).

Backwash. The effects of quantitative assessment on teaching and learning are negative (Biggs 1996b; Crooks 1988; Frederiksen and Collins 1989). Following are some of the messages students can receive:

• All ideas are equally important (get any fifty right and you pass)
• You can skip or slack on certain areas if you are doing well elsewhere
• The trees are more important than the wood
• Verbatim responses must gain marks

Students in their search for marks may easily fail to see the structures being learned; in counting the trees, they get lost in the wood. Disputes about grades become a niggling quibble about a handful of points here and there, which is demeaning for both student and teacher. Success or failure in norm-referenced assessment can be attributed to relative ability and luck, neither of which is under the student's control, while in criterion-referenced assessment, success or failure is defined in terms of the student's performance in relation to the objectives, and the remedy is more easily seen to be in the student's hands.

A particular form of non-aligned teaching is teaching as a selection device (Biggs 1996c). Many university teachers hold the view, deriving from the Han Dynasty in 4th Century BC China, that undergraduate teaching is a device for finding out who the real scholars are. To do this, you teach abstract content (the particular topics are not all that important), and then give a rigorous examination to find out who is best at learning it. The final assessment is not linked intrinsically to teaching; it is only necessary that it gives a good spread, especially at the top end. This is pure Level 1 theory based on the measurement model: the idea is not to assess how well prescribed content has been learned, but to measure stable student characteristics, such as capacity for abstract thought, diligence, ability to withstand pressure, and so on. When an institution stipulates grading on the curve, it is saying covertly that the institution's prime function is to discriminate between students, whatever the mission statement says explicitly.

Distorted priorities

Distorted priorities are a major source of nonalignment. Probably all institutions would put educational considerations as their top priority in their mission statements. However, there is an institution to run, which generates a set of administrative priorities. Bureaucrats want administration to run smoothly, to avoid public criticism, to anticipate and legislate for awkward cases, and so on. Their safest working assumption is that people are not to be trusted, and so a "Theory X" climate is established (McGregor 1960). The alternative climate is created by Theory Y, the assumption that people, students and teachers in this case, are to be trusted. A completely Theory X climate would be unbearable, a completely Theory Y climate unmanageable. Theories X and Y form a continuum, with innovative educators more towards Theory Y, administrators more towards Theory X. How the two sets of priorities are balanced is what separates a quality from a mediocre institution, a quality institution preferring to be biased towards establishing the optimal conditions for learning, a mediocre one towards administrative convenience.

Examples of distorted priorities are easy to find. The use of norm-referenced assessment generally has been mentioned. The particular case of grading on the curve allows administrators simply to decree that the top 25 per cent of graduates will achieve first class honours, and then to boast: "See here, all our departments are teaching to the same very high standard!" This is of course a confidence trick, but it is regrettably frequent. Likewise, timed, invigilated examinations are hard to justify educationally, but are useful logistically and for assuring the public that plagiarism is under control. Another distorted priority is assigning the most junior teachers to teach the large first year classes. The list is endless.

The balance between teaching and research priorities is frequently a source of distortion. Although many universities officially place equal emphasis on teaching and research, research is almost invariably perceived as the activity of greater prestige. Thus, in personnel decisions, research is rewarded more than teaching. Some department heads do not even recognize the scholarship of teaching, so that publications on research into the teaching of the very subject the department is charged to teach are not counted as "real" research. A different problem exists in the newer universities, upgraded from polytechnics and colleges. Their espoused and functional mission is teaching, but then a university-type funding system is thrust upon them, and these teachers have suddenly to learn a new game. The consequences can be frustration, shattered morale, poor research, strings of mediocre publications, and worse teaching, because it has necessarily been neglected. Everyone – students, teachers, institutions – stands to lose, because the funding system is badly out of alignment with the real picture.

Finally, let me mention, however briefly, the corporatization of universities, which illustrates misalignment at its starkest. Here commercial priorities and administrative structures have been imposed on institutions whose original function is to create and to disseminate knowledge because of a fundamental and intrinsic need to understand the world. The result has been a dramatic deterioration in the quality of teaching and of research (Coady 2000).

An Australian Government report (Dawkins 1987) told universities that small classes were "a poor use of resources", with the consequence that classes in many institutions became unmanageably large, many teachers incorrectly but understandably seeing mass lecturing and multiple-choice testing as the only options realistically open to them. The credit transfer system between universities discourages prerequisites and hence learning in depth, and encourages homogenised content and grade inflation. The private sector naturally funds the sort of research that will benefit the shareholders, not what needs doing to advance scholarship, and typically places restrictions on publishing the results, thereby privatising knowledge and halting academic careers.

In sum, whatever the particular impediments to quality, all such practices manifest poor alignment to the fundamental purpose of the institution. Where quality is defined as fitness for purpose, quality teaching means trying to enact the aims of the institution by setting up a delivery system that is aligned to those aims. In practice, however, many institutions in their policies, practices and reward systems actually downgrade teaching. Some of this is externally imposed, ironically by some aspects of QA itself, and by managerialism and the commercialisation of knowledge. Other practices fall into the category of institutional habits, largely unquestioned. Whatever the reasons for their existence, they need to be identified and minimized.

Summary and conclusions

Table 1 contrasts the attributes of retrospective and prospective QA. Retrospective QA is an accountability exercise that is conducted according to managerial priorities. Despite the rhetoric, retrospective QA actually damages teaching (Bowden and Marton 1998). Ironically, if prospective QA operates within budget, retrospective QA becomes redundant.


Table 1. Some contrasting attributes of retrospective and prospective quality assurance (QA)

                Retrospective                           Prospective

Quality as...   value for money, meeting                fitness for purpose,
                external standards                      transforming
Function        audit, status quo mechanisms            QM, QE, QF
Aim             meet externally imposed standards       meet own standards developed
                                                        internally
Priority        managerial, entrepreneurial             educational
Focus           on the past                             on present and future
Nature          top-down                                bottom-up
Climate         Theory X                                Theory Y
Model           deficit                                 systemic
Style           judgmental                              supportive
Framework       quantitative, closed                    qualitative, open
Use of data     summative                               formative

Prospective QA refers to a delivery system of teaching that fits what we know about best practice. Such a system is achieved by designing a quality model (QM), based on a Level 3 theory of teaching, to suit the particular institution's circumstances. Where teaching is tuned to the objectives as well as to the assessment tasks, and where the institutional infrastructure is prioritized toward best practice in teaching, a quality-assured system is in place.

QE is designed to improve the ongoing system, by helping teachers to teach better. There is an enormous body of knowledge on the scholarship of teaching that individual teachers and administrators cannot be expected to know and to apply. If they want a self-improving quality system, they will probably need expert help in achieving it. The SDC should be the system's means of using the scholarship of teaching in the institution to improve teaching, and its memory, recording the successes and failures of reflective practice for future reference. The SDC should be the conceptual centre of all components of prospective QA, but it rarely is, if ever.

Quality feasibility (QF), the removal of factors in the institutional climate or structures that are deleterious to learning and good teaching, seems to have received scant attention in the past. Such factors mostly result from confused or distorted priorities. If defensive individual teachers do not like to admit their priorities are wrong, neither do institutions. How can administrations admit that their policies and procedures are actually undermining their own mission statement? But if they want to be seen as quality institutions, they, like individual professionals, will have to engage in some reflective QF, painful though that might be.

Note

1. This paper is based on a Keynote Address, "Quality in teaching and learning", given to the 16th Annual Conference of the Hong Kong Educational Research Association, Hong Kong Institute of Education, 20–21 November, 1999.

References

Alverno College Faculty (1994). Student Assessment-as-Learning at Alverno College. Milwaukee: Alverno College Institute.

Biggs, J.B. (1996a). 'Enhancing teaching through constructive alignment', Higher Education 32, 347–364.

Biggs, J.B. (1996b). 'Assessing learning quality: Reconciling institutional, staff, and educational demands', Assessment and Evaluation in Higher Education 21, 3–15.

Biggs, J.B. (ed.) (1996c). Testing: To Educate or to Select? Education in Hong Kong at the Crossroads. Hong Kong: Hong Kong Educational Publishing Co.

Biggs, J.B. (1999). Teaching for Quality Learning at University. Buckingham: Open University Press.

Biggs, J.B. and Collis, K.F. (1982). Evaluating the Quality of Learning: The SOLO Taxonomy. New York: Academic Press.

Borko, H. and Livingston, C. (1989). 'Cognition and improvisation: Differences in mathematics instruction by expert and novice teachers', American Educational Research Journal 26, 473–498.

Bowden, J. and Marton, F. (1998). The University of Learning. London: Kogan Page.

Boyer, E.L. (1990). Scholarship Reconsidered: Priorities for the Professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Coady, T. (ed.) (2000). Why Universities Matter. Sydney: Allen & Unwin.

Cole, N.S. (1990). 'Conceptions of educational achievement', Educational Researcher 18(3), 2–7.

Cowan, J. (1999). On Becoming an Innovative University Teacher: Reflection in Action. Buckingham: Open University Press.

Crooks, T.J. (1988). 'The impact of classroom evaluation practices on students', Review of Educational Research 58, 438–481.

Dawkins, J. (1987). Higher Education: A Policy Discussion Paper. Canberra: Australian Government Printing Office.

Dunkin, M. and Precians, R. (1992). 'Award-winning university teachers' concepts of teaching', Higher Education 24, 483–502.

Elliott, J. (1991). Action Research for Educational Change. Milton Keynes: Open University Press.

Frederiksen, J.R. and Collins, A. (1989). 'A systems approach to educational testing', Educational Researcher 18(9), 27–32.

Goodlad, S. (1995). The Quest for Quality: 16 Forms of Heresy in Higher Education. Buckingham: Open University Press and The Society for Research into Higher Education.

Harvey, L. and Green, D. (1993). 'Defining quality', Assessment and Evaluation in Higher Education 18, 8–35.

Hong Kong University Grants Committee (2000). Letter to Universities (2 May, 2000).

Jarratt Report (1985). Report of the Steering Committee for Efficiency Studies in Universities. London: Committee of Vice-Chancellors and Principals.

Kember, D. (2000). Action Learning and Action Research: Improving the Quality of Teaching and Learning. London: Kogan Page.

Kember, D. and Wong, A. (2000). 'Implications for evaluation from a study of students' perceptions of good and poor teaching', Higher Education 39, 69–97.

Liston, C. (1999). Managing Quality and Standards. Buckingham: Open University Press.

McGregor, D. (1960). The Human Side of Enterprise. New York: McGraw Hill.

Mentkowski, M. (2000). Learning that Lasts. San Francisco: Jossey-Bass.

Prosser, M. and Trigwell, K. (1998). Teaching for Learning in Higher Education. Buckingham: Open University Press.

Schon, D.A. (1983). The Reflective Practitioner: How Professionals Think in Action. London: Temple Smith.

Seymour, D.T. (1993). On Q: Causing Quality in Higher Education. Phoenix, AZ: The Oryx Press.

Shuell, T.J. (1986). 'Cognitive conceptions of learning', Review of Educational Research 56, 411–436.

Stigler, J. and Hiebert, J. (1999). The Teaching Gap. New York: The Free Press.

Taylor, C. (1994). 'Assessment for measurement or standards: The peril and promise of large scale assessment reform', American Educational Research Journal 31, 231–262.

Tyler, R.W. (1949). Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press.

Ware, J. and Williams, R.G. (1975). 'The Dr. Fox Effect: A study of lecturer effectiveness and ratings of instruction', Journal of Medical Education 50, 149–156.