This article was downloaded by: [University of California Santa Cruz]
On: 22 October 2014, At: 20:57
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Assessment & Evaluation in Higher Education
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/caeh20

Characterising programme-level assessment environments that support learning
Graham Gibbs and Harriet Dunbar-Goddet, University of Oxford, UK
Published online: 06 Jun 2009.

To cite this article: Graham Gibbs & Harriet Dunbar-Goddet (2009) Characterising programme-level assessment environments that support learning, Assessment & Evaluation in Higher Education, 34:4, 481-489, DOI: 10.1080/02602930802071114

To link to this article: http://dx.doi.org/10.1080/02602930802071114
Assessment & Evaluation in Higher Education
Vol. 34, No. 4, August 2009, 481–489
ISSN 0260-2938 print/ISSN 1469-297X online
© 2009 Taylor & Francis
DOI: 10.1080/02602930802071114
http://www.informaworld.com
Characterising programme-level assessment environments that support learning
Graham Gibbs* and Harriet Dunbar-Goddet
University of Oxford, UK

(Received 12 December 2007; final version received 5 March 2008)
This article outlines a methodology for characterising features of programme-level assessment environments so that the relationship between features of the assessment environment and students' learning response can be studied. The methodology was developed through the detailed case study of nine undergraduate degree programmes: one in each of three contrasting discipline areas in each of three contrasting universities. Each case study involved examination of course documentation, interviews with academics and interviews with students, following which each degree programme was coded in relation to a range of features of the assessment environment, such as the proportion of marks derived from examinations and the volume and timeliness of feedback on assignments. Programmes were found to differ profoundly in terms of variables that are known to have implications for student-learning processes. They also differed widely in the extent to which they illustrated the application of conventional wisdom about curriculum design, embodied in national quality assurance guidelines and the Bologna Agreement. Programmes were found to have either a high volume of summative assessment or a high volume of formative-only assessment, but never both at the same time. Programmes also differed in the mechanisms used to make goals and standards clear, having either highly explicit curriculum design or high volumes of written and oral feedback, but never both at the same time. The findings suggest that there are distinctive programme-level assessment environments that operate in quite different ways despite all programmes studied being subject to the same quality assurance code of practice.
Keywords: assessment; student learning; degree programme
Introduction
The use of the Course Experience Questionnaire (Ramsden 1991) is based on the assumption that students respond in their approach to studying to global features of their entire learning environment rather than responding only to their teachers and to what goes on in classes or individual course units. However, studies of the way assessment affects student learning have tended to focus on small-scale innovation within individual course units rather than on the characteristics of assessment environments as experienced by students at the programme level. External quality assurance in Europe (such as the Quality Assurance Agency in the UK) usually embodies assumptions about what makes for a good assessment regime. Yet, at the programme level, assessment regimes appear to differ widely between institutions in relation to these assumptions. This raises questions about what features of assessment institutional quality assurance systems should focus on. The
*Corresponding author. Email: [email protected]
appropriateness of conventional foci has been challenged (e.g. Elton and Johnston 2002). In particular, quality assurance has tended to focus on measurement and standards (e.g. explicit criteria and the alignment of goals with summative assessment) rather than on features of assessment that are known to improve student learning (such as formative assessment and feedback).
Previous work has outlined a methodology for studying the impact of assessment regimes on student learning (Gibbs 2002), identified conditions under which assessment supports student learning at the course level (Gibbs and Simpson 2004), reported the development of the Assessment Experience Questionnaire (AEQ) (Gibbs and Simpson 2003) for measuring the impact of assessment on the student-learning experience and reported use of the AEQ in contrasting institutional environments (Gibbs, Simpson, and Macdonald 2003). The work reported here is part of a study that used a revised form of the AEQ (Dunbar-Goddet and Gibbs, in press) to examine the impact on student learning of programme-level assessment regimes in three subject areas in each of three contrasting institutional environments, as a pilot for an intended national-scale study. The component of that study reported here concerns a methodology for categorising programme-level assessment environments in terms of nine key assessment characteristics. If a robust methodology can be established for categorising assessment environments, then it can be used to examine the effects of assessment environments on student learning, and this can inform what should be appropriate foci for quality assurance.
Methodology
Nine aspects of programme-level assessment environments were identified for study, either because they are characteristics emphasised in quality assurance regimes (such as making goals and standards explicit and aligning assessment with goals, which tends to have the consequence of increasing the variety of assessment methods and reducing reliance on examinations) or because they are known to be important for the quality of student learning (such as the volume and timeliness of feedback). The nine characteristics were:
● the percentage of marks derived from examinations;
● the variety of assessment methods;
● the volume of summative assessment;
● the volume of formative-only assessment;
● the volume of oral feedback;
● the volume of written feedback;
● the timeliness of feedback;
● the explicitness of goals, criteria and standards; and
● the extent of alignment of assessment with learning outcomes.
It should be noted here that 'summative assessment' refers to assessment practices that may or may not also involve a formative intention or component, while 'formative-only assessment' refers to practices where the intention is to provide feedback without allocating marks. It is acknowledged that some individual teachers may vary in the way they implement such practices: for example, offering to provide oral feedback on a student's examination performance, or adding an indicative grade that does not contribute to the student's degree classification to tutorial feedback on an essay. There is inevitably also variation in the effectiveness with which formative-only or summative assessment practices are implemented by individual teachers and in individual course units. Such variation
is impracticable to document and is ignored in this study. What is categorised here is the assessment pedagogy as stated in documentation and as explained by directors of study and others as programme-level practice, as described in the methodology below. Evidence reported elsewhere (Gibbs and Dunbar-Goddet 2007) suggests that there are substantial variations in student response that are associated with programme-level characteristics of assessment environments as characterised by the methodology reported here, and particularly in response to the volume of formative-only or summative assessment. These variations in student-learning response can be identified regardless of any local teacher-level or course-unit-level variation in the form of implementation or variation in its effectiveness.
Undergraduate degree programmes were selected for study in order to maximise the likelihood of variation in assessment environments, so that the methodology that emerged could be used in as wide a range of contexts as possible. Three very different types of university were chosen:
(1) Oxbridge, because of its traditional use of examinations for summative assessment, its provision of regular assessment through what are normally intended to be formative tutorials and its strict separation of formative from summative assessment. Oxbridge has also engaged to only a limited extent with modern curriculum design involving the specification of learning outcomes and criteria. Oxbridge is also commonly characterised by the study of a single subject in depth and relatively little choice of course units (though there are exceptions to this pattern), and so assessment is usually designed more at the level of the whole programme than at the level of the individual course unit. Oxbridge programmes do not normally have a 'credit-accumulation' system in which students on the same programme may experience widely different kinds of assessment in different course units.
(2) A 'post-1992' university with a background as a Polytechnic, in which curriculum design developed under the Council for National Academic Awards in the 1970s and 1980s. Such universities characteristically have credit-accumulation systems with a considerable variety of assessment between course modules and a greater use of coursework assessment (and consequently a smaller proportion of marks deriving from examinations). The volume of assessed coursework assignments may be high, partly in order to capture student time and effort, and formative assessment and summative assessment tend to be combined in the same assignments. There are commonly explicit (and standardised) approaches to course descriptions, required by rigorous institutional quality assurance regulations, for the purpose of course approval and review. In particular, courses are likely to have learning outcomes and assessment criteria explicitly stated, with the form of assessment explicitly linked to individual learning outcomes. Curriculum design is likely to comply with the Bologna Agreement in this respect. As students may often collect credits across subject areas, there may be more emphasis on the design of assessment within individual course modules than on the coherence or integrity of assessment at the level of the programme.
(3) A 'pre-1992' university with the kind of curriculum and assessment system associated with a traditional research-intensive environment, in which competitively selected students are not expected to need intensive support and teachers are not expected to provide it, given their research priorities. The emphasis on assessment by examinations may resemble that in an Oxbridge degree programme, without there being as heavy a use of coursework assessment or formative assessment as in
a post-1992 university. In terms of explicitness and alignment, curriculum design is likely to be somewhere in between the Oxbridge and 'post-1992' curricula.
Only one institution of each type was studied, and so generalisations about characteristics of assessment environments associated with institutional types should be made with great caution. The intention in this article is to test the methodology for characterising assessment environments rather than to generalise about the characteristics of institutional types.
Three subject areas were selected, again to maximise the variety of assessment environments:
(1) Science, which often has a larger number of smaller assignments or tests (such as problem sheets and laboratory reports) as well as final examinations.
(2) Humanities, which often has extensive written feedback on essays and a limited variety of assessment.
(3) Applied social science, which is often less traditional than the humanities and which often has a greater variety of forms of assessment to address the wider variety of learning outcomes that span academic and professional goals.
One science, one humanities and one applied social science undergraduate degree programme were selected in each of the three types of institution: nine in all. Each was a three-year undergraduate programme that could be studied as a Single Honours programme.
Each degree programme was visited to elicit the co-operation of the 'Director of Studies' (or their equivalent) and to obtain course documentation that outlined the assessment system. Documents describing both the degree as a whole and individual course units were obtained and analysed. An initial interview was undertaken with the Director of Studies to explain the rationale of the assessment system and the meaning of assessment terminology and conventions evident in the course documentation. Follow-up contacts were made to clarify assessment regulations, to understand variations between course units and to understand what a typical pattern of study of course units would consist of for a student within the degree programme. Typical samples of marked coursework were obtained from a range of course units (in each year of the programme, and both compulsory and optional) and studied in order to estimate the average volume of written feedback. The volume of oral feedback was estimated from course descriptions and information about class sizes: for example, a scheduled feedback session of one hour in which four students took part would be estimated as 15 minutes of oral feedback per student. Informal oral feedback that might take place in a laboratory or on a field trip was excluded from the analysis as its volume was too difficult to estimate with any accuracy. Finally, a complete description of the assessment environment for the degree programme was checked with the Director of Studies for accuracy.
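The estimation rule described above (a scheduled session's time divided equally among the students taking part) can be sketched as a small function. The function name and structure are illustrative assumptions for this sketch, not part of the study's instruments.

```python
def oral_feedback_per_student(session_minutes: float, group_size: int) -> float:
    """Estimate formal oral feedback per student, in minutes.

    Follows the convention described in the methodology: a scheduled
    feedback session is divided equally among the students taking part.
    """
    if group_size < 1:
        raise ValueError("group_size must be at least 1")
    return session_minutes / group_size

# The article's example: a one-hour session shared by four students
# yields an estimate of 15 minutes of oral feedback per student.
print(oral_feedback_per_student(60, 4))  # → 15.0
```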
Coding categories
Once full quantitative and qualitative descriptions of all nine programmes were checked as accurate, and the range of variation established, coding categories (High, Medium or Low, on each variable) and their boundaries were devised with the goal of distinguishing between the programmes, so that there was at least one example of a programme coded as High, Medium or Low for each variable. For example, the number of times student work was marked (with the mark contributing to the degree classification) ranged between
11 and 61. By setting the coding boundaries appropriately, the nine programmes could be categorised as in Table 3. The coding boundaries are arbitrary except in that they succeed in distinguishing between the programmes. The qualitative categories were similarly defined in order to distinguish between programmes. Once defined, the qualitative category descriptions were tested by independent judges to ensure that they could make the same coding decisions, given the course documentation. Category definitions were redefined to lessen ambiguity where there were discrepancies between judges. No controlled trial of inter-rater reliability was conducted.
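The quantitative side of this coding step amounts to binning a count against fixed boundaries. As a sketch, the example below uses the Table 1 boundaries for the volume of summative assessment (mark allocated fewer than 15 times = Low, 15–40 times = Medium, more than 40 times = High); the function itself is a hypothetical illustration, not the authors' coding instrument.

```python
def code_summative_volume(times_marked: int) -> str:
    """Code the volume of summative assessment as Low/Medium/High.

    Boundaries follow Table 1: mark allocated fewer than 15 times is
    'Low', 15-40 times is 'Medium', more than 40 times is 'High'.
    """
    if times_marked < 15:
        return "Low"
    if times_marked <= 40:
        return "Medium"
    return "High"

# The two extremes quoted in the text for the nine programmes:
print(code_summative_volume(11))  # → Low
print(code_summative_volume(61))  # → High
```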
The full definitions of the coding categories can be seen in Table 1. The range of characteristics of assessment environments was found to be wide.
Table 2 summarises the minimum and maximum for each of the assessment characteristics that it was possible to measure quantitatively. There was found to be roughly 6 times the proportion of marks for coursework in one programme than in another, 9 times as many pieces of work marked, 67 times as much formative-only assessment, 4 times as much written feedback, 23 times as much oral feedback and 28 times the delay in receiving feedback. These are extraordinarily wide variations which, had they been evident in the relative volumes of teaching, would surely have caused a national scandal. The first conclusion of this study is that quality assurance does not seem to have constrained variation in assessment regimes, ensured that quality assurance requirements are met (such as a variety of assessment methods aligned to goals) or ensured that characteristics known to support learning (such as formative assessment and frequent, prompt feedback) are evident. One wonders what the variation might have been in the absence of a quality assurance system.
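The multiples quoted above follow directly from the minima and maxima in Table 2; a short check, using only values from that table, reproduces each figure by rounding the max/min ratio to the nearest whole number.

```python
# Minimum and maximum values observed across the nine programmes,
# taken from Table 2 of the article.
ranges = {
    "proportion of marks from coursework (%)": (17, 100),
    "pieces of work marked": (11, 95),
    "formative-only assessments": (2, 134),
    "words of written feedback": (2700, 10350),
    "hours of oral feedback": (3, 68),
    "days' delay before feedback": (1, 28),
}

for name, (lo, hi) in ranges.items():
    print(f"{name}: roughly {round(hi / lo)} times variation")
# Prints roughly 6, 9, 67, 4, 23 and 28 times respectively,
# matching the multiples quoted in the text.
```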
Table 3 shows how each of the nine degree programmes was categorised. The three types of university can be seen to have quite distinctive assessment environments. Oxbridge is relatively high in terms of the percentage of marks from examinations, the volume of formative-only assessment and the volume of oral feedback, and relatively low in terms of the variety of assessment, the volume of summative assessment, the explicitness of standards and the alignment of goals and assessment. In contrast, the pattern of assessment features at the post-1992 university is a mirror image on all these features. The pre-1992 university is, for each of these discriminating features, somewhere between these two extremes. Only in terms of the volume of written feedback do the Oxbridge and post-1992 assessment environments not differ markedly.
These institutional characteristics are fairly consistent across the three disciplines within an institution. There is only one case (out of 27 groups of three disciplines within an institution for each characteristic) where the three disciplines within a university differ by more than one category for any feature of the assessment environment. In eight cases the pattern of assessment environment features is identical across the three disciplines within a university. It is clear that there are institutional assessment characteristics that are evident across disciplines. These may be built into local quality assurance regulations or guidelines, or they may be local traditions. In the post-1992 institution a proposal for a module with 100% assessment by examination would be frowned upon, while in the Oxbridge institution anything other than a very high proportion of marks being derived from examinations would require considerable negotiation. The high volume of formative-only assessment and oral feedback at the Oxbridge institution is a reflection of its use of very small group teaching, during which students present their essay or other work for discussion. This is seen as a teaching method and is not even described in documentation as part of assessment, but it is a method that is universally adopted, across all disciplines, as a defining characteristic of the institution.
Table 1. Definitions of 'high', 'medium' and 'low' for each characteristic of assessment environments.

Percentage of marks from examinations
  Low: below 40%. Medium: between 40% and 70%. High: more than 70%.
Variety of assessment methods
  Low: 1–3 different methods. Medium: 4–6 methods. High: 6+ methods.
Volume of summative assessment
  Low: mark allocated fewer than 15 times. Medium: 15–40 times. High: more than 40 times.
Volume of formative-only assessment
  Low: fewer than 15 times. Medium: 15–40 times. High: more than 40 times.
Volume of (formal) oral feedback
  Low: less than 15 hours. Medium: 15–40 hours. High: more than 40 hours.
Volume of written feedback
  Low: less than 3000 words. Medium: 3000–6000 words. High: more than 6000 words.
Timeliness: average days after submission before feedback provided
  Low: more than 14 days. Medium: 8–14 days. High: 1–7 days.
Explicitness of criteria and standards
  Low: explicit criteria and standards rare and/or nebulous; marks or grades arrived at through global judgement in a tacit way; no effort to enable students to internalise criteria and standards.
  Medium: criteria for some assignments and exams; weak link to marks or grades; little effort to enable students to internalise criteria and standards.
  High: clear criteria for most or all assignments and exams; link made to grades; effort made to enable students to internalise criteria and standards.
Alignment of goals and assessment
  Low: learning outcomes rarely or weakly specified at either programme level or course level; very weak or rare link between learning outcomes and choice of assessment methods; no explicit link between learning outcomes and allocation of proportions of marks; only overall grades recorded.
  Medium: learning outcomes specified at programme level but weakly specified at course level; no explicit link between learning outcomes and allocation of proportions of marks; only overall grades recorded.
  High: learning outcomes specified at programme level and for most or all courses; documentation shows how each assessment links to each learning outcome at the course level; some link to marking procedures; student performance recorded in relation to outcomes.
Patterns in assessment characteristics
A number of patterns of assessment characteristics are visible in Table 3:
● The extent of alignment of assessment with goals is inversely related to the percentage of marks from examinations.
● Where there is a greater percentage of marks from examinations, there is less variety of assessment methods.
● Where there is a greater percentage of marks from examinations, there is less summative assessment and more formative-only assessment.
● Where the volume of summative assessment is low, the volume of formative-only assessment is high. There are no examples of an assessment system high on the volume of both formative-only and summative assessment, or low on the volume of both. It is possibly the case that a programme can afford one or the other, but not both.
● Assessment that is high on alignment with goals was only found where there is a greater variety of assessment methods and a lower percentage of marks from examinations.
● Programmes tend to be characterised either by a high level of explicitness of standards or by a high volume of oral feedback, but not both at the same time. These may in practice be alternative ways to make standards clear to students.
Table 3. Characteristics of the Humanities (H), Science (S) and Applied Social Science (SS) assessment environments at the three university types in terms of nine assessment variables.

Feature of assessment environment: Oxbridge (H, S, SS); Pre-1992 (H, S, SS); Post-1992 (H, S, SS)
Percentage of marks from examinations: Hi, Hi, Hi; Med, Med, Hi; Lo, Lo, Lo
Variety of assessment methods: Lo, Lo, Med; Lo, Hi, Hi; Med, Hi, Hi
Volume of summative assessment: Lo, Lo, Lo; Med, Hi, Med; Med, Hi, Hi
Volume of formative-only assessment: Hi, Hi, Hi; Med, Med, Lo; Med, Lo, Lo
Volume of (formal) oral feedback: Hi, Hi, Hi; Lo, Med, Lo; Lo, Lo, Lo
Volume of written feedback: Hi, Hi, Med; Lo, Med, Med; Med, Hi, Med
Timeliness of feedback: Hi, Hi, Hi; Med, Lo, Lo; Med, Lo, Med
Explicitness of standards: Lo, Med, Med; Hi, Hi, Med; Hi, Med, Hi
Alignment of assessment: Lo, Lo, Med; Med, Med, Med; Med, Hi, Hi
Table 2. Range of characteristics of assessment environments between degree programmes.
Characteristic of assessment environment: minimum to maximum
Percentage of degree marks derived from examinations: 17% to 100%
Percentage of degree marks derived from coursework: 17% to 100%
Total number of times work marked per student: 11 to 95
Variety of assessment methods: 2 to 18
Total number of formative-only assessments per student: 2 to 134
Total number of words of written feedback per student: 2,700 to 10,350
Total number of hours of oral feedback per student: 3 to 68
Average number of days between submission of assignment and feedback: 1 to 28
The relationship between the volume of summative assessment and the volume of formative-only assessment is displayed in Figure 1. Each of the nine points on the graph in Figure 1 represents a degree programme. There is a greater similarity between programmes within an institution than between disciplines across institutions. Figure 1 shows that there is a trade-off between the volume of summative assessment and the volume of formative-only assessment, with no examples of a programme that has a high volume of both.
Conclusions
Whether or not the patterns in assessment environments described above are typical of the types of institutions studied, or typical of UK higher education as a whole, would require a larger-scale study to examine. This article is concerned with establishing whether the methodology it uses is capable of distinguishing between assessment environments, and in that it succeeds. It identifies substantial differences between the characteristics of different assessment environments and identifies patterns between those characteristics that are associated with the institution and evident across disciplines.
Features of assessment environments appear to cluster into highly distinctive patterns. Institutions may emphasise certain features in quality assurance guidelines, or they may be emphasised through traditional patterns of teaching as well as through patterns of assessment. Some of these patterns are logically connected. For example, if alignment of assessment and learning outcomes is stressed, within a modular course, this almost inevitably increases the variety of assessment, increases the use of summatively assessed
[Figure 1 here: a scatter plot with 'Number of summative assessments' (0–100) on the x-axis and 'Number of formative-only assessments' (0–160) on the y-axis, one point per degree programme.]

Figure 1. The relationship between the volume of summative assessment and the volume of formative-only assessment, across nine degree programmes. Note: (■) Oxbridge; (▲) pre-1992; (◆) post-1992.
coursework, increases the total number of summative assessments and, in doing so, reduces the resources available for formative-only assessment.
Use of the methodology is recommended for studying the impact of features of assessment environments on student learning. A study of the relationship between the features of the assessment environments reported here and students' learning experience, as evident in scores on the AEQ and in focus groups, is reported elsewhere (Gibbs and Dunbar-Goddet 2007).
Acknowledgements
The study reported in this article was funded by the Higher Education Academy. Chris Rust and Sue Law contributed to the planning and to the interpretation of the findings, and Gill Turner contributed to analysis of the data. The authors also acknowledge the helpful contribution from the staff and students of the participating institutions.
Notes on contributors
Graham Gibbs is a senior visiting researcher at the Oxford Learning Institute, University of Oxford.
Harriet Dunbar-Goddet is a research officer in the Department of Education, University of Oxford.
References
Dunbar-Goddet, H., and G. Gibbs. In press. A research tool for evaluating the effects of programme assessment environments on student learning: The Assessment Experience Questionnaire (AEQ). Assessment & Evaluation in Higher Education.
Elton, L., and B. Johnston. 2002. Assessment in universities: A critical review of research. Report to the Generic Centre of the LTSN network.
Gibbs, G. 2002. Evaluation of the impact of formative assessment on student learning behaviour. European Association for Research into Learning and Instruction. Newcastle: Northumbria University.
Gibbs, G., and H. Dunbar-Goddet. 2007. The effects of programme assessment environments on student learning. York: Higher Education Academy.
Gibbs, G., and C. Simpson. 2003. Measuring the response of students to assessment: The Assessment Experience Questionnaire. In Proceedings of the 11th International Improving Student Learning Symposium, Hinckley.
Gibbs, G., and C. Simpson. 2004. Conditions under which assessment supports student learning. Learning and Teaching in Higher Education 1: 3–31.
Gibbs, G., C. Simpson, and R. Macdonald. 2003. Improving student learning through changing assessment—A conceptual and practical framework. European Association for Research into Learning and Instruction Conference, Padova, Italy.
Ramsden, P. 1991. A performance indicator of teaching quality in higher education: The course experience questionnaire. Studies in Higher Education 16, no. 2: 129–50.