
Feedback: critiquing practice, moving forward

Helen Williams

Abstract

Despite attempts by higher education institutions to improve the quality of feedback on assessed work, dissatisfaction expressed by students remains visible. This paper draws upon the preliminary findings of a Higher Education Academy Collaborative Research Project on assessment feedback within two large political science and international relations departments. It offers a critical review of current feedback practices and processes and identifies four key issues – negativity, transferability, intelligibility and consistency – that require attention in order to deliver more effective assessment feedback. The paper also suggests practical ways forward in addressing these issues, highlighting in particular the importance of both structure and timing in helping to produce high quality feedback efficiently.

Introduction

Despite attempts by higher education institutions to improve the quality of feedback on assessed

work, dissatisfaction expressed by students remains visible.1 This paper draws upon a Higher

Education Academy (HEA) Collaborative Research Project on assessment feedback (Project

Grant GEN264) that took place between June 2013 and December 2014 within two large

political science and international relations (IR) departments in the United Kingdom (UK).2 It

offers a critical review of current feedback practices and processes, drawing from the results of

the coding of 400 pieces of feedback at the two institutions, together with data collected from a

series of student-led focus groups and an online survey. This extensive analysis identifies key

issues and themes – negativity, transferability, intelligibility and consistency – that require

attention if departments are to deliver more effective assessment feedback. The paper also

discusses some practical ways forward in addressing these issues, reflecting in particular on the

importance of both structure and timing in helping to produce high quality feedback efficiently.

Bridging the Gap between the Production and Implementation of Feedback

There is now a substantial body of literature on the crucial role that assessment feedback plays in

the development of effective learning (see for instance Irons, 2007; Frankland, 2007; Brookhart,

2008; Joughin, 2008; Carless, 2006; McInerney, Brown, and Liem, 2009; Orsmond and Merry,

2010; McDowell, Sambell, and Montgomery, 2012; Blair and McGinty, 2013; Boud and Molloy,


2013). Indeed, high-quality feedback has been identified as ‘the most powerful single influence

on student achievement’ (Beaumont, O’Doherty, and Shannon, 2011: 671). Yet the literature

also points to high levels of dissatisfaction about feedback practices from students and lecturers

alike and, in particular, to the existence of a significant, and growing, ‘feedback gap’ between its

provision and implementation (Evans, 2013: 73). On the one hand, surveys from across the

world show that students are often unhappy with the quality and/or quantity of the feedback

they receive (Nicol, 2010), with key issues including that feedback comments are difficult to

interpret and therefore to implement, that they are too generalised and vague rather than focused

and personalised, and that they are too negative and are therefore demotivating (Carless, 2006;

Jones and Gorra, 2013). On the other hand, lecturers report spending ‘inordinate’ amounts of

time on the construction of written comments (Bloxham and Campbell, 2010: 292) that are not

always read, still less acted upon, by their students (Nicol, 2010; Orsmond and Merry,

2011; Blair et al., 2013b). They do so, moreover, within the context of larger class sizes, growing

marking loads, heightened workloads, the modularisation of courses, and the end-loading of

assessments – all factors which, in turn, mean that staff have less and less time both to produce

written comments and to provide oral feedback to students in tutorial and other settings

(Hounsell, 2003; Bailey, 2009; Li and De Luca, 2014). And yet the marketisation of higher

education, the widening of student access, the need for student retention and completion, and

growing resource constraints also mean that the provision of effective feedback is now more

important than ever before (Price et al., 2010; Evans, 2013; Blair et al., 2013a).

Although there is no general agreement on what precisely constitutes ‘good’ feedback (Evans

and Waring, 2011), there is something of a consensus that if feedback is genuinely to contribute

to effective learning and development, then it must be understood as ‘an active, shared process’

(Bloxham and Campbell, 2010: 291). This reflects a broader shift in which learning has come to

be conceptualised not as the simple transmission of information from teacher to student but

rather as a dialogue between teacher and student so that students are viewed not as passive

receivers but rather as active producers of knowledge (see inter alia Laurillard, 2002; Juwah et al.,

2004; Blair and McGinty, 2013; Crisp, 2007; Higgins, Hartley, and Skelton, 2001; Nicol and

Macfarlane-Dick, 2006; Poulos and Mahony, 2008; Lizzio and Wilson, 2008; Rae and Cochrane,

2008; Johnson et al., 2011). However, and despite the emphasis on student-centred learning both

within the pedagogical literature and from higher education institutions themselves, a variety of

studies show that it is the transmission, not dialogic, model that continues to inform actual

feedback practices in many higher educational contexts (see for instance Juwah et al., 2004;

Carless, 2006; Crisp, 2007; Higgins, Hartley, and Skelton, 2001; Blair and McGinty, 2013; Blair,


Curtis and McGinty, 2013). Simply put, feedback provision all too often takes the form of a

monologue or ‘series of unilateral pronouncements by assessors’ (Crisp, 2007: 578) rather than

a two-way process through which students can engage with their lecturers on how to improve

(Blair, Curtis and McGinty, 2013). There is therefore a clear need to involve students much

more directly in the development of feedback practices, including in the debates surrounding

those practices, in order to promote open dialogue and discussion and therefore a culture in

which students are encouraged to regard themselves as active learners responsible for the

management of their own learning and development.

Our project aimed to substantially improve the student learning experience by addressing the

frustration that arises when students recognise that they are receiving feedback but do not

understand how to implement it. In the UK, National Student Survey (NSS) scores across

disciplines and universities consistently show that feedback-related questions receive the poorest

scores from students (Ipsos MORI, 2014). Besides poor general feelings about the feedback

they have received, there is a stark disparity between students’ estimations of the level of detail and promptness of the feedback and their ability to implement it: the five-

year averages of the two departments involved in this project show a greater than ten-point gap between

responses to ‘I have received detailed comments on my work’ and ‘Feedback on my work has

helped me clarify things I did not understand’. This gap is not inevitable, and initial exploration

demonstrates that summative feedback can be improved and can bridge the communication

divide.

At the same time, and although a number of studies have shown the effectiveness of formative

feedback in particular (Shute, 2008; Juwah et al., 2004), the project also explicitly recognised that

summative assessment is unavoidable for many departments due to constraints of time, staffing

and resources. For example, the two departments involved in this project teach over 1,400

undergraduates a year, processing more than 15,000 individual assessments and providing tens of

thousands of words of feedback. Yet despite the thousands of hours invested in feedback

provision — both written and in-person via tutorials — there are clear disparities between

lecturers’ intentions and students’ interpretations of feedback, between lecturers’ efforts and

students’ perception of such efforts and, more fundamentally, between the desire to promote

student-centred learning and the lack of open dialogue about feedback practices.

The project therefore aimed to explore and address the persistent communication gap between

the provision of feedback by lecturers and the implementation of feedback by students. It did so

by involving students directly in the research and, in so doing, presented the opportunity for


truly dialogic feedback and for making students stakeholders in the learning process. Central to the

project was the use of virtual learning environments (VLEs), which are increasingly being used

by universities and which have the potential to offer different and expanded opportunities for

learning for students whilst also allowing greater flexibility for lecturers (Johnson et al., 2011:

499). In 2011-12, following a scheme piloted by one of the applicants on this project, the

Department of Political Science and International Studies at the University of Birmingham rolled

out electronic marking through GradeMark,3 a component within the widely-used Turnitin

plagiarism detection software. Without changing any other aspects of the feedback process, the

NSS score on utility of feedback jumped by more than 10 points above the previous five-year

average. The project therefore set out to offer a more thorough, detailed investigation in order

to interrogate what constitutes effective feedback that students can understand and act upon.

Methodology

This project used three sources of empirical data:

- A sample (n=400) of coursework feedback drawn from both departments, covering all three undergraduate levels;

- Focus groups with students (n=60) to explore their perceptions of feedback in both departments;

- An online survey of students (n=186) to gain a general understanding of how students rate the feedback they receive and what they do with it.

The first phase of the project involved analysing a sample of previous years’ written feedback to

determine common themes in feedback emerging from markers’ perspectives. The sample was

drawn from all three years of undergraduate teaching at the two universities.4 All feedback was

related to coursework as opposed to exams, and the vast majority of the coursework was in the

form of essays.5 All pieces were anonymised immediately after collection, retaining only

information about the degree level at which the feedback was given.6 The purpose of this audit was to

form a snapshot of current feedback practices, focusing particularly on critiques commonly

raised across markers, modules, years, and institutions.

The focus groups were conducted in phases, aligning with the general phases of the project, with

twelve focus groups in total taking place during Phase 1 and Phase 2 (one per year group for

each institution during each phase).7 These were run by students from each year group who had

been trained as facilitators, with a postgraduate research assistant also in attendance for the

purposes of recording the session (but who had no involvement in the discussions). The audio


recordings were subsequently transcribed by the postgraduate research assistants before being

given to the research team in order to retain complete anonymity for the participants in an effort

to encourage honest discussions. The aim of the Phase 1 focus groups was to gain greater insight

into how students understand feedback on summative assessment, pinpointing what they do

with it and how they interpret the ‘language of feedback’. The aim of Phase 2 focus groups was

to test the reception of different formulations of feedback to construct phrases that express

common critiques in language comprehensible to students. Focus group participants also

completed a short questionnaire designed to capture both their profile and their

personal view of feedback practices, independent of the discussion.8 Focus group discussions

followed a semi-structured format, with nine core questions addressed in all focus groups. A

general overview of the focus group participants’ characteristics can be found in Table 1.

Table 1. Characteristics of focus group participants

Characteristic N

Level

First-year undergraduate 16

Second-year undergraduate 19

Final-year undergraduate 25

Sex

Male 24

Female 36

Achievement to date

First 2

2:1 21

2:2 1

Third 0

No response 36

Secondary school type

Comprehensive 37

Grammar 8

Independent/Private 13

International 2

Other 3

Selectivity of secondary school

Non-selective 28

Partially selective 11

Fully selective 15

No response 6

Total participants 60

Source: compiled by the authors


Transcriptions of the feedback and focus groups were coded qualitatively using the software

programme QSR NVivo, with a largely emergent coding framework. Feedback was coded

according to the topic of the content (e.g. quality of argument, use of sources, referencing);

whether the comment was framed negatively or positively; whether the content was specific to

the essay topic or more general; and whether the comment was focused on explaining the mark

given (feeding back) or provided suggestions for improving future work (feeding forward).

Focus groups were coded according to responses to the semi-structured questionnaire as well as

themes. Alongside, but separate from, the focus groups, a short online survey of second-year

students was conducted at both institutions (n=92 at one institution; n=94 at the other). This

was designed to provide contextual information about what students think about the quality and

quantity of the feedback they receive and to get an idea about what students do with the

feedback after reading it.

Findings

In this section we discuss the findings of the feedback audit, the focus groups, and the online

survey. The results from these three different sources of empirical data fall into four broad

categories: negativity, transferability, intelligibility and consistency. The different sources of data

triangulate to give very consistent results on these themes. We will discuss each of these in turn

before offering some suggestions about what we can do to address the issues raised.

Negativity

The feedback audit found that the phrasing of the comments in feedback given to students was

overwhelmingly negative. Although it is to be expected that comments will become more

negative as the quality of the work decreases, it came as a surprise that 56 per cent of the

feedback on submissions awarded a First was negative (Figure 1)9. This was coded and calculated

on a per-word basis, so sentences that had some positive and negative characteristics were split

for coding. This negativity was also highlighted by students in the focus groups, who noted that

they were always told what was wrong with their work but not what was right. One student

explained:

I think it is also important to include good things as well and I think the best sort of feedback

is where they are critical but it’s not harsh like I found a lot of feedback…it comes across like


they are being so mean about it and you just think it’s a bit like unjustified. So, I think, that

needs to come across a bit more approachable.

Although markers may follow a tacit assumption that if they do not comment on something then

it is ‘right’, this is not how students experience those comments. Equally, negative comments

often highlight to the student what is wrong but do not indicate how to achieve a better

outcome. This is linked to the general frustration from students about a lack of transferable

feedback discussed in greater detail below.

Figure 1. Balance of positive to negative comments, by degree classification (First, Upper Second Class, Lower Second Class, Third, Fail). n=400. Source: compiled by the authors

Focus group discussions made it equally clear, however, that wholly positive feedback was also

unwelcome. As one focus group participant commented:

Sometimes I don’t think that it’s critical enough, like, it says you did well in this, you did OK

with this but it doesn’t say the things you can improve on.

Students noted that one of the fool-proof ways to achieve student dissatisfaction with feedback

is to give a comment such as ‘This is a very good submission with few faults’, accompanied by a

mark of 65. A mid-Upper Second-Class may be ‘good enough’ for the marker – but not good

enough for students who want to improve. Instead, the focus group discussions across year

groups conveyed the students’ desire to receive a mixture of positive and negative comments.

Students also frequently mentioned the counterproductive effect of having wholly negative


and/or excessively harshly phrased comments, the receipt of which caused students to feel

demoralised and to disengage from the learning process. The comment from one focus group

participant that ‘most of the feedback received last year was so bad and harsh, I ignored it’ drew

widespread agreement from other students.

The desire for a mixture of positive and negative feedback was also underscored by the

individual questionnaire results returned by first-year focus group participants. These students

were asked to rate the four characteristics of feedback they perceived as most important from

eleven possible options (Table 2). Eleven of thirteen respondents chose ‘Feedback should help

improve performance’, followed by seven each who chose ‘Feedback should be critical’ and

‘Link between feedback and grade’. This suggests that students do not expect to receive wholly

positive comments, but they do expect to receive constructive criticism that indicates to them

both the negative and the positive. Their expectation that the feedback will explain to them the

mark they received is an expectation held commonly between both lecturers and students and

indicates that, on this at least, both parties agree. The link between feedback and future

performance leads us to the next theme: transferability.

Table 2. Most important characteristics of feedback for students

                                                    Rated 1st   Rated 2nd   Rated 3rd   Rated 4th   Rated in top 4
Feedback should help improve performance                2           2           6           1             11
Feedback should be critical                             3           2           0           2              7
Link between feedback and grade                         3           1           0           3              7
Feedback should be from an experienced teacher          1           1           1           3              6
Knowing how to implement advice given in feedback       0           2           2           1              5
Returned in a timely manner                             1           1           0           2              4
The quantity of feedback                                1           1           1           0              3
Opportunity to discuss feedback                         1           2           0           0              3
Consistency between feedback across department (a)      0           0           2           1              3
Subject-specific feedback                               1           0           1           0              2
Skill-specific feedback                                 0           1           0           0              1

Source: compiled by the authors

(a) This option was only added in the second round of focus groups, so the lower number of selections may not indicate its actual level of importance.


Transferability

Both the focus group discussions and the survey free responses underscored not only the variability but also the lack of transferability of the feedback students received. Many students complained that the comments they received were so focused on the subject-specific content of the particular essay as to be of little use in improving their work more generally. This student insight proved consistent with our findings from the feedback audit, which found that seven times as many words were coded as topic-specific compared to suggestions for improvement. Some of these comments could have contained transferable suggestions, but they were framed in such a way as to make their transferability obscure to students with lower levels of feedback literacy.

Students’ frustrations were generally focused on the fact that they did not feel that the feedback

gave them enough information to improve future results. As Table 2 shows, ‘Feedback should

help improve performance’ was the most important characteristic of feedback from first-year

students’ perspective. This finding is further underscored by the questionnaire completed by

second- and third-year focus group participants: one-third felt that they received sufficient

feedback on assessments to improve their grade; an identical number disagreed or strongly

disagreed with this. Focus group participants frequently felt that the feedback they received was

more focused on justifying their mark than on telling them how to improve future performance.

Follow-up discussions in Phase 2 focus groups revealed that students especially felt the need for

transferable feedback in the first two years of their degree but that topic-specific feedback played more of a role in their final year. Several factors may underlie this: students in their first two years of study are still trying to grapple with the

expectations of university assessment, while many finalists feel that they have generally figured

things out or given up; finalists generally have a wide array of elective modules to choose from

and may use them strategically to learn more about their dissertation topic, making topic-specific

feedback more valuable; and finalists are not looking toward the next year’s assessment when

collecting feedback and are therefore less concerned about its transferability.

Interestingly, students also expressed some dissatisfaction if feedback was too generic and/or

vague. Comments to this effect in the anonymous survey, for instance, included:

[S]eems like the marker hasn’t even read the essay properly but just gives a generic comment

which is pathetic seeing as we pay to go to university and they don’t give good feedback in

order to show how we can improve.


[T]he feedback provided is normally a few general sentences … I want specific comments on

my piece of work.

Needs to be more personal.

Deeper probing of such comments in the focus groups revealed that students are fairly

pragmatic about recycled comments: they do not resent receiving some of the same comments as

their peers, so long as the combination of comments is clearly specific to the individual submission.

The frustration arose not from noticing that lecturers reuse comments but from the feeling that

everyone, regardless of performance, received roughly the same comments. Students then felt that

they had been given some general information that may or may not apply to their submission and that they were left to sift through it to find something they could use. This is not, therefore,

an argument against reusing the same critiques, so long as the comments are combined in a

manner that is appropriately specific to the submission.

Intelligibility

The issue of transferability also relates to that of intelligibility, for it is not enough simply to

identify for students the common themes that need to be tackled in their work; these themes need to be articulated in a clear way that students can both understand in principle and implement in

practice. In fact, the feedback audit revealed a number of common critiques being offered by

markers that, moreover, were the same across the two institutions and across all levels of

teaching. The most common themes were grouped around argument, sources, and presentation

standards (Table 3).

Table 3. Common critiques offered by markers

Argument: Criticality/depth; Clarity; Structure; Originality; Use of examples/case studies
Sources: Number; Comprehension; Use of key sources; Referencing; Use of quotations
Presentation: General standards; Grammar; Spelling; Punctuation

Source: compiled by the authors

Given that we found that lecturers at all levels consistently critiqued the same errors, it is clear

that we are not conveying adequately to the students what to do to fix many of these problems.

Addressing issues of presentation standards, for instance, might seem to be fairly straightforward

– but students frequently expressed a feeling of inability to transfer the comments received into

concrete improvements. For example, when a lecturer advises that a piece of work needs to ‘be


more critical’, many students can neither identify what this actually means nor understand how to

improve on it in future submissions. This is epitomised by the frustration of one third-year

student:

I don’t really understand what it means. Half of the feedback I get (…) doesn’t really make

sense to me. (…) this is what you did wrong, be more critical but I don’t really understand

how to do that.

This indicates that, though lecturers highlight similar issues with many students’ work, the

students do not understand the language used in the critiques themselves, which prevents

students from improving and can lead them to disengage from future feedback.

Focus group participants frequently expressed the feeling that it would be easier to know what to

do with the feedback they received if common critiques were accompanied by concrete

suggestions. One student suggested:

It’s better when they talk about what’s gone wrong, but they set out what to improve, so you

know what to improve on. Like your structure is poor, this could be helped by signposting at

the start of every paragraph. When they tell you your structure is poor, well obviously you’ve

written the essay so you didn’t see anything wrong with it, so them just telling you it is poor

isn’t helping.

In response, another student felt that, if it were a choice between cryptic feedback and the

marker not commenting on a problem at all, they would rather have the cryptic feedback and try

to figure out what to do with it. What all of this highlights is that students are not able to

interpret potentially transferable suggestions, even when they are provided, because they lack the

conceptual tools to decode that information and to transform it into concrete

improvements on future submissions.

Consistency

This section examines the question of consistency from several angles: perceptions of quantity

and quality of feedback; and views on the consistency of feedback received across modules,

markers, and years. The results of the feedback audit are generally corroborated by the survey

and focus group results: the quantity and quality of feedback varies drastically between markers.


There was large variation in the quantity of feedback given between institutions and between

markers (see Figure 2). The general trend towards more words with decreasing marks is probably

unsurprising, given that there are fewer points to critique in the higher-scoring submissions. The

anomaly, however, is that work receiving a Third received roughly the same amount of feedback

as an Upper Second-Class and quite a bit less than a Lower Second-Class, thereby giving them

less feedback from which to improve and leaving the impression that the markers have become so

disillusioned as to have given up writing feedback. Furthermore, the means show that even those

receiving the greatest amount of feedback still only received around 150 words, with Firsts

receiving around 90 words. Given the variations between markers, this means that some students

are receiving 200-300 words, while others are receiving one or two sentences. To provide a

measurement for comparison, this paragraph is longer than 150 words.

Figure 2. Words of feedback per essay (mean average), by degree classification (First, Upper Second Class, Lower Second Class, Third, Fail). n=400. Source: compiled by the authors

Clearly, high-quantity feedback does not equate to high-quality feedback, but the issue of quantity is

nevertheless relevant to how students perceive the ‘quality’ of the feedback they are receiving.

Students had very mixed responses about the quantity of feedback they were receiving, and the

picture at the two universities was also very different (Figure 3). Student responses and answers

to open-ended questions highlighted the same trend that we noticed when auditing the feedback:

there is huge variability in the feedback students receive from marker to marker, both in terms of

quality and quantity. Only a minority of students (32.9 per cent for one department and 6.4 per

cent for the other) felt that they were receiving enough feedback across the board, and a


minority (37.8 per cent for University 1 and 5.3 per cent for University 2) felt that the

feedback was consistently of high enough quality to help them improve over time. As one

student noted in the online survey: ‘Some tutors give useful and detailed feedback … It totally

depends on the marker. You can usually tell when they have left the marking to the last minute’.

Another stated: ‘Some tutors give excellent, comprehensive feedback. Others, on the other

hand, give one-line responses … which is useless’. The clear majority (around two-thirds) felt

that both the quantity and the quality varied significantly across assignments.

One perplexing finding in these results is that the department giving fewer words of feedback,

on average, received more positive results about the quantity of feedback being provided. It is

possible that this is partially because the survey was administered at the same time as students

received feedback on a specific piece of coursework and that the quantity of feedback received

on that assessment influenced answers; it is also possible that this is linked to perceptions of

quality versus quantity.

Figure 3. 'Quantity of feedback: are you getting enough of it?' Responses by institution (Uni 1, Uni 2, Combined) across three options: 'I'm not getting enough feedback on any of my modules'; 'Overall, I'm getting an adequate amount of feedback but I'd definitely find it helpful to get more'; 'It varies - sometimes I get loads and other times not enough'. n=186. Source: compiled by the authors

Views on the quality of feedback also varied between universities, following roughly the same

pattern as for quantity of feedback received (Figure 4). Despite inter- and intra-institutional

variations, the survey results do indicate that very few students find the feedback they receive

universally unhelpful, but students were frustrated by the lack of consistent critiques. Without


consistent comments, there were no general trends that students could discern in the problems identified

by one marker compared to another. This made it very difficult for students to identify what had

gone wrong and therefore what steps they could take generally towards improvement. This was

again paralleled by the findings of the audit: not only was there a paucity of transferable

feedback, there were also no clues in the feedback on modules with multiple coursework

assessment points that could explain why some students’ marks improved or declined drastically

between assessment points.

Figure 4. 'Quality of feedback: how helpful is it?' Responses by institution (Uni 1, Uni 2, Combined) across four options: 'The feedback is great - I really understand it and it's helping me to improve over time'; 'It depends on the module - some is really helpful, some isn't'; 'The feedback I get is not bad, overall, but it's not always as helpful as it could be'; 'I never (or rarely) find the feedback I get helpful, and I'm really struggling to improve'. n=186. Source: compiled by the authors

Exasperation about inconsistency is tied to feelings about fairness and the general perception

that the whole marking process is very opaque and secretive. The inconsistency of quality

between markers leads to a sense that some students get ‘lucky’ with their marker, while others

are ‘unlucky’:

There is a big disparity depending on who is marking which assignment. Certain students

may have been unlucky and received consistently weak feedback and vice versa.

Consistency also covers students’ need to see the same critiques repeated between markers in

order to identify a pattern of what is going wrong:


[W]hat I’d do is I take the feedbacks from all my different essays and then kind of see which

points are consistently I’ve failed to do across all of them or I’ve consistently done OK with.

If each marker of each assessment focuses on different aspects, such that there is no

overlap between assignments, it is difficult for students to pinpoint overall problems.

Inconsistent feedback, whether in quality, quantity, or critiques, hampers students’

development of feedback literacy and their ability to improve. It also prevents students from

identifying when differences in critiques are because of the different demands of different areas

of their degree — e.g. theory, IR, comparative politics — versus skills they need to develop

across the board. This frustration is captured by a first-year student:

I had an essay back recently and the tutor literally just highlighted random words going

through it and had just written ‘OK’ next to it. I don’t quite know what I’m meant to get out of

that. There were comments and stuff but it wasn’t, it was all a bit minimal. I’m not going to be

able to respond to something if it’s just kind of going through the essay. Another one of my

tutors…went through and wrote really detailed like stuff all the way through the essay with

each line, what’s good, what’s bad, so much annotation and this other tutor I had for another

module literally was just, ‘OK, OK, OK’. I don’t know what that’s meant to be telling me.

What can we do?

Thus far, we have painted a fairly bleak picture of negative, inconsistent, non-transferable

feedback and frustrated students. Although it might be tempting to dismiss our findings as

relevant only to the two departments in question, it should be noted that student satisfaction

with assessment and feedback is poor in other political science departments too (Smith and

Williams, this volume) and that the two departments in question score higher than average for

overall satisfaction with assessment and feedback compared to other ‘top’ political science

departments (ibid). This suggests that our findings are relevant for other institutional contexts

and, given that low levels of satisfaction appear to be generalised rather than simply confined to

political science, we believe that they are relevant for other disciplinary contexts too. The most

important thing we want to emphasise, however, is that we are not powerless to improve the

quality and effectiveness of our feedback, and there are some concrete things that we can do to

improve. In this section we argue that the basic, underlying issue of feedback structure underpins

the key problems outlined above, and that such problems can in large part be tackled – if not


entirely overcome – by thinking carefully about the structure (rather than just the specific

content) of feedback offered to students. We also discuss the importance of timing in the

production of feedback, considering the three stages at which we can improve outcomes for

students: before submission, during marking, and after returning the work.

Structure

There are two possible ways we can alter our feedback structurally to address the issues raised in

the previous sections of this paper: through the use of section headings and by harnessing the

benefits of technology.

Section headings

The focus groups discussed students’ feedback experiences in general and some particular

examples of ‘dummy’ feedback modelled on the styles found during the feedback audit. The

format of the example feedback varied, from completely unstructured to fully structured

according to defined themes. The content also varied in length, detail and wording. Participants

were then prompted to discuss the example feedback, giving their opinion of the different

options and indicating which of the examples they would prefer to receive themselves. There

was broad agreement between participants and between institutions about which of the examples

would be their preferred feedback as well as which feedback they thought was the least desirable.

Students expressed a strong preference for the feedback examples that were divided into

sections. Students also preferred examples that contained both positive and negative comments,

even if the mark was undesirable, and which had a paragraph devoted to how students could

improve in the future. This is a clear result that indicates we can signpost our feedback by

dividing it into categories, ensuring one of them is ‘suggestions for improvement’. This would

serve two purposes: to remind us what the students need feedback on and to highlight to

students what information is contained in that section.

Technology

The second structural change we can make to our feedback is through the use of technology.

Technology can serve several purposes: to allow us to provide more feedback in less time

through the use of repeated comments; to provide a mixture of in-text annotations and general

comments; and to identify trends across submissions, which can help lecturers to target post-

submission advice to students by highlighting common errors.


As discussed above, whilst students desire ‘personal’ feedback, this does not mean that they

think lecturers should never use repeat comments, just that the comments should be provided in

a combination unique to that student’s submission. Exploiting technology means that lecturers

can develop their comments in greater detail, including examples and further information for

improvement.

The second advantage is the ability to provide both in-text annotations and general comments when using electronic marking software such as the GradeMark system embedded in

Turnitin. Students expressed a strong desire to receive feedback in both forms. In-text

annotations help the students to pinpoint precisely where they went wrong, for example, where

problems with referencing are and what should be fixed. General comments, on the other hand,

give them an overview of how they did and what they need to do better. Many of the electronic

systems also allow tracking of the number of times different comments are used, which provides

feedback to the marker about common problems that should be addressed with all students.

This leads us to the final sections, which explore the three points at which lecturers have a

chance to provide feedback.

Timing

The importance of timely feedback is well recognised in the pedagogical literature (see for

instance Evans and Waring, 2011; Ferguson, 2011; Higgins, Hartley, and Skelton, 2002; Blair et

al., 2013a; Li and De Luca, 2014) for, as Beaumont et al. (2011: 685) argue, understanding

feedback as a dialogic process rather than as a single event highlights the significance of timing to

learning and how, ‘although principles of good practice are useful’, they must also ‘be

systematically implemented’ at appropriate points within the learning cycle in order to be

effective. Our online survey data also illustrated how timeliness of feedback is critically

important to students, with comments including: ‘The feedback is often given after other

deadlines meaning it’s pretty useless as you cannot use the advice’; ‘On a few occasions the

feedback and marks have come in later than expected which is slightly frustrating because by

then the next assessment has already been handed in so don’t know how to improve’; ‘When I

get feedback it’s good – but it’s no good having feedback a month before your exams – you need

it earlier in the year’; and ‘Often, it doesn’t come back until we’ve already handed the next paper

in too, making the small amount of feedback we do get pretty much useless for immediate

application’. If feedback is indeed understood as a dialogic process rather than as a single event,

then it is crucial to view it within the broader context of the whole assessment cycle. Here we


consider the key stages of assessment and reflect on the implications for the design and

implementation of feedback.

Before submission

Students clamour for clear criteria and examples. Whilst it is not always practical to provide the

latter, we should always make an effort to provide the former, unless we are testing students’

ability to read our minds or follow hidden instructions. Providing clear criteria in advance — and

dedicating time in seminars to discussing the criteria and answering questions about them —

significantly reduces the marking burden. We frequently write very general criteria cloaked in

feedback jargon, figuring that the criteria are more for the markers than for the students. It is

time to change this tradition. While there should be some general elements to the marking

criteria, there should also be elements specific to the learning outcomes, topics, and skill

development expectations of the assignment. There should be a clear connection between the

criteria and the assignment itself. Let us underscore this with two examples from our experience.

The first example is drawn from a workshop presentation about the importance of marking

criteria for outcomes: academic participants were given a series of assignment details and

marking criteria and asked to match the criteria to the assignment. The modules from which the

materials were taken varied widely, from research methods to American politics; and the forms

of assessment varied as well, from empirical data reports to standard essays. Given the diversity,

the differences between the assignments should have been at least vaguely discernible in the

marking criteria. They were not: none of the participants could accurately match the materials.

We have to remember that, especially when we use non-traditional assessment forms, students

are very unsure of what we want and are looking for extra information.

The second example is drawn from a research methods module. Because it was running for the

first time in a very different format, there were no past student submissions to provide as

examples to convey to the students what we were looking for. Instead, students were provided

with a substantial grid with weighted marking criteria and descriptions of the qualities of each of

the different grades for each of the components. Half an hour of one of the seminars was

devoted to going through the criteria and answering questions about them. Students gave very

positive feedback for the criteria and wished that such specific criteria were available for other

modules. Many of the students wrote their assignments with the criteria to hand, and students

were given feedback on their achievement on each individual component when they received

their marks back. The benefits did not stop there, however. This was a large module with nine


people on the teaching team. When the team met in a marking calibration exercise before

embarking on independent marking, each person marked the same submission at the same time.

Referring only to the marking criteria, all markers landed on the same result.

In an era where student satisfaction drives policies and complaints about inconsistency and lack

of transparency are rife, such examples should make us pause to consider the possible impact of

improving the standards of criteria given to the students. Incorporating the criteria into the

marking process and providing feedback on the individual elements also helps students to spot

the smaller areas in which they did well and to identify where improvement is most needed.

The other essential aspect of the pre-submission stage is availability. Again, some academics

seem to feel that students should be able to ‘figure things out’ and that this is somehow part of

the whole assessment process. However, this is nearly guaranteed to make the marking

experience worse for the marker. This can be mitigated by ensuring availability at office hours,

repeatedly encouraging students to use office hours, and answering questions during seminars in

the run-up to the due date. Taking ten minutes or less at the beginning of seminars in the final weeks before submission to field such questions has the added benefit of reducing the number of emails received asking

the same questions.

During marking

There are some relatively small changes that we can make to our marking practices to improve

feedback reception and to address the most common student frustrations. The first step is to use

feedback forms with structured sections on different themes. This will ensure basic consistency

of areas of feedback between markers and modules, and the inclusion of a section on

transferable comments and suggestions for improvement can remind markers to provide

forward-thinking advice. We have to remember that, alongside all of the other skills we expect

students to learn, we generally do very little to assist directly in improving their feedback literacy;

providing a basic structure to the feedback gives visual clues to the students about how they

might use the feedback.

The second step is simply to reframe our feedback in two areas: the positive/negative balance

and transferability. As mentioned above, students do not know what they have done correctly if

we do not tell them and are therefore left to learn by trial and error what they should repeat. In addition, receiving entirely negative comments can be very demoralising and can leave struggling students in particular with the feeling of not knowing where to start. Some of the negative comments can be

framed more positively with slight tweaks to the phrasing, e.g. ‘Your answer to the question is


clear, but the structure (how you’re going to get from here to there) is not’ rather than ‘Structure

is unclear or missing’. This does require writing a bit more, but it should make the feedback

more effective. The same advice applies to transferability: simple rephrasing of the same point

can signpost to the student that this piece of feedback, whilst topic-specific, can also be applied

to other assignments. Compare:

You should have cited some examples when talking about the abortion debate in the USA.

with:

Use of more examples would help to illustrate how well you understand the material. For

example, you talk about the abortion debate in general terms, but applying it to specific

examples would lend greater depth to your argument.

The third step, and linked to the previous point, is to use examples. Just as we ask students to

use examples to apply the ideas they are talking about, we should use examples to show them

what we mean. This can help students to grasp what we mean when we throw around feedback

jargon like ‘critical analysis’, ‘depth’, ‘structure’, ‘argument’, ‘vague’. Some of this can go into

general module feedback and does not need to be repeated on every submission, but students

should have access to examples of the difference between description and critical analysis – one

of the most difficult things for us to explain and for students to understand. Even where we do not have students’ permission to use their work as examples, we can frequently find examples of what we want

students to emulate in published material.

After feedback is released

The final stage of the feedback process is the follow-up. This can take several different forms

but is critical for closing the feedback loop. It is essential for the assessment to be returned before

the next assessment point so that students have adequate time to implement the feedback they

have received, and one of the most effective ways of closing the loop is to carve out curricular

time to spend on feedback. This could consist of having the students fill out a self-reflection

form, asking them to pinpoint the differences between their expectations and the result, anything

they do not understand, and what they need to ask more questions about. If many students

exhibited the same weaknesses — such as referencing problems, lack of synthesis of sources,

grammar, structuring problems — time could be spent doing a few exercises in these areas as a

group. Even if it is not possible to follow up with class time, it is still important to give students

an idea of the general feedback on the cohort’s performance in addition to their individual feedback. It is not

helpful to do one without the other, and one of the common ways to frustrate students is only to


give general feedback on an assessment, leaving them to figure out what applies to them and

what does not. Pairing individual comments with a summary of what commonly went wrong, of what was done really well, and of what the marks looked like overall (mean, median, spread) allows students to compare themselves to their peers and

contextualise their performance.

We also need to do far more to encourage students to follow up on the written feedback they

received. Given that research shows how valuable dialogic feedback and the oral feedback

process are for students’ learning (Bloxham and Campbell, 2010; Evans, 2013), it is problematic

that this is still not a standard part of the feedback process for most students. Nearly a third of

the students surveyed responded that they had never, by midway through their second year, followed

up essay feedback by discussing it with their lecturer or seminar tutor. More than 40 per cent had

only done this once or twice, and it was routine (‘most of the time’ or ‘always’) for just five of

the survey respondents. This is despite the seminar tutor and module convenor of the students

surveyed having made repeated invitations to students, having an open-door policy, always

keeping office hours, and frequently meeting with students outside of office hours. The lack of

follow-up is likely to be at least partially a reflection of the perceived lack of transferability of

feedback from one assessment to the next: students do not follow up the feedback because it is

for an assessment that is already done; now they want to move on to the next thing. This means

that addressing the breaks in the feedback loop requires lecturer accessibility, sign-posted

feedback, clear transferability, and a cultural change in how students process and respond to

feedback.

Conclusions

Our findings from the feedback audit, the student survey, and the student focus groups show a

clear gap between the provision and implementation of feedback, in large part due to issues of

negativity, transferability, intelligibility and consistency. These problems are not, however,

impossible to address. Two of the simplest changes we can make are to ensure that feedback

comments are structured into clear categories through the use of section headings, and that

comments do not focus only on specific content (‘what to learn’) but also on future development

(‘how to learn’). Most importantly, we must carve out more time in the curriculum to provide

feedback before, during, and after the assessment submission. Ultimately, for feedback to be

effective, it needs to be embedded into broader practices that treat learning not as a product to

provide to students but rather as a dialogue to share with them. Feedback is perhaps the most

important way that we can actively involve students in this ongoing, dialogic process.

Acknowledgments

We are indebted to the Higher Education Academy (HEA) and University of Nottingham for

jointly funding this research (Project Grant GEN264) and we are also hugely grateful to our

Project Lead, Bettina Renz, and our Participating Investigator and Project Research Assistant,

Hardeep Basra, who organised and undertook the collection of data upon which our analysis is

based. Special thanks must also go to the many students acted as participants in, facilitators of,

and research assistants on, the project, without whom this research would not have been

possible. Finally, we are enormously grateful to the anonymous reviewer for their hugely helpful

and insightful comments on an earlier iteration of this piece.

Notes

1. For example, the Times Higher Education reported in 2014 that although student satisfaction overall had reached a ‘10-year high’ in the National Student Survey (NSS), assessment and feedback was rated the lowest – as it had been in previous years – with just 72 per cent of students reporting that they were satisfied with this element of their learning (Grove, 2014).

2. The School of Politics and International Relations at the University of Nottingham and the Department of Political Science and International Studies at the University of Birmingham.

3. GradeMark comes with a set of pre-programmed ‘QuickMarks’, including basic grammatical and argumentation advice, but these were not formulated with extensive input from students and academics.

4. Our aim was to ensure that the sample covered a roughly equal number of modules across each of the years and roughly representative numbers of pieces of feedback across mark boundaries, but fully stratified sampling across institutions was not possible due to informed consent requirements. This causes a self-selection bias in the sample, as lecturers already providing higher quality feedback were more likely to participate.

5. Essays are by far the most common type of assignment to receive detailed feedback comments in the two departments, and so this was our focus in the audit, but our findings may be relevant to other forms of assessment such as reflective learning logs.

6. The content of topic-specific comments may make it possible to identify the originating module in some cases, but as far as possible, identifying information has been excluded.

7. Due to space limitations we confine our discussion here to Phases 1 and 2 of the project, but we also undertook an additional Phase 3, which included the collation of a ‘bank’ of more highly effective feedback available as OERs: a set of QuickMarks and qualitative rubrics in GradeMark format for institutions using Turnitin-based marking, and Adobe fillable forms and Word templates with macros that can be used in paper or electronic form for institutions using paper-based marking. A final round of student-led focus groups was also run in Phase 3 in order to discuss the effectiveness of the feedback examples.

8. Unsurprisingly, the participant profiles indicate a self-selecting sample: of those who chose to provide information regarding their average achievement to date, nearly all participants reported that they were achieving at an Upper Second Class level. While the distinct majority of students at both universities graduate with a ‘good’ degree (Upper Second Class or First), it does mean that the focus group results essentially exclude those who are achieving at a lower level, who are frequently harder to reach.

9. See Access to Higher Education (2012) for information on the UK grading system in Higher Education.

References

Access to Higher Education (2012) ‘Grading Scheme Handbook – Section A: Introduction and Summary’, available at https://www.accesstohe.ac.uk/AboutUs/Publications/Documents/Grading-scheme-A.pdf [accessed 6 November 2015].

Bailey, R. (2009) ‘Undergraduate Students’ Perceptions of the Role and Utility of Written Assessment Feedback’, Journal of Learning Development in Higher Education 1: 1-14.

Beaumont, C., O’Doherty, M. and Shannon, L. (2011) ‘Reconceptualising Assessment Feedback: A Key to Improving Student Learning?’, Studies in Higher Education 36(6): 671–87.

Blair, A., Curtis, S., Goodwin, M. and Shields, S. (2013a) ‘Learning and Teaching in Politics and International Studies: What Feedback do Students Want?’, Politics 33(1): 66-79.

Blair, A., Curtis, S., Goodwin, M. and Shields, S. (2013b) ‘The Significance of Assignment Feedback: From Consumption to Construction’, European Political Science 12(2): 231-44.

Blair, A., Curtis, S., and McGinty, S. (2013) ‘Is Peer Feedback an Effective Approach for Creating Dialogue in Politics?’, European Political Science 12(1): 102-15.

Blair, A. and McGinty, S. (2013) ‘Feedback-Dialogues: Exploring the Student Perspective’, Assessment & Evaluation in Higher Education 38(4): 466–76.

Bloxham, S. and Campbell, L. (2010) ‘Generating Dialogue in Assessment Feedback: Exploring the Use of Interactive Cover Sheets’, Assessment & Evaluation in Higher Education 35(3): 291–300.

Boud, D. and Molloy, E. (2013) Feedback in Higher and Professional Education: Understanding It and Doing It Well, London: Routledge.

Brookhart, S. (2008) How to Give Effective Feedback to Your Students, Alexandria: ASCD.

Caffarella, R. and Barnett, B. (2000) ‘Teaching Doctoral Students to Become Scholarly Writers: The Importance of Giving and Receiving Critiques’, Studies in Higher Education 25(1): 39-52.

Carless, D. (2006) ‘Differing Perceptions in the Feedback Process’, Studies in Higher Education 31(2): 219–33.

Crisp, B. (2007) ‘Is It Worth the Effort? How Feedback Influences Students’ Subsequent Submission of Assessable Work’, Assessment & Evaluation in Higher Education 32(5): 571–81.

Evans, C. (2013) ‘Making Sense of Assessment Feedback in Higher Education’, Review of Educational Research 83(1): 70–120.

Evans, C. and Waring, M. (2011) ‘Student Teacher Assessment Feedback Preferences: The Influence of Cognitive Styles and Gender’, Learning and Individual Differences 21(3): 271–80.

Ferguson, P. (2011) ‘Student Perceptions of Quality Feedback in Teacher Education’, Assessment & Evaluation in Higher Education 36(1): 51–62.

Frankland, S. (2007) Enhancing Teaching and Learning through Assessment: Deriving an Appropriate Model, New York: Springer.

Grove, J. (2014) ‘National Student Survey 2014 Results Show Record Levels of Satisfaction’, Times Higher Education, 12 August 2014.

Hattie, J. and Timperley, H. (2007) ‘The Power of Feedback’, Review of Educational Research 77(1): 81-112.

Higgins, R., Hartley, P. and Skelton, A. (2001) ‘Getting the Message Across: The Problem of Communicating Assessment Feedback’, Teaching in Higher Education 6(2): 269–74.

——— (2002) ‘The Conscientious Consumer: Reconsidering the Role of Assessment Feedback in Student Learning’, Studies in Higher Education 27(1): 53–64.

Hounsell, D. (2003) ‘Student Feedback, Learning and Development’, in M. Slowey and D. Watson (eds.) Higher Education and the Lifecourse, Maidenhead: Open University Press, pp.67-78.

Ipsos MORI (2014) National Student Survey (NSS), available at http://www.ipsos-mori.com/researchspecialisms/socialresearch/specareas/highereducation/nss.aspx, [accessed 31 May 2015].

Irons, A. (2007) Enhancing Learning Through Formative Assessment and Feedback, London: Routledge.

Johnson, E., Cowie, B., De Lange, W., Falloon, G., Hight, C. and Khoo, E. (2011) ‘Adoption of Innovative E-Learning Support for Teaching: A Multiple Case Study at the University of Waikato’, Australasian Journal of Educational Technology 27(3): 499–513.

Jones, O. and Gorra, A. (2013) ‘Assessment Feedback Only on Demand: Supporting the Few Not Supplying the Many’, Active Learning in Higher Education 14(2): 149–61.

Joughin, G. (2008) Assessment, Learning and Judgement in Higher Education, New York: Springer.

Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D. and Smith, B. (2004) Enhancing Student Learning through Effective Formative Feedback, York: Higher Education Academy.

Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies, New York: Routledge.

Li, J. and De Luca, R. (2014) ‘Review of Assessment Feedback’, Studies in Higher Education 39(2): 378–93.

Lizzio, A. and Wilson, K. (2008) ‘Feedback on Assessment: Students’ Perceptions of Quality and Effectiveness’, Assessment & Evaluation in Higher Education 33(3): 263–75.

McDowell, L., Sambell, K. and Montgomery, C. (2012) Assessment for Learning in Higher Education, London: Routledge.

McInerney, D., Brown, G. and Darmanegara Liem, A. (2009) Student Perspectives on Assessment: What Students Can Tell Us about Assessment for Learning, Charlotte: IAP.

Nicol, D. (2010) ‘From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education’, Assessment & Evaluation in Higher Education 35(5): 501–17.

Nicol, D., and Macfarlane-Dick, D. (2006) ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice’, Studies in Higher Education 31(2): 199–218.

Orsmond, P. and Merry, S. (2011) ‘Feedback Alignment: Effective and Ineffective Links Between Tutors’ and Students’ Understanding of Coursework Feedback’, Assessment & Evaluation in Higher Education, 36(2): 125-36.

Poulos, A. and Mahony, M.J. (2008) ‘Effectiveness of Feedback: The Students’ Perspective’, Assessment & Evaluation in Higher Education 33(2): 143–54.

Price, M., Handley, K., Millar, J. and O’Donovan, B. (2010) ‘Feedback: All That Effort, but What Is the Effect?’, Assessment & Evaluation in Higher Education 35(3): 277–89.

Rae, A. and Cochrane, D. (2008) ‘Listening to Students: How to Make Written Assessment Feedback Useful’, Active Learning in Higher Education 9(3): 217–30.

Shute, V. (2008) ‘Focus on Formative Feedback’, Review of Educational Research 78(1): 153-89.

Weaver, M. (2006) ‘Do Students Value Feedback? Student Perceptions of Tutors’ Written Responses’, Assessment & Evaluation in Higher Education 31(3): 379-94.

Young, P. (2000) ‘“I Might As Well Give Up”: Self-Esteem and Mature Students’ Feelings About Feedback on Assignments’, Journal of Further and Higher Education 24(3): 409-18.

About the Authors

Helen Williams is Assistant Professor of Politics at the University of Nottingham. Between June 2013 and December 2014 she led the Higher Education Academy collaborative project on ‘Closing the loops: bridging the gap between provision and implementation of feedback’.

Nicola Smith is Senior Lecturer in Political Science at the University of Birmingham. She was co-investigator on the Higher Education Academy funded project on assessment feedback entitled ‘Closing the loops: bridging the gap between provision and implementation of feedback’, which ran in 2013-14.

Key Quotes

1. ‘Although there is no general agreement on what precisely constitutes “good” feedback, there is something of a consensus that if feedback is genuinely to contribute to effective learning and development, then it must be understood as “an active, shared process”.’ (pp.3-4)

2. ‘Given that we found that lecturers at all levels consistently critiqued the same errors, it is clear that we are not conveying adequately to the students what to do to fix many of these problems’. (p.13)

3. ‘…we are not powerless to improve the quality and effectiveness of our feedback…’ (p.18)

4. ‘Students expressed a strong preference for the feedback examples that were divided into sections’. (p.19)

5. ‘…receiving entirely negative comments can be very demoralising and can leave especially struggling students with the feeling of not knowing where to start’. (p.24)