
Web-based feedback after summative assessment: how do students engage?


Christopher J Harrison,1 Karen D Könings,2 Adrian Molyneux,1 Lambert W T Schuwirth,3 Valerie Wass1 & Cees P M van der Vleuten2

CONTEXT There is little research into how to deliver summative assessment student feedback effectively. The main aims of this study were to clarify how students engage with feedback in this context and to explore the roles of learning-related characteristics and previous and current performance.

METHODS A website was developed to deliver feedback about the objective structured clinical examination (OSCE) in various formats: station by station or on skills across stations. In total, 138 students (in the third year out of five) completed a questionnaire about goal orientation, motivation, self-efficacy, control of learning beliefs and attitudes to feedback. Individual website usage was analysed over an 8-week period. Latent class analyses were used to identify profiles of students, based on their use of different aspects of the feedback website. Differences in learning-related student characteristics between profiles were assessed using analyses of variance (ANOVAs). Individual website usage was related to OSCE performance.

RESULTS In total, 132 students (95.7%) viewed the website. The number of pages viewed ranged from two to 377 (median 102). Fifty per cent of students engaged comprehensively with the feedback, 27% used it in a minimal manner, whereas a further 23% used it in a more selective way. Students who were comprehensive users of the website scored higher on the value of feedback scale, whereas students who were minimal users scored higher on extrinsic motivation. Higher performing students viewed significantly more web pages showing comparisons with peers than weaker students did. Students who just passed the assessment made least use of the feedback.

CONCLUSIONS Higher performing students appeared to use the feedback more for positive affirmation than for diagnostic information. Those arguably most in need engaged least. We need to construct feedback after summative assessment in a way that will more effectively engage those students who need the most help.

Medical Education 2013: 47: 734–744 doi:10.1111/medu.12209

Discuss ideas arising from the article at www.meduedu.com 'discuss'

1 School of Medicine, Keele University, Keele, UK
2 Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
3 School of Medicine, Flinders University, Adelaide, South Australia, Australia

Correspondence: Christopher J Harrison, Keele University Medical School, David Weatherall Building, Keele University, Keele, Staffordshire ST5 5BG, UK. Tel: 00 44 1782 734677; E-mail: [email protected]




INTRODUCTION

Assessment has complex effects on learning, which have been categorised as pre-assessment, post-assessment and pure assessment effects.1 As a post-assessment effect, the importance of feedback is clearly established.2 Most of the literature refers to the role of feedback in workplace-based assessments, following low-stakes assessments or in preparation for high-stakes tests. The role of feedback after summative or high-stakes assessments has received little attention.3 Instead, the focus has been on determining the pass–fail cut-off score and on the assessment of competence rather than on encouraging excellence. Medical schools also face practical obstacles to providing feedback in these circumstances. First, summative assessment tends to be timed at the end of the course or module, limiting the value of feedback. Second, cohorts are increasingly large and feedback requires considerable faculty resources, so schools have tended to focus on the small number of students who fail assessments. As a result, a student's response to feedback after passing summative assessments is not clearly understood.

If students receive no feedback after summative assessment, but are aware that the primary focus of the assessment is to determine whether they have achieved minimal competence, there is an inevitable risk that they will prepare for the assessment by concentrating on the relatively low hurdle of minimally acceptable competence, rather than aspiring to attain the more challenging goal of optimal competence, as the latter is not normally rewarded. Similarly, within medical schools, much effort has been deployed to develop psychometrically defensible summative assessment systems to determine which students are competent to graduate.4 Standard-setting procedures, which define the minimally competent student, usually allow a student to graduate if he or she has achieved an overall grade of competence.5 However, the lower such an overall mark is, the higher the probability that there remain significant areas in which the student is incompetent. Feedback after summative assessment would offer the opportunity, or even the obligation, for the student to address these areas of weakness. It is therefore important to understand how to engage students with feedback after summative assessments.

Even if students are provided with feedback, it is by no means certain that it will lead to improvements in performance.6 There remains considerable uncertainty about what makes feedback effective for learning. Indeed, even the task of ensuring the student receives the feedback is not straightforward. In some studies, more than 50% of students do not take up offers to receive feedback.7 Learners may explicitly ask for feedback or may seek it more indirectly, perhaps by monitoring others' reactions to them and their performance, or comparing their results with others.8 Learners express a desire for more feedback, but fear receiving information that challenges their own positive self-assessment of their abilities.9 Some learners appear to want feedback as a means of building their confidence rather than as a means to correct a knowledge or skill deficiency.10 It is increasingly recognised that there needs to be a two-way communication process between teachers and learners and a focus on how feedback is received (feedback as a dialogue).10,11 Receiving more negative feedback can have a profound effect on a student's self-perception.12 Arguably, the emotional impact of the results of a high-stakes assessment will probably be more powerful than that arising from more informal or lower-stakes assessments.

Although little is known about feedback after high-stakes assessments, it is relevant to consider learning-related characteristics, such as goal orientation, control of learning, self-efficacy and motivation, which influence feedback in other settings.

The role of goal orientation is important in understanding students' use of feedback. Some individuals have a mastery or learning goal orientation, to develop competence and mastery of new skills, whereas others have a performance goal orientation, to demonstrate their competence to others, seek positive comments and avoid negative comments about their work.13 A student with a learning goal orientation will probably regard feedback as a diagnostic tool providing useful information to help them acquire greater competence, whereas a student with a performance goal orientation will probably view feedback as a judgement about themselves as well as their competence.14 Negative feedback can be particularly demoralising as it is in direct opposition to the goal of appearing competent. Performance goal orientation can be subdivided into two main categories: approach and avoid goals.15 Performance approach oriented students are concerned with demonstrating that they are more competent than others, whereas performance avoid oriented students are keen to avoid appearing incompetent. In the workplace, medical trainees with a higher learning goal orientation perceive many benefits and few costs from feedback, whereas those with a higher performance goal orientation perceive more costs from feedback.16 We therefore hypothesised that students with a learning goal orientation would make greater use of feedback information after summative assessment compared with students with a performance orientation. Furthermore, we hypothesised that students with a performance approach goal orientation who performed poorly would make less use of feedback than those students who performed well, whereas performance avoid oriented students would make more use of feedback if they performed poorly, as they would be concerned about potential failure.

Motivation influences the effort taken in order to achieve a goal.17 Intrinsic motivation refers to a desire within an individual to succeed, rather than being reliant on external factors, whereas extrinsic motivation is focused on the achievement of outcomes in order to satisfy external pressures. Students with higher intrinsic motivation, focused towards achievement, have been found to invest greater time in studying.18 Intrinsic motivation also correlates positively with time spent reflecting on learning.19 Therefore, we hypothesised that students with higher intrinsic motivation would view more feedback information than students with higher extrinsic motivation.

'Control of learning', a student's belief that their academic achievements are contingent on their own efforts rather than on external factors, is another factor to consider.20 Students who believe that their own efforts will affect their achievement put more effort into their learning.20 We therefore expected that students with a stronger belief in the control of their learning would be more likely to make use of feedback.

Successful academic achievement can lead to an improvement in self-efficacy, which is concerned with a person's belief in their ability to succeed in a particular situation.21 A higher self-efficacy, once developed, can help students to cope with negative feedback.22 We therefore predicted that medical students with higher self-efficacy would make greater use of feedback.

The term 'self-presentation of low achievement' refers to a student's preference for keeping their achievements hidden from their peers.23 Students who underachieve compared with their peers score higher on this concept.24 An unwillingness to be open about performance may impair a student's willingness to engage with feedback, as they may not wish to acknowledge to others areas where they need to improve, although empirical data on this effect are lacking. We therefore hypothesised that students who preferred to keep their achievements hidden would make less use of the feedback.

Feedback can be valuable to learners as it provides information to help them meet their goals, and it can help to increase their self-confidence.10 On the other hand, feedback can present risks to learners if it provides information that does not fit with their own self-image. Feedback-seeking behaviour is mediated by the learner's perception of the probable value and risk of feedback.16 In particular, learners who regard feedback as valuable are, unsurprisingly, more likely to seek feedback. Because the perception of feedback value tends to correlate inversely with the perception of feedback risk, learners who regard feedback as more risky will probably seek less feedback, although empirical results are equivocal.16

Therefore, we hypothesised that students who perceived a higher value of feedback would make more use of feedback information after summative assessment, whereas those who regarded it as more risky would make less use of the feedback provided.

Learners are not simply passive recipients of feedback, but seek it out using both direct and indirect strategies.8 In the former case, learners ask feedback providers directly, who in turn share only the feedback that they wish to share. By contrast, employing an indirect strategy means that the feedback seeker attempts to infer the feedback by monitoring the reactions of others (for instance, the non-verbal behaviour of a supervisor). When designing feedback systems for medical students, it would be useful to understand to what extent direct and indirect feedback-seeking strategies are employed. We hypothesised that more active seekers of feedback in the workplace would also be more active seekers of feedback after summative assessment, and that a similar relationship would hold for learners who were feedback monitors.

Once feedback has been obtained, it can sometimes raise as many questions as it answers, especially if the feedback is negative or unexpected. This is defined as feedback uncertainty.25 Learners who perceive higher levels of feedback uncertainty tend to be less active seekers of feedback.25 We therefore hypothesised that students who rated more highly on feedback uncertainty would make less use of the feedback information provided.

Although the literature suggests that comments from examiners are appreciated by students, it is often not practical to provide them in the context of a busy, high-stakes objective structured clinical examination (OSCE) with very limited time between candidates. As a result, summative assessments often involve the awarding of grades or numerical marks, but the use of grades when giving feedback has been criticised.26 This stems from studies demonstrating that awarding grades blocks a learner's receptivity to formative comments.27 However, this criticism primarily derives from studies that have been conducted on schoolchildren or in the context of formative, low-stakes assessments. The effect of giving grades, or benchmarking a student's mark against others in the same cohort, has not previously been explored in the context of summative assessment in medicine. Benchmarking can help students to self-assess their own performance with a greater degree of accuracy.28 One study in which medical students were offered feedback after the awarding of grades for a summative written assignment demonstrated that higher performers were more likely to access the feedback than students who had performed less well or had failed.7 Performance in assessment may therefore influence receptivity to feedback. We are not aware of any studies comparing students' summative OSCE performances (and the awarding of marks) with their subsequent engagement with feedback. There is therefore a need to understand the influence of performance on the use of feedback after summative assessment.

This study investigated how students receive and use feedback in the context of summative assessment. The aim was to explore how students receive feedback (in the form of profile scores on OSCE stations and benchmarking, delivered via a website) after a summative assessment. Specifically, we intended to answer the following questions:

1 To what extent do medical students engage with feedback after a high-stakes OSCE?

2 Do goal orientation, motivation, self-efficacy, control of learning beliefs or students' attitudes to the benefits or risks of feedback influence how students engage with the feedback?

3 Does performance in the current or previous assessment affect engagement with feedback?

METHODS

Context

The study took place at Keele University School of Medicine in 2011. The third year (out of five), comprising 138 students, represents a transition to much more clinically based learning, with 80% of learning taking place in the clinical environment. At the end of the year, students have a summative 12-station OSCE, in which they have to pass at least eight stations. Students have to pass to progress to Year 4. Each station is 10 minutes long and is marked with generic consultation rating scales that are used across multiple stations.29 Students who failed the OSCE were required to discuss their performance with a senior tutor, although they were not required to view the website either before or after this meeting. This took place at least 6 weeks after the results were released. Students who passed the OSCE were not offered this opportunity.

Feedback website

A website was developed within the virtual learning environment to deliver feedback about the OSCE in various formats: station by station or on skills across stations. In the station-by-station format, students could view information in various ways: 'pass–fail' web pages gave information about how close to the pass mark and how far short of the maximum mark students were; 'global' web pages presented the examiners' global mark with a graphical comparison with the cohort average for that station; 'skills breakdown' pages presented a breakdown of the different skills assessed at that station, with a graphical comparison with the cohort average; 'detailed comparison' web pages presented the cohort's performance on that station in a frequency distribution chart, with additional information about the pass mark and the student's individual performance. Students could also look across stations at their performance on skills that were assessed at multiple stations. Skills that were assessed in more than one station were filtered out and presented in numerical form (the skills summary) or graphically compared with the cohort (cross-station breakdown). Students could also access 'next steps' web pages, which provided more guidance on reflecting on and responding to the feedback (with specific worksheets to download and work through), along with encouragement to develop action plans based on the well-known SMART format (Specific, Measurable, Achievable, Realistic, Timely).30 A summary of the 130 different web pages available is shown in Appendix S1, available online. The results from the OSCE were uploaded to the website via an Excel spreadsheet. Four hours after the results of the OSCE were delivered, the feedback was released. Use of the website was left entirely to the students' discretion; there was no requirement to log on. The website recorded the total pages viewed, the time spent on the website and the page hits for each area. Data were collected on the students' OSCE performance, performance in the previous year's 20-station practical skills assessment (the objective structured skills examination; OSSE) and on the students' performance in the written assessments for that year.
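The benchmarking behind these pages amounts to simple cohort statistics. The following Python fragment is a minimal sketch, not the actual Keele website code: it derives the content of a 'global' and a 'detailed comparison' page for one station from hypothetical marks; all values and variable names are illustrative.

```python
# Minimal sketch of the cohort benchmarking described above; the marks,
# pass mark and layout are hypothetical stand-ins, not study data.
import numpy as np

rng = np.random.default_rng(1)
scores = np.clip(rng.normal(65, 10, size=138), 0, 100)  # one mark per student
pass_mark = 60.0
student_score = scores[0]  # the logged-in student's own mark

# 'Global' page: the student's mark against the cohort average.
cohort_mean = scores.mean()
print(f"Your mark: {student_score:.1f} "
      f"(pass mark {pass_mark:.0f}, cohort average {cohort_mean:.1f})")

# 'Detailed comparison' page: the cohort's marks as a frequency distribution,
# with the student's own position flagged.
counts, edges = np.histogram(scores, bins=10)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    marker = " <- you" if lo <= student_score < hi else ""
    print(f"{lo:5.1f}-{hi:5.1f} | {'#' * c}{marker}")
```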

Questionnaire

A 51-item questionnaire was developed to measure learning-related characteristics and attitudes to feedback. It was based on scales from previously validated instruments that were relevant to feedback in a summative context and had acceptable reliability. Some items were slightly reformulated to better fit the context of the study. Some scales were shortened. Full details are shown in Appendix S2, available online. The scales selected covered goal orientation (mastery, performance approach and performance avoid), intrinsic and extrinsic motivation, control of learning beliefs, self-efficacy for learning and performance, attitudes towards self-presentation of low achievement, attitudes towards the value, uncertainty or risk of feedback, and self-reported frequency of direct or indirect feedback-seeking behaviour.20,23,25,31–34 Responses were recorded on a 5-point Likert scale.

Data collection

Questionnaires were given out at the end of a lecture 2 weeks before the summative OSCE. Students were told that this was part of a research study and that they were free to consent or not. Participants were given a chocolate bar as a token of thanks. Students' library card numbers were recorded to allow linking of responses to examination performances and use of the website. Usage of the website was monitored for 8 weeks following the release of the feedback in order to record repeat visits to the website and to record whether some students first visited the website after a significant delay.

Data analysis

To answer the first research question, website usage was analysed by calculating the number of separate web page 'hits' for each student and the number of separate visits to the website made by each student. The total time (from login to logout) spent on each visit was also recorded. Descriptive statistics were obtained using SPSS Version 18 (SPSS Inc., Chicago, IL, USA). Latent class analyses (LCA; WINMIRA software35) were used to investigate different profiles of students using the feedback information. LCA is a probabilistic statistical method that clusters subgroups of students according to similar profiles.36 It has previously been used successfully to cluster homogeneous groups of students together when studying learning environments, and enables an evaluation of the interaction between student profiles and other learning characteristics.37 This method uses the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Consistent Akaike Information Criterion (CAIC) as indices for the quality of the computed solutions. The lowest values on these criteria indicate the best fit to the data. Empirical distributions were generated using the bootstrapping method (Cressie-Read and Pearson).36 Based on these indices, the best-fitting solution was selected, representing the optimal number of profiles to be used to describe the data. The variables entered in the analyses for defining the profiles were the total web pages visited, the number of separate visits to the website, the average number of web pages viewed per minute and the number of web pages viewed for each section of the website (pass–fail, global, skills breakdown, detailed comparisons, next steps, skills summary and cross-station breakdown). To answer the second research question, descriptive statistics were calculated for each of the learning-related characteristic scales. One-way analyses of variance (ANOVAs) were performed to explore the relationship between learning-related characteristics and use of the feedback (as defined by the number of separate visits students made to the website). Post-hoc analyses were performed with Tukey's Honestly Significant Difference test. To answer the third research question, ANOVAs and chi-squared tests were performed to explore the relationships between OSCE performance (in the current year) and use of the feedback, and between OSSE performance (in the previous year) and use of the feedback.
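To illustrate the model-selection step, the sketch below reproduces the logic of comparing information criteria across one- to ten-class solutions. It uses scikit-learn's Gaussian mixture model as a stand-in for WINMIRA's latent class model, with randomly generated placeholder data; it is an assumption-laden illustration, not the authors' analysis.

```python
# Sketch only: choose the number of latent profiles by comparing information
# criteria, using a Gaussian mixture as a stand-in latent class model.
# `usage` is a hypothetical (students x measures) matrix of website-usage data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
usage = rng.poisson(lam=15, size=(132, 10)).astype(float)  # placeholder data

n = usage.shape[0]
for n_classes in range(1, 11):  # solutions for one to ten classes
    gm = GaussianMixture(n_components=n_classes, covariance_type="diag",
                         n_init=5, reg_covar=1e-3, random_state=0).fit(usage)
    aic, bic = gm.aic(usage), gm.bic(usage)
    # Recover the parameter count p from AIC = -2logL + 2p and
    # BIC = -2logL + p*log(n), then form CAIC = BIC + p.
    p = (bic - aic) / (np.log(n) - 2)
    caic = bic + p
    print(f"{n_classes} classes: AIC={aic:.1f}  BIC={bic:.1f}  CAIC={caic:.1f}")
# The best-fitting solution is the one with the lowest criterion values.
```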

Ethical approval: Ethical approval was obtained via the Keele School of Medicine Ethics Committee before the study commenced.

RESULTS

Use of the website

The website was viewed by 132 students (95.7%). Of those who viewed it, 115 (87.1%) first visited the website on the first day that the results became available; five (3.8%) waited more than 2 weeks before first viewing the website. The number of separate visits ranged from one to five (mean 1.90, standard deviation [SD] 1.17, median 1); 65 students (49.24%) visited the website more than once. The number of pages viewed ranged from two to 377 (mean 123.47, SD 74.24, median 111). There were 130 different web pages available. Descriptive statistics for the use of the different aspects of the website are shown in Table 1. Pass–fail web pages were visited very frequently, whereas other breakdowns of marks were only visited about half as frequently. Many students viewed the web pages quickly, with half viewing at least six web pages per minute.

To select the best-fitting LCA model, the solutions for one to 10 classes were computed (Table S1). On the basis of the BIC and CAIC, the two-class solution is favoured as these indices are at their lowest, suggesting the best fit. However, based on the AIC and the bootstraps, the four-class model is preferred. The three-class solution provided a compromise and the best overall fit to the data. The three classes of students differed in how they utilised the website. For ease of reference, these are described as 'comprehensive users', 'selective users' and 'minimal users' (see Fig. 1). The comprehensive users, representing 50% of the students, made high use of the website across all areas. The minimal users (27% of students) made low use of the website across all areas. The selective users (23% of students) made less use of the website overall. Compared with the comprehensive users, they made fewer visits and looked at fewer pass–fail, skills summary and cross-station breakdown pages, but made similar use of the global, skills breakdown and next steps sections.

Learning-related characteristics and use of feedback

The questionnaire was completed by 113 students (81.9%); 69 (61%) were female, which is comparable with their proportion in the year as a whole (59%). Descriptive statistics for the questionnaire scales are shown in Table 2, together with Cronbach's alpha per scale. The scales showed moderate to acceptable internal consistencies (alpha ranging from 0.52 to 0.80). Students on average scored above the midpoint of the scale on mastery and performance avoid goal orientation, intrinsic motivation, control of learning beliefs, self-efficacy, value of feedback and frequency of feedback monitoring; they scored below the midpoint for performance approach goal orientation, self-presentation of low achievement, risk in feedback seeking and feedback-related uncertainty (one-sample t-tests, p < 0.01; see Table 2).

Table 1 Use of different parts of website

| | Mean | Median | Standard deviation | Lower quartile | Upper quartile | Interquartile range |
|---|---|---|---|---|---|---|
| Total web pages visited | 123.47 | 111 | 74.24 | 70 | 170 | 99 |
| Total number of visits to website | 1.90 | 1 | 1.18 | 1 | 2 | 1 |
| Speed (average pages viewed per minute) | 6.56 | 6 | 2.68 | 4.69 | 8.12 | 3.43 |
| Pass–fail web pages viewed | 25.14 | 20 | 18.61 | 13 | 32 | 19 |
| Global mark web pages viewed | 15.23 | 13 | 12.12 | 6 | 19 | 13 |
| Skills breakdown web pages viewed | 15.61 | 13 | 16.79 | 9 | 17 | 8 |
| Detailed comparison with other students web pages viewed | 13.86 | 13 | 13.37 | 3 | 18 | 15 |
| Skills summary web pages viewed | 17.86 | 12 | 17.90 | 3 | 28 | 25 |
| Station breakdown web pages viewed | 10.01 | 4 | 13.99 | 2 | 12 | 10 |
| Next steps web pages viewed | 1.43 | 1 | 1.75 | 0 | 2 | 2 |

Figure 1 The three-class model for latent class analysis


Table 2 Descriptive statistics for learning-related student characteristics in questionnaire

| Learning-related student characteristics | Number of items in scale | Mean | Standard deviation | One-sample t-test* | Cronbach's alpha per scale | Sample item (all items shown in Appendix S2) |
|---|---|---|---|---|---|---|
| Mastery goal orientation | 5 | 4.47 | 0.45 | p < 0.01 | 0.64 | One of my goals this year is to learn as much as I can |
| Performance approach goal orientation | 5 | 2.70 | 0.78 | p < 0.01 | 0.80 | It is important to me that other students in my year think I am good at my work |
| Performance avoid goal orientation | 3 | 3.50 | 0.74 | p < 0.01 | 0.68 | One of my goals is to avoid looking like I have trouble doing the work |
| Intrinsic motivation towards accomplishment | 4 | 3.38 | 0.72 | p < 0.01 | 0.71 | I go to medical school because it allows me to experience a personal satisfaction in my quest for excellence in my studies |
| Extrinsic motivation introjected regulation | 4 | 2.99 | 0.82 | | 0.69 | I go to medical school because of the fact that when I succeed in medical school I feel important |
| Control of learning beliefs | 4 | 3.77 | 0.66 | p < 0.01 | 0.64 | If I study in appropriate ways, then I will be able to learn the material in this course |
| Self-efficacy for learning and performance | 6 | 3.37 | 0.60 | p < 0.01 | 0.75 | I am confident I can do an excellent job on the assignments and tests in this course |
| Self-presentation of low achievement | 4 | 2.45 | 0.68 | p < 0.01 | 0.52 | If other students found out I did well on a test, I would tell them it was just luck even if that was not the case |
| Value of feedback | 3 | 4.31 | 0.56 | p < 0.01 | 0.52 | It is important to me to receive feedback on my performance |
| Risk in feedback seeking | 5 | 2.17 | 0.69 | p < 0.01 | 0.78 | It is not a good idea to ask your tutor for feedback; they might think you are incompetent |
| Feedback-related uncertainty | 4 | 2.67 | 0.79 | p < 0.01 | 0.79 | When my tutors give me feedback about my performance, I am not immediately sure what it requires me to do |
| Frequency of feedback monitoring | 2 | 3.49 | 0.83 | p < 0.01 | 0.76 | How frequently do you observe what performance behaviours and skills your tutor rewards and use this as feedback on your own performance? |
| Frequency of active feedback seeking | 1 | 3.18 | 1.09 | | | How frequently do you seek feedback from your tutors about your performance? |

* One-sample t-test compares the mean for the scale with the midpoint of the scale (3).
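The footnoted test is a standard one-sample t-test against the scale midpoint. The fragment below shows the computation on hypothetical ratings; the study's raw item data are not reproduced here, so the generated scores are stand-ins matching the reported mean and SD for the mastery scale.

```python
# Sketch of the footnote's one-sample t-test: comparing a scale mean with
# the scale midpoint of 3. The ratings are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mastery = rng.normal(4.47, 0.45, 113)          # hypothetical scale scores
t_stat, p_value = stats.ttest_1samp(mastery, popmean=3.0)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")  # p < 0.01 -> above midpoint
```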


ANOVAs demonstrated that students who were comprehensive users scored more highly on the value of feedback scale than selective users (F(2,110) = 3.42; p < 0.05; D = 0.34, p < 0.05, d = 0.57). Minimal users scored more highly than selective users on the extrinsic motivation scale (F(2,110) = 3.706; p < 0.05; D = 0.58, p < 0.05, d = 0.67). Post-hoc analysis of the ANOVA comparing learning-related characteristics with the number of separate website visits did not demonstrate significant differences.
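For readers unfamiliar with the statistics reported above, the sketch below runs the same kind of one-way ANOVA with Tukey's HSD post-hoc comparison. Group sizes follow the 50/23/27% profile split of the 113 respondents, but the scores themselves are invented placeholders, not study data.

```python
# Sketch only (hypothetical data): one-way ANOVA across the three user
# profiles, followed by Tukey's HSD pairwise post-hoc comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Hypothetical 'value of feedback' scale scores for each profile.
comprehensive = rng.normal(4.4, 0.5, 57)
selective = rng.normal(4.1, 0.5, 26)
minimal = rng.normal(4.2, 0.5, 30)

f_stat, p_value = stats.f_oneway(comprehensive, selective, minimal)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

scores = np.concatenate([comprehensive, selective, minimal])
groups = ["comprehensive"] * 57 + ["selective"] * 26 + ["minimal"] * 30
print(pairwise_tukeyhsd(scores, groups))  # pairwise mean differences
```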

Performance and use of feedback

An ANOVA demonstrated that the comprehensive users had a significantly higher OSCE score (F(2,134) = 4.65; p < 0.05; D = 13.02, p < 0.05, d = 0.57) and OSSE score (F(2,134) = 5.21; p < 0.01; D = 5.04, p < 0.01, d = 0.58) than the minimal users. Inspecting the means plots of stations passed against total website usage indicated that there was no direct linear relationship between overall OSCE score and website usage, but a U-shaped relationship existed. To facilitate further analysis, students were grouped into categories: excellent (passed all 12 stations, N = 47), good (failed one to two stations, N = 64), just passing (failed three to four stations, N = 17) and failing (failed five or more stations, N = 9). The results are shown graphically in Figs 2 and 3. Within the different categories of OSCE performance, there was a trend towards excellent students viewing the highest number of web pages and just-passing students viewing the website the least (F(3,133) = 3.05; p < 0.05; D = 52.18, p < 0.10, d = 0.75). As can be seen in Fig. 4, 14 (29.79%) of the excellent students visited the website three or more times; no just-passing students visited the website more than twice (likelihood ratio = 29.85, df = 15, p < 0.01). Excellent students viewed more global web pages than just-passing students (F(3,133) = 4.67; p < 0.01; D = 10.27, p < 0.05, d = 0.97) and failing students (D = 11.68, p < 0.05, d = 1.18). Excellent students viewed more detailed comparison web pages than all other students (F(3,133) = 6.20; p < 0.01; good: D = 7.74, p < 0.05, d = 0.55; just passing: D = 11.77, p < 0.01, d = 0.89; failing: D = 13.85, p < 0.05, d = 1.09). Failing students viewed more skills breakdown pages than good students (F(3,133) = 3.41; p < 0.05; D = 17.84, p < 0.05, d = 2.63) or just-passing students (F(3,133) = 3.41; p < 0.05; D = 19.18, p < 0.05, d = 2.81). Failing students used the next steps web pages more frequently than good students (F(3,133) = 5.81; p < 0.01; D = 2.32, p < 0.01, d = 0.84) or just-passing students (D = 2.56, p < 0.01, d = 0.96). There was no significant relationship between performance in the written assessments and use of the website. No significant differences were found in the use of other areas of the website.

DISCUSSION

This study demonstrated that the majority of students will engage with feedback after a summative OSCE, delivered via a website. LCA demonstrated that there were three broad groups of students who used the website in different ways. Approximately half the students seemed very interested in the website information, a quarter showed relatively little interest and a quarter used the website in a more selective way. Unsurprisingly, the comprehensive users scored more highly on the value of feedback scale. The minimal users scored more highly for extrinsic motivation. Students who passed all OSCE stations made the most use of the website, whereas students who just passed made least use. No student in this latter category visited the website more than twice.

Figure 2 Mean number of web pages by objective structured clinical examination (OSCE) performance (error bars represent 95% confidence intervals)

Figure 3 Mean number of web pages visited by section and objective structured clinical examination (OSCE) performance (error bars represent 95% confidence intervals)



Unlike other studies that simply demonstrated whether the feedback was collected, this study provides the opportunity to understand how often the students return to the feedback and to see which areas of feedback different subgroups of students preferentially view. Interestingly, the strongest performers tended to make more use of the aspects of the website that compared their performance with other students (global web pages, detailed comparison with other students). Although these sections show them how they are doing compared with their peers, they do not demonstrate why they have performed strongly; these aspects are better shown in the sections focusing on individual skills, which were not viewed more often by these students. This suggests that these students were perhaps looking for positive affirmation or reassurance from the feedback, rather than for guidance on how to use the feedback for future learning. This fits with previous qualitative work on feedback, which found implicit evidence that students and trainees were seeking feedback to build their confidence rather than to change their behaviour.10 The small number of students who failed the OSCE made considerable use of the website, although there was substantial variability between students. These students are all invited to discuss their results with a senior tutor, so it is possible this could have influenced their use of the website.

Intriguingly, the students who just passed the OSCE made least use of the feedback, yet they are at risk of failing future assessments and arguably have the most to gain from the feedback. This group is unlikely to be homogeneous, but will comprise students who often just pass, others who normally do much better and others who failed previous assessments. It is therefore surprising that these students behaved in a relatively homogeneous way by visiting the website no more than twice, unlike other types of students, who showed a more varied range of engagement with the feedback. It is possible that the comparative data with other students were not so reassuring for this group and therefore there was less incentive for them to review this information. There is a clear need to study this subset of students in more detail to see if it is possible to design interventions that would increase their engagement with feedback. Although there has been considerable focus on students who fail OSCEs, the group of students who just pass the assessments has been largely neglected. Given that there are potentially many more students in this category, this lack of scrutiny cannot be justified.

It was surprising that no clear relationship was found between the students' learning-related characteristics and their use of feedback, as such relationships have previously been demonstrated in the literature. It is unclear whether this is because the feedback was delivered in the context of a summative rather than a formative assessment, or because other factors are more important. Others have suggested that the role of cognitive learning styles in affecting performance may not be as straightforward as previously thought.38 This requires further study.

There are several limitations to our study. It was carried out within a single year group in a single medical school, so it is not certain to what extent the results will be generalisable to other medical schools, especially those with a different assessment regime. The feedback provided was a numerical breakdown of scores and skills but did not include examiners' comments, for reasons of feasibility. It is possible that a different pattern may have emerged if comments had also been included, as there is evidence that students may engage better with this type of feedback.39 As this was a quantitative study, it could not differentiate students who intensely engaged with the feedback from those who simply looked at the screen without attempting to understand their performance, although the frequency with which some students returned to the website provides circumstantial evidence. It also cannot explain why students used the website in the way that they did. A qualitative study would help with this.

Figure 4 Number of visits to website by objective structured clinical examination (OSCE) performance



The time when the feedback is administered may be relevant. It is recognised that feedback should be delivered to students at a time when it is needed, and when they have an opportunity to make use of it.2 As these students will not have another OSCE for 12 months, it could be argued that they do not have an immediate need to make use of the feedback. However, although this was a summative OSCE, it did not occur at the end of the academic year, but two-thirds of the way through the year, and the students then had some clinical attachments, which provided an opportunity for them to make use of the feedback and adapt their clinical skills.

Our study supports the provision of feedback after summative assessments, as the majority of students will engage with the feedback. If we are to meet our goal of encouraging students towards optimal competence, then it is important to understand the group of students who just pass assessments, as they engage less well with feedback. Once we understand this group better, interventions can be planned to improve their engagement with feedback and, hopefully, their subsequent performance.

Contributors: CH made substantial contributions to the conception and design of the study and to the acquisition and analysis of data. He drafted and revised the article and approved the final version to be published. KDK made substantial contributions to the conception and design of the study and to the analysis of data. She revised the article critically for important intellectual content and approved the final version to be published. AM made substantial contributions to the conception of the study and to the acquisition of data. He revised the article critically for important intellectual content and approved the final version to be published. LS made substantial contributions to the conception and design of the study and to the analysis of data. He revised the article critically for important intellectual content and approved the final version to be published. VW made substantial contributions to the conception and design of the study and to the analysis of data. She revised the article critically for important intellectual content and approved the final version to be published. CV made substantial contributions to the conception and design of the study and to the analysis of data. He revised the article critically for important intellectual content and approved the final version to be published.
Acknowledgements: none.
Funding: none.
Conflicts of interest: none.
Ethical approval: ethical approval was granted by Keele University School of Medicine Ethics Committee.

REFERENCES

1 Dochy F, Segers M, Gijbels D, Struyven K. Assessment engineering: breaking down barriers between teaching and learning, and assessment. In: Boud D, Falchikov N, eds. Rethinking Assessment in Higher Education: Learning for the Longer Term. Oxford: Routledge 2007;87–100.

2 Shute V. Focus on formative feedback. Rev Educ Res 2008;78:153–89.

3 Brookhart SM. Successful students' formative and summative uses of assessment information. Assess Educ Principles Policy Pract 2001;8:153–69.

4 Wass V, van der Vleuten C, Shatzer J, Jones R. The assessment of clinical competence. Lancet 2001;357:945–9.

5 Norcini JJ. Setting standards on educational tests. Med Educ 2003;37:464–9.

6 Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis and a preliminary feedback intervention theory. Psychol Bull 1996;119:254–84.

7 Sinclair HK, Cleland JA. Undergraduate medical students: who seeks formative feedback? Med Educ 2007;41:580–2.

8 Ashford SJ, Blatt R, VandeWalle D. Reflections on the looking glass: a review of research on feedback-seeking behaviour in organizations. J Manage 2003;29:773–99.

9 Mann K, van der Vleuten C, Eva K, Armson H, Chesluk B, Dornan T, Holmboe E, Lockyer J, Loney E, Sargeant J. Tensions in informed self-assessment: how the desire for feedback and reticence to collect and use it can conflict. Acad Med 2011;86:1120–7.

10 Eva KW, Armson H, Holmboe E, Lockyer J, Loney E, Mann K, Sargeant J. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ 2012;17:15–26.

11 Nicol D. From monologue to dialogue: improving written feedback processes in mass higher education. Assess Eval Higher Educ 2010;35:501–17.

12 Sargeant J, Mann K, Sinclair D, van der Vleuten C, Metsemakers J. Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Adv Health Sci Educ 2008;13:275–88.

13 Dweck CS, Leggett EL. A social-cognitive approach to motivation and personality. Psychol Rev 1988;95:256–73.


14 VandeWalle D. A goal orientation model of feedback-seeking behaviour. Hum Resour Manage Rev 2003;13:581–604.

15 VandeWalle D. Development and validation of a work domain goal orientation instrument. Educ Psychol Measur 1997;57:995–1015.

16 Teunissen PW, Stapel DA, van der Vleuten CPM, Scherpbier AJJA, Boor K, Scheele F. Who wants feedback? An investigation of the variables influencing residents' feedback-seeking behavior in relation to night shifts. Acad Med 2009;84:910–17.

17 Maslow AH. A theory of achievement motivation. In: Harriman PL, ed. Twentieth Century Psychology: Recent Developments. Manchester: Ayer Publishing 1970;22–48.

18 Wilkinson TJ, Wells JE, Bushnell JA. Medical student characteristics associated with time in study: is spending more time always a good thing? Med Teach 2007;29:106–10.

19 Sobral DT. What kind of motivation drives medical students' learning quests? Med Educ 2004;38:950–7.

20 Zusho A, Pintrich PR, Coppola B. Skill and will: the role of motivation and cognition in the learning of college chemistry. Int J Sci Educ 2003;25:1081–94.

21 Bandura A. Self-efficacy: toward a unifying theory of behavioural change. Psychol Rev 1977;84:191–215.

22 Bandura A, Locke EA. Negative self-efficacy and goal effects revisited. J Appl Psychol 2003;88:87–99.

23 Midgley C. Manual for the Patterns of Adaptive Learning Scales. University of Michigan 2000. Available at: http://www.umich.edu/~pals/PALS%202000_V12Word97.pdf. [Accessed 24 March 2013.]

24 Ketter LC. High-stakes testing, achievement-goal structures, academic related perceptions, beliefs, strategies, and school belonging among selected eighth-grade students in a northwest Florida school district. Education doctorate, University of West Florida 2006. Available at: http://etd.fcla.edu/WF/WFE0000029/Ketter_Lynn_Carol_200605_EdD.pdf. [Accessed 24 March 2013.]

25 Fedor DB, Rensvold RB, Adams SM. An investigation of factors expected to affect feedback seeking: a longitudinal field study. Pers Psychol 1992;45:779–805.

26 Black P, Wiliam D. Assessment and classroom learning. Assess Educ Principles Policy Pract 1998;5:7–74.

27 Butler R. Task-involving and ego-involving properties of evaluation: effects of different feedback conditions on motivational perceptions, interest and performance. J Educ Psychol 1987;79:474–8.

28 Srinivasan M, Hauer K, Der-Martirosian C, Wilkes M, Gesundheit N. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination. Med Educ 2007;41:857–65.

29 Lefroy J, Gay S, Gibson S, Williams S, McKinley RK. Development and face validation of an instrument to assess and improve clinical consultation skills. Int J Clin Skills 2011;5:115–25.

30 Doran GT. There's a S.M.A.R.T. way to write management's goals and objectives. Manage Rev 1981;70:35–6.

31 Anderman EM, Urdan T, Roeser R. The Patterns of Adaptive Learning Survey: history, development and psychometric properties. Paper prepared for the Indicators of Positive Development Conference, 2003. Available at: http://www.childtrends.org/Files/Child_Trends-2003_03_12_PD_PDConfAUR.pdf. [Accessed 24 March 2013.]

32 Wolters CA, Yu SL, Pintrich PR. The relation between goal orientation and students' motivational beliefs and self-regulated learning. Learn Individ Diff 1996;8:211–38.

33 Ashford SJ. Feedback-seeking in individual adaptation: a resource perspective. Acad Manag J 1986;29:465–87.

34 Vallerand RJ, Pelletier LG, Blais MR, Brière NM, Sénécal C, Vallières EF. The Academic Motivation Scale: a measure of intrinsic, extrinsic and amotivation in education. Educ Psychol Measur 1992;52:1003–17.

35 von Davier M. WINMIRA - A Program System for Analyses with the Rasch-Model, with the Latent Class Analysis and with the Mixed-Rasch Model. Kiel: IPN 1999.

36 Lazarsfeld PF, Henry NW. Latent Structure Analysis. Boston, MA: Houghton Mifflin Co. 1968.

37 Seidel T. The role of student characteristics in studying micro teaching-learning environments. Learn Environ Res 2006;9:253–71.

38 Cook DA. Revisiting cognitive and learning styles in computer-assisted instruction: not so useful after all. Acad Med 2012;87:778–84.

39 Lipnevich AA, Smith JK. Effects of differential feedback on students' examination performance. J Exp Psychol Appl 2009;15:319–33.

SUPPORTING INFORMATION

Additional Supporting Information may be found in the online version of this article:

Table S1. Indices for one to 10 latent class analysis (LCA) solutions, with 10 measures with respect to website usage.

Appendix S1. Table showing the breakdown of web pages.

Appendix S2. Table showing the changes to the questionnaire items.

Received 21 August 2012; editorial comments to author 2 October 2012, 21 January 2013; accepted for publication 20 January 2013
