Presentation on the work of the REAQ project by Lester Gilbert.
REAQ
Lester Gilbert, Gary Wills,
Bill Warburton, Veronica Gale
Report on E-Assessment Quality
A JISC project in eLearning
October 2009
THE PITCH
The system
Assessment
Quality management
Report questions
REAQ process
The team
Management group:
Lester Gilbert, ECS, University of Southampton (PI)
Dr Gary Wills, ECS, University of Southampton

Expert consultants group:
Cliff Beevers, Heriot-Watt
Paul Booth, Question Tools
John Kleeman & Greg Pope, Questionmark
Harvey Mellor, IoE
Chris Ricketts, Plymouth
Denise Whitelock, OU

The workers:
Veronica Gale, consultant researcher
Bill Warburton, iSolutions, University of Southampton
Interviewees
HEIs:
Heriot-Watt
University of Southampton
Newcastle University
University of Plymouth
The Open University
Edinburgh University
Institute of Education

Others:
Cambridge Assessment
SQA
Question Tools (Network Rail)
Vrije Universiteit Amsterdam
Questions asked (1)
What denotes ‘high quality’ in summative e-assessment?
What steps do you follow to create and use summative e-assessment?
How do you ensure e-assessment reliability, validity, security, and accessibility?
How does the process of creating good quality e-assessment differ from the process of creating traditional assessment?
Questions asked (2)
Please give us examples of good e-assessment; why are these ‘good’?
When you have heard of poor e-assessment, what has made it ‘poor’?
What feedback have you received from students who have taken e-assessments?
What advice would you give to others using summative e-assessment?
What research or other work has informed your thinking about summative e-assessment?
What further research would you like to see conducted?
EXPERT EXPECTATIONS
We expected to hear about…
… delivery issues…
And hear about…
… psychometric measures …
… intended learning outcomes …
As well as hearing about…
… appropriate standards …
… and if we were lucky, capability maturity …
WHAT WE HEARD
We did hear an (awful) lot about…
Delivery issues: infrastructure, support, and how things go wrong …
But not much about…
Point biserials, Cronbach alphas, Kuder-Richardsons
Content validity, conformance to ILOs
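
For readers who want the mechanics behind these statistics, the sketch below (an illustration added here, not REAQ project code) computes point biserials and Cronbach's alpha, which reduces to KR-20 for dichotomously scored items, from a toy response matrix using numpy; the matrix and its values are invented for the example.

# A minimal sketch, assuming a small dichotomous response matrix
# (rows = candidates, columns = items; 1 = correct, 0 = incorrect).
# Illustrative only: this is not REAQ project code.
import numpy as np

responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

totals = responses.sum(axis=1)   # each candidate's total score
k = responses.shape[1]           # number of items

# Point biserial: correlation of each item score with the total score
# (many tools correlate with the total minus the item to avoid inflation).
point_biserials = [np.corrcoef(responses[:, i], totals)[0, 1]
                   for i in range(k)]

# Cronbach's alpha, equivalent to KR-20 for 0/1-scored items:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
item_vars = responses.var(axis=0, ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / totals.var(ddof=1))

print("point biserials:", np.round(point_biserials, 2))
print("Cronbach's alpha:", round(alpha, 2))

A point biserial near zero or negative flags an item that fails to discriminate between stronger and weaker candidates; routine checks of this kind were exactly what interviewees rarely mentioned.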
And hardly anything about…
Metrics, capability maturity
Practice standards
And when we did hear…
Difficulty coefficients / facility values were often inappropriately used …
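
As background, a facility value is simply the proportion of a cohort answering an item correctly, so it characterises one sitting rather than a fixed property of the item. A minimal sketch (again an added illustration, with invented data):

import numpy as np

# Toy response matrix: rows = candidates, columns = items; 1 = correct.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
])

# Facility value (difficulty index): per-item proportion correct for
# this cohort only; it will differ for a different group of candidates.
facility = responses.mean(axis=0)
print("facility values:", np.round(facility, 2))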
SO…
Conclusions
Essentially, little support for e-assessment in the areas of:
Tools & toolkits
Guidance & advice focused upon quality
Exemplars of good practice
Little evidence for:
Maturity of good practice
Expectations (‘demand characteristics’) of quality
Recommendations
Tools & toolkits
Exemplars of good practice
Project to develop item bank quality statistics
Suppliers to make quality reports more accessible
Workshops, guidance, & advice focused upon quality:
  Quality management
  Standards
  Metrics & psychometrics
  Capability maturity
JISC bids & project outputs to include, as relevant:
  Psychometric measures
  Standards
  Capability maturity modelling
THANKS!
Comments, questions, …