Formative Assessment and Meaningful Learning Analytics

Susan Bull, Matthew D. Johnson, Carrie Demmans Epp, Drew Masci, Mohammad Alotaibi and Sylvie Girard Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom

[email protected]

Abstract—We introduce an independent open learner model for formative assessment and learning analytics, based on developments in technology use in learning, while also maintaining more traditional numerical and qualitative feedback options.

I. INTRODUCTION

With the development of new technologies there have been many changes in the ways that learning and assessment take place. Nevertheless, there is still room for approaches that can readily encompass the variety of new technologies, as well as accommodating more traditional formative assessment and feedback. Individual ‘learner models’ are inferred and dynamically updated during an interaction. These enable adaptive teaching systems to tailor the interaction to the individual [10]. Much as visualisation in learning analytics displays learning data to users (e.g. [8]), ‘open learner models’ externalise data from learner models [3], and so can be regarded as a specific application of visual learning analytics. Furthermore, as it is the learner model that is visualised, the data typically relates directly to understanding, skills, competencies, etc. While this can also be achieved for teachers using learning analytics data [4], learner models can very naturally make their inferences about learners available to students and teachers, since the learner model is an inherent and often core component of an adaptive system’s design.

Independent open learner models (IOLM) are not part of adaptive teaching systems: the independence of the OLM indicates that it is for use on its own. The aim is to focus on metacognitive skills that help in the learning process, allowing learners to assume greater control and responsibility for decisions in their learning [3]. An IOLM may be: part of a system that has tasks or questions [2]; connected to an e-portfolio [6]; or take data from a range of sources [7]. We here focus on an IOLM that integrates data from a variety of sources (activities, applications, etc.), to help learners and teachers understand students’ learning, and to offer opportunities for formative assessment combined with learning analytics, as argued to be beneficial [1] – but with a greater focus on learning or current competencies than on performance. It also uses self and peer assessment, as suggested, for example, for online learning [9]. That learners have to themselves use the visualised data, and identify what they need to do to improve their learning, is part of the learning process.

II. THE NEXT-TELL IOLM

Reports on the potential of (I)OLMs to be at the centre of contexts with learner data available from a range of sources are increasing [5,6,7]. This means that starting points for reflection or formative assessment can take account of a fuller range of activities a student might undertake that contribute to the learner model. The learner model in the Next-TELL IOLM is based on competency structures. Various activities, defined by the teacher, allow data to be collected as students interact: these may be online activities, using the IOLM API, or traditional feedback from self, peer or teacher assessments. Fig 1 shows four of the Next-TELL IOLM visualisations to illustrate how it can be used in formative assessment. Skill meters provide a clear, simple indication of the strength of competencies (and sub-competencies). Especially for small competency sets, these can easily show learners and teachers the extent of the various competencies measured by the various activities and/or technologies. The word cloud is useful when there is a large number of competencies. It is not ordered, but for quick viewing it can help users identify strengths (left) and difficulties (right). It can be a useful way to identify where to focus effort or, if a student releases their model to others, for peers to quickly note who might be able to help with a specific topic or competency. The competency network shows the competency structure, with the shade and size of nodes indicating the relative strength of competencies. This also allows users to easily see areas of the structure that are under-represented in their skill-set, as well as how their strengths are distributed. The radar plot enables comparison of data sources in the model, for example, comparing self-assessment to data perhaps perceived as more ‘expert’, such as teacher assessments or automated tools. Users can also view the influence of the learner model algorithm (Fig 2). This example shows the contribution of different sources of data, together with recency weighting, from manual peer and teacher assessments and automated data from an online tool (OLMlets [2]). The Next-TELL IOLM can support formative assessment in several ways.
The first is by viewing the visualisations. As indicated above, these can be used for different purposes: quick identification of areas of strength or weakness; more detailed inspection of skills with reference to one or more competency frameworks used in the IOLM; and comparison of learner model data from different sources. Each can serve as a prompt for learners to assess their needs and determine suitable follow-on activities (online activities, searches, reading, collaborating, asking the teacher, self-assessment, etc.).
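The combination of multiple evidence sources with recency weighting, as visualised in Fig 2, can be illustrated with a minimal sketch. The exponential decay, the half-life constant and the 0–1 competency scale below are our own illustrative assumptions, not the actual Next-TELL algorithm, which is not specified here.

```python
from datetime import datetime, timedelta

def competency_estimate(evidence, now, half_life_days=14.0):
    """Combine evidence for one competency into a single [0, 1] estimate.

    Each evidence item is (value, timestamp, source); newer items are
    weighted more heavily via an exponential recency decay.
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for value, timestamp, _source in evidence:
        age_days = (now - timestamp).total_seconds() / 86400.0
        weight = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
        weighted_sum += weight * value
        weight_total += weight
    return weighted_sum / weight_total if weight_total else 0.0

# Hypothetical evidence for one competency, from three source types
now = datetime(2014, 3, 1)
evidence = [
    (0.4, now - timedelta(days=28), "self"),     # older self-assessment
    (0.6, now - timedelta(days=7), "teacher"),   # recent teacher assessment
    (0.8, now, "OLMlets"),                       # automated data, most recent
]
estimate = competency_estimate(evidence, now)
print(round(estimate, 2))  # → 0.68: recent evidence dominates the older self-assessment
```

Under such a scheme, a learner whose recent activity shows improvement sees the model move towards the newer evidence, which matches the formative intent of acknowledging current competency rather than averaging over the whole history.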

2014 IEEE 14th International Conference on Advanced Learning Technologies
978-1-4799-4038-7/14 $31.00 © 2014 IEEE
DOI 10.1109/ICALT.2014.100

Previously, a simple IOLM has been found to prompt spontaneous face-to-face discussion about the learner models, when these could be optionally released to peers [2]. This can also be the case with the Next-TELL IOLM but, following the earlier finding, it also includes a discussion component where students can comment on their learner models or request assistance, etc., when they are not in the same location. For example, Fig 3 shows an excerpt from a discussion prompted by the instructor, based on the learner models. This excerpt demonstrates that students can appropriately discuss their understanding, and clarify their understanding to themselves and each other. If considered useful, the instructor can ‘add evidence’ to the IOLM in the form of a numerical value, as well as additional text feedback as evidence. This can further focus students back on the discussion for elaboration, but can also be a way to acknowledge improved understanding in the IOLM itself, which could be a motivating factor. Students can perform self and peer assessments in a similar manner. Students and teachers can view this evidence by clicking the text icon (by the skill meters in Fig 1).
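The ‘add evidence’ records described above (a numerical value, plus optional text feedback, viewable behind the text icon) might be represented as follows. The class and field names are hypothetical, for illustration only; the paper does not describe the underlying data format.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One assessment contributed to the learner model (hypothetical schema)."""
    competency: str
    value: float         # numerical assessment, here assumed on a 0-1 scale
    source: str          # e.g. "self", "peer", "teacher", or an application name
    strengths: str = ""  # optional free-text feedback
    improvements: str = ""

# An instructor acknowledging improved understanding after a discussion
item = Evidence(
    competency="Cognitive style",
    value=0.7,
    source="teacher",
    strengths="Distinguishes cognitive style from learning style",
    improvements="Relate the distinction to concrete study strategies",
)
evidence_list = [item]
print(len(evidence_list), evidence_list[0].source)  # → 1 teacher
```

Keeping the free-text fields separate from the numerical value mirrors the observation later in the paper that learners completed the numerical part far more often than the strengths/improvements text.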

Figure 1. Examples of Next-TELL IOLM visualisations

Figure 2. Data sources and learner model weightings

________________________________________________

Instructor: Cognitive style is one of the two topics still in the ‘very weak’ position for the group as a whole, according to the table OLM view. Are there specific difficulties that you are encountering here?

Student 1: Cognitive style is how information is processed and stored in memory, for example there are two types of styles of learning, the is the deep and surface approach of learning. the surface approach is trying to memory and reproduce of material, often without having much knowledge about the subject and how the memorised materials fit together. But in deep approach is the opposite because a student would have a deep understand and a lot more knowledge about the subject and make sense of the materials instead of just memorising, and not knowing how the things you memorised work. That’s my understanding of cognative learning style.

Student 2: Definitely the deep approach of learning style is where the learner tries to link his learning to what he/she already knows or believes in. They try to use the evidence to support their learning and fit it in their schema.

Instructor: But is there some confusin between cognitive and learning style?

Student 2: Learning style is the method/ way you learn the information however cognitive style is the way you process the information in your brain. Am i ok with this as a basic understanding?

________________________________________________

Figure 3. Excerpts from the discussion facility in the Next-TELL IOLM

If the IOLM is used throughout a course, there can be many formative assessment opportunities. Any activities that use the IOLM API can contribute to the model calculation and appear in the visualisations. In this case students used another IOLM (OLMlets [2]), which provided data to their Next-TELL IOLM. This was combined with data from teacher assessments of individual and group texts, and self and peer assessments. In addition, written feedback was provided. Performing self-assessment and comparing against ‘expert’ teacher assessments can be useful to gauge the accuracy of one’s self-assessment. Peer assessments can also help the giver of feedback, in the process of appraising the work of others. Collecting this data across activities and technologies also allows continuing formative assessment to take a large proportion of a learner’s activities into account in prompting reflection through learner model visualisations.
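The self- versus teacher-assessment comparison just described can be sketched as a per-competency discrepancy calculation. This is not a feature the paper attributes to the system; the function, competency names and scores are invented for illustration.

```python
def self_assessment_gap(self_scores, teacher_scores):
    """Per-competency gap between self- and teacher assessment.

    Positive values indicate over-estimation by the learner relative to
    the teacher; large absolute gaps flag competencies worth revisiting.
    """
    return {c: round(self_scores[c] - teacher_scores[c], 2)
            for c in self_scores if c in teacher_scores}

# Hypothetical scores on a 0-1 scale for two competencies
self_scores = {"Cognitive style": 0.8, "Adaptive navigation": 0.5}
teacher_scores = {"Cognitive style": 0.5, "Adaptive navigation": 0.6}
print(self_assessment_gap(self_scores, teacher_scores))
# → {'Cognitive style': 0.3, 'Adaptive navigation': -0.1}
```

A radar plot such as the one in Fig 1 presents the same comparison visually; the numeric gap is simply one way a learner might be prompted to notice where their self-assessment diverges from other sources.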

III. USE OF THE NEXT-TELL IOLM

Potential benefits of (I)OLMs have been previously described [3]. We here consider the extent of use of the IOLM as an indicator of perceived benefit for formative assessment.

Participants were 11 volunteers studying Computer Systems Engineering, from two courses: Personalisation and Adaptive Systems; and Adaptive Learning Environments. The learner models represented competencies and topics from the relevant course, but data from both groups is combined here. The IOLM was introduced in labs, but mostly used in students’ own time over 8 weeks. Use was encouraged as a means to obtain formative feedback, but no summative assessment was based on it. The discussion component became available late in the 8-week period. User actions in the system are automatically logged. Questionnaires were administered at the end of the courses, using a 5-point scale (strongly agree (5) – strongly disagree (1)). A ‘not applicable’ option was also available. A few questionnaire responses from two of the students were removed, where those students had rated visualisations but interaction logs showed no use of them. The questionnaires only asked about 6 of the 8 visualisations, but the logs were reviewed for all eight.

Looking at high-level learner actions within the IOLM (Table I), we see that all viewed their IOLM (any of the visualisations), inspected the modelling process, and viewed the evidence. The logs also revealed that 4 participants inspected the modelling process more than twice as much as any of the other learners. The majority of learners also viewed and participated in discussions. All participants completed at least one peer or self assessment, with most completing several. Table II shows that the most commonly completed part of an assessment was the numerical value. Around half also entered text describing strengths or possible improvements. Several instructor assessments by competency were also given (per student: M=16.7, SD=7.7, Min=6, Max=11). The questionnaire revealed general agreement with statements of usefulness, as in Table III. All stated that at least one of the visualisations was useful for learning (strongly agree/agree), with most claiming several to be helpful. Responses were mixed for the usefulness of assessments, but there were few negative responses.



TABLE I. HIGH LEVEL LEARNER ACTIONS

TABLE II. ASSESSMENT RELATED LEARNER ACTIONS

TABLE III. LEARNER PERCEPTIONS OF IOLM USEFULNESS

IOLM Use                   Useful   Neutral   Not Useful   N/A
Network                       4        4          2         1
Skill Meter                  11        0          0         0
Smiley                        6        2          1         2
Table                         8        2          1         0
Tree Map                      6        1          2         2
Word Cloud                    3        5          1         2
Self Assessment               5        4          0         2
Instructor Assessment        11        0          0         0
Viewing Peer Assessment       3        4          2         2
Giving Peer Assessment        5        3          1         2
Automated Assessment          7        3          1         0

Discussion: Over the 8 weeks, users accessed the visualisations (Fig 1) 64.2 times on average. The SD was high, indicating different patterns of use: some may check frequently for updates, or switch between visualisations for different views on the information; others check less frequently. This is in line with the aim of a flexible tool as a focus for formative assessment, as suits the individual. Users viewed the modelling process (Fig 2) quite frequently, given that it is mainly a source of data for additional scrutiny and explanation of the learner model according to the modelling algorithm. It is hypothesised that students used this to compare their competencies derived from different activities. Whatever the reason, we recommend that developers include such explanation where users are at a level to comprehend it, as most of our users checked this multiple times. All evidence is listed on the same page, and the mean number of viewings suggests users may have checked this every two weeks. The fact that the discussion component (Fig 3) was introduced later may help explain the SD, or this may reflect individual differences in preference for discussion tools. Either way, this suggests the potential of this feature for those who benefit from it. All performed at least one self or peer assessment, with most doing both. Users were more inclined to give numerical values, but 5 of the 11 provided additional comments. We aim to more actively encourage use of this feature to facilitate formative assessment earlier in future courses.

Questionnaires revealed higher preferences for some visualisations, but all were considered useful by at least 3 users, which provisionally suggests that all visualisations can be retained. Instructor feedback appears to receive greater attention, perhaps in line with the lower completion of self and peer assessments, or the perceived accuracy of assessment. Nevertheless, giving peer feedback was considered more helpful for one’s own learning than receiving it, and this was without additional support to complete peer assessments. Most considered the automated (OLMlets) data helpful: this may increase if additional applications contribute to the IOLM.

While we have yet to investigate the quality of self/peer assessments, the data so far suggests the potential for IOLMs to combine formative assessment and learning analytics for immediate feedback (see [1]), with an additional focus on current competencies; and to allow inclusion of more traditional teacher feedback, as well as self and peer assessments that can themselves be useful formative assessment processes.

ACKNOWLEDGEMENTS

The Next-TELL project is supported by the European Community (EC) under the Information Society Technology priority of the 7th Framework Programme for R&D, contract no 258114 NEXT-TELL. This document does not represent the opinion of the EC and the EC is not responsible for any use that might be made of its content. The third author held a Walter C. Sumner and a Weston Fellowship.

REFERENCES

[1] N.R. Aljohani and H.C. Davis, “Learning Analytics and Formative Assessment to Provide Immediate Detailed Feedback Using a Student Centered Mobile Dashboard”, International Conference on Next Generation Mobile Apps, Services and Technologies, IEEE, 2013.

[2] S. Bull and M. Britland, “Group Interaction Prompted by a Simple Assessed Open Learner Model that can be Optionally Released to Peers”, Proceedings of PING Workshop, User Modeling 2007.

[3] S. Bull and J. Kay, “Open Learner Models”, in R. Nkambou, J. Bourdeau and R. Mizoguchi (eds), Advances in Intelligent Tutoring Systems, Springer, 318-338, 2010.

[4] M. Ebner, M. Schoen and B. Neuhold, “Learning Analytics in Basic Math Education – First Results from the Field”, eLearning Papers no. 36, http://openeducationeuropa.eu, 2014.

[5] R. Morales, N. Van Labeke, P. Brna and M.E. Chan, “Open Learner Modelling as the Keystone of the Next Generation of Adaptive Learning Environments”, in C. Mourlas & P. Germanakos (eds), Intelligent User Interfaces, ISR, 288-312, London: IGI Global, 2009.

[6] E.M. Raybourn and D. Regan, “Exploring e-portfolios and Independent Open Learner Models: Toward Army Learning Concept 2015”, Interservice/Industry Training, Simulation, and Education Conference Proceedings, Florida USA, 2011.

[7] P. Reimann, S. Bull, W. Halb and M. Johnson, “Design of a Computer-Assisted Assessment System for Classroom Formative Assessment”, CAF11, IEEE, 2011.

[8] K. Verbert, E. Duval, J. Klerkx, S. Govaerts and J.L. Santos, “Learning Analytics Dashboard Applications”, American Behavioral Scientist, DOI: 10.1177/0002764213479363, 2013.

[9] S.K. Vonderwell and M. Boboc, “Promoting Formative Assessment in Online Teaching and Learning”, TechTrends, vol 57, 4, 2013.

[10] B.P. Woolf, “Building Intelligent Interactive Tutors”, Elsevier Inc, Burlington MA, 2009.

TABLE I. HIGH LEVEL LEARNER ACTIONS

                            No. Times Performed
Learner Action              M (SD)        Min   Max    No. Learners
Viewed IOLM Visualisation   64.2 (50.9)   20    165    11
Viewed Mod. Process         5.8 (5.4)     1     18     11
Viewed Evidence             4.1 (4.1)     1     12     11
Viewed Discussion           26.1 (33.7)   0     119    10
Posted to Discussion        6.2 (7.1)     0     22     9

TABLE II. ASSESSMENT RELATED LEARNER ACTIONS

                            No. Times Performed
Learner Action              M (SD)        Min   Max    No. Learners
Self Assessment             5.8 (5.0)     0     16     10
Peer Assessment             3.6 (5.7)     0     19     8
Numerical Value             98.7 (72.1)   11    262    11
Indicated Strengths         5.6 (14.3)    0     48     5
Indicated Improvements      4.5 (10.7)    0     36     5
