Learnometrics: Metrics for Learning Objects


Ph.D. defense presentation, K.U.Leuven: how to measure the characteristics of the different processes involved in the Learning Object lifecycle.


Learnometrics: Metrics for Learning Objects

Xavier Ochoa

Learning Object

Any digital resource that can be reused to support learning

(Wiley, 2004)

Share and Reuse

Sharing

Repository

Metadata

Book Metadata

Learning Object Metadata

General – Title: Landing on the Moon

Technical – File format: QuickTime Movie; Duration: 2 minutes

Educational – Interactivity Level: Low; End-user: learner

Relational – Relation: is-part-of; Resource: History course

Learning Object

LOM

Learning Object Repository

Object Repository and/or Metadata Repository

Learning Object Economy

Market Makers

Producers, Consumers

Policy Makers

Market

How does it work? How can it be improved?

Purpose

Generate empirical knowledge about LOE

Test existing techniques to improve LO tools

Quantitative Analysis

Metrics Proposal and Evaluation

Quantitative Analysis of the Publication of LO

• What is the size of Repositories?

• How do repositories grow?

• How many objects per contributor?

• Can it be modeled?


Size is very unequal


Size Comparison

Repository · Referatory · OCW · LMS · IR

Growth is Linear

Bi-phase linear: ln(a·exp(b·x) + c)
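As an illustration, here is a minimal Python sketch (not the thesis code) of fitting the bi-phase linear growth model ln(a·exp(b·x) + c) to a synthetic size-over-time series; the data, noise level, and starting values are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def biphase_growth(x, a, b, c):
    # ln(a*exp(b*x) + c): two roughly linear phases, with slope
    # ~ a*b/(a+c) for small x and slope ~ b once a*exp(b*x) dominates c.
    return np.log(a * np.exp(b * x) + c)

months = np.arange(1, 73)                         # six years of monthly snapshots
rng = np.random.default_rng(0)
observed = biphase_growth(months, 2.0, 0.12, 500.0) + rng.normal(0, 0.05, months.size)

(a, b, c), _ = curve_fit(biphase_growth, months, observed, p0=(1.0, 0.1, 100.0))
print(f"fitted growth model: a={a:.2f}, b={b:.3f}, c={c:.1f}")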

Objects per Contributor: heavy-tailed distributions (no bell curve)

– LORP / LORF: Lotka with cut-off (“fat-tail”)

– OCW / LMS: Weibull (“fat-belly”)

– IR: Lotka with high alpha (“light-tail”)
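For illustration, the sketch below fits two of the candidate distributions to a made-up objects-per-contributor sample: a power-law (Lotka) exponent via the standard continuous maximum-likelihood estimator, and a Weibull via SciPy; the sample and the choice of x_min are assumptions.

import numpy as np
from scipy import stats

# Made-up objects-per-contributor counts: many small contributors, a few very large ones.
counts = np.array([1]*120 + [2]*40 + [3]*18 + [5]*7 + [12]*3 + [60, 240], dtype=float)

# Lotka/power-law exponent via the continuous MLE: alpha = 1 + n / sum(ln(x / x_min)).
x_min = counts.min()
alpha = 1.0 + counts.size / np.log(counts / x_min).sum()

# Weibull fit; shape k < 1 gives a monotonically decreasing, heavier-than-exponential curve.
k, loc, scale = stats.weibull_min.fit(counts, floc=0)

print(f"power-law alpha ~ {alpha:.2f}, Weibull shape k ~ {k:.2f}")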

Engagement

Model

Analysis Conclusions

– A few big repositories concentrate most of the material

– Repositories are not growing as they should

– There is no such thing as an average contributor

– Differences between repositories are based on the engagement of the contributors

– Results point to a possible lack of “value proposition”

Quantitative Analysis of the Reuse of Learning Objects

• What percentage of learning objects is reused?

• Does the granularity affect reuse?

• How many times is a learning object reused?


Reuse Paradox

Measuring Reuse

~20% of learning objects are reused
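One way a reuse percentage like this can be computed, sketched here under the assumption that usage records map each learning object to the courses that include it (the records and field names below are hypothetical):

from collections import defaultdict

usage_log = [                          # hypothetical (object_id, course_id) pairs
    ("lo-1", "c-1"), ("lo-1", "c-2"), ("lo-2", "c-1"),
    ("lo-3", "c-3"), ("lo-3", "c-4"), ("lo-3", "c-5"), ("lo-4", "c-2"),
]

courses_per_object = defaultdict(set)
for object_id, course_id in usage_log:
    courses_per_object[object_id].add(course_id)

# An object counts as reused if it appears in more than one course.
reused = [oid for oid, courses in courses_per_object.items() if len(courses) > 1]
reuse_ratio = len(reused) / len(courses_per_object)
print(f"reuse ratio: {reuse_ratio:.0%}")   # 2 of 4 objects in this toy data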

Distribution of Reuse

Analysis Conclusions

– Learning Objects are being reused with or without the help of Learning Object technologies

– The Reuse Paradox needs to be re-evaluated

– Reuse seems to be the result of a chain of successful events

Quality of Metadata

Title: “The Time Machine”
Author: “Wells, H. G.”
Publisher: “L&M Publishers, UK”
Year: “1965”
Location: ----

Metrics for Metadata Quality

– How can the quality of metadata be measured? (metrics)

– Do the metrics work?

• Do the metrics correlate with human evaluation?

• Do the metrics separate good-quality from bad-quality metadata?

• Can the metrics be used to filter low-quality records?

Textual information metrics correlate with human evaluation

Some metrics can filter out low-quality records
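As an example of what such a metric can look like, here is a sketch of a weighted completeness score over LOM-style fields; the field names and weights are illustrative assumptions, not the exact metrics evaluated in the study.

def weighted_completeness(record, weights):
    # Fraction of the total field weight covered by non-empty fields.
    filled_weight = sum(w for field, w in weights.items() if record.get(field))
    return filled_weight / sum(weights.values())

weights = {"title": 1.0, "description": 0.9, "keyword": 0.7,
           "format": 0.4, "interactivity_level": 0.3}

record = {"title": "Landing on the Moon", "format": "video/quicktime",
          "description": "", "keyword": None}

print(f"completeness: {weighted_completeness(record, weights):.2f}")   # 1.4 / 3.3 ~ 0.42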

Study Conclusions

– Humans and machines have different needs for metadata

– Metrics can be used to easily establish some characteristics of the metadata

– The metrics can be used to automatically filter or flag low-quality metadata

Abundance of Choice


Relevance Ranking Metrics

– What does relevance mean in the context of Learning Objects?

– How can existing ranking techniques be used to produce metrics to rank learning objects?

– How can those metrics be combined into a single ranking value?

– Can the proposed metrics outperform simple text-based ranking?

The metrics improve over the Base Rank

RankNet outperforms the Base Ranking by 50%
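For illustration, the sketch below combines several normalized relevance metrics into one ranking value with a simple weighted sum; the thesis evaluates learned combinations such as RankNet, and the metric names and weights here are placeholder assumptions.

import numpy as np

def combine(scores, weights):
    # scores: metric name -> raw per-object values; output: one combined value per object.
    combined = np.zeros(len(next(iter(scores.values()))), dtype=float)
    for name, raw in scores.items():
        raw = np.asarray(raw, dtype=float)
        span = raw.max() - raw.min()
        normalized = (raw - raw.min()) / span if span else np.zeros_like(raw)
        combined += weights[name] * normalized
    return combined

scores = {"text_similarity": [0.2, 0.9, 0.5],       # three candidate objects
          "popularity":      [120, 15, 60],
          "topical_overlap": [0.1, 0.7, 0.4]}
weights = {"text_similarity": 0.5, "popularity": 0.2, "topical_overlap": 0.3}

ranking = np.argsort(-combine(scores, weights))      # object indices, best first
print(ranking)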

Relevance Ranking Metrics

• Implications

–Even basic techniques can improve the ranking of learning objects

–Metrics are scalable and easy to implement

• Warning

– Preliminary results: not based on real-world observations

Applications - MQM (Metadata Quality Metrics)


Applications - RRM (Relevance Ranking Metrics)


General Conclusions

• Publication and reuse are dominated by heavy-tailed distributions

• LMSs have the potential to bootstrap the LOE

• The models/metrics set a baseline against which new models/metrics can be compared and improvements measured

• More questions are raised than answered

Publications

• Chapter 2

– Quantitative Analysis of User-Generated Content on the Web. Proceedings of the First International Workshop on Understanding Web Evolution (WebEvolve2008) at WWW2008. 2008, 19-26

– Quantitative Analysis of Learning Object Repositories. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications ED-Media 2008, 2008, 6031-6040

• Chapter 3

– Measuring the Reuse of Learning Objects. Third European Conference on Technology Enhanced Learning (ECTEL 2008), 2008, accepted.

Publications

• Chapter 4

– Towards Automatic Evaluation of Learning Object Metadata Quality. LNCS: Advances in Conceptual Modeling - Theory and Practice, Springer, 2006, 4231, 372-381

– SAmgI: Automatic Metadata Generation v2.0. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications ED-Media 2007, AACE, 2007, 1195-1204

– Quality Metrics for Learning Object Metadata. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2006, AACE, 2006, 1004-1011

Publications

• Chapter 5

– Relevance Ranking Metrics for Learning Objects. IEEE Transactions on Learning Technologies. 2008. 1(1), 14

– Relevance Ranking Metrics for Learning Objects. LNCS: Creating New Learning Experiences on a Global Scale, Springer, 2007, 4753, 262-276

– Use of contextualized attention metadata for ranking and recommending learning objects. CAMA '06: Proceedings of the 1st international workshop on Contextualized attention metadata at CIKM 2006, ACM Press, 2006, 9-16

My Research Metrics (PoP)

• Papers: 14
• Citations: 55
• Years: 6
• Cites/year: 9.17
• Cites/paper: 4.23
• Cites/author: 21.02
• Papers/author: 6.07
• Authors/paper: 2.77
• h-index: 5
• g-index: 7
• hc-index: 5
• hI-index: 1.56
• hI-norm: 3
• AWCR: 13.67
• AW-index: 3.70
• AWCRpA: 5.62
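For reference, the h-index and g-index above are computed as sketched below; the per-paper citation list here is hypothetical (chosen only to be consistent with the 14 papers and 55 citations reported), since the real per-paper counts are not shown.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    # Largest g such that the top g papers together have at least g^2 citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

example = [15, 12, 9, 7, 5, 3, 2, 1, 1, 0, 0, 0, 0, 0]   # 14 hypothetical papers, 55 citations
print(h_index(example), g_index(example))                # 5 7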

Thank you for your attention

Questions?

