A framework for recommendation in learning object repositories: An exampleof application in civil engineering
A. Zapata a, V.H. Menéndez b, M.E. Prieto c, C. Romero d

a Autonomous University of Yucatan, Faculty of Education, 97150 Mérida, Mexico
b Autonomous University of Yucatan, Faculty of Mathematics, 13615 Mérida, Mexico
c University of Castilla-La Mancha, Computer Science Faculty, Ciudad Real, Spain
d University of Cordoba, Dept. of Computer Science, 14071 Córdoba, Spain
Article info
Article history:
Received 31 January 2012
Received in revised form 23 July 2012
Accepted 8 October 2012
Available online 23 November 2012
Keywords:
Learning object repository
Metadata
Educational recommender system
Weighted hybrid recommendation
Search ranking algorithm
Civil engineering
Abstract
Learning Object Repositories (LORs) are an important element in the management, publishing, location
and retrieval of instructional resources. In recent times, the task of finding and recommending a list of
learning objects that fit the specific user's needs and requirements is a very active area of research. In
this regard, this paper proposes DELPHOS, a framework to assist users in the search for learning objects
in repositories and which shows an example of application in engineering. LORs can be used in engineer-
ing not only for learning and training for students, instructors and professionals but also for sharing
knowledge about engineering problems and projects. The proposed approach is based on a weighted
hybrid recommender that uses different filtering or recommendation criteria. The values of these weights
can be assigned by the user him/herself or can be automatically calculated by the system in an adaptive
and dynamic way. This paper describes the architecture and interface of DELPHOS and shows some
experiments with a group of 24 civil engineering students in order to evaluate and validate the usefulness
of this tool.
© 2012 Published by Elsevier Ltd.
1. Introduction
A Learning Object (LO) is a type of digital content component
that allows flexibility, independence and reuse of content in order
to deliver a high degree of control to instructors and students [46].
LOs are composed of the object content (files, generally with multi-
media elements) and metadata (that describes what is contained
within those LOs). Metadata standards [4,22] such as IEEE-LOM,
Dublin-CORE, and IMS-CP describe the characteristics of the re-
sources contained in LOs, enable cataloguing and searching for
LOs within a repository, and also the reuse of LOs in other repositories
and systems. A Learning Object Repository (LOR) is a collection of
open shared digital resources that are accessible on the network
without requiring prior knowledge of the internal structure of
the collection [23]. Repositories are the best way to share, index
and retrieve instructional resources and their proliferation is evi-
dence of the continuous development of e-learning [29]. They provide
users with knowledge at any time, anywhere.
Some examples of LORs are: ARIADNE (Alliance of Remote
Instructional Authoring & Distribution Networks for Europe) [6],
MERLOT (Multimedia Educational Resource for Learning and
Online Teaching) [37], and AGORA (from a Spanish acronym that
means Help for the Management of Reusable Learning Objects)
[31]. These LORs are multi-purpose, that is, they contain LOs from
a wide range of domains or contents. However, there are also spe-
cific domain LORs, for example, some LORs related with engineer-
ing and architectural are described below. GROW (Geotechnical,
Rock and Water Digital Library)[20]is a civil engineering learning
object repository and portal. MACE (Metadata for Architectural
Contents in Europe) [40] is a European initiative to integrate LO
repositories distributed over several countries to disseminate digital
information about architecture. The KINOA platform [26] is a digital
repository focused on civil engineering topics with resources
expressed in RDF (Resource Description Framework) language.
SPeL (Sistem Pengurusan E-Learning) [43] is a repository for engineering
courses of the Universiti Teknikal Malaysia Melaka. And
OE3 (Objetos Educacionais para Engenharia de Estructuras which
is the acronym in Portuguese for learning objects for structural
engineering) [36] is a repository focused on helping the teaching and
learning activities of structural engineering and related areas.
LORs can be used in engineering not only for learning, training
and continuing education for students, instructors and profession-
als [43] but also for sharing practical knowledge about engineering
0965-9978/$ - see front matter © 2012 Published by Elsevier Ltd. http://dx.doi.org/10.1016/j.advengsoft.2012.10.005
Corresponding authors. Tel.: +34 926295300 (A. Zapata, V.H. Menéndez, M.E. Prieto); tel.: +34 957212257 (C. Romero).
E-mail addresses: [email protected] (A. Zapata), [email protected] (V.H. Menéndez), [email protected] (M.E. Prieto), [email protected] (C. Romero).
Advances in Engineering Software 56 (2013) 1–14
Contents lists available at SciVerse ScienceDirect
Advances in Engineering Software
journal homepage: www.elsevier.com/locate/advengsoft
problems and projects [40]. For example, most engineers use
computer software [2] for carrying out the specification, design,
implementation and verification of their projects. These tools gen-
erate a great amount of electronic documents (specifications,
plans, schedules, diagrams, figures, artifacts, code, etc.). In fact,
some engineering projects are developed using the available infor-
mation generated by the engineers of the same company in previ-
ous projects. In a similar way, a specific LOR could be used for
solving engineering problems or projects, such as a case-based
reasoning system that uses previous experiences and implementation
results from other engineers. Case-based reasoning systems [1] rely
on a database (or case base or LO repository) of past problem solv-
ing experiences as their primary source of problem solving exper-
tise. New problems can be solved by retrieving a case (LO) whose
specification is similar to the current target problem and then
adapting its solution to fit the target situation. In this way, previous
documents of similar projects or problems can be used as models
or guides to the development or solution of new projects or prob-
lems. However, in order to be able to incorporate all this informa-
tion (LOs) into a repository, it is necessary to do an adequate
characterisation of these documents. Usually, document management
systems use Dublin-Core (DC) [14] as a representation standard
for recording metadata associated with documents. DC is a
well-known standard focused on the description of any digital doc-
ument that defines a base set of metadata about the document
content, intellectual property and information for the instantiation.
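The case-based use of a repository described above can be sketched as a nearest-neighbour lookup over metadata records: retrieve the stored project document whose metadata best matches a new problem specification. The field names and the exact-match similarity below are illustrative assumptions, not the actual AGORA or DELPHOS implementation:

```python
# Case-based retrieval sketch: find the past project document (case/LO)
# whose metadata most resembles a new target problem.
# The metadata keys ("subject", "type") are hypothetical DC-like fields.

def meta_similarity(case: dict, query: dict) -> float:
    """Fraction of query fields whose values match the stored case."""
    shared = [f for f in query if f in case]
    if not shared:
        return 0.0
    return sum(case[f] == query[f] for f in shared) / len(shared)

def retrieve(case_base: list, query: dict) -> dict:
    """Return the stored case most similar to the target problem."""
    return max(case_base, key=lambda c: meta_similarity(c, query))

case_base = [
    {"subject": "bridge design", "type": "specification"},
    {"subject": "soil mechanics", "type": "report"},
]
best = retrieve(case_base, {"subject": "bridge design", "type": "specification"})
print(best["subject"])  # bridge design
```

The retrieved case would then be adapted by the engineer to fit the target situation, as the retrieve-and-adapt cycle of [1] suggests.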
In the proposed approach in this paper, we have used the IEEE
Learning Object Metadata (IEEE-LOM) [22] instead of DC for creating
a fuller description of documents, since IEEE-LOM can be used
not only for technological development but also for education and
training. IEEE-LOM is the main standard for cataloguing learning
objects and it provides the required syntax and semantics to de-
scribe an object both adequately and completely. Regarding inter-
operability, IEEE-LOM defines relationships between its elements
and the Dublin Core metadata standard, allowing schema definitions
to be exchanged between them.
One of the main drawbacks of the majority of current LORs is
that they use simple search engines that return an unordered list
of LOs. That is, the result of a query in a simple search engine is
based only on the provided keywords and uses a general basis
for all the users. Thus, a vast number of LOs with the same ranking
is generally displayed to all the users. For example, users such as
engineering students may spend a lot of time searching for a large
number of cases similar to their actual situation or problem to re-
solve, in order to get cues and suggestions on how to proceed [40].
This happens because of the great variety of information that can
be retrieved from a single search, regardless of its title or general
subject (i.e. a technical solution for a window frame detail may often
be deduced by observing a picture in a monograph on a great archi-
tect, and not from a technology manual). This means that the selec-
tion of the most appropriate learning object for a specific user can
be a hard task that may require extra time and effort. A way of alle-
viating this situation consists of somehow limiting the number of
LOs in a repository that is displayed for users. This can be done
by means of filtering or recommendation techniques. Recommen-
dation systems are software tools that suggest items and resources
that can be useful to the specific needs of users [34]. In
fact, a recommendation system can be used to find the most re-
lated documents about a specific subject such as engineering pro-
jects or problems. Additionally, LORs generally have a high number
of LOs with poor metadata and users with poor profiles which
makes it more difficult to adapt the recommendation of LOs to
the individual knowledge, goals and/or preferences of each user.
In order to resolve these problems, we propose DELPHOS [15]: an
integral and intelligent solution for the recommendation of learning
objects. The main goal of this framework is to assist users
(instructors, students and professionals) in the search and selec-
tion of LOs using a new personalised ranking method that uses a
weighted composition of different filtering or recommendation cri-
teria (content, collaborative and demographic).
The remainder of the work is organised as follows: Section 2
introduces recommendation systems and reviews some specific
works about searching for LOs in repositories. Section3 describes
DELPHOS architecture; Section4 describes the DELPHOS Interface
in a practical and tutorial way; Section 5shows some experiments
and results that validate the efficiency of this tool; and finally, Sec-
tion6outlines some concluding remarks and future research lines.
2. Background and related work
Recommender Systems (RSs) are software tools and techniques
that provide suggestions about items which can be useful to a
user's requirements [10,34,33]. Items are the objects that are rec-
ommended and may be characterised by their complexity and their
value or utility. RSs can be used to predict whether a particular
user will like a particular item (prediction problem) or to identify
a set of N items that will be of interest to a certain user (top-N
recommendation problem). This paper deals with the problem of
top-N LO recommendation.
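The top-N formulation reduces to sorting predicted scores and keeping the first N entries, as the short sketch below shows; the item ids and scores are invented for illustration:

```python
def top_n(scores: dict, n: int) -> list:
    """Return the n item ids with the highest predicted score."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Hypothetical predicted relevance of three LOs for one user.
predicted = {"lo1": 0.9, "lo2": 0.4, "lo3": 0.7}
print(top_n(predicted, 2))  # ['lo1', 'lo3']
```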
There are different types of RSs [10] or different types of recom-
mendation approaches:
Collaborative: The system generates recommendations using
only information about rating profiles for different users. Col-
laborative systems locate peer users with a similar rating his-
tory to the current user and generate recommendations using
this neighbourhood.
Content-based: This system generates recommendations from
two sources: the features associated with products and the rat-
ings that a user has given them. Content-based recommenders
treat recommendation as a user-specific classification problem
and learn to classify the user's likes and dislikes based on product
features.
Case-based: A case-based recommendation is a form of content-
based recommendation where individual products are
described in terms of a well-defined set of features. It borrows
heavily from the core concepts of retrieval and similarity in
case-based reasoning [39]. Items or products are represented
as cases and recommendations are generated by retrieving
those cases that are most similar to a user's query or profile.
Demographic: A demographic recommender provides recom-
mendations based on a demographic profile of the user. Recom-
mended products can be produced for different demographic
niches, by combining the ratings of users in those niches.
Knowledge-based: A knowledge-based recommender suggests
products based on processing deductions made about a user's
needs and preferences. This knowledge will sometimes contain
explicit functional knowledge about how certain product fea-
tures meet user needs.
Hybrid: These RSs are based on a combination of the aforemen-
tioned techniques. In this paper, a weighted hybrid recom-
mender approach is proposed.
Next, some specific works on the application of RSs for search-
ing for learning objects in repositories are described:
One of the first attempts to develop a recommender system for
learning resources was the work developed by Anderson et al. [5],
who proposed the RACOFI system (Rule-Applying Collaborative Fil-
tering). This application is the result of two integrated systems:
COFI (Collaborative Filtering) and RALOCA (Rule Applying Learning
Object Comparison Object). RACOFI combines two recommendation
approaches by integrating a collaborative filtering engine that
works with ratings provided by users. This filtering engine discov-
ers association rules between the learning resources and uses them
for recommendation.
Another interesting system has been proposed by Fiaidhi [17].
In this work, a model is presented for combining content, collabo-
ration, collaborative filtering and searching techniques in an inte-
gral engine called RecoSearch. The model enforces a collaborative
infrastructure for authoring, searching, recommending and
presenting Java source code learning objects. It also uses two spec-
ialised filtering engines which work simultaneously, Collabro-
Search and CollabroRecommender, to present relevant LOs from
presented queries or from the mined text collected from the collab-
orative chatting channel between users.
The Altered Vista system, by Walker et al. [44], has been proposed
for the recommendation of learning objects based on collaborative
filtering applied in an educational setting. This system is specifi-
cally aimed at instructors and students who review web resources
targeted at education. The aim was to explore how to collect user-
provided evaluations of learning resources and then to propagate
them in the form of word-of-mouth recommendations on the qual-
ity of the resources.
Avancini and Straccia [7] proposed the CYCLADES tool. This
application offers a broad range of functionality for both individual
scholars, who wish to search and browse in digital archives, and for
communities of scholars who wish to share and exchange informa-
tion. This functionality was designed by a multi-disciplinary and
distributed team with backgrounds in digital libraries, databases,
information retrieval, and web-based systems, as well as com-
puter-supported cooperative work and virtual communities.
The QSIAs (Questions Sharing and Interactive Assignments) sys-
tem has been developed by Rafaeli et al. [32]. This system is used in
the context of online communities in order to harness the social
perspective on learning and to promote collaboration, online rec-
ommendation and further formation of learner communities. The
main characteristic of the system is focused on the user being able
to decide whether to assume control over who advises (friends) or
to use a collaborative filtering service.
The approach proposed by Tang and McCalla [41] describes an
evolving web-based learning system which can adapt itself not
only to its users, but also to the open Web. More specifically, the
novelty with respect to the system lies in its ability to find relevant
content on the web, and its ability to personalise and adapt this
content based on the system's observation of its learners and the
accumulated ratings given by the learners. Hence, although learn-
ers do not have direct interaction with the open Web, the system
can retrieve relevant information related to them and their situ-
ated learning characteristics.
The LORM tool (Learning Object Recommendation Model) pro-
posed by Tsai et al. [42] was developed to retrieve and recommend
suitable learning objects for learners. This tool adopts an ontological
approach to performing semantic discovery, as well as both
preference-based and correlation-based approaches to rank the de-
gree of relevance of learning objects to a learner's intention and
preference. The mechanism in this tool is a hybrid method that recommends
the learning objects. First, the preference-based algorithm
calculates a learner's preference score, and then the second,
correlation-based algorithm uses similar learners' experiences
to calculate the helpfulness score. Finally, the two scores are
aggregated into one recommendation score.
A proposal that explores a new way of obtaining the quality rating
of LOs is presented by Kumar et al. [24]. This system uses Bayesian
Belief Networks to overcome the incompleteness and absence
of learning object quality reviews, as well as the divergence of
applied quality rating standards and the monoculture of weighing
evaluations from different reviewers. It contains a hybrid approach
(content-based and collaborative filtering) which implements a
Markov model to verify current learning object quality rating stan-
dards and determines whether they have caught all variables in the
quality evaluation model.
In the proposal by Al-Khalifa [3], an Arabic Learning Object
Repository with recommendation capabilities is described. It was
created for hosting Arabic learning objects and serving the needs
of the Arabic educational community. The repository has inte-
grated advanced features that cannot be fulfilled using well-known
search engines.
The LRMDCR tool (a Learner's Role-based Multidimensional Col-
laborative Recommendation) is proposed by Wan et al. [45]. This
tool uses the Markov Chain Model to divide the group of learners
into advanced learners and beginners by using the learning activities and
the learning processes. For a calculating schema, it used multidi-
mensional collaborative filtering to decide on the recommended
learning objects for every learner of the group.
Ruiz-Iniesta et al. [35] have developed important studies in the
area. In this proposal, a proactive recommendation approach for
repositories of learning objects that adapts to the student profile
is described. In doing so, it uses an ontology of programming topics
as an index to organise the LOs in the repository while the profile
of students stores information about their navigation history. The
method incorporates hybrid approaches that combine content
and social aspects.
The proposal by Bozo et al. [9] presents a recommender ap-
proach for LO searches focused on the teacher's context model.
The main contribution of this proposal is that four filtering ap-
proaches are incorporated in order to be able to improve personal-
isation; that is, to recommend the most interesting or relevant LOs
to each particular user. This proposal uses a conceptual model of
the curriculum of the educational system in Chile. For the moment,
their results are experimental.
Finally, Manouselis et al. [27] describe a pilot study called CEL-
EBRATE in which the common aim is to allow European teachers
and learners to easily locate, use and reuse both open content as
well as content from commercial suppliers. For this, the CollaFiS
simulation environment, which allows parameterising, executing
has been used. This proposal shows an overview of selected recom-
mender systems for learning resources and related evaluation
studies.
After a general review of all the previous projects, the following
points can be outlined:
Less than half of the proposals (5 out of 13) are full systems:
Altered Vista, CYCLADES, QSIA, Tang and McCalla, and Manouselis.
Only six proposals were evaluated by using their own
repositories or databases: RACOFI, Altered Vista, CYCLADES,
Tang and McCalla, Al-Khalifa, and Manouselis.
Seven proposals were pilot experiments with human users:
Altered Vista, CYCLADES, QSIA, Tang and McCalla, LRMDCR,
Ruiz-Iniesta, and Manouselis.
On the other hand, a closer and more in-depth look into the cur-
rent status of these proposals reveals the limitations each one pos-
sesses. Table 1 provides an overview and comparison of the main
characteristics of the previous research work versus the DELPHOS
system proposed in this paper, which will be explained in more de-
tail in the following sections. These characteristics are described
below:
(1) Hybrid approach: combines at least two recommendation
approaches such as collaborative, content-based, demographic,
and knowledge-based.
(2) Advanced search: allows not only the use of keywords, but
also metadata values for the LOs search.
(3) Filtering criteria: contains some filtering criteria for the LOs
search. These filters allow ranking the list of recommended
LOs.
(4) Rating of LOs: shows the final rating obtained for each of the
recommended LOs.
(5) Statistics of LOs: shows statistics associated with the recom-
mended LOs, such as the number of downloads, number of
evaluations, and average evaluation.
(6) Explanation: shows an explanation about why these specific
LOs have been recommended and not others.
(7) Evaluation: provides an instrument to evaluate the recom-
mended LOs.
As shown in Table 1, the characteristics which most of the com-
pared systems contain are advanced search, rating of LOs and eval-
uations. On the other hand, more than half of them (seven) use a
hybrid approach. But DELPHOS is the only tool that provides all
the evaluated characteristics. In fact, our proposed tool not only
contains several filtering criteria (content similarity, usage, quality
evaluation and profile similarity), but also provides statistics about
the recommended LOs and shows some related LOs that have been
downloaded by other users. Finally, it is the only tool that provides
explanations of why an object is recommended.
3. DELPHOS architecture
The DELPHOS tool is a framework to assist users in the persona-
lised search for learning objects in repositories. A first version of
the system has been developed to recover the resources which
are most relevant to users requirements, using the repository of
a Learning Object Management System called AGORA [31]. The
AGORA platform is a proposal to assist the instructor-designer in
the construction process of LOs, conforming to an instructional
need. It includes a repository that stores learning objects, including
metadata and its associated resources. AGORA exposes its func-
tionalities and information through a collection of services and
components which facilitate its extensibility and interoperability
with other applications like DELPHOS.
The DELPHOS model and the modules implemented in its architecture
are explained in the next subsections. A general description
of its functions and interactions with AGORA is shown in Fig. 1.
These two applications communicate when an instructor uses a
Graphic User Interface (GUI) to define a learning requirement.
The DELPHOS interface employs a collection of Web components
to allow easy and interactive operation. The GUI characteristics
are explained in detail in Section 4. The query parameters are used
to retrieve a collection of possible Learning Objects (LOs pre-selec-
tion) stored in the AGORA repository as well as other important
information like metadata, user profiles and activity records re-
lated to LO management. As we can see in Fig. 1, AGORA first executes
a basic search of matching learning objects in order to find
those that are most similar to the specifications given by the user's
query. In order to do this, AGORA uses the LOs' metadata information
(Metadata table) and then sends the list of retrieved LOs to DEL-
PHOS for post processing. Then, DELPHOS uses this subset of learn-
ing objects (LOs pre-selection) in order to make a recommendation
personalised to the user. For its execution, DELPHOS uses several
tables that store all the information associated with contents and
evaluations of objects (ContentSimilarity and LOEvaluations, respectively),
users (UserProfile), and usage records (LOActivities). These
tables are initially obtained from AGORA and automatically up-
dated by DELPHOS as a result of the executions of consecutive users.
All this information is used for filtering and ranking LOs to provide
a list of recommendations that is shown to the user for selection
and downloading.
It is important to note that DELPHOS shares some characteristics
of a case-based recommendation system [39]:
It relies on more structured representations of item content
than traditional content-based recommender systems, which
operate in situations where content items are represented
in an unstructured or semi-structured manner.
It also uses weights to encode the relative importance of
each particular filter, in a similar way to how case-based
recommender systems calculate the similarity function using
a weighted sum metric.
DELPHOS is a hybrid recommendation approach that uses dif-
ferent filtering techniques based on criteria for refining, improving
and customising search results. In fact, it shares some characteris-
tics of other types of recommender systems such as:
Collaborative recommendation, when using the historical
information on the most used LOs and the users' evaluations
of the LOs.
Content recommendation when calculating the similarity
degree between LOs.
Demographic recommendation when calculating the simi-
larity between users.
The general architecture of DELPHOS consists of three main
modules [48] that are executed in sequential order. Firstly, there
is a preselection of LOs, then the filtering criteria are applied and
finally the LOs are ranked in order to show the list of recommended
LOs to the user (see Fig. 2).
3.1. Learning objects pre-selection module
The aim of this module is to obtain an initial set of LOs available
in the repository that match with the users query. This module
uses only the text or keywords provided by the users in the query
in order to find and select only the LOs that contain the full text/
keywords. This module is similar to the basic search (which most
repositories provide), in which the user only has one input
field available where he/she can introduce a short text to describe
the LOs he/she is looking for.
The subset of preselected LOs delivered by this module will be
used during the next step in the filter criteria module. By using this
initial pre-selection, DELPHOS can reduce the number of LOs to
which the filters are applied and thus increase the speed of the
filtering process.
This module is carried out in DELPHOS using the repository of
the AGORA platform.
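A minimal sketch of this pre-selection step follows, assuming each LO exposes a free-text description field; the attribute name and the conjunctive keyword matching are assumptions for illustration, not AGORA's actual query mechanism:

```python
def preselect(repository: list, query: str) -> list:
    """Keep only the LOs whose description contains every query keyword."""
    keywords = query.lower().split()
    return [lo for lo in repository
            if all(k in lo["description"].lower() for k in keywords)]

# Toy repository with a hypothetical "description" field per LO.
repo = [
    {"id": 1, "description": "Reinforced concrete beam design"},
    {"id": 2, "description": "Steel truss analysis"},
]
print([lo["id"] for lo in preselect(repo, "concrete design")])  # [1]
```

The filtering criteria of the next module then operate only on this reduced subset, which is what makes the subsequent ranking fast.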
Table 1
Comparison of DELPHOS versus some similar and related tools.

Systems               1    2    3    4    5    6    7
RACOFI                No   Yes  Yes  Yes  No   No   Yes
RecoSearch            Yes  Yes  No   Yes  No   No   Yes
Altered Vista         No   Yes  No   No   Yes  No   Yes
CYCLADES              No   Yes  Yes  Yes  No   No   Yes
QSIA                  No   Yes  No   Yes  No   No   Yes
Tang and McCalla      No   No   No   No   No   No   Yes
LORM                  Yes  No   No   Yes  No   No   Yes
Kumar et al.          Yes  No   No   Yes  No   No   Yes
Al-Khalifa            No   No   No   Yes  Yes  No   Yes
LRMDCR                Yes  No   Yes  Yes  No   No   Yes
Ruiz-Iniesta et al.   Yes  No   No   No   No   No   Yes
Bozo et al.           Yes  Yes  Yes  Yes  No   No   Yes
Manouselis et al.     No   Yes  Yes  Yes  Yes  No   Yes
DELPHOS               Yes  Yes  Yes  Yes  Yes  Yes  Yes
3.2. Filter criteria module
The aim of this module is to apply a different ranking to the previous
LO subset depending on which filtering criteria have been
selected. It allows personalising the order or ranking of the recommended
LOs. This is similar to an advanced search (which most
repositories do not provide), in which the user has several input
fields (some of them optional) to specify several parameters for
tuning the search for the desired LOs.
DELPHOS provides four filtering criteria based on content simi-
larity, usage, quality evaluation and user profile similarity.
3.2.1. Filtering by content similarity
This filter is based on a content-based approach. It uses a metric
that calculates the similarity degree (Sim) between two LOs. The
first LO is always the same and is called the Learning Object Pattern
(LOP) or virtual LO, which is defined as the ideal learning object to
satisfy a request. This LOP is matched with each one of the
Fig. 1. Interaction between DELPHOS tool and AGORA platform.
Fig. 2. DELPHOS architecture.
preselected LOs (in the previous module) to obtain their similarity
levels. The particular aim of this filter is to give a higher score to
the objects which are most similar to the user's query. The content
similarity value (between 0 and 1) of an object (F(O_x)_{CS}) is calculated
using the following equation:

F(O_x)_{CS} = Sim(O_x, O_y) = \frac{\sum_{m \in M} simMeta(m_x, m_y)}{|M|}    (1)

where |M| is the total number of metadata elements to compare and
simMeta(m_x, m_y) is the semantic distance between the LO metadata m
(O_x) and the ideal LOP (O_y), considering the average metadata similarity [28].
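Eq. (1) can be read as an average of per-element metadata similarities between the candidate LO and the LOP. The sketch below uses a toy exact-match stand-in for simMeta (the actual metric of [28] is semantic); the metadata field names are assumptions:

```python
def sim_meta(m_x, m_y) -> float:
    """Toy stand-in for the semantic similarity of [28]: exact match only."""
    return 1.0 if m_x == m_y else 0.0

def content_similarity(lo_meta: dict, lop_meta: dict) -> float:
    """F(Ox)_CS: average metadata similarity between a LO and the LOP."""
    fields = list(lop_meta)
    return sum(sim_meta(lo_meta.get(f), lop_meta[f]) for f in fields) / len(fields)

lop = {"title": "beam design", "language": "en", "format": "pdf"}  # ideal LOP
lo = {"title": "beam design", "language": "en", "format": "html"}
score = content_similarity(lo, lop)  # 2 of 3 elements match -> 2/3
```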
3.2.2. Filtering by usage
This filter is based on a collaborative approach and it obtains a
ranking of the previously preselected LOs depending on their his-
torical level of usage by users. The particular aim of this filter is
to give a higher score to the most used objects. In order to obtain
it, implicit information about the users' interaction with LOs is
used. In our case, the download frequency of a learning object is
the only activity considered (the visualisation frequency of a LO
is not used). The usage value (between 0 and 1) of an object
(F(O_x)_{Usage}) is calculated using the following equation:

F(O_x)_{Usage} = \frac{\sum_{i=1}^{N} D(O_{x_i})}{MaxD(O_y)}    (2)

where D(O_{x_i}) is the number of downloads of a Learning Object (O_{x_i})
and MaxD(O_y) is the maximum number of downloads that a learning
object (O_y) has in the repository.
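Assuming the download events of each LO have already been aggregated into a single count, Eq. (2) is a simple normalisation by the repository maximum; the counts below are invented:

```python
def usage_score(downloads: int, max_downloads: int) -> float:
    """F(Ox)_Usage: downloads of Ox normalised by the repository maximum."""
    if max_downloads == 0:
        return 0.0  # no downloads anywhere yet: no usage evidence
    return downloads / max_downloads

# An LO downloaded 30 times when the most popular LO has 120 downloads.
score = usage_score(30, 120)  # 0.25
```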
3.2.3. Filtering by evaluation
This filter is based on a collaborative recommendation approach
to perform the different ranking of preselected LOs depending on
their evaluations by users. Its particular aim is to give a higher
score to the best evaluated objects. This evaluation is done by
the users themselves (explicit information) using a specific survey
on different pedagogical issues of the LO. The evaluation value (between
0 and 1) of an object (F(O_x)_{QE}) is calculated using the following
equation:

F(O_x)_{QE} = \frac{\sum_{I=1}^{N} \left[ \sum_{J=1}^{12} a_{IJ} \right]}{N \cdot \sum_{K=1}^{12} aMax_K}    (3)

where \sum_{I=1}^{N} \sum_{J=1}^{12} a_{IJ} is the average score of the evaluations of an object,
N is the total number of users who have evaluated the object, and
\sum_{K=1}^{12} aMax_K is the maximum evaluation value of an object.
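Eq. (3) divides the scores accumulated over all submitted surveys by the maximum attainable total. The sketch below assumes a 3-item survey (shortened from the 12 items in the paper) rated on a 5-point scale; both sizes are illustrative assumptions:

```python
def evaluation_score(surveys: list, max_per_item: int = 5) -> float:
    """F(Ox)_QE: summed survey scores over the maximum attainable total."""
    if not surveys:
        return 0.0  # object not evaluated yet
    n_users = len(surveys)
    n_items = len(surveys[0])
    total = sum(sum(answers) for answers in surveys)
    return total / (n_users * n_items * max_per_item)

# Two users rating one LO on a 3-item survey.
score = evaluation_score([[5, 4, 3], [4, 4, 4]], max_per_item=5)  # 24/30 = 0.8
```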
3.2.4. Filtering by profile similarity
This filter is based on the demographic recommendation approach,
using the user's profile. This filter performs a ranking
of the preselected LOs depending on the profile similarity between
the current user, who is making the query, and the owner (the user
who has created or published the LO). Its particular aim is to give a
higher score to those objects that are created or published by other
users and are most similar to the search being carried out by the
current user. The profile similarity value of an object (FOxPS) is cal-
culated using the following equation:
FOxPS SimUpx;Upy
PaeASimAttributeax; ay
jAj 4
where |A| is the total number of attributes to compare and SimAt-
tribute(ax, ay) is the semantic distance between attributes corre-
sponding to user profile (x), LO publisher and the profile (y) of theuser who performed the search.
3.3. Learning objects rating module
The aim of this module is to obtain the final rating of each LO by
combining the score of the previous filter criteria. DELPHOS uses a
weighted hybridisation strategy that uses the weighted union or
sum of the scores. In our case, this value (between 0 and 1) is
obtained using the following equation:

F(Ox) = ( F(Ox)_CS · w1 + F(Ox)_Usage · w2 + F(Ox)_QE · w3 + F(Ox)_PS · w4 ) / N    (5)

where F(Ox)_X is the value obtained by the LO in each filter criterion;
w_x is the weight of each filter criterion (a value between 0 and 1);
and N is the number of filter criteria used, that is, filter criteria
that have a weight greater than 0. This value can be 1, 2, 3 or 4.
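The weighted combination of Eq. (5) is straightforward to express in code. The following Python sketch is illustrative (the actual system is implemented in PHP); note the guard reflecting the interface rule, described later, that at least one criterion must stay active:

```python
def hybrid_rating(scores, weights):
    """Eq. (5): weighted sum of the filter scores divided by the
    number of filters whose weight is greater than zero."""
    n = sum(1 for w in weights if w > 0)
    if n == 0:
        raise ValueError("at least one filter criterion must be active")
    return sum(s * w for s, w in zip(scores, weights)) / n

# Content similarity, usage, evaluation and profile scores for one LO,
# with the evaluation and profile filters switched off (weight 0).
scores  = [0.9, 0.5, 0.8, 0.7]
weights = [1.0, 0.6, 0.0, 0.0]
print(hybrid_rating(scores, weights))  # (0.9*1.0 + 0.5*0.6) / 2 = 0.6
```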
Although static values can be assigned to the parameters (w1,
w2, w3 and w4), it is better to tune the optimal ratios for the
weights dynamically. In order to resolve this problem, this paper
proposes obtaining these weights automatically, in an adaptive
and dynamic way. That is to say, the value of these weights adjusts
or adapts to the amount of available information about each
filtering criterion. Next, the four equations proposed to
automatically recalculate the weights are described.
3.3.1. Adaptive weight for content similarity (w1)
This weight changes according to the percentage of completeness
of the metadata provided by users when creating/publishing
LOs. It increases as more LO metadata is provided and is
calculated using the following equation:

w1 = UserProvidedMetadata / IEEE-LOMMetadata    (6)

where UserProvidedMetadata is the average number of metadata elements
provided by users and IEEE-LOMMetadata is the total number of metadata
elements used by the IEEE-LOM standard.
3.3.2. Adaptive weight for usage (w2)
This weight changes according to the number of LOs published
and used in the repository. It increases as more LOs are used
(downloaded) and is calculated using the following equation:

w2 = LOsDownloaded / LOsPublished    (7)

where LOsDownloaded is the total number of downloaded LOs and
LOsPublished is the total number of published LOs.
3.3.3. Adaptive weight for evaluation (w3)
This weight changes according to the number of evaluated LOs.
It increases as more LO evaluations are created and is calculated
using the following equation:

w3 = LOsEvaluated / LOsPublished    (8)

where LOsEvaluated is the total number of evaluated LOs and
LOsPublished is the total number of published LOs.
3.3.4. Adaptive weight for profile similarity (w4)
This weight changes according to the percentage of completeness
of the editable fields provided by users when registering
their own profiles. It increases as more users complete their profiles
and is calculated using the following equation:

w4 = UserProfileProvided / UserProfileTotal    (9)

where UserProfileProvided is the average number of provided fields in
user profiles and UserProfileTotal is the total number of fields in the
user profile.
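Taken together, Eqs. (6)–(9) reduce to simple ratios over repository statistics. The sketch below uses invented field names and repository figures; the sample numbers are chosen so that the resulting ratios reproduce the default weights listed for Test 1 in Table 3 (55%, 73%, 52% and 70%):

```python
def adaptive_weights(stats):
    """Eqs. (6)-(9): each weight is a completeness/activity ratio in
    [0, 1] computed from repository statistics (field names are
    illustrative, not the system's actual schema)."""
    return {
        "w1_content": stats["avg_metadata_filled"] / stats["ieee_lom_fields"],
        "w2_usage":   stats["los_downloaded"]      / stats["los_published"],
        "w3_eval":    stats["los_evaluated"]       / stats["los_published"],
        "w4_profile": stats["avg_profile_filled"]  / stats["profile_fields"],
    }

# Hypothetical repository statistics.
stats = {"avg_metadata_filled": 33, "ieee_lom_fields": 60,
         "los_downloaded": 438, "los_published": 600,
         "los_evaluated": 312,
         "avg_profile_filled": 14, "profile_fields": 20}
print(adaptive_weights(stats))  # ratios 0.55, 0.73, 0.52, 0.70
```

Because the statistics change as the repository is used, recomputing these ratios periodically is what makes the default weights adaptive.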
A. Zapata et al. / Advances in Engineering Software 56 (2013) 1–14
4. DELPHOS interface
DELPHOS has a GUI that is designed to be user-centred, with
special emphasis on the CSS (Cascading Style Sheets) language
[12] for easy modification and implementation of new features in
the future. It also uses other Web languages such as HTML (Hypertext
Markup Language) [21], JavaScript [16] and PHP (Hypertext
Preprocessor) [30]. The DELPHOS main page provides a direct link to
the following functions: to edit the user profile, to search for
learning objects and to log out of the system.
4.1. Edit your profile
Users can edit their profiles when they register, or at any other
time. It is very important to our system that users provide
information about themselves by filling in all the fields of the
registration form. Using this information, DELPHOS can improve the
recommendation by personalising the list of LOs. There are two types
of requested information. On the one hand, as in most systems, users
have to provide general and personal data (some of which is optional)
such as Username, Password, First name, Last name, Email, Brief
Description, Detailed Description, Affiliations, Date of Birth, Sex,
Place of Residence and Nationality. On the other hand, the user's
profile requests some additional information about academic history,
such as Education level (Higher Education, Master's, Ph.D., etc.),
Research area (Agricultural Science, Healthcare Science, Natural
Science, Social Science, Engineering), Language (Spanish, English,
French), Teaching experience (0–5 years, 6–10 years, 11–15 years,
16–20 years, more than 20 years), Information Technology experience
(None, Initial, Medium, Advanced), Didactic experience (Initial,
Medium, Advanced, None), Design Instruction experience (Initial,
Medium, Advanced, None), Learning Object editor used (Reload,
eXe, Xerte, Advanced SCORM Editor, Other, None), Learning Management
System used (Moodle, Dokeos, Claroline, Atutor, WebCT, Other, None),
and Learning Object Repository used (AGREGA, ARIADNE, MERLOT, LORI,
CAREO, MACE, Other, None).
4.2. Search for learning objects
The GUI for searching for LOs is designed to be very flexible,
allowing not only the use of text fields for beginner users, but also
the adjustment of the weights of the recommendation criteria for
advanced users. When doing a search in the normal or simple way (see
Fig. 3), users can use a text (obligatory) or keywords and some
metadata values (optional) according to IEEE-LOM. The metadata
were selected according to the Canadian Core (CanCore) initiative
[19], as described below:

Language: the language of the LO content (English, Spanish,
French, Portuguese and Italian).
File format: the format or file extension of the LO (DOC, PDF,
HTML, TXT, XLS, PPT, SWF, MID, MP3, WAV, RA, BMP,
GIF, JPG, PNG, AVI, MPG, MOV, RV, ASF, WMV or FLV).
Resource type: the use or type of resource of the LO (Exercise,
Questionnaire, Figure, Graph, Slide, Table, Exam,
Experiment, Lecture, Photograph, Video or Music).
Semantic density/Media content: the amount of information
that the LO contains (Very low, Low, Medium, High and
Very high).
Receiver: the type of user the LO is aimed at (Teacher,
Learner, Manager or Professional).
Context: the academic level the LO is aimed at
(School, Higher Education, Training, Other).
Difficulty/Complexity: the degree of difficulty of the LO
content (Very easy, Easy, Medium, Hard, Very hard).
DELPHOS uses default values for the weights of the four filters
or recommendation criteria (see Section 3.3). However, these
specific values of the recommender criteria can be viewed and
modified by clicking on the Filters icon (see Fig. 3, at top-right),
which shows the advanced search interface (see Fig. 4).

The recommendation criteria or filters panel (see Fig. 4) allows
more advanced users to modify the value of the weight of each
recommendation criterion. Users can assign new values (in a range
from 0% to 100%) by using a slider bar, and they can also activate
or deactivate every recommendation criterion by simply clicking the
corresponding checkbox. However, at least one criterion must remain
activated in order to be able to calculate the rating associated
with the recommended LOs. It is important to notice that DELPHOS
provides the user with default weight values that are dynamically
and periodically calculated, as explained in Section 3.3. Using
these default weight values, users can retrieve the most appropriate
LOs by following a traditional search without needing to set or tune
any weights.
4.3. List of recommended learning objects and additional information
DELPHOS shows the user not only a personalised ranked list of
recommended LOs, but also provides diverse additional information
about each LO (see Fig. 5). This information can be very useful
to the users as it helps them to select the best and most interesting
LOs to download and use (none, one or several) from the
recommended list of LOs.

As shown in Fig. 5, the DELPHOS tool offers a short explanation
which shows the reasons why each particular object has been
recommended (Why? icon). With this in mind, we discretised the
values of each filtering criterion obtained in the same way: High
(value > 0.7 and ≤ 1), Medium (value ≤ 0.7 and ≥ 0.3) and Low
(value < 0.3).
600 LOs that have been published by approximately 300 users from
different Spanish and Latin-American universities. Currently,
AGORA has approximately 70 LOs related to several engineering
fields such as electrical, civil and environmental engineering.

Several experiments have been carried out in order to complete
a first evaluation of DELPHOS using a group of 24 beta-tester users.
All these users were first-year students of civil engineering degrees
at the Autonomous University of Yucatan in Mexico. The experiments
were carried out during the practical lessons of the subject
Introduction to Development of Computer Science Applications, at the
end of the second semester of 2012. The subject teacher introduced
DELPHOS and AGORA to the students in one session, and in another
session the students used them for searching for specific LOs on
engineering by following the instructions given by the teacher.
The first experiment shows how the four filtering or
recommendation criteria affect the ranking of the LOs. The second
experiment compares our proposal of using default adaptive weights
versus using random values or only one filtering criterion. The third
experiment validates the usefulness of the tool.
5.1. Experiment 1
This first experiment shows how DELPHOS works with different
filtering criteria. The objective was to complete a first trial of the
tool by the students and to test the behaviour of the ranking of
recommended LOs when different weights are used in the same
search/query (the same text/keywords and parameters/metadata
values).

A total of seven test configurations were compared, in which
the same search/query was used but with different weighted values
for each recommendation criterion in each test (see Table 3).
The objective of each test is outlined below:

Test 1: To test the use of the default adaptive weighted values
of each recommender criterion, automatically calculated by the
system.
Tests 2, 3, 4 and 5: To test the use of only one recommendation
criterion at a time; that is to say, one criterion has a weight of
100% while the other three criteria have a weight of 0%.
Tests 6 and 7: To test the use of random values for the weights
of the four recommendation criteria.

In this experiment, all students carried out whatever query they
wanted; that is to say, each student used a different search.
Table 4 shows an example of a search/query used by one of the
students during this experiment.

When DELPHOS executed the previous search/query (see Table 4),
the pre-selection module returned an initial subset of 19 LOs.
Then, the filter criteria and rating module applied the equations
(explained in Section 3.2) to those 19 LOs in order to obtain their
final rating in each one of the seven tests. Fig. 7 shows the final
ranking of the Top-10 LOs in the seven tests.
Fig. 3. Interface to search learning objects.
Fig. 4. Recommendation criteria panel.
As can be seen in Fig. 7, all the test configurations show
different rankings; that is to say, the list of recommended LOs has a
different order in each test. Every LO has a different position in the
ranking according to the specific recommender criteria activated
and their weight values. As an example of how the positions of LOs
change in each test, the behaviour of the LO with ID 684 was
analysed (follow the arrows in Fig. 7). We can see that LO 684 is
located in the first or second position in the ranking in tests 1, 3, 6
and 7. However, in tests 2, 4 and 5 it has dropped several positions.
Then, as each student used a different query, Spearman's rank
correlation matrix was used to show the statistical relationships
between all the rankings (different orders of LOs) obtained when
using the 24 students/searches for each different configuration or
test. Spearman's rank correlation coefficient, or Spearman's rho, is
a non-parametric measure of statistical dependence between two
ranks [18]. It assesses how well the relationship between two
variables or ranks can be described using a monotonic function,
and it is defined as the well-known Pearson correlation coefficient
between the ranked variables. In order to obtain the Spearman's
rho correlation matrix, we have calculated the Spearman's rho
coefficient between each pair of configurations or tests (see Table
5). This correlation matrix is symmetric because the correlation
between two variables X and Y is the same as the correlation between
Y and X. We have also computed the Student's t-test P value (p)
that shows whether the differences between two variables can
be considered statistically significant with a confidence level of
99% (p < 0.01) or 95% (p < 0.05).
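The definition above (Pearson correlation applied to ranks) can be sketched directly in a few lines of Python; this minimal version does not handle tied ranks, which suffices for strict rankings of distinct LOs:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation computed on
    the ranks of the two variables (no tie correction in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / var

# Positions of the same five LOs under two test configurations:
# identical orderings give rho = 1, reversed orderings give rho = -1.
print(spearman_rho([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))   # 1.0
print(spearman_rho([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))   # -1.0
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))   # partial agreement
```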
As can be seen in Table 5, all tests have a positive correlation in
their rankings of LOs. On the one hand, the highest correlations
(greater than 0.9) are between configurations or tests 1, 6 and 7.
This high correlation could be expected, since these three specific
configurations use non-zero values for the four weights; that is to
say, they use information from the four filtering criteria. On the
other hand, among the other four configurations that use one single
criterion, test 3 (which uses the usage information) also shows a
high correlation (greater than 0.8) with tests 1, 6 and 7. This shows
that, in this case, the usage information seems to be the most
important of all the information available (content, usage,
evaluation and profile) for recommendation purposes.

In conclusion, this first experiment demonstrates how the DELPHOS
tool obtains different LO rankings for the same search/query
depending on the recommendation criteria used and their weight
values. In this way, the system can personalise the LO ranking
by using the default weight values, or the user himself/herself
can prioritise which recommendation criterion is the most
interesting and in what percentage.
5.2. Experiment 2
This second experiment analyses what the most interesting LOs
are for each particular user, and what the best test configuration is;
that is to say, which configuration returns these LOs in the highest
positions. Implicit rating has been used, starting from the users'
click data [38] on downloaded LOs, in order to know which LOs the
users are really interested in. In our case, users can click or select
to download one, several or none of the recommended LOs, and we
assume that they are interested in those LOs which have been
downloaded. The order or position that the downloaded LOs have in the
list of recommended LOs is used to measure the interest of the user
in these LOs. The objective of this experiment is to compare and to
find which one of the previously proposed configurations of
recommendation criteria or weight values (see Table 3) obtains the
Fig. 5. Example of additional information of a list of recommended LOs.
Table 2
Icons description.

Related objects      It shows a list of the objects most downloaded by
                     users that have also downloaded this LO.
Similar objects      It shows a list of the most similar objects
                     according to IEEE-LOM metadata.
Downloads            It shows how many users have downloaded this LO.
Pedagogical reviews  It shows how many users have evaluated this LO.
Why?                 It shows a short explanation about why this
                     particular object has been recommended.
best results with the highest precision in the top-ranked LOs; that
is to say, when the user selects/downloads the LOs that have been
recommended at the highest rankings/positions. In this experiment,
the students were asked to search for LOs related to a civil
engineering topic proposed by the instructors. The specific topic was
bridge design and construction. Each student carried out three
searches in order to find two or three LOs. The objective was to
select/download only the most interesting or best LOs about
the topic for each student. Some examples of sentences used by
the students during the searches were: bridge project, bridge
construction, bridge design, civil engineering bridge, bridge plan,
etc. In summary, the 24 students executed a total of 72 searches,
using the same null configuration. This null configuration means that
no filter criteria were used; that is to say, the four weight values
were set to 0%. In this way, the list of recommended LOs was not
ranked by any filter criteria. The users could then see all the
information about the list of LOs and select or click to download the
LOs they were interested in, without knowing the ranking information
of each LO. Later, in off-line mode, the seven test configurations
(see Table 3) were automatically calculated starting from the 72
searches in order to obtain the position of the clicked LOs in each
one of the seven rankings. To evaluate the performance of each test
configuration, two metrics have been used: average reciprocal hit
rate and recall.
Firstly, the Average Reciprocal Hit Rate (ARHR), also known as
Mean Reciprocal Rank (MRR), has been used in order to compare
the positions of the first clicked/downloaded LOs in the seven test
configurations. The MRR of each single search or query is the
reciprocal of the rank or position that the first clicked/downloaded LO
Fig. 6. Example of a learning object about bridge design.
Table 3
Recommendation criteria values used in each test configuration.

Recommendation      Test 1  Test 2  Test 3  Test 4  Test 5  Test 6  Test 7
criteria            (%)     (%)     (%)     (%)     (%)     (%)     (%)
Content similarity  55      100     0       0       0       80      10
Usage               73      0       100     0       0       50      90
Evaluation          52      0       0       100     0       65      30
Profile similarity  70      0       0       0       100     10      15
held in the list, or 0 if none of the recommended LOs have been
clicked. The score for a sequence of searches is the mean of the
single searches' reciprocal ranks [8], as expressed in the following
equation:

MRR = (1 / |S|) · Σ_{i=1}^{|S|} 1 / rank_i    (10)

where |S| is the number of searches and rank_i is the rank or position
of the first clicked/downloaded LO for search i.
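Eq. (10) can be checked with a short Python sketch (positions are 1-based; `None` marks a search where nothing was downloaded, contributing 0 to the mean):

```python
def mean_reciprocal_rank(first_click_positions):
    """Eq. (10): average of 1/rank of the first clicked/downloaded LO,
    with 0 contributed by searches where nothing was clicked."""
    rr = [1.0 / r if r is not None else 0.0 for r in first_click_positions]
    return sum(rr) / len(rr)

# Four searches: first download at positions 1, 2 and 4; one search
# with no download at all.
print(mean_reciprocal_rank([1, 2, 4, None]))  # (1 + 0.5 + 0.25 + 0) / 4 = 0.4375
```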
Fig. 8 shows a bar chart of the total MRR obtained starting from
the 72 searches of the 24 users in each one of the seven test
configurations.

As can be seen in Fig. 8, configuration number 1 (our proposed
default configuration) obtained the highest MRR result compared
to all the other six configurations, which obtained very similar
values. Configurations 6 and 7 obtained the second and third
highest MRR values, followed by configuration 4.

Secondly, recall on top-N recommendation tasks has also been
used as an accuracy metric of the top-N performance [13]. Recall
computes the percentage of known relevant or interesting LOs that
appear in the top-N predicted LOs. Recall for a single search at level
N takes the value 1 if the user clicks/downloads a LO at a
position/order ≤ N, or 0 if the user does not complete this action.
The overall recall at each level N is defined by averaging over
all the searches:

Recall(N) = #clicks / |S|    (11)

where #clicks is the number of clicks/downloads, |S| is the number of
searches, and N is the position or level in the list of LOs.
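A minimal Python sketch of recall at level N, following the per-search 0/1 definition above (again, `None` marks a search with no download):

```python
def recall_at_n(click_positions, n):
    """Eq. (11): fraction of searches whose clicked/downloaded LO sits
    at position <= N in the recommended list."""
    hits = sum(1 for p in click_positions if p is not None and p <= n)
    return hits / len(click_positions)

positions = [1, 2, 4, None, 6, 3]   # one click position per search
print([recall_at_n(positions, n) for n in range(1, 8)])
```

As expected, the values are non-decreasing in N: each larger cut-off can only capture more of the clicked LOs.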
Fig. 9 shows a comparison of the seven configurations versus
recall at different top-N (from top 1 to top 7). In general, recall
increases very quickly as N increases. All the configurations show a
similar behaviour, obtaining very high recall values (near 1) from
N = 5. The highest recall values are again obtained by configuration
number 1, followed by configurations 6, 7 and 3.
Table 4
An example of the search parameters used in the first experiment.

Parameters        Values
Text/keywords     Engineering
Language          English
File format       All
Resource type     All
Semantic density  Medium
Receiver          Learner
Context           Higher education
Difficulty        Medium
Fig. 7. Ranking of the top-10 LOs of an example search with the seven test configurations.
Table 5
Spearmans rank correlation matrix.
Test 1 Test 2 Test 3 Test 4 Test 5 Test 6 Test 7
Test 1 1 0.73978* 0.82837** 0.73336* 0.66792* 0.96694** 0.95804**
Test 2 0.73978* 1 0.65203* 0.51602 0.41860 0.61851* 0.54475*
Test 3 0.82837** 0.65203* 1 0.64226* 0.49907* 0.83584** 0.86936**
Test 4 0.73336* 0.51602 0.64226* 1 0.72225** 0.70002** 0.73058**
Test 5 0.66792* 0.41860 0.49907* 0.72225** 1 0.61519* 0.54195
Test 6 0.96694** 0.61851* 0.83584** 0.70002** 0.61519* 1 0.95796**
Test 7 0.95804** 0.54475* 0.86936** 0.73058** 0.54195 0.95796** 1

* p < 0.05.
** p < 0.01.
Fig. 8. Results of the total MRR of 24 users and 72 searches for each different test configuration.
In conclusion, this second experiment has shown that configuration
number 1 performs better than the others, due to the fact that it
obtains the highest MRR and recall values. That is to say, the
default adaptive weight values (automatically calculated by the
system) have shown in this experiment that they can adapt well
to each particular user and search. This is very important for the
DELPHOS system, as it is related to how easy it is to use a hybrid
recommender system. By using default values, it is not necessary
to ask the user for specific weight values to personalise the search.
5.3. Experiment 3
The third experiment endeavours to validate the usefulness
of the DELPHOS tool for a personalised LO search. In order
to do this, the same 24 students who participated in experiments
1 and 2 were invited to complete two questionnaires giving their
own opinion about the usability of the tool (see Table 6).

On the one hand, we used the System Usability Scale (SUS) [11],
a simple 10-item scale giving a global view of subjective
assessments of usability. All the questions in this survey require
an answer on a Likert scale from 1 (strongly disagree) to 5 (strongly
agree). On the other hand, we applied the Computer System
Usability Questionnaire (CSUQ) [25], a survey developed at IBM
composed of a 19-item scale, where each item is a statement rated
on a seven-point scale from 1 (strongly disagree) to 7 (strongly
agree), with a Not Applicable (N/A) point outside the scale. Then,
the general degree of usability of the system in each
Fig. 9. Recall at different top-N.
Table 6
Results of SUS and CSUQ questionnaires.
SUS Average
1. I think that I would like to use this system frequently 4.45
2. I found the system unnecessarily complex 2.37
3. I thought the system was easy to use 4.16
4. I think that I would need the support of a technical person to be able to use this system 2.08
5. I found the various functions in this system were well integrated 4.08
6. I thought there was too much inconsistency in this system 1.83
7. I would imagine that most people would learn to use this system very quickly 4.12
8. I found the system very cumbersome to use 1.87
9. I felt very confident using the system. 3.95
10. I needed to learn a lot of things before I could get going with this system 2.04
Usability 76.46%
CSUQ Average
1. Overall, I am satisfied with how easy it is to use system 4.14
2. It was simple to use system 2
3. I can effectively complete my work using system 3.71
4. I am able to complete my work quickly using system 4.03
5. I am able to efficiently complete my work using system 2.74
6. I feel comfortable using system 3.65
7. It was easy to learn to use system 3.8
8. I believe I became productive quickly using system 2.28
9. System gives error messages that clearly tell me how to fix problems 3.78
10. Whenever I make a mistake using system, I recover easily and quickly 2.31
11. The information (such as online help, on-screen messages, and other documentation) provided with system is clear 1.69
12. It is easy to find the information I needed 3.57
13. The information provided for system is easy to understand 3.66
14. The information is effective in helping me complete the tasks and scenarios 2.29
15. The organisation of information on system screens is clear 3.72
16. The interface of system is pleasant 2.02
17. I like using the interface of system 2.48
18. System has all the functions and capabilities I expect it to have 1.6
19. Overall, I am satisfied with system 3.84
Usability 74.25%
questionnaire is obtained by averaging the answers of all the
students into one single value between 0% and 100%.

The results of the SUS and CSUQ questionnaires (see Table 6)
show that users have a good opinion of the functionalities provided
by the DELPHOS tool, which obtained a value of 76.46% (SUS) and
74.25% (CSUQ). In general, they feel the system is easy to use,
and that it greatly facilitates the actions of searching for and
retrieving LOs to suit their specific needs. Both questionnaires also
included a text field in which users could express comments and
suggestions. Some of these comments gave rise to the following
planned improvements of the system: to incorporate social elements
such as an internal chat room, forums and mechanisms that
allow collaborative search between users.
6. Conclusions and future work
Learning object repositories are digital libraries that are changing
the way that we search for, find and use resources anywhere
and anytime. In order to help users search for the most interesting
LOs in repositories, we propose the DELPHOS framework, which
uses a hybrid recommender approach. DELPHOS provides a great
number of advantages when compared with other similar
recommender tools. In fact, some of its main advantages are: (1) all
the additional information that is provided to the user about each
recommended LO (to help in making a better decision about which
LOs to select); (2) the use of a hybrid approach with several
filtering or recommendation criteria (to personalise the list of
recommended LOs); and (3) the dynamic calculation of adaptive
weights that provide default values to the user (to make the hybrid
recommendation system easier to use). In this paper, we have carried
out several experiments using a group of 24 civil engineering
students that show some examples of its use and its evaluation. In
general, the results obtained confirm that the proposed weighted
hybridisation strategy for recommendation works well for searching
LOs and that the DELPHOS interface is useful and usable. Finally, it
is important to note that, although DELPHOS is currently fully
integrated in the AGORA repository, the proposed architecture and
weighted hybrid recommender approach can be implemented in any
other repository.

In the future, we want to carry out more experiments that use a
greater number of users with different profiles and from different
domains or knowledge areas. In this way, we could carry out a more
in-depth validation of the effectiveness of the recommendations of
the DELPHOS framework. We are also working on adding social and
collaborative characteristics, such as a chat room, a forum, tagging
and commenting on LOs, and group recommendation, in order to allow
collaborative search between groups of users.
Acknowledgements
This research has been partially supported by the TIN2010-20395
FIDELIO Project, MEC-FEDER, Spain; the PEIC09-0196-3018 SCAIWEB-2
excellence project, JCCM, Spain; the POII10-0133-3516 PLINIO
Project, JCCM, Spain; the Regional Government of Andalusia and the
Spanish Ministry of Science and Technology Projects P08-TIC-3720
and TIN-2011-22408, respectively; and the National Council of
Science and Technology (CONACYT), México.
References
[1] Aamodt A, Plaza E. Case-based reasoning: foundational issues, methodological
variations, and systems approaches. AI Commun 1994;7(1):3952.
[2] Adam JM, Pallars FJ, Bru R, Romero ML, Topping BHV. Editorial. CIVIL-COMP.
Adv Eng Software 2012;50:1158.
[3] Al-Khalifa HS. Building an Arabic learning object repository with an ad hoc
recommendation engine. In: Proceedings of the 10th international conference
on information integration and web-based applications & services (iiWAS 08),New York, USA; 2008. p. 3904.
[4] Al-Khalifa HS, Davis HC. The evolution of metadata from standards to
semantics in e-learning applications. In: Proceedings of the seventeenth
conference on hypertext and hypermedia (HYPERTEXT 06), Odense, Denmark;
2006. p. 6972.
[5] Anderson M, Ball M, Boley H, Greene S, Howse N, Lemire, D, et al. RACOFI: a
rule-applying collaborative filtering system. In: Proceedings of IEEE/WIC
international conference on web intelligence/intelligent agent technology
(COLA03), Halifax, Canada; 2003. p. 1323.
[6] ARIADNE. Alliance of Remote Instructional Authoring and Distribution
Networks for Europe; 2006. .
[7] Avancini H, Straccia U. User recommendation for collaborative andpersonalised digital archives. Int J Web Commun 2005;1(2):16375.
[8] Bian J, Liu Y, Agichtein E, Zha H. Finding the right facts in the crowd: factoid
question answering over social media. In: Proceedings of the 17th
international conference on World Wide Web (WWW 08), New York, USA;
2008. p. 46776.
[9] Bozo J, Alarc R, Iribarra S. Recommending learning objects according to a
teachers Contex model. Learning. In: Proceedings of the 5th European
conference on technology enhanced learning conference on sustaining TEL:
from Innovation to Learning and Practice (EC-TEL10), Barcelona, Spain; 2010.
p. 4705.
[10] Burke R. Hybrid web recommender systems. In: Brusilovsky P, Kobsa A,
Wolfgang N, editors. The adaptive web. Berlin/Heidelberg: Springer; 2007. p.
377408.
[11] Brooke J. SUS: a Quick and Dirty usability scale. In: Jordan PW, Thomas B,
Weerdmeester BA, McClelland AL, editors. Usability evaluation in
industry. London: Taylor y Francis; 1996. p. 18994.
[12] CSS. World Wide Web Consortium. .
[13] Cremonesi, P., Koren, Y., Turrin, R. Performance of recommender algorithms on
top-Nrecommendation tasks. In Proceedings of the fourth ACM conference onrecommender systems (RecSys 10), New York, USA; 2010. p. 3946.
[14] DCMI. Dublin Core Metadata Initiative. .
[15] DELPHOS. Learning Objects Intelligent Recommender System. .
[16] ECMA International. ECMAScript Language Speciation. Standard ECMA-262,
3rd ed. .
[17] Fiaidhi J. RecoSearch: a model for collaboratively filtering java learning objects.
Int J Instruct Technol Distance Learning 2004;1(7):3550.
[18] Fieller EC, Hartley HO, Pearson ES. Tests for rank correlation coefficients. I.
Biometrika 1957;44:47081.
[19] FriesenN. CanCore: interoperability for learning objectmetadata. In: Hillman DI,
Westbrooks EL, editors. Metadata in practice. ALA Editions; 2004. p. 10416.
[20] Han Y. GROW: building a high-quality civil engineering learning object
repository and portal. Ariadne: Web Magazine for Information Professionals;
vol. 49, p. 13.
[21] HTML. World Wide Web Consortium. <http://www.w3.org/>.
[22] IEEE-LTSC. Standard for Learning Object Metadata. In: IEEE Standard; 2002.
<http://ltsc.ieee.org/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf>.
[23] IMS Global Learning Consortium. IMS Digital Repositories Interoperability
Core Functions Information Model Version 1.0 Final Specification; 2003.
<http://teacode.com/biblio/er/imsdri_infov1p0.pdf>.
[24] Kumar V, Nesbit J, Winne P, Hadwin A, Jamieson-Noel D, Han K. Quality rating
and recommendation of learning objects. In: Pierre S, editor. E-learning
networked environments and architectures, Advanced information and
knowledge processing. London: Springer; 2007. p. 337–73.
[25] Lewis JR. IBM computer usability satisfaction questionnaires: psychometric
evaluation and instructions for use. Int J Human–Comput Interact
1995;7(1):57–78.
[26] Longueville B. KINOA: a collaborative annotation tool for engineering teams.
In: International workshop on annotation for collaboration, Paris, France;
2005. p. 123–32.
[27] Manouselis N, Vuorikari R, Van Assche F. Collaborative recommendation of e-
learning resources: an experimental investigation. J Comput Assist Learning
2010;26(4):227–42.
[28] Menendez-Dominguez V, Zapata A, Prieto-Mendez ME, Romero C, Serrano-
Guerrero J. A similarity-based approach to enhance learning objects management
systems. In: 11th International conference on intelligent systems design and applications (ISDA 2011), Cordoba, Spain; 2011. p. 996–1001.
[29] Ouyang Y, Zhu M. eLORM: learning object relationship mining based
repository. In: The 9th IEEE international conference on e-commerce
technology and the 4th IEEE international conference on enterprise
computing, e-commerce and e-services (CEC-EEE 2007), Los Alamitos, USA;
2007. p. 691–8.
[30] PHP (Hypertext Preprocessor). <http://www.php.net/>.
[31] Prieto ME, Menendez VH, Segura A, Vidal C. A recommender system
architecture for instructional engineering. In: Emerging Technologies and
Information Systems for Knowledge Society. LNCS, vol. 5288; 2008. p. 314–21.
[32] Rafaeli S, Dan-Gur Y, Barak M. Social recommender systems:
recommendations in support of e-learning. J Distance Educ Technol
2005;3:29–45.
[33] Resnick P, Varian HR. Recommender systems. Commun ACM 1997;40(3):56–8.
[34] Ricci F, Rokach L, Shapira B, Kantor PB. Recommender systems handbook. 1st
ed. New York: Springer-Verlag; 2010.
[35] Ruiz-Iniesta A, Jiménez-Díaz G, Gómez-Albarrán M. Recommendation in
repositories of learning objects: a proactive approach that exploits diversity and navigation-by-proposing. In: The ninth IEEE international conference
on advanced learning technologies (ICALT 2009), Riga, Latvia; 2009. p.
543–5.
[36] Scheer S, da Gama CLG. Learning objects for a teaching and learning network in
structural engineering. In: International conference on computing in civil and
building engineering. p. 1–12.
[37] Schell GP, Burns M. Merlot: a repository of e-learning objects for higher
education. e-Service J 2002;1(2):53–64.
[38] Seikyung J, Herlocker JL, Webster J. Click data as implicit relevance feedback in
web search. Inform Process Manage: Int J 2007;43(3):791–807.
[39] Smyth B. Case-based recommendation. In: Brusilovsky P, Kobsa A, Nejdl W,
editors. The adaptive web. Berlin/Heidelberg: Springer; 2007. p. 342–76.
[40] Stefaner M, Vecchia ED, Condotta M, Wolpers M, Specht M, Apelt S, et al.
MACE – enriching architectural learning objects for experience multiplication.
In: Duval E, Klamma R, Wolpers M, editors. EC-TEL 2007. LNCS, vol. 4753;
2007. p. 322–36.
[41] Tang TY, McCalla GI. Smart recommendation for an evolving e-learning
system: architecture and experiment. Int J E-Learning 2005;4:105–29.
[42] Tsai KH, Chiu TK, Lee MC, Wang TI. A learning objects recommendation model
based on the preference and ontological approaches. In: Proceedings of the
sixth IEEE international conference on advanced learning technologies
(ICALT '06), Washington, USA; 2006. p. 36–40.
[43] Ummi Rabaah H. Development of learning object for engineering courses in
UTeM. In: International conference in engineering education (ICEED); 2009.
p. 191–5.
[44] Walker A, Recker M, Lawless K, Wiley D. Collaborative information filtering: a
review and an educational application. Int J Artif Intell Educ 2004;14:1–26.
[45] Wan X, Ninomiya T, Okamoto T. A learner's role-based multi-dimensional
collaborative recommendation (LRMDCR) for group learning support. In:
Proceedings of the 2008 eighth IEEE international conference on advanced
learning technologies (ICALT '08), Santander, Spain; 2008. p. 39116.
[46] Wiley DA. Connecting learning objects to instructional design theory: a
definition, a metaphor, and a taxonomy. In: Wiley DA, editor. The instructional use of learning objects. Agency for Instructional Technology; 2002. p. 1–35.
[47] Zapata A, Menéndez VH, Eguigure Y, Prieto ME. Quality evaluation model for
learning objects from pedagogical perspective. A case study. In: International
conference of education, research and innovation (ICERI2009), Madrid, Spain;
2009. p. 2228–38.
[48] Zapata A, Menendez VH, Prieto ME, Romero C. A hybrid recommender method
for learning objects. In: IJCA proceedings on design and evaluation of digital
content for education (DEDCE), vol. 1; 2011. p. 1–7.
A. Zapata et al. / Advances in Engineering Software 56 (2013) 1–14