
Volume 3, Issue No. 1, 2017

www.sdiwc.net


Editor-in-Chief

Prof. Jacek Stando, Lodz University of Technology, Poland

Editorial Board

Amreet Kaur Jageer Singh, Sultan Idris Education University, Malaysia
Anne Le Calve, University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis), Switzerland
Antonis Mouhtaropoulos, Metropolitan College, Greece
Anuranjan Misra, Bhagwant Institute of Technology, India
Ekaterina Pshehotskaya, Moscow Polytechnic University, Russia
Elsa Estevez, United Nations University, Argentina
Fadhilah Ahmad, University Sultan Zainal Abidin, Malaysia
Hatem Haddad, Mevlana University, Turkey
Khitam Shraim, University of California, USA
Nazih Moubayed, Lebanese University, Lebanon
Ramadan Elaiess, University of Benghazi, Libya
Spits Warnars Harco Leslie Hendric, Bina Nusantara University, Indonesia
Suphan Nasir, Istanbul University, Turkey
Yoshiro Imai, Kagawa University, Japan
Zhan Liu, University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis), Switzerland
Zhou Yimin, Chinese Academy of Science, China

Overview

The SDIWC International Journal of E-Learning and Educational Technologies in the Digital Media (IJEETDM) is a refereed online journal designed to address the networking community from both academia and industry, to discuss recent advances in the broad and quickly-evolving fields of computer and communication networks, technology futures, national policies and standards, and to highlight key issues, identify trends, and develop visions for the digital information domain.

The topics of interest include: Intelligent Tutoring Systems, Security Aspects, Immersive Learning, Computer-Aided Assessment, Collaborative Learning, Errors in E-Learning-Community Building, Accessibility to Disabled Users, Context Dependent Learning, E-Learning Platforms, Portals, Mobile Learning (M-Learning), Learning Organization, Standards and Interoperability, Virtual Labs and Virtual Classrooms, Digital Libraries for E-Learning, Joint Degrees, Web-based Learning, Wikis and Blogs, Authoring Tools and Content Development, Synchronous and Asynchronous Learning, Medical Applications, E-Learning Hardware and Software, AV-Communication and Multimedia, Ontologies and Meta-Data Standards, Simulated Communities.

Publisher

Society of Digital Information and Wireless Communications

20/F, Tower 5, China Hong Kong City, 33 Canton Road,

Tsim Sha Tsui, Kowloon, Hong Kong

Further Information

Website: http://sdiwc.net/ijeetdm/

Email: [email protected]
Tel.: (202)-657-4603 (inside USA), 001 (202)-657-4603 (outside USA)

Permissions

The International Journal of E-Learning and Educational Technologies in the Digital Media (IJEETDM) is an open access journal, which means that all content is freely available without charge to the user or his/her institution. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission from the publisher or the author. This is in accordance with the BOAI definition of open access.

Disclaimer

Statements of fact and opinion in the articles in the International Journal of E-Learning and Educational Technologies in the Digital Media (IJEETDM) are those of the respective authors and contributors and not of the International Journal of E-Learning and Educational Technologies in the Digital Media (IJEETDM) or The Society of Digital Information and Wireless Communications (SDIWC). Neither The Society of Digital Information and Wireless Communications (SDIWC) nor the International Journal of E-Learning and Educational Technologies in the Digital Media (IJEETDM) makes any representation, express or implied, in respect of the accuracy of the material in this journal, and neither can accept any legal responsibility or liability for errors or omissions that may be made. The reader should make his/her own evaluation as to the appropriateness or otherwise of any experimental technique described.

Copyright © 2017 sdiwc.net, All Rights Reserved

The issue date is March 2017.


Volume 3, No. 1

TABLE OF CONTENTS

Original Articles (Paper Title / Authors / Pages)

REVIEW OF USABILITY EVALUATION METHODS AND OTHER FACTORS FOR IMPLEMENTING AN OPEN SOURCE LEARNING MANAGEMENT SYSTEM IN SAUDI ARABIA
Thulani Phakathi, Francis Lugayizi, B. Esiefarhenrhe, Bassey Isong
1-12

INTERPRETING THE EXPERIENCES OF TEACHERS USING EDUCATIONAL ONLINE TECHNOLOGIES TO INTERACT WITH CONTENT IN BLENDED TERTIARY ENVIRONMENTS: A PHENOMENOLOGICAL STUDY
Tzu-Fan Chen, Wei-Sheng Yang, Jyh-Horng Jeng
13-22

A COMPARATIVE ANALYSIS OF THE PERFORMANCE OF THREE MACHINE LEARNING ALGORITHMS FOR TWEETS ON NIGERIAN DATASET
M.S. Tabra, Abdulwahab Lawan
23-30

CHINESE AND MOROCCAN HIGHER EDUCATION MOOCS: RATIONALE, IMPLEMENTATION AND CHALLENGES
Oubibi Mohamed, Zhao Wei
31-34

STABILIZATION OF PARTICLE FILTER BASED VERTEBRAE TRACKING IN LUMBAR SPINAL VIDEOFLUOROSCOPY
Amina Amkoui, Ibrahim Guelzim, Hammadi Nait Charif, Lahcen Koutti
35-40


Review of Usability Evaluation Methods and Other Factors for Implementing an Open Source Learning Management System in Saudi Arabia

Kholod Jeza Alotaibi
Vice Head of Computer Science Department, College of Computer Science and Information Technology, Taif University, Saudi Arabia
[email protected]

ABSTRACT

Possible methods suitable for evaluating the usability of e-learning websites that provide a Learning Management System are examined. In doing so, some systems are suggested, with a focus on the Saudi educational context and open-source solutions, so as to aid those considering which system and which usability evaluation method to adopt. Other issues related to implementation besides software usability are also highlighted, including the appropriateness of the Technology Acceptance Model, which takes usability into account.

KEYWORDS

Software usability; Usability evaluation; Learning Management System; Technology Acceptance Model; Open source; Moodle

1 INTRODUCTION

This review of the literature examines possible methods for evaluating the usability of e-learning or Learning Management Systems (LMS) for academic institutions, especially those in Saudi Arabia that seek a low-cost solution. The information may also be applicable to other similar contexts, and should be useful for anyone considering implementing an LMS and evaluating its usability; note that an LMS may also be referred to as an e-learning platform, Course Management System (CMS), or Virtual Learning Environment (VLE). The purpose is to give valuable information that may inform decision makers faced with the choice of evaluation methods for evaluating the usability of an LMS while subject to time, cost and other constraints.

In addition, the paper gives a brief overview of Moodle as one example of an open-source LMS, since the cost factor is an important consideration for these institutions, and it also highlights some relevant issues besides software usability that may be important in deciding which LMS to implement. This wider consideration is necessary because regardless of how easy to use or effective a certain technology is, it cannot serve its purpose without a proper implementation [1]. This also leads to highlighting the Technology Acceptance Model (TAM) as a possible theoretical framework for studying the acceptance of an LMS based on its usefulness and usability.

2 LEARNING MANAGEMENT SYSTEMS

An LMS and all other such systems mentioned above rely on a computer connected to the Internet, and make it possible for students to learn by obtaining course materials, sending assignments, taking quizzes, communicating with their teachers and fellow learners, etc. For teachers, an LMS assists by allowing them to create, make available, manage, customise and modify a range of digital content and learning objects, to reuse that content and track their students' learning, and for a university, an LMS enables it to expand its student body through delivering courses to students around the world. Since the advent of the Internet, various technologies have been used to enhance learning, such as email, Bulletin Board Systems (BBS), blogs, wikis, and chat clients, but a typical LMS offers many more features to provide a more comprehensive learning environment.

Notably, an LMS enables the communication to be conducted remotely, and either synchronously or non-synchronously. That is, an LMS removes the restrictions of time and distance in providing an educational environment [2]. This offers many advantages for learners, especially in terms of being able to learn at their own pace, at their own convenience, and from anywhere as long as they have the aforementioned physical hardware requirements. It does, however, impose on students the need to be independent, collaborative and active participants.

In short, an LMS is characterised by its provision of four types of spaces: (1) an information space for providing educational content and reference materials, (2) an exhibition space to exhibit learning products, such as documents and videos, (3) an interaction space for users to communicate and exchange information, and (4) a production space where processes are implemented to generate traces of learning, such as exercises and tests [3]. As a piece of software, a typical LMS would be described as being multiplatform, having a graphical interface, based on a client-server architecture, and allowing for multimedia, information management, communication and user interaction.

One way of distinguishing most LMSs is by describing them as being either proprietary or open-source. By open source is meant that the source code is openly available, which enables users to have access to the source code, modify it, add features and redistribute the software. These are typically available for free. A closed-source LMS, on the other hand, does not provide open access to the code; this is typically a proprietary LMS provided by a commercial entity and is not therefore free. A few examples of popular commercial LMSs are Blackboard, WebCT, Brightspace, and WizIQ, and of open-source ones are Moodle, Sakai, Ilias, ATutor, Canvas, and Schoology. In deciding between these two types, important factors to consider would be features, supported technologies, license fees, support and maintenance, security, and IT resources. In comparison, although commercial LMSs may be costlier, they generally come with better support, whereas open-source LMSs are usually obtainable free of charge and are more flexible and customisable.

Delivering a course of study through a Learning Management System (LMS) has increased dramatically in Saudi Arabia in recent years [4]. A major reason for this trend is Saudi Arabia becoming the largest market for information and communication technologies in the region [5], and the huge budget allocated by the Ministry of Higher Education to e-learning systems and to encouraging their implementation. One of the earliest LMSs to be implemented was EMES (E-Learning Management Electronic System) at King Abdulaziz University in 2007. Although most implementations are in universities, LMSs have now also been introduced into K-12 public schools in Saudi Arabia [6].

Of the 34 universities in the kingdom (as of 2015), a Ministry of Higher Education (MoHE) survey of 25 of them revealed that Blackboard is by far the most commonly used LMS [64]. It is used by 76% of the universities in the sample (Table 1), although many of these universities have only been developed in recent years as part of the kingdom's drive to transform itself into a 'knowledge economy' [65]. In this sample, Moodle is used by only a single institution (University of Tabuk), so there is a lot of scope to promote open source alternatives.

Table 1: Types of LMSs in use in Saudi universities

No.  LMS                          No. of Universities   Percentage
1    Blackboard                   19                    76%
2    None or not yet installed    3                     12%
3    Desire2Learn                 2                     8%
4    Moodle                       1                     4%
     TOTAL                        25                    100%

The situation is more or less similar in other countries as well, in that there is a discernible diffusion of online education. Reference [66] identified four distinguishable stages of this diffusion at a selected Mexican university over a 13-year period (1996-2009). It was noticed that initially, online education began as individual initiatives by academics in the late 1990s, who emerged as agents of change and stimulated interest in other academics, which led to the formation of small communities sharing information and experiences. The next stage (1999-2002) was characterised by a more active role in the diffusion, and the third stage (2003-2007) by a clearer structured focus centred on the use of ICT for teaching and learning. During the subsequent period (2008-2009), a clear institutional policy had not been defined by the university examined. Although the periods may differ for other universities and countries, similar diffusion characteristics can be observed, and with them, issues such as selecting and evaluating a suitable LMS have become important.

3 LMS USABILITY

3.1 LMS Selection

Besides the above criteria for selecting an LMS, its evaluation may be undertaken from a pedagogical or institutional perspective [7], or by conducting a usability evaluation, or a combination of these. The use of an LMS, which plays a central role in this arrangement for learning, introduces potential software usability issues at both ends, and usability has become an important concern in developing an LMS [8]-[9]. The problem is that evaluating an LMS can be a complex task, as found for instance when evaluating the effectiveness of an open source LMS [10]. The price and feature list of an LMS are not therefore the only factors to consider. In particular, a technologically mediated educational process should be expected to provide an easy to use, clear and understandable interface, accessible content and course materials, and an efficient means for two-way communication between teachers and students.

3.2 Usability

Usability is an important software quality attribute recognised in the standard ISO/IEC 9126-1. The same standard defines usability as "the capability of the software product to be understood, learned, used, and attractive to the user, when used under specified conditions". The traditional attributes of usability, as described by Nielsen [11], were that the software should be easy to learn, efficient to use, easy to remember, have few errors, and be subjectively pleasing, which may be labelled as learnability, efficiency, memorability, reliability and satisfaction respectively. He [11] further introduced the concept of web usability as describing web pages that are intuitively organised, easy to navigate, and which help users find the information they seek with ease. This position is supported by Roy & Pattnaik [12], who also argued that the most important aspects of a website from a usability point of view, in order to satisfy users, are the user friendliness of its navigation system and its effectiveness in enabling users to accomplish tasks. Effectiveness relates to performance in being able to accomplish tasks, which may be measured, for instance, by the number of users able to accomplish them within a certain time. A site would be considered user-friendly by its users if they can easily interact with it in order to perform the tasks required of them [13].

Although different researchers have defined usability in terms of different components, the components that may be particularly important for ensuring a highly usable e-learning system are learnability, rememberability, efficiency in use, reliability and user satisfaction [14]. Learnability, for instance, refers to the degree of learning required to accomplish tasks, which may be measured by the time taken to perform them, and the general satisfaction of the user may be gauged by whether they want to continue to use the system happily, want to see it improved in some way, or would prefer not to use it or to use another system in its place. Evaluation based on a range of factors is essential for evaluating usability because usability does not pertain to any one of them exclusively, and a range of technical, attitudinal, cognitive and other factors provides a more balanced indication of usability. An ideal LMS would be one that has all such components expected of an LMS, and which can ensure students can learn effectively from an institutional and also a system perspective [15].
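To make these measures concrete, the following minimal Python sketch shows how effectiveness and learnability might be quantified as a task completion rate and a mean completion time from usability-test logs; the record fields, the sample data and the 120-second threshold are illustrative assumptions, not values taken from the studies cited above.

# Hypothetical usability-test records: one entry per user per task.
# Field names, values and the time limit are illustrative assumptions.
task_logs = [
    {"user": "u1", "task": "submit_assignment", "completed": True,  "seconds": 95},
    {"user": "u2", "task": "submit_assignment", "completed": True,  "seconds": 140},
    {"user": "u3", "task": "submit_assignment", "completed": False, "seconds": 300},
]

TIME_LIMIT = 120  # "within a certain time", chosen arbitrarily here

def effectiveness(logs, limit=TIME_LIMIT):
    """Share of users who completed the task within the time limit."""
    done = [r for r in logs if r["completed"] and r["seconds"] <= limit]
    return len(done) / len(logs)

def mean_completion_time(logs):
    """Average time of successful attempts, a simple learnability/efficiency proxy."""
    times = [r["seconds"] for r in logs if r["completed"]]
    return sum(times) / len(times) if times else float("nan")

print(f"Effectiveness: {effectiveness(task_logs):.0%}")                   # -> 33%
print(f"Mean completion time: {mean_completion_time(task_logs):.0f} s")  # -> 118 s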

In a study specifically on the usability of e-learning systems, [16] recommended that such a system can only be considered usable if it is easy to use and useful for learners with respect to accomplishing their learning tasks, that is, if the software is able to help them improve in their learning. Others have also highlighted further motivational aspects as being important for e-learning, such as a feedback mechanism, comprehensiveness, and curiosity [17]; interactivity, avoiding interruptions or distractions to learning, and providing a continuous feeling of challenge [18].

3.3 Importance of Considering Usability

For software in general, enhancing usability can lead to improvements such as making the software easier to use, reducing the time spent learning to use it, improvements in productivity, greater user satisfaction, etc. The usability of websites has always been a matter of concern since the very beginning of the Internet era [19], and the characteristics of human-computer interaction play a major role in defining their usability [20]. Some usability aspects can also be subjective, as usability is also affected by users' cognitive and perceptual abilities [21].

Usability is an especially important consideration for e-learning software because it can help to develop systems with improved didactical and pedagogical approaches [22]. For an LMS, the consideration of usability is moreover important because it can affect the learning experience for students and their academic performance [23]-[24]. Any shortcoming in usability compromises the quality of the online course delivery system, can cause time to be wasted, and increases the need for and cost of providing training.

In spite of such potential benefits, usability is often neglected in designing and implementing e-learning software [25]. The reason for a lack of usability evaluation may be that, in comparison to other tests, it is considered tedious, least rewarding and expensive to implement [26], and it may also require training and close coordination between developers and programmers. Moreover, only a few studies have been conducted, and there is still no standard adopted for evaluating the usability of learning management systems.

3.4 Usability Evaluation Methods

Importantly, the effectiveness of educational software also relies on some principles that distinguish it from other web-based software, so these would need to be taken into account when selecting an evaluation method. These include the design of learning objects and learning activities, the medium of presentation, and the provision for communication between teachers and students [27]. As for methodology, one possible framework that can be applied is the DECIDE framework, based on the following six components [28]: (1) Determine goals for the evaluation to address, (2) Explore questions to be answered, (3) Choose the evaluation paradigm and techniques for answering those questions, (4) Identify practical issues to be addressed, (5) Decide on how to deal with ethical issues, and (6) Evaluate, interpret and present the data.

Many usability evaluation methods have been devised for evaluating various kinds of software, including web-based software. These methods may be categorised as: (1) survey questionnaire based methods, (2) other non-survey structured methods, (3) inspection based methods, and (4) non-user involved methods. There are several examples of the first type, such as the Software Usability Measurement Inventory (SUMI), System Usability Scale (SUS), Questionnaire for User Interaction Satisfaction (QUIS), Website Analysis and Measurement Inventory (WAMMI), Computer System Usability Questionnaire (CSUQ), and Usefulness, Satisfaction and Ease of Use (USE). Of these, WAMMI has been specifically prepared for evaluating websites [29], so it would appear that it may be particularly suitable for evaluating an LMS.

The Think Aloud approach, which is another form of systematic method, may also be used for websites as an alternative to WAMMI, but it is likely to be more time consuming [30], and unlike WAMMI, the results cannot be used to compare usability between different systems [31]. Another possible alternative is eye-tracking, but it requires special equipment and technical expertise [32], and Heuristic Evaluation, an inspection based method, has been shown to be particularly useful for detecting structural defects of sites [33]. Other methods also used are interviews, surveys, expert reviews, and personas. However, the decision of which method to adopt in evaluating usability would be subject to the typical constraints of time and cost [26], and depend on such factors as the stage of the software development lifecycle (SDLC), the availability of skills and expertise in evaluation [34], and the extent of need for an objective, systematic and complex evaluation. It is also possible to combine different methods, as done by [27] in a study that combined evaluations from three of the categories identified above, namely heuristic evaluation, a usability questionnaire, and a task-driven technique. As for sample size in evaluating usability, as pointed out by Nielsen [11], it is usually sufficient to determine this quickly with as few as three to five users [35].
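The usual justification for such small samples is the problem-discovery model associated with Nielsen and Landauer, in which the proportion of usability problems found by n evaluators is approximately 1 - (1 - L)^n, with an average per-user discovery rate L of roughly 31%. The short Python sketch below simply tabulates that curve; the 0.31 value is the commonly cited average, not a figure reported in this paper.

# Expected proportion of usability problems found by n evaluators,
# assuming an average per-user discovery rate of about 31% (the figure
# commonly cited from Nielsen and Landauer; illustrative, not from this paper).
DISCOVERY_RATE = 0.31

def proportion_found(n, rate=DISCOVERY_RATE):
    return 1 - (1 - rate) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems")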

The System Usability Scale (SUS) mentioned above is a very short, "quick and dirty" [36], and freely available usability evaluation questionnaire widely used for measuring usability. It is recognised as a robust tool [37], even for small sample sizes [38]. A comparison of five different tools (SUS, QUIS, CSUQ and two vendor-specific ones) has shown that, along with the CSUQ, the SUS achieved the goal of providing a reliable measure across a range of sample sizes the quickest [38]. This tool comprises 10 items (5 positively and 5 negatively worded statements), which are assessed using a five-point Likert scale ranging from strongly disagree to strongly agree. A survey by [39] listed 14 products for which the SUS questionnaire was used for testing usability and proposed a further 9. Although e-learning platforms were not included in either of their lists under the category of web products, e-learning platforms can also be assessed using SUS like other web products.
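For reference, the standard SUS scoring procedure converts the ten responses (each on a 1-5 scale) into a single 0-100 score: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the summed contributions are multiplied by 2.5. A minimal Python sketch, with made-up responses:

def sus_score(responses):
    """Compute the 0-100 SUS score from ten Likert responses (1-5).

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# One participant's (made-up) responses to items 1-10.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0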

3.5 LMS Usability Evaluation

The SUS has been used for evaluating the usability of an LMS by [40], who evaluated the SPIRAL platform; [41], who used it to evaluate a distributed learning resource repository called DELTA; [42], who used it to measure user satisfaction of three edutainment platforms; [43], who evaluated the UNITE e-learning platform in nine schools that used it; [44], who used it for evaluating the usability of a Moodle based VLE in conjunction with heuristic and cooperative evaluation; [45], who assessed the Topolor system, which combines social e-learning with adaptive e-learning; [46], who used it to assess the perceived usability of a simulation based e-learning system; and in eleven studies conducted by [24]. These latter studies together involved 769 students, in which eClass and Moodle were evaluated. The 'perceived usability' of these LMSs was found to be satisfactory, especially in terms of validity and reliability. In the study by [44], all three tools identified the existence of usability issues.

Reference [47] developed an instrument for evaluating the usability of an LMS with respect to its user interface features, its web application features, and other features specifically related to the LMS. The LMS evaluated was an in-house system designed around the time Moodle was made, and the heuristic evaluation of Nielsen was applied, which involved identifying its functioning and optimum operating conditions. In addition, they checked for its compliance with international standards: ISO 9241, based on the criteria of efficiency, effectiveness and satisfaction, and ISO 9126, based on learnability, operability and comprehensibility. This led to devising six questions for evaluation, from which the following essential attributes were identified: searchability, communicability, reliability, configurability, design, comprehensibility, ease of use, and navigability. Six experts were hired to evaluate the system. This method enabled serious usability problems to be uncovered, especially in terms of reliability in the form of frequent interruptions, error messages and failures, but also communicability, searchability and configurability. The software scored highest in terms of comprehensibility.

Reference [10] compared a range of open source LMSs based on a simple evaluation of technical factors and features to find one suitable for a higher education institution. Commercial ones were not included because the objective was to find a cost effective solution, but there was also a requirement to have all the major features expected to be found in a commercial product. Five open source LMSs were selected from fifty on UNESCO's website: Moodle (v.2.2), ATutor (v.2.0.3), Ilias (v.4.2.1), EFront (v.3.6.10) and Claroline (v.1.10). All five of them are themselves based on open source technologies, namely PHP, a server-side scripting language, and MySQL, a database management system. The analysis was undertaken by creating courses, updating their contents, and creating learning activities. In terms of usability, all of them were found to be easy to use. Three of them – Moodle, ATutor and Ilias – excelled with respect to providing flexible language support, usage statistics, and an advanced assessment system; EFront stood out alone by having the most visually attractive interface, and Moodle by providing the ability to track user logging, rich graphical statistics of activities and reports, and more advanced access and security controls. Moodle was also found to have certain features not present in the other LMSs at the time of comparison, such as support for external video conferencing, file transfer, and whiteboard tools that can be integrated in the LMS. Since their first recommendation was Moodle, more details of this LMS are given below.

3.6 Moodle

Moodle (Modular Object-Oriented Dynamic Learning Environment) is a web-based course management system that is recognised as a popular LMS. It is available for free and is based on open-source technologies, which allows it to be easily modified and adapted. Moodle provides a wide range of features typical of an LMS, such as a registration system for both instructors and learners, user profiles, assignments, class schedules, wikis, chats, a glossary, email, performance statistics, etc. Several studies have evaluated Moodle, some focusing on specific modules [48]-[49], [27], and others have compared Moodle with other LMSs [50]-[51].

Moreover, Moodle is designed to provide a collaborative learning environment based on the pedagogical principles of social constructivism, has multi-lingual support, and supports the SCORM (Sharable Content Object Reference Model) and IMS (Instructional Management System) open standards for an LMS. Support for these open standards is beneficial because it ensures greater interoperability, portability, reusability and sequencing for the LMS [52], and also accessibility, adaptability, durability, and maintainability [27]. The SCORM specification, for instance, contains elements of provisions by IEEE, AICC and IMS in a single document, which makes it easy to implement.

3.7 Consideration of Other Factors

From the perspective of project management, many other factors would also need to be considered, besides those already considered above (namely price, features and technical factors), in deciding which open source LMS to implement. In the case of an institution, for instance, it should ensure the LMS is in line with its vision and mission, and that it can easily handle the number of potential users [53].

A study by [53] highlights examples of such challenges in implementing an e-learning system faced by a university in Saudi Arabia, which they ascertained through adopting a case study approach. Their findings show that various issues can arise that must therefore be considered beforehand. These issues include the time it would take to develop online courses from scratch, the availability and sustainability of human resources, and uncertainties related to technology. Technological uncertainty and the issue with human resources can also change over time, which may require changes in the implementation.

Apart from arranging for the e-learning system in an attempt to improve the quality of student learning, and in a way that it could evolve in line with educational change processes, they also had to convince the institution to adopt the technology and overcome the initial "course of techno-hype" [53], which was challenging and necessitated a number of changes to the original plan. These changes included adopting Moodle instead to benefit from better specifications, providing improved interactivity, and meeting the requirements of the Saudi MoHE (Ministry of Higher Education) to blend the course with face-to-face teaching. There may also be legal issues to consider, as currently, Saudi Arabia does not permit the offering of online degrees.

Furthermore, cultural issues also need to be considered, as these were found to present a major challenge in the case of Saudi Arabia. Some faculty members found it difficult to adapt to the new way of teaching, and some students experienced difficulties. Difficulties for students were not exclusive to Saudi students, as the phenomenon has also been reported by others, such as [54], and by [55] for Jordanian students. Overcoming such cultural issues would require training, which can be time consuming and costly. In some countries, other issues peculiar to that place may arise. For instance, when [1] investigated the implementation of an e-learning system in a university in Pakistan, English proficiency and electricity failure were found to be the most significant barriers. However, some LMSs such as Moodle do have multilingual support, and the problem of frequent power outages only restricts synchronous learning, not asynchronous learning.

For developing countries generally, limited resources and a lack of technical expertise can be restricting factors to proper implementation, as can the cost of technology, deficient strategies, resistance to change, poor course delivery and competition [56]. As pointed out by [57], most developing countries lack quality experts to implement and maintain information and communication technologies. A lack of computer skills is not only prevalent in developing countries, however, as it is also not uncommon in developed countries.

Reference [58] identified four barriers to adopting an LMS in a university in New Zealand: computer skills, conceptions of an LMS's role, the desire to recreate old work practices, and emotions. Computer skills were highlighted as barriers, a 'cognitive load' and a hindrance to mastering the functionality of an LMS. Difficulties were experienced especially by staff who lacked even basic computer skills, and the researchers also faced issues relating to getting users to interact with each other. The study also showed the potential of technology to arouse negative emotions of frustration and a reluctance to change. Consequently, there was a tendency to replicate some past practices to ease the LMS adoption process. For instance, users preferred to perform certain tasks the way they were previously accustomed to doing them. Nonetheless, the LMS implementation was successful overall, as the institution was praised for its strong technical support, which shows the importance of this factor in making an implementation successful. Other such enabling factors identified were learning from the implementations of other institutions, and allowing for a feedback mechanism.

The above study highlights the importance of taking the users into account. A study by [59] specifically analysed how learners interact with an e-learning site by means of a survey, with a view to assessing the effectiveness of e-learning material used by a particular Indian e-learning portal and to predicting the acceptability of the adaptive environment for learners. The learners reported perceiving e-learning as valuable due to its role in reducing the time, effort and money involved in retrieving information. Moreover, a vast majority of the 146 students surveyed expressed satisfaction with the e-learning environment due to the aforementioned and various other benefits, such as convenience, scope for interaction, ease in seeking clarification, etc. User satisfaction is an important component of usability, and factors such as the usefulness and ease of use perceived by users make it easier for them to accept the technology and new way of learning.

3.8 The Technology Acceptance Model

A useful theoretical framework that can be applied for understanding the likely acceptance and usage of an LMS implementation is the Technology Acceptance Model (TAM), an information systems theory proposed by [60] that takes into account perceived usefulness and perceived ease of use. That is, the usefulness and ease of use perceived by users are considered to be predictors of the attitude towards accepting a technology. Reference [61] used this model for the e-learning context in an empirical study involving students from a university in Taiwan. They found it to have good internal consistency, and that the more useful the system is perceived to be and the easier or friendlier the system interface is to use, the more willing users indeed are to use it. Likewise, user satisfaction is closely linked with active participation and commitment [62].
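As a rough illustration of how the two TAM constructs are commonly related to intention in such studies, the sketch below fits an ordinary least-squares regression of behavioural intention on perceived usefulness and perceived ease of use; the per-respondent construct averages and variable names are fabricated for illustration and are not data from the studies cited here.

import numpy as np

# Hypothetical per-respondent construct averages on a 1-5 Likert scale:
# perceived usefulness (PU), perceived ease of use (PEOU), behavioural intention (BI).
pu   = np.array([4.2, 3.1, 4.8, 2.5, 3.9, 4.5, 2.8, 3.6])
peou = np.array([3.8, 2.9, 4.5, 2.2, 3.5, 4.7, 3.0, 3.3])
bi   = np.array([4.0, 3.0, 4.9, 2.3, 3.7, 4.6, 2.9, 3.4])

# Ordinary least squares: BI ~ intercept + b1*PU + b2*PEOU
X = np.column_stack([np.ones_like(pu), pu, peou])
coeffs, *_ = np.linalg.lstsq(X, bi, rcond=None)
intercept, b_pu, b_peou = coeffs
print(f"BI = {intercept:.2f} + {b_pu:.2f}*PU + {b_peou:.2f}*PEOU")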

In an adaptation of TAM specifically for e-learning systems, [63] combined the model with Innovation Diffusion Theory (IDT) to investigate factors that may affect the behavioural intentions of 552 business employees in Taiwan to use an e-learning system. The study validated both TAM and IDT for the organisational context as providing better results overall when combined. Although the results confirmed the research model and hypotheses, those who had expectations for the system to be usable, in terms of it being simple to understand and easy to use, were disappointed, as it did not prove helpful in improving job performance. To address these findings, they therefore recommended that e-learning systems be designed in a way that is relevant to employees' needs and user-friendly, to enhance the perception of ease of use. This confirms the view that the two constructs of usefulness and ease of use are important factors in determining the extent to which an e-learning system would be accepted.

4 CONCLUSION

A number of software usability evaluation methods were identified in this paper that can be adopted for evaluating the usability of a learning management system. The paper also defined an LMS, identified its characteristics, examined some issues in evaluating an LMS, described usability and identified its components relevant to an LMS, established the importance of considering usability, and highlighted other related factors to consider in implementing an LMS. The Technology Acceptance Model was also introduced as a theoretical framework for understanding technology acceptance in terms of perceived usefulness, and perceived ease of use as one important aspect of usability.

Given that many universities in Saudi Arabia have only been established in recent years, the finding that many existing universities use a proprietary LMS, and the neglect of usability considerations, there is plenty of scope for promoting both an awareness of the importance of usability and the adoption of open source based learning management systems for supporting e-learning. It is recommended to conduct a thorough investigation of usability issues related to the use of an LMS. Moreover, the newness of the technology and the finding that some universities are not currently using any LMS show the potential for applying the Technology Acceptance Model in further research in this context.

5 REFERENCES

1. I. A. Qureshi, I. Khola, R. Yasmin & M. Whitty. "Challenges of implementing e-learning in a Pakistani university." Knowledge Management & E-Learning: An International Journal, vol.4, no.3. 2012.

2. R. J. Epping. "Innovative Use of Blackboard [R] to Assess Laboratory Skills." Journal of Learning Design, vol.3, no.3, pp.32-36. 2010.

3. M. E. C. Núñez. "Tendencias en el diseño educativo para entornos de aprendizaje digitales." Revista Digital University, vol.5, no.10, p.68. 2004.

4. R. Alebaikan & S. Troudi. "Blended learning in Saudi universities: challenges and perspectives." ALT-J, Research in Learning Technology, vol.18, no.1, pp.49-59. 2010.

5. SAGIA. (2014). "ICT. Saudi Arabia General Investment Authority." [Online]. Available: http://www.sagia.gov.sa/en/Key-sectors/ICT.

6. A. Alahmari & L. Kyei-Blankson. "Adopting and implementing an e-learning system for teaching and learning in Saudi public K-12 schools: the benefits, challenges and concerns." World Journal of Educational Research, vol.3, no.1, pp.11-32. 2016.

7. R. Lanzilotti, C. Ardito, M. F. Costabile & A. De Angeli. "eLSE methodology: a systematic approach to the e-learning systems evaluation." Educational Technology & Society, vol.9, no.4, pp.42-53. 2006.

8. E. P. Rozanski & A. R. Haake. "Curriculum and content: The many facets of HCI." Paper presented at the 4th Conference on Information Technology Curriculum on Information Technology Education, Lafayette, Indiana, USA. 2003.

9. A. Inversini, L. Botturi & L. Triacca. "Evaluating LMS usability for enhanced elearning experience." In World Conference on Educational Multimedia, Hypermedia and Telecommunications, vol. 2006, no.1, pp.595-601. 2006.

10. S. Alshomrai. "Evaluation of technical factors in distance learning with respect to open source LMS." Asian Transactions on Computers, vol.2, issue 1, pp.11-17. 2015.

11. J. Nielsen. "Enhancing the explanatory power of usability heuristics." CHI'94 Conference Proceedings. 1994.

12. S. Roy & P. K. Pattnaik. "Some popular usability evaluation techniques for websites." Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications. Advances in Intelligent Systems and Computing, vol.247, pp.535-543. 2014.

13. A. Al-Badi, S. Ali & T. Al-Balushi. (2012). "Ergonomics of usability/accessibility-ready websites: tools and guidelines." Webology, vol.9, no.2. Available: http://www.webology.org/2012/v9n2/a98.html.

14. L. L. Constantine. (2011). Software for use: A practical guide to the models and methods of usage-centred design. Addison-Wesley Professional.

15. B. Randall, J. Sweetin & D. Steinbeiser. "Learning Management System Feasibility Study." North Carolina Community College System Office: Learning Technology Systems. 2010.

16. V. Venkatesh, M. G. Morris, G. B. Davis & F. D. Davis. "User Acceptance of Information Technology: Toward A Unified View." MIS Quarterly, vol.27, no.3, pp.425-478. 2003.

17. S. Shilwant & A. Haggarty. (2005). "Usability Testing for E-Learning." Available: http://www.clomedia.com/content/templates/clo_article.asp?articleid=1049.

18. M. F. Costabile, M. De Marsico, R. Lanzilotti, V. L. Plantamura & T. Roselli. "On the Usability Evaluation of E-Learning Applications." Proceedings of the 38th Hawaii International Conference on System Sciences, ACM Press: New York, pp.1-10. 2005.

19. A. Rukshan & A. Baravalle. "A quantitative approach to usability evaluation of web sites." Advances in Computing Technology, London, United Kingdom. 2011.

20. E. Sung & R. Mayer. "Affective impact of navigational and signaling aids to e-learning." Computers in Human Behavior, vol.28, pp.473-483. 2012.

21. S. Thuseethan & S. Kuhanesan. "Effective Use of Human Computer Interaction in Digital Academic Supportive Devices." International Journal of Science and Research, vol.3, no.6, pp.388-392. 2014.

22. M. Novak, M. Binas & F. Jakab. (2010). "Contribution of human-computer interaction in usability and effectiveness of e-learning systems." 8th International Conference on Emerging eLearning Technologies and Applications, held in the High Tatras, Slovakia, on 28-29 October 2010.

23. N. Tselios, N. Avouris & V. Komis. "The effective combination of hybrid usability methods in evaluating educational applications of ICT: Issues and challenges." Education and Information Technologies, vol.13, no.1, pp.55-76. 2008.

24. K. Orfanou, N. Tselios & C. Katsanos. "Perceived usability evaluation of learning management systems: Empirical evaluation of the system usability scale." International Review of Research in Open and Distributed Learning, vol.16, no.2, pp.227-246. 2015.

25. K. Kruse. (2002). "E-Learning and the Neglect of User Interface Design." Available: E-LearningGuru.com.

26. C. J. Mueller. "An economical approach to usability testing." 33rd Annual IEEE International Computer Software and Applications Conference, 2009, pp.124-129.

27. G. Kakasevski, M. Mihajlov, S. Arsenovski & S. Chungurski. "Evaluating usability in learning management system Moodle." Proceedings of the ITI 2008 30th International Conference on Information Technology Interfaces, held in Cavtat, Croatia on 23-26 June. 2008.


28. J. Preece, Y. Rogers & H. Sharp. "Interaction design: Beyond human-computer interaction." Hoboken, NJ: John Wiley & Sons. 2002.

29. J. Sauro & J. Lewis. "Quantifying the user experience: Practical Statistics for User Research." MA, USA: Elsevier Inc. 2012.

30. M. W. M. Jaspers. "A comparison of usability methods for testing interactive health technologies: Methodological aspects and empirical evidence." International Journal of Medical Informatics, vol.78, pp.340-353. 2009.

31. M. T. Boren. "Thinking aloud: Reconciling theory and practice." IEEE Transactions on Professional Communication, vol.43, no.3, pp.261-278. 2000.

32. A. Blecken, D. Bruggemann & W. Marx. (2010). "Usability evaluation of a learning management system." Proceedings of the 43rd Hawaii International Conference on System Sciences, pp.1-9.

33. A. Sivaji. "Usability testing methodology: effectiveness of heuristic evaluation in e-government website development." 2011 Fifth Asia Modelling Symposium, held in Kuala Lumpur on 24-26 May 2011, pp.68-72.

34. J. O. Bak, K. Nguyen, P. Risgaard & J. Stage. "Obstacles to usability evaluation in practice: a survey of software development organizations." Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges, held in New York, pp.23-32. 2009.

35. R. Stanley & P. Kurtz. (2011). "Usability testing: A key component in e-learning design." Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2011, held in Honolulu, Hawaii, USA, on 18 October 2011.

36. J. Brooke. "SUS: a "quick and dirty" usability scale." In P. W. Jordan, B. Thomas, B. A. Weerdmeester & A. L. McClelland (Eds.), Usability evaluation in industry. London: Taylor and Francis. 1996.

37. J. Sauro & J. Lewis. "Quantifying the user experience: Practical Statistics for User Research." MA, USA: Elsevier Inc. 2012.

38. T. Tullis & J. Stetson. "A comparison of questionnaires for assessing website usability." In Usability Professionals Association (UPA) 2004 Conference, pp.7-11. 2004.

39. P. Kortum & A. Bangor. "Usability ratings for everyday products measured with the system usability scale." International Journal of Human-Computer Interaction, vol.29, no.2, pp.67-76. 2013.

40. C. Renaut, C. Batier, L. Flory & M. Heyde. "Improving web site usability for a better e-learning experience." Current Developments in Technology-Assisted Education, pp.891-895. 2006.

41. G. Venturi & N. Bessis. "User-centred evaluation of an e-Learning repository." In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, pp.203-211. New York, NY. 2006.

42. K. Ayad & D. Rigas. "Comparing virtual classroom, game-based learning and storytelling teachings in e-learning." International Journal of Education and Information Technologies, vol.4, no.1, pp.15-23. 2010.

43. A. Granic & M. Cukusic. "Usability testing and expert inspections complemented by educational evaluation: A case study of an e-Learning platform." Educational Technology & Society, vol.14, no.2, pp.107-123. 2011.

44. A. P. Simões & A. de Moraes. "The ergonomic evaluation of a virtual learning environment usability." Work: A Journal of Prevention, Assessment and Rehabilitation, vol.41, pp.1140-1144. 2012.

45. L. Shi, M. S. K. Awan & A. I. Cristea. "Evaluating system functionality in social personalized adaptive e-Learning systems." Scaling up Learning for Sustained Impact, vol.8095 of Lecture Notes in Computer Science, pp.633-634. 2013.

46. G. H. Luo, E. Z. F. Liu, H. W. Kuo & S. M. Yuan. "Design and implementation of a simulation-based learning system for international trade." The International Review of Research in Open and Distance Learning, vol.15, no.1. 2014.

47. R. Medina-Flores & R. Morales-Gamboa. "Usability evaluation by experts of a learning management system." IEEE Revista Iberoamericana De Tecnologias Del Aprendizaje, vol.10, no.4, pp.197-203. 2015.

48. J. Melton. "The LMS Moodle: A usability evaluation." Languages Issues, vol.11/12, no.1, pp.1-24. 2006.

49. M. Debevc, P. Povalej, M. Verlič & Z. Stjepanovič. "Exploring Usability and Accessibility of an E-Learning System for Improving Computer Literacy." ICTA, April 12-14, Hammamet, Tunisia. 2007.

50. M. Matteo. "Evaluation of Collaborative Tools in Web-Based E-Learning Systems." Master's Degree Project, Stockholm, Sweden. 2005.

51. D. Bremer & R. Bryant. "A comparison of two learning management systems: Moodle vs Blackboard." Proceedings of the 18th Annual Conference of the National Advisory Committee on Computing Qualifications. 2005.

52. ADL. (2016). "SCORM overview." Advanced Distributed Learning. [Online]. Available: https://www.adlnet.gov/adl-research/scorm/.

53. A. M. Zabadi & H. A. D. Amnah. "A conceptual framework of the implementation of e-learning in University of Business and Technology." International Journal of Scientific and Research Publications, vol.6, issue 6, pp.209-216. 2016.

54. J. O'Donoghue, G. Singh & D. Handy. "Higher education – IT as a catalyst for change." On the Horizon, vol.11, no.3, pp.23-28. 2003.

55. S. Al-Jaghoub, Y. Hussein, M. Hourani, R. Al-Haddadeh & M. Salim. "E-learning adoption in higher education in Jordan: vision, reality and change." European and Mediterranean Conference on Information Systems, held on July 13-14 at the Crowne Plaza Hotel, Izmir. 2009.

56. F. Elloumi. "Value chain analysis: A strategic approach to online learning." In A. Anderson & F. Elloumi (Eds.), Theory and practice of online learning, pp.61-92. Athabasca, Canada: Athabasca University. 2004.

57. J. K. Bakari, C. N. Tarimo, L. Yngstrom & C. Magnusson. "State of ICT security management in the institutions of higher learning in developing countries: Tanzania case study." Paper presented at the Fifth IEEE International Conference on Advanced Learning Technologies (ICALT'05). 2005.

58. Q. Liu & S. Geertshuis. "Explorations in learning management system adoption." Proceedings of The Fifth International Conference on E-Learning and E-Technologies in Education, held in Malaysia. 2016.

59. R. Mahajan & V. Mahajan. (2015). "Real time analysis of attributes of an Indian e-learning site." The International Journal of E-Learning and Educational Technologies in the Digital Media, vol.1, no.2, pp.109-114.

60. F. D. Davis. "Perceived usefulness, perceived ease of use, and user acceptance of information technology." Management Information Systems Quarterly, vol.13, no.3, pp.319-340. 1989.

61. C. Feng-Peng, W. Kai-Hsuan & L. I-Chun. "Measuring the adoption and resistance of e-learning by students." International Conference on E-Technologies and Business on the Web, held in Bangkok, Thailand on 7-9 May 2013, pp.247-249.

62. R. Klamma, M. A. Chatti, E. Duval, H. Hummel, E. H. Hvannberg, M. Kravcik, E. Law, A. Naeve & P. Scott. "Social software for life-long learning." Educational Technology & Society, vol.10, no.3, pp.72-83. 2007.

63. Y-H. Lee, H. Yi-Chuan & H. Chia-Ning. "Adding Innovation Diffusion Theory to the Technology Acceptance Model: Supporting employees' intentions to use e-learning systems." Educational Technology & Society, vol.14, no.4, pp.124-137. 2011.

64. MOE. (2016). "LMS platforms in Saudi universities." Ministry of Higher Education, Kingdom of Saudi Arabia. Available at http://www.moe.gov.sa/ar/Pages/default.aspx (accessed December 2016).

65. N. Habibi. (2015). "Is Saudi Arabia training too many graduates?" University World News, issue 441, 26 December 2015. Available at http://www.universityworldnews.com/article.php?story=20150714013422488 (accessed December 2016).

66. L. McAnally-Salas, M. Cabrera-Flores, J. Organista-Sandoval et al. (2016). "Influential agents in the online education diffusion at a Mexican university: What the social network analysis tell us." The International Journal of E-Learning and Educational Technologies in the Digital Media, vol.2, no.1, pp.17-24.


Interpreting the Experiences of Teachers Using Educational Online Technologies to Interact with Content in Blended Tertiary Environments: A Phenomenological Study

Kimberley Tuapawa
University of Newcastle
Callaghan, NSW, Australia
[email protected]

ABSTRACT

Although educational online technologies (EOTs) have enhanced the dissemination of learning in higher education, key EOT obstacles have hindered their effectiveness, preventing widespread implementation. The persistence of these obstacles suggests that tertiary education institutes (TEIs) have experienced difficulties in understanding their key stakeholders' EOT needs. This research made an interpretation of key stakeholders' EOT experiences, to establish their existing EOT needs and challenges, and provide a foundation from which to recommend methods for effective EOT support. It analysed the experiences of 10 students and 10 teachers from New Zealand and Australia and interpreted the meanings of these phenomena through an abstraction of local and global themes. This paper is the sixth in a series of six publications that present the local themes. It documents the interpretations of teachers' experiences with content, in reference to their use of two types of EOTs: learning management systems, and online video platforms. These interpretations, which include descriptions of teachers' EOT challenges, helped to inform a set of recommendations for effective EOT use, to assist TEIs in their efforts to address EOT challenges and meet their stakeholders' needs.

KEYWORDS

Tertiary education, blended learning, online technology, student experiences, phenomenology

1 INTRODUCTION

Educational online technologies (EOTs) have revolutionised the delivery of online education, making a significant contribution to the global increase in demand for higher learning. In an era of considerable online growth, their rapid emergence, adoption and demand have engendered significant advances across the higher education sector. Traditional classroom spaces have evolved into dynamic blended tertiary environments (BTEs), providing tertiary education institutes (TEIs) with a modern means through which to augment course delivery. These transformations signal exciting prospects for teachers and students, the key stakeholders in BTEs.1

Despite the growth and demand for technology-based learning, considerable obstacles impede the use of EOTs. Such challenges include, but are not limited to attitudinal pre-dispositions, insubstantial training, and inadequacies in instructional design support [2]. Other challenges include resistance to change, ineffective EOT usage, lack of motivation, technical constraints, and accessibility [3]. These challenges pose a clear risk to the future success of BTEs [4], and create difficulties for stakeholders as they deliver and engage in learning.

Significant efforts have been made to learn more about EOT challenges. These have resulted in considerable subject-specific research, with varied and noteworthy contributions to the literature. Some studies have considered technology integration into blended environments [5], technology to support institutional roles [6], barriers to adoption of online learning [7], and the needs of online students [8]. However, while “our research foundation is rich” [9], not all problems have been adequately identified and addressed.

The continuation of these challenges suggests that TEIs have experienced difficulties in understanding their key stakeholders' EOT needs. Over time, these needs have evolved, and in an environment of rapid technological change have not been addressed effectively. With their operations based in a dynamic environment, TEIs must maintain relevance by evolving and adapting to meet their stakeholders' needs. However, doing this effectively requires that they have sound, up-to-date understandings of their stakeholders' EOT challenges, to deliver relevant and meaningful support.

1 Predictions about future growth, along with forecasts for EOT use, are discussed in the first of these six papers [1].

Through a phenomenological approach, this research aimed to interpret key stakeholders’ EOT experiences, establish their existing EOT needs and challenges, and recommend methods for effective EOT support. Using a 5-step qualitative analysis of data, it analysed the EOT experiences of ten students and ten teachers, categorised these to reflect the nature of their interactions with other key entities and then interpreted their meanings through an abstraction of local and global themes. The global themes delivered a broad set of interpretations about the meaning of key stakeholders’ experiences with other students, other teachers and content, and the local themes developed meanings that were specific to their use of distinct EOTs.

This paper is the sixth in a series of six publications that present the local themes of this research, through written interpretations that describe the meaning of the phenomena. It documents teachers’ EOT experiences with content, in reference to their use of two different EOTs: Learning management systems (LMS) (Blackboard), and online video platforms (YouTube). Included in its interpretations are descriptions of stakeholders’ EOT challenges. These delivered a realistic portrayal of the phenomena to help strengthen knowledge about stakeholders’ needs. The interpretations helped to inform a set of recommendations for effective EOT use in teacher-to-content interactions. They were designed to assist TEIs to adapt to meet their stakeholders’ needs by providing a basis from which to tackle EOT challenges and deliver support.

To lay a sound basis for this phenomenological study, the author undertook preliminary research, which clarified and verified issues from the literature, and created a basis for the selection of participants. It identified EOTs in BTEs [14], produced a classification system for EOTs [15][16], identified key stakeholders in BTEs [17], identified the EOT challenges of key stakeholders [3] and discussed a key challenge (resistance to change) in using EOTs [18].

2 METHODOLOGY

The analysis of this data was guided by the methodology of interpretive phenomenology. It aimed to make an interpretation of the meanings of stakeholders' experiences [39]; [40]; [41]. Linked to the principles of Heideggerian philosophy [42], this analysis of experience [29] abstracted themes from students' and teachers' experiences into a range of interpretations, to illuminate the phenomena [40] of EOT activity. This choice of methodology was influenced by the research aim, to interpret key stakeholders' EOT experiences in BTEs, and by the key research questions [43]: What were the EOT experiences of key stakeholders in BTEs? and What interpretations could be made from their meanings? It was also influenced by the researcher's "interest in the meaning of a phenomenon as it [was] lived by other subjects" [27].

A group of ten students and ten teachers from TEIs in New Zealand and Australia were chosen as participants using a purposive sampling strategy [41]. This ensured that the data would be gathered from those with first-hand experiences of the phenomena [44]. The rationale for this number was based on literature about qualitative and phenomenological research. Nicholls [28] for example, explained that “phenomenological studies … commonly use[d] as few as five … participants” (p. 639). Rawat [45] also stated that usually “four or five respondents” were chosen for such interviews. It was on this basis that 20 participants were chosen [27]; [28]; [39].


Further criteria were set in the selection of participants. To be interviewed, teachers had to be on full-time tenure with an accredited TEI, delivering a course in a blended learning modality. Students had to be aged 18 years or older, enrolled full time with an accredited TEI and in a course delivered in a blended learning modality. Teachers were identified from TEI website profiles of staff teaching in New Zealand or Australia. Students were identified with the help of a staff member at each TEI. Invitations sent out stated that participation was voluntary.

The rationale for the selection of only teachers and students was based on a study by the author [17], which identified key stakeholders in BTEs. In this study, students were recognised as key stakeholders because of the requirement for them to “buy into” blended learning, “participate fully, and be convinced” of its value [17]. Teachers were acknowledged as key stakeholders due to their direct involvement in the teaching and learning process and their every-day focus on and influence over learning activity.

The phenomenological interviews followed a semi-structured format and were conducted using web-based conferencing technology (Skype) and recorded using Pamela software. Participants set aside approximately 45 minutes to engage [46] and were asked a set of 27 questions. They responded with first-hand narratives [35]; [47]; [44] of their EOT experiences, which included descriptions about their use of different EOTs to interact with various key entities (students, teachers and content). The situational aspects of their descriptions were crucial to the study, since understandings of a phenomenon [i.e. EOT use] had to be “connected to a specific context in which the phenomenon [had been] experienced” [i.e., a BTE] [27].

To encourage a candid portrayal of the phenomena, the questions were developed to draw out experiences that included descriptions of stakeholders' EOT challenges. Probes were used to clarify and encourage participants' in-depth explanations [48]; [37]; [44]. The questions were also framed to encourage their recollections of encounters with different key entities. These types of encounters were based on the classification-by-interaction taxonomy augmented by Culatta [19] and the original classification proposed by Moore [20], which categorised technologies by the relationship between learners and other entities. The first three interaction types of the original taxonomy were learner to expert, learner to learner, and learner to content. Culatta [19] presented a fourth category: learner to context. Tuapawa, Sher, and Gu [15][16] recommended a fifth category: learner to media. These categories were adapted to interviews with teachers, as follows: (1) teacher to student, (2) teacher to teacher, (3) teacher to content, (4) teacher to context, and (5) teacher to media. The use of a relationship-based taxonomy for arranging the questions helped refine stakeholders' experiences into recognisable EOT interactions. It revealed distinctions between the phenomena in different key relationships, and established a structure through which to arrange the themes or meanings of the phenomena [44]. Table 1 outlines the questions asked of teachers about their EOT experiences with content.

Table 1 Interview questions

Interaction type: Teacher-to-content
(a) Describe an experience in which you used an EOT in a teacher-to-content interaction while teaching in a BTE.
(b) Did you face issues or challenges using the EOT? Explain.
(c) What do you think would be a solution to this issue?
(d) What do you think would have helped you make more meaningful use of this EOT?
(e) Did you experience benefits in using this EOT? Explain.

Recordings of the interviews were transcribed using pre-formatted templates. This process enabled the researcher to become deeply familiar with the content [49] and prepare it for analysis.


Yin's [50] five phases of qualitative analysis, compiling, disassembling, reassembling, interpreting, and concluding, were used to structure and conduct the analysis. Table 2 shows the connection between these phases and the techniques used.

Table 2 Phases of qualitative analysis vs phenomenological research techniques

Stage 1, Compiling: data transcripts imported and arranged.
Stage 2, Disassembling: data coded with nodes.
Stage 3, Reassembling: memos used to build understandings of EOT phenomena.
Stage 4, Interpreting (thematic analysis and interpretation): themes abstracted; meanings of phenomena described through written interpretations.
Stage 5, Concluding (conclusions and recommendations): recommendations made for effective EOT use to support stakeholders' needs.

NVivo software [21] was used to import, compile, and organise the interview transcripts into an organised structure [50]. These data were disassembled and coded, and separated into categories that matched the interview questions. These categories represented the data clearly and enabled them to be assigned, referenced and held in manageable groupings [22]. Table 3 shows the link between the labels used for coding and the questions.

Table 3 Nodes linked to teacher interview questions

Node 1, Teacher-to-content, Q1a: Describe an experience in which you used an EOT in a teacher-to-content interaction while teaching in a BTE.
Node 2, Teacher-to-content, Q1b: Did you face issues or challenges using the EOT in this case? Explain.
Node 3, Teacher-to-content, Q1c: What do you think would be a solution to this issue?
Node 4, Teacher-to-content, Q1d: What do you think would have helped you make more meaningful use of this EOT?
Node 5, Teacher-to-content, Q1e: Did you experience benefits in using this EOT? Explain.

The data were reassembled and moved from the nodal position into analytic memos [50], where they were used to elaborate ideas [21] and develop understandings about the phenomena [23]. Finally, the data were subjected to a thematic analysis, which involved an abstraction of local and global themes [39]. In this process, the essential meanings of the phenomena were discovered through engagement with the descriptions of the experiences [44]. These were written into a series of interpretations to illuminate the phenomena [40] of EOT use. The global themes developed broad interpretations of the phenomena, whereas the local themes derived meanings from the use of individual EOTs.

The example in Figure 1 demonstrates how the data were gathered, transcribed, sorted and coded using nodes, refined into a teacher-to-content based memo and interpreted through an analysis of themes. These provided the foundation for the discussion of results in this paper.

Figure 1: Process of data analysis


3 DISCUSSION OF RESULTS

This section discusses the local themes that were abstracted from teachers' EOT experiences with content.2 They are delivered as a series of written interpretations of teachers' lived experiences [40]; [44] and organised into two sections based on the EOT types teachers had identified: learning management systems (LMS) (Blackboard), and online video platforms (YouTube). Each section includes a description of the EOT brand exemplar, and an interpretation of teachers' experiences, which includes their comments on EOT issues, challenges, usage, and solutions. The labels used to describe the EOT types are based on the Pentexonomy [15][16], a robust, contextualised and multi-dimensional framework for categorising EOTs.3

EOT: Learning management system Example: Blackboard

Description Blackboard is a comprehensive and flexible e-Learning software platform that provides a complete course and learning management system [24]. It can serve as a 'repository' for learning resources, or be used in more innovative ways such as 'an e-learning portal around a particular programme of work or sets of activities' [14].

Experiences Teachers' experiences using Blackboard to interact with content revealed that their EOT activity involved editing materials, providing links to resources, and marking and uploading assessment material. Negative views of Blackboard involved problems with usability and technical issues, large file sizes, and copyrighted content. One teacher described Blackboard as the 'central [means] to formalising content', and commented on its value for disseminating materials. 'I like being able to take content … and upload [it] to Blackboard,' he said, 'because then … [it exists] in a virtual area that everyone can access.' Another teacher used Blackboard as an upload point for his lecture recordings, and as a repository for slides that students would later use for course revision. 'You haven't got anything locally on your computer … it's all on Blackboard.' This meant that teachers '[didn't] have to download anything to [their] computer, [they could] do it all online', which '[was] handy'. One teacher adapted her content to suit both distance students and on-campus students, so that 'all students' had access to 'all the material every week that [they had] lectures.' Another 'put material up … which [was] fairly low quality' to ensure that 'students who [were] out in the wild, and might not have a very good [internet] connection' could access the learning materials. Some uploaded content that included 'PowerPoints and mp3 recordings of the lectures'. Others made 'tutorial links' available, and some developed content that instructed students 'where they should be at' in their course progress.

2 This discussion also includes a small amount of data from interviews with blended learning experts, some of whom were teachers.
3 It is important to note that the views expressed by participants reflected the state of development of software at a particular point in time, the ways in which it was implemented and maintained, and the manner in which it was used. Notwithstanding these realities, much was gained from their comments.

Using Blackboard to edit and maintain content meant 'there was no printing out … no formatting' because 'it was [all] there online'. For teachers interacting with assessment content, Blackboard also provided an efficient method for 'online marking', which '[made] it easier'. 'You [could] mark and give feedback online, without having to put anything down on paper.' Explaining the ease in doing this, one teacher said 'it's there online, [you] put the marks in, put comments in' … and 'then at the end of the course, you download the spreadsheet, and can get [access] to all the marks.' Pleased with Blackboard's efficient assessment methods, one teacher stated that it was 'just easier to mark online.' Despite these benefits, various challenges impacted teachers' experiences with content. 'Blackboard has quirks', said one teacher, 'it suddenly freezes … it is slow'. Expressing her frustration, she added 'I find Blackboard very annoying' because 'the screen layout is messy' and requires that I 'constantly change tabs'. Blackboard was described by others as 'clunky'. Some experienced problems interacting with assessment functions. 'You end up with several different places you can enter marks'. This led to 'a bit of uncertainty' as to whether 'comments … [were] … going to get to the student'. One teacher recommended improving 'the interface, [it] need[ed] cleaning up', and 'could do with some smoothing' out.

Another challenge related to oversized file uploads which contained media-rich content. ‘Often the recordings…[could] be very large.’ While ‘that’s great if you’ve got a good download speed…if [however] you’re out in the country, that’s not good’. Teachers suggested taking time to consider how ‘bandwidth issues’ impacted content interactions. Having a built in ‘way of lowering the quality’ of files would improve its efficiency. Another teacher recommended creating or augmenting the ‘system … [to] enable…lower [quality] downloads…to make it easier’, since the use of large files ‘ha[d] become an issue.’ Stating where the responsibility lay, one teacher stated that ‘the manufacturers at Blackboard need[ed] to work at it, and come up with some answers.’ Another commented on the importance of achieving a balance between download speed and quality visual content. ‘It’s a mixture, [but] visual people learn better with visual prompts.’ While improvements to content were justified, some felt that ‘technology was still not there in almost all of these aspects.’ Reflecting on the content issues he experienced using Blackboard, one teacher remarked ‘I would love them to come into my office, when I’m having a hard time so I [could ask] ‘why is it doing this? Why can’t it do this?’ Some also expressed frustration over Blackboard’s lack of ‘intuitive design’, describing its ‘html style’ as ‘outdated’. One recommended that ‘the design of the environment be more customisable.’

Teachers recognised the work involved in creating and delivering effective content, but asserted that in some cases the system made it 'problematic.' 'Copyright issues' around posting academic content also raised challenges for teachers. 'We've got to be very careful about putting up chapters'. Acknowledging the need for access to this content, one admitted 'you've got people who are away from libraries … who may be out in the middle of nowhere' where unfortunately, 'there are no libraries'. While these users 'rely on … online journals', help is limited because 'our copyright rules get in our way … and really slow things down.' Teachers said that while 'having some of the chapters online would be good … we can't do it.'

EOT: Online video platform Example: YouTube

Description YouTube is designed to enable users to upload and share videos that can be viewed by anyone [25]. It utilises repositories to enable users to manage their profiles, share content and collaborate [26]. YouTube is being used extensively to showcase video clips that support learning. ‘One of the trends’ is to ‘put [the media clips] on YouTube…instead of using local storage or LMS’ [14].

Experiences Themes from teachers' experiences about the use of YouTube to interact with content showed they valued it as a means to view, edit and upload teaching materials that showcased hands-on tasks, and delivered practical learning experiences. They valued the level of 'currency' that YouTube content added to their teaching. The difficulty with 'textbook examples' was that 'even if you're using a 2014 textbook,' the examples within these chapters '[were] 2013, 2012.' Explaining the advantage of using online videos, one teacher stated that 'you [could] use really current examples' to support learning activity. One teacher used YouTube as a repository for teaching content, and had a 'YouTube channel that [she] put videos on' and also 'a class YouTube channel' which contained 'links to useful websites and games' to support student activity. Teachers also valued YouTube's ability to handle large files. 'What's awesome about this [capability], is that … it splits your files up, [and] while it's uploading, you [could] put all the [supporting video] info up.' Similarly, another teacher valued being able to store files on YouTube 'instead of using local storage or LMS storage'.

Despite the advantages of using YouTube to interact with content, some teachers experienced a lack of 'control over what you're linking to in the long term.' Explaining this problem, one teacher indicated that 'a YouTube video' planned for use during class 'might [later] not be there, or might be replaced with something inappropriate.' File owners occasionally removed their files from YouTube, creating issues for teachers who interacted with this content on an ongoing basis. 'More often than not,' she explained, 'the videos I'd linked to had been made private,' preventing access to learning material. Other video clips 'had been … taken down for copyright reasons', which increased the workloads of teachers who had to 'run around trying to update links.' 'Trying to keep those links up to date' was difficult, but teachers knew it was important to 'make sure there had been no exchange to inappropriate materials.' Referring to one example, a teacher explained how she 'had a link to a commercial site, which used a game [that taught] people how to reference correctly.' She had 'checked it again before the lecture', only to find that 'it now linked to a spam site'. This had happened because 'the URL had been let go', and now the site contained 'flashing gaudy advertising'. Reflecting on the possible outcome, she admitted that 'this [interaction] could have [had an adverse effect in class] and made [her] look crappy'.

Other challenges with YouTube involved creating content intended for upload. 'Technical issues with the screen capture' gave one teacher difficulty, and she had 'to fiddle around … to get it to work.' Frustrated, she explained that while 'sometimes it would work', she didn't know 'how to make it work reliably'. The potential for these issues occurring created anxiety. 'You [did not] want to give a half-hour lecture' she stated, 'and then find out you didn't record the sound, [and realize that the students would] … see … the slides [without] sound, it [would] not [be] very good.' Acknowledging, however, that expecting free software to work consistently without failure was unreasonable, she admitted 'I'm expecting a lot.' Training was recommended as a solution. 'Someone [should] give you a workshop and show you how to do this,' because 'there [were] so many tools, it's very complicated … [and it's] almost bewildering how many … you can actually use.' Others felt that while 'there [was] support there … there could be more.' Some said that 'effort' was required 'to get over the hard part'. One teacher praised the efforts of her institute's education development centre, which provided one-on-one EOT help to teachers. For others, 'it's better if someone can give you a workshop and show you how to do this.' 'Workshops or seminars for academics to come along' were recommended as a way to help teachers improve their YouTube skills.

One teacher was reluctant to engage with YouTube content unless the files were 'recorded to the quality of the lectures … on TV or on TED talks', which were filmed using 'multiple perspectives', and were more likely to increase 'engagement' levels. Otherwise, he asserted, 'you get people standing up' and walking out, 'and it just doesn't work'. Searching for a solution, he continued 'if I was going to record my lectures' for uploading to YouTube, 'I'd want multiple angles, and the same treatment as TED talks.' Comments like these about YouTube emphasised how the perceived level of content quality influenced teachers' buy-in to its use. Content was often developed using other EOTs, and then uploaded to YouTube. For example, one teacher explained 'sometimes [we] use Jing, but most of us will use Camtasia or Adobe Connect, do it locally', and copy it to YouTube. In doing 'content preparation', teachers found they could 'merge from one [EOT] to another'.

Other teachers’ experiences involved the use of Moodle, Mindomo, PeerWise, Word Cloud, Echo360, Prezi, Jing, Camtasia, Google Drive, WikiEducator, Vimeo, Blogger.

Teachers' experiences using EOTs to interact with content were varied and informative. Their descriptions indicated that effective EOT use contributed to enriched teacher-to-content interactions, whereas ineffective use created challenges that negatively impacted their activity. Teachers' descriptions of EOT challenges revealed the reality of their experiences, and the extent to which these obstacles limited their engagement with content. Some expressed frustration, disappointment, and annoyance when faced with technical or usability issues. Despite negative experiences, general recognition of the role EOTs played in enabling engagement was evident. Teachers indicated that improvements to EOT usability, technical support, and design would reduce certain challenges, and enhance their interactions with content. Their recommendations for solutions to challenges signalled that they wanted change and relevant support, to ensure their commitment to EOT use. Teachers valued EOTs that afforded the efficient editing, linking, marking, uploading and demonstration of content.

4 RECOMMENDATIONS

A summary of recommendations for addressing some of the EOT challenges described in teachers’ experiences is outlined below:

• TEIs investigate the potential for extending, enabling and improving LMS features to accommodate teachers' online course delivery and assessment needs

• Managers urge and facilitate teachers' ongoing needs-based training for LMS use, to ensure they have relevant skills to undertake effective online course construction and delivery

• Teachers consider bandwidth limitations, and ensure that online file sizes do not impede access to learning, but can be downloaded efficiently

• Teachers ensure teaching content from external sources is current and appropriate for course delivery

• Teachers investigate the potential and feasibility of developing higher quality video lecture recordings, e.g. capturing in-class activity from multiple angles

• Teachers ensure that sound and audio equipment works effectively to support in-class learning experiences

5 CONCLUSION

This research made a phenomenological interpretation of key stakeholders’ EOT experiences to strengthen understandings about their EOT needs and challenges and provide a basis from which to recommend methods for effective EOT support. It analysed the EOT experiences of ten students and ten teachers from TEIs in New Zealand and Australia and interpreted the meanings of the phenomena through an abstraction of themes. This paper was the sixth in a series of six publications that presented the local themes of this research. It documented the interpretations of teachers’ EOT experiences with content, in reference to their use of two different types of EOTs: Learning management systems (LMS)(Blackboard), and online video platforms (YouTube). These interpretations, which delivered insights into the reality of teachers’ EOT challenges and needs, helped to inform a set of recommendations for effective EOT use, to assist TEIs in their efforts to address EOT challenges and needs through relevant, meaningful EOT support.

The small sample size normally used in phenomenological studies makes it challenging to generalise results across large groups [44]. However, the descriptions of first-hand experiences provide a rich and authentic means from which to extract in-depth levels of knowledge about the phenomena. Although an interpretive phenomenological approach supported the researcher's "interest in the meaning of a phenomenon as it [was] lived by other subjects", it also permitted their personal preconceptions to affect the analysis of data [27].

The interpretations in this research could be used to support understandings about other similar EOTs. For example, the themes drawn from students' experiences with Blackboard could in some cases be applied to Moodle. This research has the potential to be replicated and applied to other TEI stakeholders, such as administrators or educational support staff, to strengthen understandings of their EOT challenges and needs.

6 REFERENCES

1. Tuapawa, K. (2017). Interpreting Experiences of Students Using Educational Online Technologies to Interact with Teachers in Blended Tertiary Environments: A Phenomenological Study. University of Newcastle, Australia. Australasian Journal of Educational Technology, 33(1). https://doi.org/10.14742/ajet.2964

2. Panda, S., & Mishra, S. (2007). E-Learning in a Mega Open University: Faculty Attitude, Barriers and Motivators. Educational Media International, 44(4), 16. doi: 10.1080/09523980701680854

3. Tuapawa, K. (2016). Challenges Faced by Key Stakeholders using Educational Online Technologies in Blended Tertiary Environments. International Journal of Web-Based Learning and Teaching Technologies, 11(2).

4. Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended Learning: A Dangerous Idea? The Internet and Higher Education, 18(0), 15-23. doi: http://dx.doi.org/10.1016/j.iheduc.2012.12.001

5. Moore, M. (2013). Handbook of Distance Education (pp. 753). Retrieved from EBooks Corporation database.

6. Huynh, B., Gibbons, M. F., & Fonda, V. (2009). Increasing Demands and Changing Institutional Research Roles: How Technology Can Help. New Directions for Institutional Research, Special Issue: Imagining the Future of Institutional Research, 2009(143), 59-71.

7. Bacow, L. S., Bowen, W. G., Guthrie, K. M., Lack, K. A., & Long, M. P. (2012). Barriers to Adoption of Online Learning Systems in U.S. Higher Education (pp. 34): Ithaka.

8. Mupinga, D. M., Nora, R. T., & Yaw, D. C. (2006). The Learning Styles, Expectations, and Needs of Online Students. College Teaching, 54(1), 185-189. doi: 10.3200/CTCH.54.1.185-189

9. Passey, D. (2013). Inclusive Technology Enhanced Learning: Overcoming Cognitive, Physical, Emotional, and Geographic Challenges (pp. 262).

10. Nagel, D. (2013). 6 Technology Challenges Facing Education. The Journal, Transforming Education through Technology, 1. Retrieved from Ed Tech Trends | News website: http://thejournal.com/articles/2013/06/04/6-technology-challenges-facing-education.aspx

11. Christie, M., & Jurado, R. G. (2009). Barriers to Innovation in Online Pedagogy. European Journal of Engineering Education, 34(3), 7. doi: 10.1080/03043790903038841

12. Beckem, J. M., & Watkins, M. (2012). Bringing Life to Learning: Immersive Experiential Learning Simulations for Online and Blended Courses. Journal of Asynchronous Learning Networks, 16(5), 61-70.

13. Gregory, S., Lee, M. J. W., Ellis, A., Gregory, B., Wood, D., Hillier, M., . . . Matthews, C. (2010). Australian Higher Education Institutions Transforming the Future of Teaching and Learning Through Virtual Worlds. Paper presented at the ASCILITE Conference: Curriculum, Technology & Transformation for an Unknown Future, Sydney, 2010.

14. Tuapawa, K. (in press). Educational online technologies in blended tertiary environments: Experts' perspectives. International Journal of Communication and Information Technology Education, 13(3).

15. Tuapawa, K., Sher, W., & Gu, N. (2014). Pentexonomy: A Multi-Dimensional Taxonomy of Educational Online Technologies. International Journal of Web-Based Learning and Teaching Technologies, 9(1), 41-59.

16. Tuapawa, K., Sher, W., & Gu, N. (2016). Pentexonomy: A Multi-Dimensional Taxonomy of Educational Online Technologies. In M. Raisinghani (Ed.), Revolutionizing Education through Web-Based Instruction (pp. 225-252). Hershey, PA: IGI Global.

17. Tuapawa, K. (2017). Identifying Key Stakeholders in Blended Tertiary Environments: Experts' Perspectives. International Journal of Communication and Information Technology Education, 13(4).

18. Tuapawa, K. (2015). Resistance to Change Concerning the Use of Educational Online Technologies in Blended Tertiary Environments. Paper presented at The Fourth International Conference on E-Learning and E-Technologies in Education (ICEEE2015), Surya University, Tangerang, Indonesia, September 10-12, 2015. http://sdiwc.net/digital-library/resistance-to-change-concerning-use-of-educational-online-technologies-in-blended-tertiary-environments.html

19. Culatta, R. (2011). Categorisation of Learning Technologies. From http://innovativelearning.com/instructional_technology/categories.html

20. Moore, M. (1989). Three Types of Interaction. The American Journal of Distance Education, 3(2), 6.

21. QSR International. (2015). NVivo 10 for Windows. From http://www.qsrinternational.com/products_nvivo.aspx

22. Williams, A. (2003). How to Write and Analyse a Questionnaire. Journal of Orthodontics, 30(3), 245-252.

23. Saldana, J. (2011). Fundamentals of Qualitative Research (pp. 200).

24. Blackboard Inc. (2015). Welcome to the Blackboard Learning System. From http://library.blackboard.com/docs/cp/learning_system/release6/student/Chapter_1_Welcome_to_the_Blackboard_Learning_System.htm


25. Digital Unite. (2015). What is YouTube? From http://digitalunite.com/guides/tv-video/what-youtube

26. Churchill, D., Wong, W., Law, N., Salter, D., & Tai, B. (2009). Social Bookmarking-Repository-Networking: Possibilities for Support of Teaching and Learning in Higher Education. Serials Review, 35(3), 142-148.

27. Englander, M. (2012). The Interview: Data Collection in Descriptive Phenomenological Human Scientific Research. Journal of Phenomenological Psychology, (43), 13-25.

28. Nicholls, D. (2009b). Qualitative Research: Part Three - Methods. International Journal of Therapy and Rehabilitation, 16, 11.

29. Friesen, N., Henriksson, C., & Saevi, T. (2012). Hermeneutic Phenomenology in Education: Method and Practice, Vol. 4 (pp. 37).

30. Nicholls, D. (2009a). Qualitative Research: Part One - Philosophies. International Journal of Therapy and Rehabilitation, 16(October), 526-533.

31. Chapleo, C., & Simms, C. (2010). Stakeholder Analysis in Higher Education. Perspectives: Policy and Practice in Higher Education, 8. doi: 10.1080/13603100903458034

32. Mainardes, E., Alves, H., & Raposo, M. (2013). Identifying Stakeholders in a Portuguese University: A Case Study. Revista de Educación, 362, 17. doi: 10.4438/1988-592X-RE-2012-362-167

33. Sanderson, M. (1997). Distance Learning and Best Practice Report, SafetyNet Esprit Project 23917 (pp. 19): Riso National Laboratory, Roskilde.

34. Wagner, N., Hassanein, K., & Head, M. (2008). Who is Responsible for E-Learning Success in Higher Education? A Stakeholders' Analysis. Journal of Educational Technology & Society, 11(3).

35. Dowling, P., & Brown, A. (2012). Doing Research/Reading Research: Re-Interrogating Education (pp. 225).

36. Martin, W. (Producer). (2010). Phenomenology Beginnings and Key Themes. Retrieved from http://www.youtube.com/watch?v=Oev9GAm2MrI

37. Penner, J. L., & McClement, S. E. (2008). International Journal of Qualitative Methods, 7(2), 93-101.

38. Wikispaces. (2015). Introduction. From https://www.wikispaces.com/content/classroom/about

39. Padilla-Diaz, M. (2015). Phenomenology in educational qualitative research: Philosophy as science or philosophical science? International Journal of Educational Excellence, 1(2), 101-110. doi: 10.18562/IJEE.2015.0009

40. Sloan, A., & Bowe, B. (2014). Phenomenology and hermeneutic phenomenology: The philosophy, the methodologies and using hermeneutic phenomenology to investigate lecturers' experiences of curriculum design. Quality & Quantity, 48(3), 1291-1303. doi: 10.1007/s11135-013-9835-3

41. Yuksel, P., & Yildirim, S. (2015). Theoretical frameworks, methods, and procedures for conducting phenomenological studies in educational settings. Turkish Online Journal of Qualitative Inquiry, 6(1), 1-20. Retrieved from http://dergipark.ulakbim.gov.tr/tojqi/article/view/5000093521/5000087061

42. Reiners, G. M. (2012). Understanding the differences between Husserl's (descriptive) and Heidegger's (interpretive) phenomenological research. Journal of Nursing & Care, 1(5). doi: 10.4172/2167-1168.1000119

43. Marelli, F. B. (2016). Qualitative research methods & methodology. Retrieved from http://atlasti.com/qualitative-research-methods/

44. Waters, J. (2016). Phenomenological research guidelines. Retrieved from https://www.capilanou.ca/psychology/student-resources/research-guidelines/Phenomenological-Research-Guidelines/

45. Rawat, K. (2014, August 15). Phenomenology as a research method [Blog post]. Retrieved from http://rawat.blogspot.co.nz/2014/08/phenomenology-as-research-method.html

46. Simon, M. K., & Goes, J. (2012). What is phenomenological research? Retrieved from http://dissertationrecipes.com/wp-content/uploads/2011/04/Phenomenological-Research.pdf

47. Moustakas, C. (1994). Phenomenology and human science inquiry. In Phenomenological research methods (pp. 43-68). Thousand Oaks, CA: Sage. doi: 10.4135/9781412995658

48. Lester, S. (1999). An introduction to phenomenological research. Retrieved from https://www.rgs.org/NR/rdonlyres/F50603E0-41AF-4B15-9C84-BA7E4DE8CB4F/0/Seaweedphenomenologyresearch.pdf

49. Daniels, C. (2016). How to transcribe interviews. Retrieved from http://work.chron.com/transcribe-interviews-11397.html

50. Yin, R. K. (2015). Qualitative research from start to finish. New York, NY: Guilford Publications.


A Comparative Analysis of the Performance of Three Machine Learning Algorithms for Tweets on a Nigerian Dataset

1 M. S. Tabra and 2 A. Lawan
1 Department of Computer Science, Bayero University, Kano
2 Department of Information Technology, Bayero University, Kano
* Corresponding author's email: [email protected]

ABSTRACT

The popularity of Twitter as a social media platform is increasingly important; it has no doubt had a positive impact on the business, social and political aspects of our lives, and hence the need for researchers to focus on Twitter opinion mining has become crucial. Despite the increasing number of Twitter users in Nigeria generating a huge amount of data from discussions on national issues, limited research has been carried out on Twitter data sources from the Nigerian context. The objective of this paper is to use Twitter data from Nigeria to carry out a comparative analysis of the performance, based on accuracy, precision and recall, of three classification algorithms, in order to find the algorithm that best fits the dataset. The dataset used was gathered using the Twitter Application Programming Interface (API) given trending hashtags on national issues, and was preprocessed before modeling. Three Twitter opinion mining classification algorithms, Naïve Bayes, Support Vector Machine (SVM) and Maximum Entropy, were chosen, modeled and evaluated. Findings from this research show that a dataset from Nigeria can be used to mine the opinions of citizens on national issues; they also show that SVM was disappointing on the dataset, while the Naïve Bayes classifier outperformed the others with an accuracy of 83%, implying that even the best of the three models misclassifies tweets at a rate of 17%. The best model (Naïve Bayes) was implemented as software that downloads and automatically classifies tweets into three opinion classes: positive, negative and neutral.

KEYWORDS

Social Media, Twitter, Data Mining, Classification Algorithms

1 INTRODUCTION

Since the advent of the World Wide Web (WWW) by Tim Berners-Lee (1993), documents of all sorts of formats, content and description have been collected and inter-connected with hyperlinks, making it the biggest repository of data ever built [28]. Despite the dynamic, unstructured nature, redundancy and discrepancy of the WWW, it is the most important data collection regularly used for reference because of the broad variety of topics covered and the continuous contributions of resources and publishers [28]. The increasing use of social media such as Facebook and Twitter leads to the generation of massive amounts of data at a high rate, especially now that organizations innovatively devise means to attract customers using these social media platforms [10]. These data mostly contain vital information which can be used nationally and internationally for business purposes and decision making. For social media users, social interaction has turned out to be an indispensable part of their lives [26]. Therefore, it is easier to get their opinions from their social interaction on social media. Among the trending social media, Twitter has attracted several types of research in significant areas such as predicting electoral events, the stock market, consumer brands, etc. [1].

Twitter was only created and introduced in 2006, but it has gained such popularity that it has more than 500 million users who send more than 400 million tweets daily [15]. This popularity and fast spread of tweets make Twitter data worthy of research for information exploration. Also, because it is easy to obtain a Twitter dataset using the Twitter API, Twitter turns out to be a suitable data source for research.


Several types of research have been carried out using Twitter datasets for citizens' opinion mining, and this research has provided insight into citizens' opinions. The volume and magnitude of re-tweets, as well as the sentiments in the tweets, correlated with public opinion polls and results in a South Korean election [11]. Another study [1] used Twitter data for electoral prediction and achieved 88% prediction accuracy in the 2012 US presidential election. Messages of Twitter users can be used to predict their voting behavior [14]. Confronted with huge collections of data, we have now created new needs to help us make better managerial choices. These needs, as opined by [28], "were an automatic summarization of data, extraction of the 'essence' of information stored, and the discovery of patterns in raw data".

Opinion mining and sentiment analysis are trending research areas using Twitter datasets. Opinion mining is taking over the traditional and web-based surveys carried out by companies to find public opinion about their products and services; it also assists people and organizations interested in knowing what others say about their products, services, topics, issues and events to find the choice they are looking for [2]. Several data mining algorithms for both supervised and unsupervised learning have been used to classify Twitter datasets, such as Support Vector Machine (SVM), the Naïve Bayes classifier, k-nearest neighbours, Maximum Entropy, etc. [5] [7] [9].

This paper presents the results of a comparison of three supervised learning classification algorithms, namely the Naïve Bayes classifier, Support Vector Machine (SVM) and Maximum Entropy, for the classification of citizens' tweets related to trending topics in Nigerian government administration.

Twitter

Twitter is an online social networking service where users post and read messages. The messages, referred to as tweets, are short messages with a maximum length of 140 characters; the character limit applies to plain text and emoticons and excludes quotes, polls, videos, and images, giving users additional room for discussion. A message, once tweeted, can be re-tweeted by followers of the user who posted the original message, and a retweet can in turn be retweeted by followers of the person who retweeted it, and so on. Thus, a tweet can reach a large number of individuals within a short period of time, thereby capturing the opinions of a large number of users all over the world.

Twitter Search API

The Twitter Search API is part of Twitter's REST API. It allows programmers to access a sample of recent tweets published in the past 7 days by querying against the indices of recent or popular tweets. It requires the programmer to supply authentication before accessing the search API.

2 RELATED WORKS

Several research works have been carried out on tweets data mining, from sentiment analysis to political prediction [11]. Research by [16] shows how to collect a Twitter corpus for sentiment analysis and opinion mining automatically; Twitter can also be used as a source to identify and predict changes in social issues, as shown by [13].

User classification on Twitter was addressed by [17] through inferring user attributes such as political orientation or ethnicity using observable information such as user behaviour, network structure and the linguistic content of the user's feed. A framework for detecting and tracking political abuse was described by [19]. Wegrzyn-Wolska [26] conducted a content analysis of a social network by analysing the intensity and polarity of opinion and developed a system that collects, evaluates and rates tweets automatically for trend evaluation, using a Twitter dataset related to a French presidential election. De Groot [5] implemented a sentiment analysis tool that classifies tweets related to popular mobile phones and their brands. Using tweet analysis, [24] built a model that automatically detects target stakeholders based on firm communication and the reaction of readers to news headlines through Twitter. O'Banion and Birnbaum [14] described an approach that uses predictive modeling with SVM to mine preference data from the messages of Twitter users, which are then used on other individuals whose preferences are not known. An approach based on the combination of two sentiment analysis classifiers on one classification task was presented by [6], and the approach improves the accuracy of the independent classifiers.

Comparisons of Twitter classification algorithms have been treated in the literature, for instance [27][25][1][6][8]. Naïve Bayes, Maximum Entropy, and Support Vector Machine were described in the literature by previous studies [9][16][8][23][1]. The popularity of Twitter has resulted in much research using Twitter datasets for opinion mining on national decisions [5][13][14][22][18]. Despite the reasonable performance achieved by some of this research, limited studies have been done on Twitter data sources from Nigeria; since the approaches were tested using data sources from other countries, they might not work for a Nigerian dataset, because datasets usually differ in terms of location, the data available and the population from which the data were obtained. This research aims to use a Twitter dataset from Nigeria to build, evaluate and compare the performance of opinion mining models using three classification algorithms, and to use the most appropriate algorithm to develop software for determining citizens' opinions on national issues.

3 METHODOLOGY

Every data mining task usually follows a particular data mining process model; irrespective of the chosen model, the activities of data acquisition, preprocessing, modeling, evaluation and deployment have to be executed. In this study, the CRISP-DM model is used, and the activities involved are described subsequently in this section.

3.1 Data collection

The Twitter Search API was used to acquire data related to trending national decisions in Nigeria from February to November 2016; the data were gathered using popular hashtags such as #2016budget, #fuelsubsidy and #budgetpadding. A sample of the collected tweets is shown in Figure 1.

Figure 1 Sample of collected tweets
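To make the collection step concrete, the sketch below shows one way a trending hashtag could be queried through the standard (v1.1) search endpoint of the Twitter Search API from Python. It is a minimal sketch, not the authors' collection script: the bearer token, hashtag and count are placeholders, and the function name is illustrative.

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential, obtained from a Twitter developer app
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def fetch_tweets(hashtag, count=100):
    """Fetch up to `count` recent tweets matching the given hashtag."""
    headers = {"Authorization": "Bearer " + BEARER_TOKEN}
    params = {"q": hashtag, "count": count, "lang": "en"}
    response = requests.get(SEARCH_URL, headers=headers, params=params)
    response.raise_for_status()
    # The v1.1 search endpoint returns matching tweets under the "statuses" key
    return [status["text"] for status in response.json()["statuses"]]

if __name__ == "__main__":
    for text in fetch_tweets("#2016budget", count=10):
        print(text)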

3.2 Data Preprocessing

A sample of tweets consisting of positive, negative and neutral sentiments was selected for modeling; the selected tweets were manually labeled based on their sentiments. Sixty percent of the manually labeled sample was used for training and 40 percent for testing the model. Raw Twitter data are noisy; using the regular expression library in the Python programming language, the tweets were cleaned as follows:

1. Word lengthening: people repeat some characters in words, e.g. Gooooood, sorrrrrrrrry, etc.; the repeated characters were removed, leaving only two repetitions, as in Good and Sorry respectively.

2. URL and username removal: URLs, e.g. https://t.co/TpUgaTCJUG, and usernames, e.g. @daily_trust, were removed.

3. Stop word filtering: stop words are common, high-frequency words like "an", "at", "the", "of", "and", etc. Removal of stop words improves the performance of the feature extraction algorithm by reducing the dimensionality of the dataset, making the keywords left in the dataset easier for feature extraction techniques to identify. Stop words to be removed are taken from an available list of stop words; the words in the chosen list are iterated over and removed from the text. This technique is implemented in this research using Python, as it is supported by NLTK.

After cleaning the dataset, features were constructed using 'W of T' or 'T has W' (where T is a tweet and W is a feature), which results in high accuracy [2], together with the use of a uni-gram dataset, as it gives more accuracy and requires less training time than bigram datasets (Ismail et al., 2016).

3.3 Modeling

In the field of classification in opinion mining, machine learning algorithms are generally used [1]. We used three popular classification algorithms to model the constructed dataset.

i. Naïve Bayes Algorithm

The classifier was chosen according to findings from [16], where the Naïve Bayes classifier resulted in better performance. Naïve Bayes classification is based on Bayes' theorem:

P(c|x) = P(c) · P(x|c) / P(x)    (1)

where P(c|x) is the posterior probability of the class (target) given the predictor (attribute), P(c) is the prior probability of the class, P(x|c) is the likelihood, which is the probability of the predictor given the class, and P(x) is the prior probability of the predictor. For a set of predictors x1, …, xn, P(c|X) = P(x1|c) × P(x2|c) × … × P(xn|c) × P(c).

To find the probability of a tweet's class given the tweet, we adopt equation (2) as used in [21]:

P(C|t) = P(C) ∏f P(f|C)    (2)

where C represents the class (negative, positive or neutral), t represents the tweet, and f represents a feature.

The classifier was built using the NLTK implementation of the Naïve Bayes algorithm, using the training set as an argument.
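A minimal sketch of this step is shown below, assuming train_set and test_set are lists of (feature_dict, label) pairs built with the extract_features function sketched earlier; NLTK's NaiveBayesClassifier.train is the library call the text refers to.

import nltk

# train_set / test_set: assumed lists of (feature_dict, label) pairs,
# with labels drawn from {'positive', 'negative', 'neutral'}
nb_classifier = nltk.NaiveBayesClassifier.train(train_set)

print("Accuracy:", nltk.classify.accuracy(nb_classifier, test_set))
print(nb_classifier.classify(extract_features(clean_tweet("The 2016 budget is a step forward"))))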

ii. Support Vector Machine

The Support Vector Machine (SVM) classifier

was chosen according to findings from [1] that

Support Vector Machine (SVM) classifier

perform better than Naive Bayes, Maximum

Entropy and Artificial Neural Networks based

supervised classifiers.

Given the optimal hyperplane function

h(x) = w^T x + b,   (3)

for any new point z, the SVM classifier predicts its class as

ŷ = sign(h(z)) = sign(w^T z + b)   (4)

where the sign(·) function returns +1 if its argument is positive, and −1 if its argument is negative [12].

SVMs are inherently two-class classifiers; a multi-class problem is handled by constructing a multiclass SVM, where a two-class classifier is built over a feature vector Φ(x, y) derived from the pair consisting of the input features and the class of the datum.

At test time, the classifier chooses the class

y = argmax_y' w^T Φ(x, y')   (5)

The margin during training is the gap between this value for the correct class and for the nearest other class, and so the quadratic program formulation will require that, for all i and all y ≠ yi,

w^T Φ(xi, yi) − w^T Φ(xi, y) ≥ 1 − ξi   (6)

The classifier was built using the NLTK implementation of the Support Vector Machine, with the training set as an argument.
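NLTK itself no longer ships an SVM trainer; the usual route is its SklearnClassifier wrapper around scikit-learn's SVC, so the sketch below assumes that wrapper rather than the exact code used in the study:

    from nltk.classify import SklearnClassifier
    from sklearn.svm import SVC

    train_set = [({'has(good)': True}, 'positive'),
                 ({'has(bad)': True}, 'negative')]

    # Wrap a linear-kernel SVC so that it accepts NLTK-style feature dictionaries.
    svm_classifier = SklearnClassifier(SVC(kernel='linear')).train(train_set)
    print(svm_classifier.classify({'has(good)': True}))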

iii. Maximum Entropy

The Maximum Entropy classifier uses a model

that is very similar to the model employed by the

naive Bayes classifier. But rather than using

probabilities to set the model's parameters, it

uses search techniques to find a set of parameters

that will maximize the performance of the

classifier. In particular, it looks for the set of

parameters that maximizes the total likelihood of

the training corpus, which is defined as:

P(features) = Σ_{x in corpus} P(label(x)|features(x))   (7)

Where P(label|features), the probability that an input with a given feature set has a particular class label, is defined as:

P(label|features) = P(label, features) / Σ_label P(label, features)   (8)

The classifier was built using the NLTK implementation of Maximum Entropy, with the training set as an argument.
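A minimal sketch with NLTK's Maxent trainer is given below; the toy training pairs and the small iteration count are our choices:

    import nltk

    train_set = [({'has(good)': True}, 'positive'),
                 ({'has(bad)': True}, 'negative')]

    # GIS and IIS are NLTK's pure-Python parameter search algorithms.
    me_classifier = nltk.MaxentClassifier.train(train_set, algorithm='gis', max_iter=10)
    print(me_classifier.classify({'has(good)': True}))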


3.4 Evaluation

Confusion matrix was chosen as the tool for

model assessment. The test architecture is shown

in Figure 2 and the test result is described later in

the discussion section.


Figure 2 Test architecture

After the modeling stage, the model was assessed to ensure it meets the required criteria, that is, it was evaluated to ensure it can classify unseen tweets with reasonable accuracy. The confusion matrix was chosen as the evaluation tool. A confusion matrix summarizes the performance of a classification model in terms of the numbers of correctly and incorrectly predicted items; these numbers are tabulated in a table called the confusion matrix. Table 1 shows the confusion matrix for a binary classification model, where a and b are the two classes and Xij gives the number of items of class i that are predicted to be in class j.

Table 1. Two-class confusion matrix

                        Predicted class
                        a          b
    Actual class   a    Xaa        Xab
                   b    Xba        Xbb
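As a sketch, NLTK can tabulate such a matrix directly from the true and predicted labels of the test tweets; the label lists below are toy values:

    import nltk

    reference = ['positive', 'negative', 'negative', 'neutral', 'positive']   # true labels
    predicted = ['positive', 'negative', 'positive', 'neutral', 'positive']   # model output

    print(nltk.ConfusionMatrix(reference, predicted))
    # Precision and recall per class can then be read off the matrix, and overall
    # accuracy can be computed with nltk.classify.accuracy(classifier, test_set).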

3.5 Proposed System

After modeling and evaluation, the model that gave the best overall performance was implemented as software which can download tweets given a hashtag (a category related to national issues in Nigeria), automatically classify them using the built model, and report statistics of the classified tweets. Figure 3 depicts the overall system architecture.

Figure 3 System architecture
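A minimal sketch of this pipeline, assuming the tweepy library (3.x, where the search endpoint is api.search), placeholder credentials, a hypothetical hashtag, and the clean_tweet, extract_features, vocabulary and nb_classifier objects from the sketches above:

    import collections
    import tweepy

    auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
    auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
    api = tweepy.API(auth)

    counts = collections.Counter()
    for status in tweepy.Cursor(api.search, q='#FuelScarcity', lang='en').items(100):
        feats = extract_features(clean_tweet(status.text), vocabulary)
        counts[nb_classifier.classify(feats)] += 1

    print(counts)   # distribution of positive/negative/neutral tweets for the hashtag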

4 RESULTS AND DISCUSSION

Three classification algorithms were trained and

tested on the same dataset; the measures used for

the algorithm performance evaluation were

accuracy, precision and recall. Table 2 presents the accuracy, precision and recall of the three classifiers (Naïve Bayes, SVC and Maximum Entropy). The results show that Naïve Bayes was more accurate while the support vector machine was more reliable (higher precision). However, a 46% difference in accuracy is worth the cost of 1% in reliability, since we are not dealing with a critical system. The experiment therefore shows that Naïve Bayes outperforms SVC in terms of accuracy and recall. This result concurs with the findings of [8] but differs from those of [1][5]; the opposing results could be attributed to the use of different datasets in training the models, which implies that a


different source of data might sometimes require a different modeling process.

Naïve Bayes, being the best classifier on our dataset, was implemented as an application that downloads tweets given a hashtag and automatically classifies them into one of the three opinion classes. Figure 4 illustrates the interface of the software.

Figure 4. Main interface of the application

Table 2. Performance of the Naïve Bayes, SVC and Maximum Entropy classifiers

    Classifier          Accuracy (%)   Precision (%)   Recall (%)
    Naïve Bayes         83             99              83
    SVC                 37             100             42
    Maximum Entropy     82             99              86

5 CONCLUSIONS

In this research, Twitter data from Nigeria have been utilized for the purpose of classifying users' opinions from their tweets. Twitter datasets contain relevant information about users' opinions that can be used to gauge citizens' views on national issues.

Twitter dataset from Nigeria was collected over

the period of February to November, 2016 using

Twitter search API. The dataset was subjected to

preprocessing steps such as data selection, data

cleaning, and construction before being used for

modeling. After testing the three classifiers

(Naïve Bayes, SVC and Maximum Entropy), the results of the experiment show that Naïve Bayes performed best, with 83% accuracy, 99% precision and 83% recall on our datasets. This implies that even the best of the three models misclassifies tweets at a rate of 17%. However, this does not imply that Naïve Bayes will always be the best algorithm, as other algorithms appear promising on different datasets; datasets differ in terms of location and the population from which the data were collected.

5.1 Limitations

This study has a sampling limitation because some Nigerian citizens do not have access to the Internet, and some have Internet access but may not use Twitter. Also, the dataset used was localized to trending topics discussed in Nigeria only.

5.2 Future Research

In this research, we have compared the

performance of three classification algorithms (Naïve Bayes, Support Vector Machine and Maximum Entropy) using a dataset of tweets from Nigeria, with Naïve Bayes outperforming the

other two. However, this research can be

improved in future by:

i. Using more training and testing

datasets.

ii. Using other classification algorithms such as decision trees, K*, etc. on the same data sources.

iii. Mixing the Nigerian dataset with foreign datasets in order to generalize the outcome.

iv. Using a k-fold cross-validation approach for evaluating the models.

REFERENCES

1. Anjaria, M., and Guddeti, R. M. R. Influence factor based opinion mining of Twitter data using supervised learning. 2014 Sixth International Conference on Communication Systems and Networks (COMSNETS), 1–8, (2014). https://doi.org/10.1109/COMSNETS.2014.6734907

2. Asghar, M. Z., Khan, A., Ahmad, S., and Kundi, F. M. A Review of Feature Extraction in Sentiment Analysis. Journal of Basic and Applied Scientific Research, 4(3), 181–186, (2014).

3. Bhuvaneswari, M., and Srividhya, V. Enhancing the Sentiment Classification Accuracy of Twitter Data using Machine Learning Algorithms. Statistical Approaches on Multidisciplinary Research, Volume I, 1–10, (2017). Surragh Publishers, India.

4. Manning, C. D., Raghavan, P., and Schütze, H. Introduction to Information Retrieval. Cambridge University Press, (2008).

5. De Groot, R. Data Mining for Tweet Sentiment Classification. Master's Thesis, Utrecht University, 1–63, (2012).

6. Gezici, G., Dehkharghani, R., Yanikoglu, B., Tapucu, D., and Saygin, Y. SU-Sentilab: A Classification System for Sentiment Analysis in Twitter. Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evaluation (SemEval), 471–477, (2013).

7. Hemalatha, I., Varma, G. P. S., and Govardhan, A. Sentiment Analysis Tool using Machine Learning Algorithms. International Journal of Emerging Trends and Technology in Computer Science (IJETTCS), 2(2), 105–109, (2013).

8. Ismail, H. M., Harous, S., and Boumediene, B. A Comparative Analysis of Machine Learning Classifiers for Twitter Sentiment Analysis. 17th International Conference on Intelligent Text Processing and Computational Linguistics - CICLing, 110, 1–3, (2016).

9. Jadav, B. M., and Vaghela, V. B. Sentiment Analysis using Support Vector Machine based on Feature Selection and Semantic Analysis. International Journal of Computer Applications (0975-8887), 146(13), 26–30, (2016).

10. Dutse, I. I., Bello, B. S., Lawan, A., and Yusuf, H. Sentiment Analysis and Opinion Mining from Social Networks. Proceedings of the 1st Annual International Conference held at the Northwest University Auditorium, Kano City Campus, Ado Bayero House, Kano, Nigeria, 8–12 November 2015, 77–79, (2015).

11. Lee, J., and Park, S. J. Citizens' Use of Twitter in Political Information Sharing in South Korea. iConference Proceedings, 351–365, (2013). https://doi.org/10.9776/13210

12. Meira, J. W., and Zaki, M. J. Data Mining and Analysis. Cambridge University Press, (2014). www.cambridge.org/9780521766333

13. Min, S., and Meen, C. K. RT^2M: Real-Time Twitter Trend Mining System. 2013 International Conference on Social Intelligence and Technology, 64–71, (2013). https://doi.org/10.1109/SOCIETY.2013.

14. O'Banion, S., and Birnbaum, L. Using explicit linguistic expressions of preference in social media to predict voting behavior. 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 207–214, (2013). https://doi.org/10.1145/2492517.2492538

15. Olaniran, S. Social Media and Changing Communication Patterns Among Students: An Analysis of Twitter Use by University of Jos Students. Covenant Journal of Communication (CJOC), 2(1), 40–60, (2014).

16. Pak, A., and Paroubek, P. Twitter as a Corpus for Sentiment Analysis and Opinion Mining. In Proceedings of the Seventh Conference on International Language Resources and Evaluation, 1320–1326, (2010).

17. Pennacchiotti, M., and Popescu, A. A Machine Learning Approach to Twitter User Classification. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 281–288, (2011).

18. Pla, F. Political Tendency Identification in Twitter using Sentiment Analysis Techniques. Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland, 183–192, (2014).

19. Ratkiewicz, J., Conover, M. D., Meiss, M., Gonçalves, B., Flammini, A., and Menczer, F. Detecting and Tracking Political Abuse in Social Media. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 297–304, (2011). http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewFile/2850/3274

20. Savita, B. D., and Gore, P. D. Sentiment Analysis on Twitter Data Using Support Vector Machine. International Journal of Computer Science Trends and Technology (IJCST), 4(3), 365–370, (2016).

21. Spencer, J., and Uchyigit, G. Sentimentor: Sentiment Analysis of Twitter Data. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 56–66, (2012).

22. Thapen, N. A., and Ghanem, M. M. Towards Passive Political Opinion Polling using Twitter. BCS SGAI Workshop on Social Media Analysis, Cambridge, UK, 19–34, (2013).

23. Tong, S., and Koller, D. Support Vector Machine Active Learning with Applications to Text Classification. Journal of Machine Learning Research, 45–66, (2001). https://doi.org/10.1162/153244302760185243

24. Tran, H. Discovering entities' behavior through mining Twitter. Theses and Dissertations, University of Iowa, (2012). http://ir.uiowa.edu/etd/3545

25. Unnisa, M. Opinion Mining on Twitter Data using Unsupervised Learning Technique. International Journal of Computer Applications, 148(12), 12–19, (2016).

26. Wegrzyn-Wolska, K. Tweets mining for French Presidential Election. Fourth International Conference on Computational Aspects of Social Networks (CASoN), 138–143, (2012).

27. Wu, S. The Comparison of Classification Analyses on the Public Opinion Focus of Biofuels Based on Twitter, 020047, (2016). https://doi.org/10.1063/1.4940295

28. Zaïane, O. R. Introduction to Data Mining, Principles of Knowledge Discovery in Databases. University of Alberta, 1–15, (1999).


Chinese And Moroccan Higher Education MOOCs: Rationale, Implementation and Challenges

OUBIBI MOHAMED (Author 1) School of Information Science and Technology

Northeast Normal University Changchun, China

[email protected]

Pr. ZHAO WEI (Author2) School of Information Science and Technology

Northeast Normal University Changchun, China

[email protected]

Abstract— Integrating technology in teaching helps students to understand all the concepts they are taught in class. Technology makes teaching a walk in the park, a paperless class is environmental friendly, teachers are able to track their students' progress, students can access eBooks and other learning information at anytime of the day, anywhere and

distance learning has been made possible. The rapid increase use of information technologies and E-learning throughout educational institutions is changing the way teachers and students learn, work, and establish collaboration. Recent declarations from top Universities to turn to new forms of educational delivery called MOOCs (Massive Open Online Courses) The MOOC movement have recently invaded the field of higher education in the world to the point that some describe it as the undisputed future of the university. Authors claim that online learning and hybrid devices are becoming more responsive to learners’ needs at a time where constrained educational budgets and accompanying decrease in professional development opportunities makes it hard for the universities to keep up with the expectations and demands of the 21st century learners. In line with this, this paper sets out first to present the MOOC framework, placing it in the wider context of open, online learning. Then, it discusses how the key elements of openness, anytime and anywhere learning of this new educational trend can meet learners’ needs for flexibility, collaboration and agency. This paper also argues for the need to adopt a blended learning approach that combines both online and face-to-face experiences to optimize learning in Moroccan higher education. Finally, some challenges are discussed. Keywords— blended learning; learner’s needs; massive open online course (MOOC); multi-access learning; open education. I. INTRODUCATION

The education sector is growing in the world with respect to quality of education and the technology used for delivering the required information to the students. The paper reflects on the use of technology in education systems of two Asian countries i.e. Morocco and China based on the statistics gathered from different sources. The statistics issued by UNICEF reflect that

adult literacy rate in China is 93.7% as compared to 56.4% in Morocco which reflects that the Chinese are paying more attention in making their population as asset through education. China being the leading nations in Asia has focused on use of technology in their class rooms even for distant learning in the far flung areas of the country. Dale F. Eickelman in his book Knowledge and Power in Morocco, contended “The Education of a Twentieth-Century Notable that Morocco had to struggle with evolution of its education system mainly due to Islamic notables hindering the ways for modern education specifically based on modern technology”. Integrating technology in teaching helps students to understand all the concepts they are taught in class. Technology makes teaching a walk in the park, a paperless class is environmental friendly, teachers are able to track their students' progress, students can access eBooks and other learning information at any time of the day, anywhere and distance learning has been made possible. China has a large number of students, many of whom study online. Digital education has made it possible for many people get access to education. China recently opened up its education system. II. BREAKING DOWN THE ACRONYM

Integrating technology in teaching helps students to understand all the concepts they are taught in class. Technology makes teaching a walk in the park, a paperless class is environmental friendly, teachers are able to track their students' progress, students can access eBooks and other learning information at any time of the day, anywhere and distance learning has been made possible. MOOC is an acronym; it stands for massive, open, online, course. The course is massive because it involves a large numbers of participants, including both instructors and learners who cannot be physically present same time same place around a specific topic of learning. The course is open because the material put by the facilitators, the work done by the participants are accessible to or shared between all the people taking it. It is also open in the sense that it is free. Participants might pay to get a credit through an institution,


but they do not pay for participating in the course. The course is also distributed online because all the blog posts, articles, tweets and tags are shared online by the participants taking it. MOOCs are planned courses in the sense that they have a start and ending sessions. They have also facilitators, course materials, and participants. (Masters, 2011)

III. JUSTIFYING A MOOC APPROACH TO HIGHER

EDUCATION IN CHINA AND MOROCCO

According to a research conducted by the British council, and by Jeremy Chan (2013), China has the biggest market for academic and education items in the world. While considering education as a commodity, the largest education market is the country of China. To be specific, it has been estimated that the number of students in Chinese educational facilities is about 400 million, if not more. Further, it is claimed that about 7.5 percent of the figure, that is, 30 million students are in the higher education sector. The students grouped together are estimated to be equivalent to the population of 3 countries including U.S, Australia and UK. As a matter of fact, many areas of society have undergone rapid change enabled by technology; however, education has been by comparison in a dormant state. This is because most universities in Morocco continue to offer the majority of their courses face-to -face. Consequently, easy access is most of the time limited to only those learners who live within the areas surrounding the institutions. The increase in university enrollments, the regrettable rates of dropping out, the demands of the twenty-first century learners, and technological development make a radical shift to new approaches in higher education in Morocco necessary. Therefore, dialogue has started recently on the future of the Moroccan university, the importance of catching up with the technological advancement, and responding to the different demands of the new generation of Student. In fact, the literature emphasizes that research on how people learn has adopted new perspectives as a result of the advent of new technologies that affect the teaching/learning process. According to Calkins and Vogt (2013) “Next generation learning” research is informed by:

• A deepened understanding of learning: how, where, and why students (and people of all age) learn most effectively.

• A deepened understanding of learners: what’s required to engage and meet students’ complex, individual needs.

• The recognition that the world has changed: so thoroughly, in fact, that it requires a much higher level of achievement for much higher percentages of students.

In this respect, the massive open online course (MOOC) trend has lately emerged in higher education in the world, and the advocators claim the anytime/anywhere mantra and the principle of multi-access learning that underlie the MOOC movement can attend to learners’ needs for personalization, flexibility, and agency.

Code (2010) explains that learners’ agency is reflected in their choice and abilities to interact with personal, behavioral, environmental, and social factors that make up their learning context. So, learner agency can be carried out through three different ways: personal, proxy, and collective. Personal agency refers the learners’ ability to initiate action. Proxy agency is a socially mediated form of agency through which individuals decide to have others act at their own will in order to attain the results they desire. Collective agency makes it possible for people to use interactive and dynamic means to work together towards common objectives (Bandura, 2001). To demonstrate, MOOCs make it possible for learners who cannot attend classes regularly to have access to live-video or video-recorded lectures and assignments whenever and wherever they can .Hence, this mode of learning fosters learners’ ability to control their own pace of learning. Also, one of the prominent aspects of the MOOC framework is how learners can learn from each other within an online course community. Cadi Ayyad University of Marrakech is the pioneer and leader in the educational innovation called MOOC in both Morocco and Africa. Since 2013 the university has started filming courses and putting them online available to students via the university’s servers. In an interview with the Maghreb Arab Press Mr. Miraoui, the president of the university, stated that 65,000 students were enrolled in 2014, 18 000 more than the year before, for only 1400 teachers. This is why the university launched its own MOOC to facilitate access to education for learners who work in crowded classes, and optimize faculty resources. The camera of Réussite, a program produced by the group Jeune Afrique, Canal+ and Galaxie Press, reported the experience of a Professor of Physics who can now, thanks to the MOOC technology, give a tutorial session to five hundred students at a time. In a traditional university this tutorial cannot be given but in small groups of maximum 25 students because of the size of the laboratory. The MOOC can solve the problems of making such kind of tutorials, mainly: not enough laboratories, not enough equipment, and not enough time. IV. A BLENDED LEARNING APPROACH TO

INTEGRETING MOOCS IN MOROCCO

The recent years has seen Morocco invest more on green technologies. This influence can be linked to the Chinese influence in the continent. Notably, the bilateral relation between China and Morocco has seen the country benefit more in terms of technology of education. This is mainly because it is unlikely that the Moroccan universities can abandon face to face lectures as the main delivery mode. However, since the learning styles and requirements of learners differ, it’s mandatory for these institutions to use a blend of learning approaches to be responsive to these differences. Adapting the MOOC principle to the Moroccan university context, a university may use some version of a


course management system application to connect all students within a specific department. Through this platform, students can access videos of lectures not just but their group teacher but also teachers of the same subject in the department, track assignments and progress, interact with professors and peers, and review other supporting materials, like PowerPoint presentations or scholarly articles. At a larger scope, teachers of the same major across universities of the country can collaborate to structure online courses and allow students to benefit from the lectures of not just their own teachers but other teachers in other universities, attending hence to different learning styles. These include face-to-face traditional instructor-led lectures, synchronous online, and asynchronous online modes

A. Synchronous Online

According to Irvine et.al (2013), in a synchronous online framework, learners on campus are together in a multi-access enabled classroom and have the instructor present in the classroom with them. Learners who cannot attend participate by joining in via Internet webcam and content can be exchanged between participants using desktop sharing.

B. Asynchronous Online

McKinney et.al (2009) argues that having an asynchronous access to archived synchronous learning events may be a suitable option for those who are unable to attend synchronously. Students can listen to or view archive recordings of lectures (podcasts). To move the asynchronous group beyond merely watching archived class videos, student collaboration can be enhanced through discussion boards, separate personalized synchronous student-led or teaching assistant-led sessions for a pod within a different time zone. V. THE RISE OF E-LEARNING IN CHINA Technology has become a very critical facet of the modern society and its footprints have been felt in the education sector the most. In the contemporary Moroccan and Chinese societies, technology has been embedded in to learning with the sheer objective of improving results and learning experience. A precise comparison of how technology has been integrated in to the education systems of the two countries indicates that China has recorded more success but both countries have reported advantages as well as disadvantages of technologically hinged learning. Market of E-learning in China can be separated in three sectors: The first sector: online degree education, The second sector: enterprise E-learning, language training and the last one is professional training for certificates. From 2004 to 2012, online degree education has been quadrupled. In 2012, the overall market revenue reached 64.6 billion RMB ($10.4 billion approximately) and there were 1095 thousand new registers professional E-training courses and 1270 thousand new students for university E-learning

courses. Online professional trainings and language trainings also take a stable rate of increase.

Fig. 1: China’s E-learning Market during 2004-2012

In China, for online higher education, students should take exams to get the degree certificate and many universities including Beijing University and TsingHua University give this kind of courses to educate Chinese people. VI. LEARNERS’ MOTIVATION AND INVOLVEMENT

In the context of distance learning research, how to motivate student to use MOOCs for learning is one of the most frequently studied variables. It results from the interplay of goals, emotions, and the person’s sense of personal agency. Motivated students for learning, spent more time and effort to achieve higher levels of performance than those who were not confident and motivated. Pintrich and De Groot affirm that student with high learning motivation tended to engage in more meta-cognitive strategies and were likely to persist at a task than student with low learning motivation Current research also suggests that learning involvement is another variable that influences students’ learning and satisfaction within traditional and/or online learning environments. Learning involvement is defined as the degree to which learners interact with other learning component (i.e., learning content, learning activities, peer learners, tutors, and instructors) and are engaged in the learning process. Our research team is interested to the periodic learners because the fixation of learning periods has a major role in learning by encouraging their personal touch. They become more involved and provide the best of themselves; this also helps to empower them. The purpose of this paper is to propose a process of collection and transformation of traces and calculation of activities’ indicators. In order to measure learners’ motivation and involvement in distance learning, we will address the collection of traces of learners’ activities in Moodle platform. VII. CHALLENGES and Future Prospects With China being the largest higher education market in the world (25 million students at undergraduate level) and at the same time having the largest internet population worldwide


(390 million internet users), MOOCs has huge potential in China. To implement a MOOC framework in Morocco or in China and create a meaningful learning environment a variety of issues need to be addressed; these are mainly institutional, pedagogical and technological factors (Singh, 2003). The institutional factors are mainly related to the preparedness of the organization, the availability of content and infrastructure, and the implementation of a needs analysis to understand learners’ needs. MOOCs are free of charge courses for massive number of learners on the web; it must be considered that course design and the way of presenting course materials, and interactivity trough social networks and study groups [13]. The Pedagogical or Andragogical dimension is concerned with the combination of content (content analysis), the learner needs (audience analysis), and learning objectives (goal analysis). The pedagogical or andragogical dimension also addresses the choice of the most appropriate delivery method. It is also concerned with whether both the teachers and students have the knowledge required to use new technologies and, possibly, more sophisticated instructional practices. The technology issues need include creating a learning environment and the tools to deliver the learning program. They include also the choice of the most effective learning management system, that would manage multiple delivery types and a learning content management system that catalogs the actual content (online content modules) for the learning program. VIII. CONCLUSION

The use of technology in the Republic of China's education system is highly advanced as compared to the Morocco's system of education. Technological advancement in the education in China has seen many graduates and high school students in China perform well in the world market where digital technology is highly valued. It is estimated that the education market in China has attracted over 450 million students and thus the use of technology in teaching has improved efficiency and accountability. It is however important to note that Morocco is also embracing technology in education and it is an issue that is very essential for the ultimate improvement of the education sector.

Moroccan higher institutions need to adopt reform strategies that are mainly informed by students’ needs. These institutions need also to provide via multi-access learning delivery modes. In this respect, the multi-access Learning framework that underlies MOOCs supports student choice and agency. When talking about MOOCs as an educational technology, the primary focus should not be the technology itself, but rather the pedagogy. That is to say, we should first specify the learning experiences we want to create, the learning outcomes we want to achieve and then enable them with technology.

References

[1] Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology.

[2] Calkins, A., & Vogt, K. (2013). Next generation learning. EDUCAUSE: Next Generation Learning Challenges.

[3] Code, J. R. (2010). Assessing agency for learning (Doctoral dissertation, Simon Fraser University, Burnaby, Canada).

[4] Masters, K. (2011). A Brief Guide to Understanding MOOCs. The Internet Journal of Medical Education, 1(2). doi: 10.5580/1f21

[5] McKinney, D., Dyck, J. L., & Luber, E. (2009).

[6] Irvine, V., Code, J. & Richards, L. (2013). 9(2), 172-186.

[7] Singh, H. (2003). Building effective blended learning programs. Issues of Educational Technology, 43(6), 51-54.

[8] M. E. Ford, "Motivating Humans". Sage Publishing, Newbury Park, CA, 1992, ISBN 0-8039-4529-9.

[9] D. H. Lim, M. L. Morris, "Learner and Instructional Factors Influencing Learning Outcomes within a Blended Learning Environment".

[10] P. R. Pintrich, E. V. De Groot. 1990, pp. 33-40.

[11] I. Kamsa, R. ElOuahbi, F. El Khoukhi, F. ElGhibari and S. Chehbi, "Learning time planning in a distance learning system using intelligent agents".

[12] S. Sankaran, T. Bui. Journal of Instructional Psychology, 2001, pp. 191-198.

[13] L. Pappano, "The Year of the MOOC," The New York Times, 2012. [Online]. Available: http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html [retrieved: January 2017].


Stabilization of Particle Filter Based Vertebrae Tracking in Lumbar Spinal

Videofluoroscopy

Amina Amkoui*, Ibrahim Guelzim*, Hammadi nait charif**, Lahcen Koutti*

* Computer Vision and systems Laboratory, Faculty of Sciences, Ibn Zohr University

** National Centre for Computer Animation, Bournemouth University, UK

[email protected], [email protected], [email protected], [email protected]

ABSTRACT

Tracking of spinal vertebrae in videofluoroscopy

video is useful for diagnosis of certain spine

pathologies, such as vertebral fractures,

spondylolisthesis and low back pain. The aim of this

work is to improve a semi-automatic method for

vertebrae tracking based on particle filter. The

process starts with vertebrae model construction by

splining hand-selected landmark points, then the

particle filter tracks the vertebrae in each frame.

While the process successfully tracks the vertebrae, the trajectories are quite jittery. This paper aims to improve the overall tracking by smoothing the tracking trajectory using a curve fitting function. The

tracking results are more natural with very little

noise.

Keywords:

Objects tracking, Particle filter, Lumbar vertebrae,

Videofluoroscopy, Smoothing.

1. INTRODUCTION

The low back supports the weight of the upper body and provides mobility for everyday motions such as bending and twisting. Muscles in the low back are responsible for flexing and rotating the hips while walking, as well as supporting the spinal column, and all of these movements can affect the intervertebral joints. Hence, Low Back Pain (LBP) is classified as one of the most common health problems. Researchers in [1] have found that nearly 10% of the world's population (including children) suffers from low back pain and that one third of work-related disabilities are due to low back pain, which is why several studies still deal with this area. In [2] the authors demonstrate that irregular motion in one intervertebral joint can affect adjacent joint motion. In [3] a strong relation between the abnormal kinematic behaviour of the lumbar spine and low back pain was demonstrated using an electromagnetic tracking system. Motion irregularities in the lower lumbar spine can produce compensatory effects in the upper lumbar spine [4].

1.1. Medical Imaging

Medical image visualisation is a helpful tool

in disease diagnosis and in monitoring, as well as in surgical and therapeutic guidance [5].

Many techniques have been developed to measure vertebral motion [6] [7] [8] based on vertebrae segmentation [9] [10]. X-ray imaging, the most widely used technique, provides detailed information about the geometry, aberrant features, and positions of individual vertebrae, but only for selected static postures. Videofluoroscopy, however, provides similar information to X-rays in real time; it allows vertebral positions to be observed throughout their course of motion. Compared to X-rays, videofluoroscopy provides less detail about geometry and structural anomalies, and it contains noise which is quantum in nature and typically modelled by a Poisson distribution [11] [12].

A classical examination with the videofluoroscopy technique can produce a sequence of hundreds of images, which makes it a challenge to track the motions of vertebrae accurately. The segmentation and tracking of object movements is a complex and still open problem, with many applications such as video coding, robotics, video surveillance, and medical imaging.

The aim of this work is to improve the vertebrae tracking based on Particle Filter in lumbar spinal videofluoroscopy using a polynomial fit.

1.2. Objects Tracking

The tracking of real-world objects is a challenging problem due to noise, occlusion, and the presence of other moving objects. The main difficulty in video tracking is to associate the locations of the targets in successive frames, especially when the objects move fast relative to the frame rate [13]. Tracking systems normally use a model that describes how the target's image can change considering the possible movements of the tracked object. Many algorithms have been implemented to


surmount these difficulties; they can be classified into two categories: deterministic methods and probabilistic methods [14].

Deterministic methods connect each object in the previous frame with a single object in the current frame using a group of motion characteristics. The characteristics commonly used are proximity (limited displacement hypothesis) and appearance (similarity of form and/or photometric content and/or motion). In object-based models, targets can be characterized by color or contour histograms, a contour map (open or closed contour of the object), or a combination of these models. In [15], information on the contours of the objects is used to track people; the distance measure between the tracked object and the observations is a correlation measure calculated on the contours of the object.

In probabilistic methods, the observations obtained by a detection algorithm are often corrupted by noise. In addition, the movement or appearance of an object may vary slightly between two consecutive frames. Probabilistic methods handle these variations by adding uncertainty to the model of the object and the models of the observation. The tracking of a single target is then obtained by filtering methods such as the Kalman filter or the particle filter.

1.3. Related Works

In a previous work [16] authors present a

technique to track the vertebrae using particle

filters with image gradient based likelihood

measurement. Balkovec et al. [17] developed an

iterative template matching algorithm to obtain

precise time-course details about individual

vertebrae and intervertebral motions from

videofluoroscopy images.

2. MATERIALS AND METHODS

Poor image contrast, image noise, image distortion, and soft-tissue-induced changes in vertebrae appearance all make the tracking of individual vertebrae in videofluoroscopy images very challenging. In this paper, we seek to overcome these challenges by following the process below (Figure 1).


Figure 1. Procedural overview. (a) An image from the frontal view; pincushion distortion is evident in the superimposed red grid lines. (b) Pincushion distortion is removed using an image distortion correction algorithm (Equation 1). (c) The contrast of each image is enhanced with several filtering algorithms. (d) Image with a superimposed Canny edge map to help in the semi-automatic selection of the vertebra outline. (e) Particle filter tracking phase. (f) Rotation angle adjustment using a curve fitting function. (g) Contour centre coordinate adjustment using a curve fitting function.

2.1. Image Acquisition and Processing

Image distortion in videofluoroscopy has various sources. The authors of [18] pointed out that, in addition to perspective, the curved image intensifier produces 'pin-cushion' distortion [19], where straight lines are curved outwards from the center. The correction method devised in [18] overcomes this problem using a calibration approach: a calibration grid composed of wire markers was placed between the subject and the image intensifier and its corners were digitized, and linear correction of object coordinates was then performed within each grid box. Pincushion distortion was corrected using the formula:

s = r / (1 + C (r/R)^2)   (1)

Where s is the modified radial distance to a pixel that was at radius r from the center, and R and C are the radius to a corner of the image and the radial distortion coefficient, respectively. To find the radial distortion coefficient C, we adjust it until all squares on the grid appear to be approximately the same size (Figure 1(b)) and the lines across the image are straight, and then use it to correct all images.
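As a minimal NumPy sketch of applying Equation (1) to pixel coordinates (the function name and the example values of R and C are ours, not from the paper):

    import numpy as np

    def correct_pincushion(x, y, center, R, C):
        # Radial distance r of each pixel from the image centre.
        dx, dy = x - center[0], y - center[1]
        r = np.hypot(dx, dy)
        # Equation (1): corrected radial distance s = r / (1 + C * (r / R) ** 2).
        s = r / (1.0 + C * (r / R) ** 2)
        scale = np.divide(s, r, out=np.ones_like(r), where=r > 0)  # leave the centre untouched
        return center[0] + dx * scale, center[1] + dy * scale

    # Example: correct one landmark in a 512 x 512 image (C chosen arbitrarily).
    x_c, y_c = correct_pincushion(np.array([500.0]), np.array([400.0]),
                                  center=(256.0, 256.0), R=np.hypot(256.0, 256.0), C=0.05)
    print(x_c, y_c)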

In the processing step, images were enhanced using adaptive noise removal filtering, followed by median filtering; then, to increase the dynamic range of the histogram, adaptive histogram equalization was applied.
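A minimal sketch of this enhancement chain, assuming OpenCV and SciPy (the filter sizes and clip limit are our choices; a synthetic frame stands in for a fluoroscopy image):

    import cv2
    import numpy as np
    from scipy.signal import wiener

    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # placeholder frame

    denoised = wiener(frame.astype(float), (5, 5))                  # adaptive noise removal
    denoised = np.clip(denoised, 0, 255).astype(np.uint8)
    median = cv2.medianBlur(denoised, 3)                            # median filtering
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(median)                                  # adaptive histogram equalization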

2.2. Tracking Algorithm

2.2.1. Semi-automatic selection



The semi-automatic video object segmentation approach consists of two steps: interactive initialization and automatic tracking. A system using this strategy therefore comprises two corresponding components: interactive segmentation of the first frame, where the user specifies the contour of a semantic object easily and quickly with a computer-aided segmentation tool, and the automatic tracking component, which is initialized with the manually segmented frame.

In this work, we follow this approach using the Canny edge detector as a segmentation tool to detect the edges in the first frame; the user then selects a few landmarks along the vertebra outline based on the edges detected by Canny, and these hand-selected points are fitted with parametric splines (independently in the X and Y coordinates) to form the vertebra contour model.
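A minimal sketch of this initialization, assuming OpenCV and SciPy (the Canny thresholds and landmark coordinates are toy values, and a synthetic frame stands in for a fluoroscopy image):

    import cv2
    import numpy as np
    from scipy.interpolate import splprep, splev

    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # placeholder frame
    edges = cv2.Canny(frame, 50, 150)        # edge map shown to the user as a selection aid

    # Landmarks clicked by the user along one vertebra outline (toy values).
    landmarks = np.array([[120, 200], [140, 195], [160, 200], [158, 225],
                          [138, 230], [118, 224]], dtype=float)

    # Parametric splines fitted independently to the X and Y coordinates.
    tck, u = splprep([landmarks[:, 0], landmarks[:, 1]], s=0)
    xs, ys = splev(np.linspace(0, 1, 100), tck)
    contour_model = np.column_stack([xs, ys])  # vertebra contour model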

2.2.2. Particle filter

As mentioned above, the particle filter is a probabilistic tracking method based on a Bayesian perspective. It estimates the internal states Xk of a dynamical system when partial observations Yk are made at each time step k. To define the tracking problem, consider the evolution of the state sequence (Xk, k ∈ N) of a target given by:

Xk = fk(Xk−1, Wk−1)   (2)

where fk is a nonlinear function of the state Xk−1 and {Wk−1, k ∈ N} is the process noise sequence. The aim is to estimate Xk from the measurements

Yk = Gk(Xk, Vk)   (3)

where Gk is a nonlinear function of the state Xk and {Vk, k ∈ N} is the measurement noise sequence.

From the Bayesian perspective, the tracking problem is to calculate some degree of belief in the state Xk at time k, given the data Y1:k up to time k. Thus, the probability density function p(Xk|Y1:k) must be constructed. The prediction stage involves using the system model (2) to obtain the prior probability density function of the state at time k via the equation:

p(Xk|Y1:k−1) = ∫ p(Xk|Xk−1) p(Xk−1|Y1:k−1) dXk−1   (4)

The probabilistic model of the state evolution p(Xk|Xk−1) is defined by the system equation (2) and the known statistics of Wk−1.

To apply particle filtering to visual tracking, the first issue that needs to be addressed is how to represent the state and the likelihood models. In a previous work [16], the authors present a technique to track the vertebrae using particle filters with an image-gradient-based likelihood measurement; the state of a vertebra model at a time step k is given by ek = (xk, yk, θk), where xk and yk are the image coordinates of the contour centre and θk is the orientation relative to the centre of mass of the contour.

The likelihood measurement is based on the aggregation of intensity gradient information along each vertebra boundary. The gradient-based measurement φ is given by:

φ(xt^n) = Σ_{n=1..N} [ max_{i=1..M} {γ(i,n)} / (1 + ε Dn) ]   (5)

where N is the number of normal search line segments, γ(i,n) is the gradient at the i-th point/pixel along line n, Dn is the Euclidean distance between the point with the maximum gradient and the vertebra contour model, and ε = 0.1 is a weighting factor. The likelihood is then computed as:

p(yk|xk^n) = φ(xt^n) / Σ_{n=1..N} φ(xt^n)   (6)
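A generic sampling-importance-resampling (SIR) particle filter over the state (x, y, θ) can be sketched as follows; this is our minimal illustration, with a toy likelihood standing in for the gradient measure of equations (5)–(6), not the authors' exact implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    N_PARTICLES = 200

    def track_frame(particles, likelihood_fn):
        # One predict-weight-resample step over particles of state (x, y, theta).
        # Predict: random-walk dynamics with Gaussian process noise W_{k-1}.
        particles = particles + rng.normal(0.0, [2.0, 2.0, 0.02], particles.shape)
        # Weight: normalised likelihood, as in equation (6).
        weights = np.array([likelihood_fn(p) for p in particles])
        weights = weights / weights.sum()
        estimate = weights @ particles                      # weighted mean state
        # Resample: multinomial resampling according to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], estimate

    # Toy likelihood peaked at (100, 150, 0), standing in for the gradient measure.
    toy_likelihood = lambda s: np.exp(-np.sum((s - np.array([100.0, 150.0, 0.0])) ** 2) / 50.0)

    particles = rng.normal([100.0, 150.0, 0.0], [5.0, 5.0, 0.1], (N_PARTICLES, 3))
    for _ in range(5):                                      # five frames of tracking
        particles, estimate = track_frame(particles, toy_likelihood)
    print(estimate)                                         # estimated (x, y, theta)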

2.3. Data Adjustment

2.3.1. Previous results

The tracking technique described in the section above was evaluated on lumbar spinal videofluoroscopy sequences of the frontal view.

Using subjective evaluation, the tracking shows good results and all the vertebrae were correctly tracked throughout the sequences (Figure 2). To appraise the method objectively, the image coordinates and rotation angles of the contour centre of the second vertebra were recorded and plotted in Figures 3 and 4.

Figure 2. Tracking results for the frontal view with

lateral flexion to the right: frames 20 and 100


Figure 3. second vertebra trajectory original coordinates

Figure 4. second vertebra trajectory original rotation angles

As shown in Figure 3 and Figure 4, the vertebra trajectory is very jittery and does not give the correct tracking.

2.3.2. Results Improvement

To make the tracking trajectory appear more natural, the results obtained from the tracking process were smoothed using a general polynomial fit. The centre coordinates and rotation angles of the vertebra model are then adjusted accordingly.
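A minimal sketch of this smoothing with NumPy (the polynomial degree and the toy trajectory are our choices; the paper does not state the degree used):

    import numpy as np

    def smooth(series, degree=5):
        # Fit a single polynomial over all frames and evaluate it per frame.
        frames = np.arange(len(series))
        coeffs = np.polyfit(frames, series, degree)
        return np.polyval(coeffs, frames)

    # Toy jittery trajectory: a slow drift plus tracking noise, over 218 frames.
    raw_x = 100 + 0.1 * np.arange(218) + np.random.normal(0.0, 2.0, 218)
    smooth_x = smooth(raw_x)            # adjusted centre x-coordinates
    print(smooth_x[:5])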

3. RESULTS AND DISCUSSION

In this experiment, the performance of the whole system is evaluated. The system was run in the same way as described in Section 2.2. Firstly, the accuracy of the reported location of the vertebra was tested: one of the vertebrae in the video was tracked and its corners were extracted and reported throughout a video sequence of 218 frames. The test results are shown in Figure 3 and Figure 4.

The particle filter based parameters were then recorded and smoothed using the polynomial fitting function. The raw tracking results of the particle filter can be seen in Figures 3 and 4, while the improved results are shown in Figure 5 and Figure 6.

Figure 5. contour centre coordinates adjustment.

Figure 6. Rotation angles adjustment

4. CONCLUSION

In this paper, we focused on improving lumbar vertebrae tracking by using a curve fitting function to adjust the trajectory resulting from the particle filter tracking algorithm.

The proposed technique produces more natural and accurate tracking results. While this technique has been used here in the context of vertebrae tracking, we believe it can be useful whenever tracking results are jittery.

REFERENCES

[1] D. H. e. al., "The global burden of low back pain:

estimates from the Global Burden of Disease 2010

study.," Clinical and epidemiological research, 2014.

[2] L. Ruberté, R. Natarajan and G. Andersson, "Influence of single-level lumbar degenerative disc disease on the behavior of the adjacent segments – a finite element model study," J Biomech, p. 341–348, 2009.

[3] C. J. Barrett, k. P. Singer and R. day, "Assessment of

combined movements of the lumbar spine in

asymptomatic and low back pain subjects using a three-

dimensional electromagnetic tracking system," Manual

Therapy, no. 4, p. 94–99, 1999.

[4] S. Lee, S. Daffner, J. Wang, B. Davis, A. Alanay and J.

Kim, "The change of whole lumbar segmental motion

according to the mobility of degenerated disc in the

lower," Eur Spine J., p. 1893–1900, 2015.

[5] "Image analysis and visualisation," Working group 4 of

the Medical Imaging Technology Roadmap, June 2000.

[Online]. Available: http://strategis.ic.gc.ca/medimage.

[6] C. Holt, S. Evans, D. Dillon and A. Ahuja, "Three-

dimensional measurement of intervertebral kinematics in

vitro using optical motion analysis," Proceedings of the

Institution of Mechanical Engineers Part H Journal of

engineering in medicine, no. 219, p. 393–399, 2005.

[7] A. Breen, J. Muggleton and F. Mellor, "An objective

spinal motion imaging assessment (OSMIA): reliability,

accuracy and exposure data," BMC Musculoskeletal

Disorders, vol. 1, no. 7, 2006.

[8] K. Wong, K. Luk, J. Leong, S. Wong and K. Wong,

"Continuous dynamic spinal motion analysis," Spine, no.

31, p. 414–419, 2006.

[9] M. Goesele, T. Grosch, B. Preim, H. Theisel and K.

Toennies, "Segmentation of Vertebral Bodies in MR

Images," Vision, Modeling, and Visualization, 2012.

[10] J. Ma, L. Lu, Y. Zhan, X. Zhou, M. Salganicoff and A.

Krishnan, "Hierarchical segmentation and identification

of thoracic vertebra using learning-based edge detection

and coarse-to-fine deformable model," Medical Image

Computing and Computer-Assisted Intervention, no. 13,

p. 19–27, 2010.

[11] M. Cesarelli, P. Bifulco, T. Cerciello, M. Romano and L.

Paura, "X-ray fluoroscopy noise modeling for filter

design," Int J Comput Assist Radiol Surg, no. 8, p. 269–278, 2013.

[12] C. T, B. P, C. M and F. A, "A comparison of denoising

methods for X-ray fluoroscopic images," Biomed Signal

Process Control, no. 7, p. 550–559, 2012.

[13] B. Karasulu and S. Korukoglu, "Moving object detection

and tracking in videos," Performance Evaluation

Software, vol. XV, 2013.

[14] M. J. Patel and B. Bhatt, "A Comparative Study of

Object Tracking Techniques," International Journal of

Innovative Research in Science,Engineering and

Technology, vol. 4, no. 3, 2015.

[15] I. Haritaoglu, D. Harwood and L. David, "Real-time

surveillance of people and their activities," vol. 8, no. 22,

pp. 809-830, 2000.

[16] H. Nait-Charif, A. Breen and P. Thompson, "Vertebrae

Tracking in Lumbar Spinal Video-Fluoroscopy Using

Particle Filters with Semi-Automatic Initialisation,"

Advances in Visual Computing, vol. 2, pp. 61-70, 2012.

[17] C. Balkovec, J. H. Veldhuis, J. W. Baird, G. W.

Brodland and S. M. McGilla, "A videofluoroscopy-based

tracking algorithm for quantifying the time course of

human intervertebral displacements," Computer Methods

in Biomechanics and Biomedical Engineering,, vol. XX,

no. 7, pp. 794-802, 2017.

[18] W. WA and J. F, "Detection and correction of," J Biomech, no. 14, pp. 123-25, 1981.

[19] J. Cholewicki, S. M. McGill, R. P. Wells and H. Vernon,

"Method for measuring vertebral kinematics from

videofluoroscopy," Clinical Biomechanics, no. 6, pp. 73-

78, 1991.

