
Reactive Android Mobile Context Awareness Agent



Reactive Android Mobile Context Awareness Agent

by

Tyrone John Adams

211209139

Thesis submitted in partial fulfilment of the requirements forthe degree

Baccalaureus Technologiae : Information Technology

in the Faculty of Informatics and Design

at the Cape Peninsula University of Technology

Supervisor: Dr. K. Boniface

Co-supervisor: Mr F. Neto

Cape Town

(October 2014)

Contents

ABSTRACT

1 CLARIFICATION OF TERMS

2 INTRODUCTION

3 RESEARCH PROBLEM

3.1 Background to Research Problem
3.3 Statement of Research Problem
3.4 Aim of Study
3.5 Research Question, Sub-Questions and Objectives

4 LITERATURE REVIEW

5 RESEARCH METHODOLOGY

6 DESIGN SCIENCE RESEARCH

6.1 Problem identification and motivation
6.2 Research Rigor
6.2.1 Evaluation of end user license agreement in the use of the Android software development kit
6.2.2 Gathering and processing context aware information
6.2.3 Reactive computing
6.3 Design as a Search Process
6.4 Design as an Artefact
6.4.1 Characteristics of Artefact
6.5 Design Evaluation
6.6 Research Contributions
6.7 Communication of Research
6.7.1 Simulation of the artefact
6.7.2 Presentation of the artefact

7 LIMITATIONS

8 CONCLUSION

9 REFERENCES

APPENDIX A

Terms and Conditions for Android Developers
SDK License from Google
Use of the SDK by You
Using Android APIs

Abstract

Humans, unlike computers, have limited resources available to them; one of those limited resources is attention. The growth of technology, especially mobile technology, constantly competes for the attention of a device user. Mobile devices are intelligent and have many native features built in that give them their power and abilities. The research aims to investigate the construction and role of a mobile artefact that can assist users in managing the activities on their devices. The purpose of the research was to investigate how the natural abilities of a mobile device, such as gathering context information, can be combined with minimal input from the device user to make decisions. The research aimed to produce an artefact according to the design science methodology for information systems of von Alan et al. (2004). The artefact focused specifically on the Android platform. The content contribution for reactive computing came from the work of Parker (n.d.) and Santi, Guidi and Ricci (2011), which was used in combination to produce the artefact's contribution to mobile reactive computing.

1 Clarification of Terms

Terms and definitions:

SDK: Software Development Kit, a software library and framework provided by a software vendor or developer that is used to develop software for a specific software platform.

GPS: Global Positioning System, a satellite-based navigation system made up of a network of 24 satellites placed into orbit; it is able to calculate speed and direction.

API: Application Programming Interface, a set of services that are exposed to a developer for reuse.

2 Introduction

Context aware computing has played an important role in many of the new technologies used today. Technologies use context information to try to understand and learn something about the user or the device itself. The growth of mobile technologies like Android and Apple iOS has given application developers and device users the power of supercomputers in the palm of their hands. These technologies provide a good platform for application developers to build on. Even so, managing a mobile device during a working day can become cumbersome and tedious for a user. The platforms do provide features for users to manage their devices without third-party applications, but this is a manual task that needs to be performed too many times. According to Garlan et al. (2002), the attention span of a human is limited. Having a device proactively warn a user about an event happening on the device competes for the user's attention. Being disturbed during a task by a phone call or a message can be disruptive to a user's workflow. The mobile device has the potential to assist a user in managing their device by using the context-rich information it can gather.

3 Research Problem

3.1 Background to Research Problem

In the modern era, many mobile device users do not have the time and capacity to manage their mobile devices. According to Gill, Kamath and Gill (2012), smartphones have become a significant distraction in medical practice, whose activities involve decision-based and work-related tasks. A mobile device sending notifications during an important monthly board meeting can be distracting and unprofessional; yet if the user needs to attend to an emergency, not having a device that can react to this context can be critical. Wu et al. (2011) observed that mobile devices, like many other technological devices, inhibit aspects of the essential communication people require: verbal contact, eye contact and gestures are all hindered when a user is distracted by a mobile device.

Smartphones have advanced capabilities like motion, environmental and position sensors that can make a mobile device context aware. Context can be described as any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves (Abowd et al., 1999). The context aware mobile devices of today still rely on predefined instructions that are pre-programmed into the device.

For a mobile application to be reactive and context aware, it has to have content that it will be listening for. As users are always busy, certain context factors are taken into consideration: schedules, location, behaviour and time. The problem is that devices already listen for all of this context aware information but are not reactive enough to control how the device should behave in various situations. The busy scheduled working day of a user makes it difficult for them to manage a mobile device with all the content and context changes being received, with content ranging from phone calls, messages and emails to calendar notifications. A context change, in the case of a user, is when they move from a business presentation to lunch, or leave work to go home. These are changes in the environment and behaviour of the device which can be used to manage the user's mobile device. Currently users have to manage all the content being received manually, and the only reactivity provided by the mobile device, according to Android (2008), is broadcast receivers. These are components of the operating system that allow an application to initiate a broadcast to other applications by providing a notification that something has happened. However, they do little more than provide a gateway to other components.

3.3 Statement of Research Problem

Despite the rapid growth and adoption of mobile technology, little effort has been made to construct and evaluate the role of a context aware mobile agent that can reactively handle the information received from the various context aware factors, i.e. speed (motion, environment), location (GPS), schedule (calendar) and time, in order to provide assistance and management of a user's mobile device within the working environment by handling device activities.

3.4 Aim of Study

The aim of the research is to produce a mobile agent that is able to reactively handle the activities of an Android device. The research aims to demonstrate the relevance of the mobile agent in minimizing user interactions with the mobile device while performing tasks.

3.5 Research Question, Sub-Questions and Objectives

Research Question: How can a mobile agent be created that is context aware and reactive to the mobile device activities?
Research Method: Simulation in a controlled environment
Objective: Prove the efficiency of the artefact

Research Sub-question 1: Which contextual information can be captured by the mobile agent?
Research Method: Literature review
Objective: Identify which context information is relevant to the mobile agent

Research Sub-question 2: How can contextual information be captured by the mobile agent?
Research Method: Literature review and experiment (simulation)
Objective: Establish how contextual information will be captured by the mobile agent

Research Sub-question 3: How can mobile device activities be managed without violating user privacy and the manufacturer's Terms and Agreements?
Research Method: Literature review
Objective: Ensure the artefact does not violate any agreements provided by the Android Operating System

4 Literature Review

The following section presents an evaluation of the relevant points of context awareness and previous work done. The review also looks at reactivity and how it can be combined with context awareness to build smarter and better mobile applications.

Context aware applications are nothing new to the arena of software engineering; however, there has long been debate about what context aware information is and how it is gathered from users. Schilit, Adams and Want (1994) agree that forcing users to explicitly provide context aware information is counterproductive and against the meaning of being context aware. They also agree that when developing context aware applications one needs to define context types and a context taxonomy of features. According to Ryan, Pascoe and Morse (1998), the context types are an entity's location, environment, identity and time. Schilit, Adams and Want (1994) define context types according to the user himself, namely where you are, who you are with and what resources are nearby. Extrapolating from these deductions of what context aware information should be, it is inferred that context types refer to personal characteristics that are gathered about an entity, which can be either a user or a device. Application developers are in control of deciding what types of context aware information should be gathered and in what manner. In the development of a context aware mobile agent it is imperative to understand the type of context information that needs to be gathered, as the mobile agent relies on context aware information to make decisions. Gathering context aware information leads to better personalised information that truly reflects the situation, is rich, and allows greater reactions and features to be built.

Context aware computing according to Schilit, Adams and Want (1994) is software that adapts to its context aware information types; from this we can identify that context aware computing is not possible without context aware information. The features of context aware computing have been categorized by Schilit, Adams and Want (1994) into a group of distinct categories. The features mentioned are presentation of information and services to the user, automatic execution of a service, and tagging of context to information for later retrieval. These categories were expanded and further defined by Ryan, Pascoe and Morse (1998), where the features are described as contextual sensing, the ability to detect context information; contextual adaptation, the ability to modify a service automatically based on current context; contextual resource discovery, the ability for context aware applications to locate and exploit resources and services relevant to the user's context; and lastly contextual augmentation, the ability to associate digital data with the user's context. Derived from the definition of context aware computing, it is evident that devices or entities that are context aware need to perform operations on the context information received. To perform operations on context aware information, the entities need to be reactive to the context information.

Santi, Guidi and Ricci (2011) built a reactive framework known as Jaca. Jaca implements agents as a reactive planning system, meaning the agents run concurrently with each other, waiting for events and context to change and then executing the reactive plans specified by the developer (Santi, Guidi and Ricci, 2011). The plans specified by the developer are the actions that are executed to achieve the desired goals. The Jaca framework, however, is built on top of the Android framework, so its environment and context aware behaviour are defined by the Android environment and abilities. The Jaca framework represents reactivity in the form of a set of predefined actions performed by the agents based on the context information provided. According to Dantec (2013), reactivity in applications is the semantics presented in the construction of the application. Compared to the Jaca framework, whose approach is to define plans in order to be reactive, Dantec (2013) states that reactivity is the principal make-up of an application. The core principles are categorised as follows. The application needs to be event driven, meaning the system should be built on the consumption and production of events, which leads to the application being non-blocking and asynchronous. The application needs to be scalable, meaning it must be able to expand according to its usage; scalability works in parallel with event driven behaviour, as that behaviour provides the foundation (Dantec, 2013). The application needs to be resilient, meaning it must be able to recover quickly from failure. Lastly, the application needs to be responsive, meaning it must respond in a timely manner, with low latency between the actions of the user and the response provided by the system.

Salvaneschi and Mezini (2013) agree that reactive applications need to be event driven, but believe that the reactive components need to be separated from the code responsible for keeping them updated. They describe reactivity in object oriented applications as difficult because, for non-trivial cases, the dependencies are not explicitly stated, so designing the application to be event driven based on the event inputs can be highly error prone. Application developers can also overlook certain dependencies, which means that components reliant on those dependencies cannot perform the needed operations. Salvaneschi and Mezini (2013) feel there is a need for applications to take on different strategies for reactivity, as even keeping a simple design for an application has a cost in performance. The implementation of a reactive application can be quite complex for an application developer to grasp, as demonstrated by the authors above; according to Parker (n.d.), a solution to the complexity is to use a functional reactive programming language. A functional reactive language relies at its core on the concepts of event streams and behaviours (Parker, n.d.). Event streams are discretely defined with respect to time and are used to represent certain events. Behaviours are continuously defined and model values like the current time. The use of functional reactive programming led the authors to produce Functional Reactive Android, a functional reactive Android library built to handle the propagation of events behind the scenes. Functional Reactive Android is, however, only a proof of concept and has not been tested in real world applications.
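The event-stream idea can be illustrated with a small hand-rolled class in plain Java. This is only a sketch of the concept, not the API of the Functional Reactive Android library; the class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// A minimal event stream: events are pushed to subscribers, and derived
// streams (via map) propagate change automatically, in the spirit of the
// functional reactive style described above.
class EventStream<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    // Push a new event to every subscriber.
    void emit(T value) {
        for (Consumer<T> s : subscribers) s.accept(value);
    }

    void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    // Derive a new stream whose events are transformed versions of this
    // stream's events; emitting here propagates into the derived stream.
    <R> EventStream<R> map(Function<T, R> f) {
        EventStream<R> out = new EventStream<>();
        subscribe(v -> out.emit(f.apply(v)));
        return out;
    }
}
```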

This paper presents the development of a mobile agent that is reactive towards the activities on a mobile device. The development of the mobile agent aims to generate quality data to represent context awareness and to demonstrate whether the mobile agent is reactive.

5 Research Methodology

According to von Alan et al. (2004), research in information systems can be divided into two paradigms, namely design science and behavioural science. Behavioural science is used to discover, develop and verify theories that explain or predict human or organizational behaviour. Design science, according to von Alan et al. (2004), is a paradigm that seeks to extend the boundaries of human and organizational capabilities. To accomplish this, the creation of new and innovative artefacts is pursued. The design science process allows for the simulation of multiple alternatives and for adaptive, process-oriented research. The outcome is an innovative artefact that contributes to solving an organizational problem. Design science was chosen over behavioural science because of the problem described in section 3: the problem identified requires the creation of an innovative artefact.

The author investigates what context information can be gathered on a mobile device and how. The context information gathered and the reactive principles defined by the authors above will be used to build a context aware and reactive agent for a mobile device. The design will be based on the work of Parker (n.d.) and Santi, Guidi and Ricci (2011). Both define their own context types, and the authors of the Jaca framework allow application developers to explicitly define the reactivity plans. The mobile application to be developed will be a variation of this work and will provide a proof of concept for a reactive and context aware mobile application.

6 Design Science Research

6.1 Problem identification and motivation

This phase of the research is used to identify the problem and motivate why the research is needed. The motivation behind the research into developing a context aware and reactive application was identified through the literature review performed.

According to Garlan et al. (2002), researchers have found that devices are continuously competing for the attention of their users. Despite the increase in computational power and the ability to have a vast knowledge and social base in the palm of the user's hand, human attention is still a limited resource. The ability to proactively assist a user with a task and provide them with options requires an assessment of the context in which the device is used. For example, when a staff member has regular executive meetings in a specific location and receives an unimportant message, the user's device should have the capability not to interrupt the user, based on the context information gathered about the user. The context information gathered, combined with a reactive plan, will be able to assist a user in handling tasks on the device.

Fortunately, the computing power of today's mobile devices provides the capability to gather context information at relatively low cost. Despite the context being gathered, the user still has to act on behalf of the device every time an event occurs. The investigation is motivated to provide a proof of concept application.

6.2 Research Rigor

The process of designing the artefact relied upon rigorous planning and construction iterations as specified in the process model provided by Gobel and Cronholm (2012). The design science research methodology was chosen for two reasons: 1) the creation of an artefact demonstrates the validity of the research, and 2) the iterative process of literature research and artefact design continuously enhances the artefact.

The design and plans for the artefact were derived from literature in existing knowledge bases. The holistic view of the design comprised smaller subdivisions that were put together to assist in the construction of the artefact. The design plans were split into two main activities for each subdivision, namely literature research and design of the component. The subdivisions that form part of the intimately connected design are the evaluation of end user license agreements in the use of the Android software development kit, what context aware information should be gathered and how, and reactive computing.

6.2.1 Evaluation of end user license agreement in the use of the Android software development kit

The research identified that constructing the artefact required the use of a mobile platform able to support the needs of the artefact. The platform selected was the Android platform, for two reasons: 1) the learning curve is not as steep, and 2) extensive documentation and many tutorials are available. The research identified that when building on the platform it is imperative to understand the terms and agreements governing the use of the Android Software Development Kit (SDK), because of the nature of the artefact: it will be gathering sensitive information about the user and using it to make informed decisions. During the evaluation of the terms and agreements, the research identified a few key points that need to be considered during the design. These are extracted as is from the Android SDK Terms and Agreements.

The first part of the design process was to understand the limitations of the Android platform the research was going to work with. In the process of understanding the platform, the following points were highlighted from the many provided, as they were the most relevant to the development of the artefact.

The Terms and Agreements can be found in Appendix A. They contain key points that affect the design of the artefact, as follows:

1. Adapted from rules 3.1, 3.2, 3.3, and 3.6: the binding contract between the application developer and Google in relation to the usage of the SDK. The SDK may be used to develop applications for the Android platform only. The artefact should in no way modify the SDK, as this is explicitly banned and all changes made would be owned by Google.

2. Adapted from the rules in section 4: the developer is not allowed to build software that in any way infringes on third party applications. The artefact needs to ensure that information retrieved is not copyright protected and is allowed to be accessed.

3. Adapted from the rules in section 8: when building an application using the SDK, it is the developer's duty to protect all personal information requested, and the user needs to be made aware of the information being used. The nature of the artefact is to gather context information, which is sensitive information. The artefact needs to ensure none of the information gets leaked or exposed.

6.2.2 Gathering and processing context aware information

Context aware information, according to Schilit, Adams and Want (1994), is any information that can be used to describe the personal characteristics of a user or entity. The context types defined by Schilit, Adams and Want (1994) are the who's, the what's and the where's. The context information described by the authors was grouped into a specific taxonomy that defines the context gathered for the research. The context taxonomy agreed upon was the specific context that could be used to describe the mobile user's situation. The context types are the user's schedule (what), location (where), time (when) and motion or behaviour. How the context information was gathered required an analysis of the Android Software Development Kit.

The construction process for context information gathering was broken down into sub-activities based on each context type. The first context type, the user's schedule (what), describes what the user should be doing at the current moment in time. The Android operating system provides an application programming interface (API), namely content providers, which is used to access a central repository of data (Android). The central repository contains details about the user account that was used to register the device; these details are also used by many other services and applications on the device. The data the research is interested in is the user's calendar information, which is accessed through an API that allows all events and meetings the user has set up for the day to be gathered. However, according to Appendix A section 8, as mentioned before, the user has to be notified. Each Android application makes use of a manifest file, an extensible markup language (XML) file in which the application developer declares what data will be gathered during the use of the application. The calendar information can be queried through the API provided by Android, as in the following example:

Listing 1: Calendar context gathering
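The code for Listing 1 did not survive into this copy. A minimal sketch of such a query, assuming the standard CalendarContract content provider is used (the projection and selection shown are illustrative, not the thesis's actual code):

```java
// Query calendar events that are in progress right now via the
// CalendarContract content provider. READ_CALENDAR must be declared
// in the application manifest, as required by Appendix A section 8.
String[] projection = {
        CalendarContract.Events.TITLE,
        CalendarContract.Events.DTSTART,
        CalendarContract.Events.DTEND
};
long now = System.currentTimeMillis();
String selection = CalendarContract.Events.DTSTART + " <= ? AND "
        + CalendarContract.Events.DTEND + " >= ?";
String[] selectionArgs = { String.valueOf(now), String.valueOf(now) };

Cursor cursor = getContentResolver().query(
        CalendarContract.Events.CONTENT_URI,
        projection, selection, selectionArgs, null);
try {
    while (cursor != null && cursor.moveToNext()) {
        String title = cursor.getString(0);
        // The event feeds the schedule (what) context type.
    }
} finally {
    if (cursor != null) cursor.close();
}
```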

The second context type defined is location (where). The Android SDK provides application developers with three techniques to gather location based information. The first and most reliable technique uses the global positioning system (GPS). The Android operating system has had GPS support built in from its first build. GPS allows the device to capture location information to within 10 metre accuracy, and it also allows speed to be obtained using specific calculations. The second technique is the use of cell towers and Wi-Fi access points: the provider uses a network lookup to find the closest base station or Wi-Fi access point and retrieves the location. The third technique is the use of a passive provider, which makes use of location updates provided by other services and applications running on the device. The passive provider, however, is dependent on third party applications and services and was not considered.

The research made use of both the GPS and the network provider to ensure the most accurate results are obtained. The following code demonstrates how the request for location based information is performed:

Listing 2: Location context gathering
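The code for Listing 2 is likewise missing. A sketch of such a request, assuming the standard android.location.LocationManager API (the update interval and minimum distance are illustrative; ACCESS_FINE_LOCATION must be declared in the manifest):

```java
LocationManager manager =
        (LocationManager) getSystemService(Context.LOCATION_SERVICE);

LocationListener listener = new LocationListener() {
    @Override
    public void onLocationChanged(Location location) {
        // getLatitude()/getLongitude() feed the location (where) context
        // type; getSpeed() feeds the motion context type.
    }
    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
};

// Prefer GPS when it is enabled; otherwise fall back to the network
// provider, as described in the text below the listing.
String provider = manager.isProviderEnabled(LocationManager.GPS_PROVIDER)
        ? LocationManager.GPS_PROVIDER
        : LocationManager.NETWORK_PROVIDER;
manager.requestLocationUpdates(provider, 2 * 60 * 1000, 10, listener);
```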

The algorithm above ensures that when both a network provider and a GPS provider are available, the GPS provider is used, because it is guaranteed to give better accuracy than the network provider.

The third context type defined is time (when). Time is a feature built into the previous two context types: the what and where context types both have a time factor associated with them that indicates when the context values were last updated by the device. The schedule context ensured that only the most relevant calendar events at the time were considered, and not any future events.

The following algorithm demonstrates how time was considered in the schedule context:

Listing 3: Time component for schedule context
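The code for Listing 3 is not preserved. The time filter it describes can be sketched in plain Java as follows; Event is a hypothetical holder for a calendar row, not a class from the thesis:

```java
import java.util.ArrayList;
import java.util.List;

// Keep only the calendar events that are in progress at the query time,
// discarding past and future entries, as described for the schedule
// context. Timestamps are in milliseconds since the epoch.
class ScheduleFilter {
    static class Event {
        final String title;
        final long startMillis, endMillis;
        Event(String title, long start, long end) {
            this.title = title;
            this.startMillis = start;
            this.endMillis = end;
        }
    }

    // Returns the events whose interval contains 'now'.
    static List<Event> currentEvents(List<Event> events, long now) {
        List<Event> current = new ArrayList<>();
        for (Event e : events) {
            if (e.startMillis <= now && now <= e.endMillis) {
                current.add(e);
            }
        }
        return current;
    }
}
```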

The location based context has a specific feature built in which allows us to determine when a new location value was last generated. To determine how recently the location was updated, an extra parameter was requested on the location object: the elapsed time in nanoseconds. To convert this to minutes, a more understandable value, the following calculation was performed:

Location last update = elapsed time / 1000 / 1000 / 1000 / 60

Listing 4: Time component for location context
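The conversion in Listing 4 can be written as a small self-contained helper; dividing by 1000 three times converts nanoseconds to seconds, and the final division by 60 yields minutes:

```java
// Convert an elapsed time in nanoseconds to minutes, mirroring the
// calculation in Listing 4.
class ElapsedTime {
    static double nanosToMinutes(long elapsedNanos) {
        return elapsedNanos / 1000.0 / 1000.0 / 1000.0 / 60.0;
    }
}
```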

The last context type defined was the behaviour of the device, which allows us to determine where the device is in relation to the user. To determine this, the proximity sensor supported by the Android operating system is used. The proximity sensor returns a binary value of near or far, which can indicate whether the device is located in the user's pocket or bag, or near the user's head. The values returned are device specific, but the minimum range reported is 3 cm.

The taxonomy of context types was combined to form the context description of the device. According to Android, gathering data from the various APIs can be expensive for the user's battery. The Android documentation recommends gathering data every two minutes, as a user is constantly moving and the context data will always be updating.
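The two-minute gathering cycle can be sketched with a scheduled executor; ContextPoller is an illustrative name, not the thesis's actual class, and the gathering task itself (reading sensors, calendar and location) is passed in as a Runnable:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Run a context-gathering task on a fixed schedule (e.g. every two
// minutes, as recommended above), until stopped.
class ContextPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void start(Runnable gatherContext, long periodMillis) {
        scheduler.scheduleAtFixedRate(
                gatherContext, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

A two-minute cycle would be started with `poller.start(task, 2 * 60 * 1000)`.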

6.2.3 Reactive computing

Reactive computing, according to the Parker (n.d.) framework, is based on event streams and behaviours derived from those event streams. According to Parker (n.d.), making use of event streams allows an application to be oriented around data flows. Using data flows allows for propagation of change, which in turn allows autonomous behavioural adjustments to be made in an application. Santi, Guidi and Ricci (2011) built a framework that allows dynamic change of an application's behaviour during runtime. The reactive work of Parker (n.d.) and Santi, Guidi and Ricci (2011) was used as the basis of the reactive component. The behaviour of a reactive application can be illustrated with the example of an Excel spreadsheet:

a := b + c
when b changes, a changes
when c changes, a changes

Listing 5: Behaviour of reactive components, Excel spreadsheet

The reactive component of the research makes use of a very popular design pattern, namely the observer pattern. The observer pattern has an observable and observers. The observable in this research is the context observable, whose duty is to gather context information on a timed basis. The observable performs its duties autonomously every two minutes. Every time the observable updates its context information, it notifies each observer that there was a change in context. The observers, in the case of this research, are known as the event agents, which are divided into the call agent and the message agent. The agents receive input from two places. Initially, the agents receive information from the mobile user: the user creates a static plan that is passed to the agents upon initialisation, and these initialisation plans are continuously updated during the runtime of the application. Each agent takes the context information received from the context observable, together with the static plans initially provided, and uses them as the basis of its decision making.
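A minimal sketch of this observer arrangement in plain Java; the class and method names are illustrative, not the thesis's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// The observer pattern as used here: a context observable notifies
// registered event agents whenever the gathered context changes.
class ContextObservable {
    interface ContextObserver {
        void onContextChanged(String context);
    }

    private final List<ContextObserver> agents = new ArrayList<>();

    // Event agents (e.g. the call agent and message agent) register here.
    void register(ContextObserver agent) {
        agents.add(agent);
    }

    // Called after each gathering cycle (every two minutes); pushes the
    // new context description to every registered agent.
    void updateContext(String newContext) {
        for (ContextObserver agent : agents) {
            agent.onContextChanged(newContext);
        }
    }
}
```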

The third and last component making up the reactivity of the research is the ability of the application to autonomously listen to mobile device events. The events listened for on the device are those handled by the mobile agents that were defined, namely phone calls and messages. The Android SDK allows the use of broadcast receivers to listen to events on the device. Broadcast receivers can be used in two ways. The first technique is to declare them in the manifest file previously mentioned; this approach, however, is not practical for the application, as it means the application always listens to the events even when it is not being used. The second approach is to dynamically register and unregister the broadcast receivers as the application makes use of them. The second approach was preferred, as it allowed for reduced power consumption and fewer inaccuracies in the data. The combination of all three components makes up the reactive component of the research: the autonomous context collection of the context observable, the autonomous updating of the reactive plans provided, and the ability to autonomously listen to device events are what allow the application to be reactive.
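The preferred second approach, dynamic registration, can be sketched as follows, assuming the standard BroadcastReceiver API; the phone-state action shown corresponds to the call agent's events:

```java
// Inside an Activity or Service. The receiver is registered only while
// the agent is active and unregistered afterwards, so the application
// does not listen for events when it is not in use.
BroadcastReceiver callReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Hand the incoming-call event to the call agent.
    }
};

IntentFilter filter = new IntentFilter("android.intent.action.PHONE_STATE");

// While the agent is active:
registerReceiver(callReceiver, filter);

// When the agent no longer needs the events:
unregisterReceiver(callReceiver);
```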

6.3 Design as a Search Process

According to von Alan et al. (2004), this section follows the iterative process used to produce an effective solution to the problem. Several iterations are performed until the artefact works for a specified class of problems (Gobel and Cronholm, 2012).

Through the search process, two specific technologies were found that aimed to address the issue presented. The researchers behind the JaCa framework (Santi, Guidi and Ricci, 2011) and Functional Reactive Android (Parker, n.d.) both created artefacts to demonstrate context awareness and reactivity. The frameworks were thoroughly inspected and tested to observe whether they were able to solve the given class of problems.

The JaCa framework was developed as a set of agents which work and cooperate in the same environment. JaCa is an extension of the AgentSpeak framework, an agent-oriented programming language built on the belief-desire-intention model. The application developer programs a set of agents that are implemented as reactive plans. The reactive plans run continuously, reacting to events; the plans are courses of action that the agents commit to. The events on the device are perceived as changes in the environment. The JaCa framework uses artefacts to define the structure and behaviour of the environment. The artefacts are representations of the Android environment, which provide resources for the agents to instantiate to complete the desired actions. The framework had the correct components to assist in solving the problem mentioned. The artefacts and CArtAgO environment would allow the device to be context aware. The agent plans that were implemented would allow the device to be reactive and perform actions based on changes in the environment.

The use of the JaCa framework proved to be fairly complex. The framework's documentation was incomplete and could not assist with the more advanced topics, such as gathering more context information than the framework provided (for example, schedule information). The framework also had compilation issues that were too time consuming to correct, as it was built for much older Android devices and an older version of Java that was no longer supported.

The second framework examined was the Functional Reactive Android framework. The main concepts behind Functional Reactive Android are Event-streams and Behaviours. The motivation for its creation is the inherent complexity of graphical user interface (GUI) programming. To perform GUI-based programming, the application developer needs a very good understanding of the event model and of how every component interacts. A GUI has a set of call-backs that are issued every time an action occurs; in a GUI, actions occur when buttons are clicked or when an input receives text. A call-back is invoked once the action occurs and contains the code to be run. Event-streams and Behaviours have a set of dependencies that are updated each time the value of either object changes.
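The Event-stream concept can be sketched in a few lines of Java. This is a minimal illustration of the idea under the assumption that a stream simply pushes values to its subscribers; it is not the Functional Reactive Android (Froid) API, whose implementation was not available.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Minimal event-stream: subscribers are plain callbacks, and map()
// builds a derived stream whose values update whenever the source fires.
class EventStream<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    void subscribe(Consumer<T> c) { subscribers.add(c); }

    void emit(T value) {
        for (Consumer<T> c : subscribers) c.accept(value);
    }

    <R> EventStream<R> map(Function<T, R> f) {
        EventStream<R> out = new EventStream<>();
        this.subscribe(v -> out.emit(f.apply(v)));
        return out;
    }
}
```

A Behaviour can then be modelled as the latest value carried by a stream; derived streams built with map() express the dependency chains that replace hand-written call-backs.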

The use of Functional Reactive Android in the artefact, however, could not be evaluated in a practical situation. The reason for not using the library is that it was proprietary and not open for use. Functional Reactive Android was also not context aware, which limited its use. The researcher only released a specification of what Functional Reactive Android could do, but no guides on how to use it. Reaching out to the researcher was also unsuccessful, as the project was completed and no further work was being done on it.

The lack of support for the two frameworks led to the implementation of a custom framework, built from a combination of the two. The custom framework made use of the reactive planning and the autonomous, asynchronous nature of the JaCa framework, as well as the concept of Event-streams from the Functional Reactive Android framework. The combination of the two frameworks' concepts allowed the research to add and remove the components needed to assist the artefact in solving the class of problems specified. The custom framework would be implemented on an Android device with the latest platform installed.

6.4 Design as an Artefact

According to Orlikowski and Iacono (2001), the IT artefact is the core subject matter of the information systems field. Von Alan et al. (2004) suggest that the result of design science research is a purposeful IT artefact that solves a specific business problem. The IT artefact produced here is broader than just an instantiation: it includes the constructs and methods applied in its development (von Alan et al., 2004).

6.4.1 Characteristics of Artefact

The system, as mentioned in section 5.2, makes use of the Android SDK. Using the Android SDK in accordance with its terms and agreements allowed for the use of the Android APIs.

The artefact consists of four major components that allow it to perform its duties:

1. Context Observable (Publisher)

2. Event Agents (Subscribers)

3. Event Listeners

4. Event Plans

The application's main entry point for starting the service is a fragment; in Android, a fragment is a modular class that represents a specific component of the user interface. The Agent Start fragment is what starts the context observable, registers the event plans specified by the user, and registers the event listeners. The Agent Start fragment is also in control of stopping each of the services mentioned above.

The Context Observable is known as a publisher. The duty of the publisher is to gather context information in an autonomous manner and to notify the subscribers that the context was updated. The publisher gathers context information every two minutes from various sensors and databases provided on the device. The context information gathered by the publisher is location information (using GPS or network providers), schedule information (using the calendar database on the device) and behaviour information (using sensors). The context information gathered is stored as one aggregate object which is provided to each of the subscribers. The context observable's data is only relevant for as long as the agent is running. Once the agent is deactivated the context information is removed; this is due to the terms and agreements mentioned in section 5.2.1.

The subscribers mentioned in the previous paragraph are known as the event agents. The subscribers subscribe to the publisher (context observable). The subscription allows the agents to be notified each time the context changes. The event agents are divided into a call agent and a message agent. The duty of the agents is to evaluate whether the user of the device has created any plans. The plans are created through a user interface that requests the relevant information for each plan. The types of plans that exist are time, location, motion and calendar event plans. The event agents are notified every two minutes by the publisher (context observable) that there is new context information. The event agents compare the new context information to the event plans set up by the user. If the context information matches any of the event plans, the agents set that plan as the current plan to execute.
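The plan-matching step described above can be sketched as follows, using the time-plan inputs from the simulation tables. The class names and the default "Silence" action are illustrative assumptions, not the artefact's actual code.

```java
import java.time.LocalTime;

// Illustrative sketch of how an event agent might select a time plan:
// the plan becomes current only while the clock time in the context
// falls inside the plan's window.
class TimePlan {
    final LocalTime start, end;
    final String action, message;

    TimePlan(LocalTime start, LocalTime end, String action, String message) {
        this.start = start;
        this.end = end;
        this.action = action;
        this.message = message;
    }

    boolean matches(LocalTime now) {
        return !now.isBefore(start) && !now.isAfter(end);
    }
}

class CallAgent {
    private final TimePlan plan;
    private final String defaultAction = "Silence"; // assumed default

    CallAgent(TimePlan plan) { this.plan = plan; }

    // Called each time the context observable pushes new context.
    String currentAction(LocalTime now) {
        return plan.matches(now) ? plan.action : defaultAction;
    }
}
```

Inside the time frame the agent commits to the user's plan; outside it, the default plan applies, matching the behaviour recorded in the time-plan simulation.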

The event listeners make use of the broadcast receiver API in the Android SDK. Broadcast receivers allow applications to listen for special broadcasts that are sent by applications running on the device. For example, when a phone call comes through on the device, a system-level application in the Android operating system sends a broadcast to the applications running on the application layer. All applications that listen to the specific broadcast get notified, and it is their duty to handle the event as they see fit. The artefact has two broadcast receivers that are registered. The first is a phone state listener, which listens for changes in the phone state; the states can be off hook, idle or ringing. The second broadcast receiver registered is for the SMS received action; this receiver is notified each time an SMS is sent to the device, and the SMS is handled accordingly. The event listeners are where the device activities are handled, using the plans chosen by the event agents.
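The dispatch logic inside such a receiver can be sketched in plain Java. In the real artefact this would live in a class extending Android's BroadcastReceiver, with the state delivered via TelephonyManager extras; here only the dispatch decision is shown, and the returned action strings are illustrative assumptions.

```java
// Illustrative sketch of phone-state handling inside a receiver.
// The state strings mirror Android's TelephonyManager extras
// (EXTRA_STATE_IDLE, EXTRA_STATE_OFFHOOK, EXTRA_STATE_RINGING).
class PhoneStateHandler {
    // Returns the action the agent should take for a given phone state,
    // using whichever plan the event agent marked as current.
    String handle(String state, String currentPlanAction) {
        switch (state) {
            case "RINGING":
                return currentPlanAction; // e.g. "End Call" or "Silence"
            case "OFFHOOK":
                return "None";            // call already answered
            case "IDLE":
            default:
                return "None";            // nothing to handle
        }
    }
}
```

Only the RINGING state triggers the current plan; the other states are ignored, so the receiver acts solely at the moment a new activity reaches the device.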

6.5 Design Evaluation

This section provides an evaluation of how well the artefact solves the problem mentioned in section 3 of the research paper. The artefact was observed and measured during a rigorous simulation to ensure it performs the desired task as required. Comparing the results to the desired objectives determines whether the artefact effectively handled the user's device activities by reading context information and reactively acting on it.

The artefact's testing was separated into call agent testing and message agent testing. The first group of tables presents the simulations specifically for the call agent. The simulations record exactly what was being tested, what information was fed into the system to test it, and what the result was.

Call Agent Testing

Scenario Testing: Time Plans

What is being tested? The agent has a call plan set up for a specific time frame.

How is it tested? The call agent was provided with the necessary input and the scenario was simulated to observe the output.

What is the input? Time Start: 21:22; Time End: 21:25; Action: End Call; Message: Busy Testing Agent

What is the result? The call was successfully ended during the specified time frame. Once the call was ended, the expected message was returned. Once the time frame was over, the default plan was restored.

How many times was it tested? 5

How many times successful? 5

How many times failed? 0

Call Agent Testing

Scenario Testing: Location Plans

What is being tested? The agent has a call plan set up for a specific location's latitude and longitude.

How is it tested? The call agent was provided with the necessary input and the scenario was simulated to observe the output.

What is the input? Latitude: -33.925645; Longitude: 18.703452; Action: End Call; Message: Busy Testing Agent At Home

What is the result? The call was successfully ended at the specified location. However, the test demonstrated that the capturing of location context was not consistent; the accuracy of the data captured varied tremendously, and the test only succeeded twice.

How many times was it tested? 5

How many times successful? 2

How many times failed? 3

Call Agent Testing

Scenario Testing: Calendar Plans

What is being tested? The agent has a call plan set up for a specific calendar event.

How is it tested? The call agent was provided with the necessary input and the scenario was simulated to observe the output.

What is the input? Action: Urgently Notify User

What is the result? Before the calendar event was active, the call was handled by the default plan, which silenced the call. During the event the agent received 5 phone calls from the user; each call was ended and the message "I am currently in a meeting, will contact once complete" was returned.

How many times was it tested? 5

How many times successful? 5

How many times failed? 0

Listing 6: Call Agent Simulation Results

The results presented in Listing 6 are a summary of the simulation results captured during the evaluation of the artefact. The listing demonstrates how efficiently and accurately the call agent was able to handle the device activities.

Message Agent Testing

Scenario Testing: Time Plans

What is being tested? The agent has a message plan set up for a specific time frame.

How is it tested? The message agent was provided with the necessary input and incoming messages were simulated to observe the output.

What is the input? Time Start: 21:22; Time End: 21:25; Action: Silence; Message: Busy Testing Agent

What is the result? The message agent received five messages from a specific number. Each SMS received got a reply from the agent with the custom message added.

How many times was it tested? 5

How many times successful? 5

How many times failed? 0

Message Agent Testing

Scenario Testing: Location Plans

What is being tested? The agent has a message plan set up for a specific location's latitude and longitude.

How is it tested? The message agent was provided with the necessary input and incoming messages were simulated to observe the output.

What is the input? Latitude: -33.925645; Longitude: 18.703452; Action: End Call; Message: Busy Testing Agent At Home

What is the result? The message plan could not be triggered reliably at the specified location; the location context gathered was too inaccurate, and all five attempts failed.

How many times was it tested? 5

How many times successful? 0

How many times failed? 5

Message Agent Testing

Scenario Testing: Calendar Plans

What is being tested? The agent has a message plan set up for a specific calendar event.

How is it tested? The message agent was provided with the necessary input and incoming messages were simulated to observe the output.

What is the input? Action: Urgently Notify User

What is the result? Before the calendar event was active, the message was handled by the default plan, which silenced the message. During the event the agent received 5 messages from the user; each message was silenced and the reply "I am currently in a meeting, will contact once complete" was returned.

How many times was it tested? 5

How many times successful? 5

How many times failed? 0

Listing 7: Message Agent Simulation Results

The results in Listing 7 represent the simulation results achieved by the message agent. The message agent once again demonstrated how inaccurate the location-based context gathering was. The rest of the context gathered and used to handle the device activities was much more accurate. The simulation demonstrated that the artefact was able to handle the device activities according to the context information gathered.

6.6 Research Contributions

According to von Alan et al. (2004), research contributions may extend or contribute to existing knowledge bases. The contribution is key to selling the research to the research audience (Gobel and Cronholm, 2012). The contribution made here is the research artefact itself. The contributions made by the artefact are, firstly, how to gather context information with the Android Software Development Kit, and secondly, how to reactively handle the events of the mobile device's activities based on the context information gathered. The system was shown to be able to solve the problems mentioned in section 3.

6.7 Communication of Research

The communication of the research will be presented from both technical and theoretical perspectives to accommodate the various audiences.

6.7.1 Simulation of the artefact

The simulation of the artefact contained three main components:

1. Android Mobile Device Running the Agent

2. Android Mobile Device Simulating Events on the First Device

3. Airtime to be Used on the First Device to Allow Feedback

The device mentioned in point 1 runs the Android operating system, namely the Jelly Bean version. Android applications are written in the Java language, and each application runs in its own virtual machine similar to the Java virtual machine.

The results presented in section 5.5 demonstrate high accuracy, with the exception of the location-based plans, regarding the agent's ability to handle the activities of the mobile device using context-aware information. The artefact was tested rigorously to demonstrate its capabilities.

6.7.2 Presentation of the artefact

The artefact will be presented on the 24th of October at the Cape Peninsula University of Technology in front of a panel of researchers and industry experts.

7 Limitations

The artefact testing relied on the purchase of airtime because of the responses sent by the agent; this limited the amount of testing for each plan. The artefact only had two agents, for calls and messages, which do not cover all the events on the device.

8 Conclusion

The research aimed to produce an artefact that would gather context-aware information and reactively handle the activities on the device. A mobile device that could limit the number of distractions produced and minimise the number of interactions required would allow the user to be more productive. As stated by Garlan et al. (2002), despite the advancements in technology, human attention remains a constrained resource.

The artefact produced in the research demonstrated three main components that were able to assist in handling the device activities with less interaction from the user. The first component was the ability for the device to gather context information about the user and the device. As demonstrated in section 5.5, the artefact effectively gathered context information, except for location-based information, which was inconsistent. The second component allowed the user to create once-off static plans that would be used to handle activities on the device. The static plans allowed for the minimisation of interaction between the user and the device because the plans are set up once. Once the plans are set up, the user is only required to start the agent and the rest is handled by the agent. The third component that assisted in the artefact's success was the ability to listen for activities on the device and to react accordingly. The reactive handling of the device activities ensured that the user did not need to worry about handling any activities on the device.

The amalgamation of the components produced a mobile agent that was able to reactively handle the device activities using context information. The simulation demonstrated the effectiveness of the agent and how accurately it could gather the information and handle the activities. The simulation also demonstrated that location context is still an area for further research, as the results were not always accurate.

To further reduce the interaction required of the user, further research is recommended into using machine learning algorithms to create the reactive plans. Such algorithms would be able to learn, from the context information already received, which plans can be deduced.

9 References

Abowd, G., Dey, A., Brown, P., Davies, N., Smith, M. and Steggles, P. (1999). Towards a better understanding of context and context-awareness. pp.304--307.

Android. (2008). Application Fundamentals | Android Developers. [online] Developer.android.com. Available at: http://developer.android.com/guide/components/fundamentals.html. [Accessed 19th May 2014].

Clark, T., Kramer, D. and Oussena, S. (2011). Model Driven Context Aware Reactive Applications.

Chittaro, L. and De Marco, L. (2004). Driver distraction caused by mobile devices: studying and reducing safety risks.

Dantec, M. (2013). The Reactive Manifesto.

Garlan, D., Siewiorek, D., Smailagic, A. and Steenkiste, P. (2002). Project Aura: toward distraction-free pervasive computing. IEEE Pervasive Computing.

Gobel, H. and Cronholm, S. (2012). Design science research in action-experiences from a process perspective.

Gold, J. (2012). Hospitals warn smartphones could distract doctors. [online] NPR.org. Available at: http://www.npr.org/2012/03/26/149376254/hospitals-guard-against-smartphones-distracting-doctors. [Accessed 8 October 2014].

Henricksen, K., Indulska, J. and Rakotonirainy, A. (2002). Modelling context information in pervasive computing systems. Springer, pp.167--180.

Ho, J. and Intille, S. (2005). Using context-aware computing to reduce the perceived burden of interruptions from mobile devices. pp.909--918.

Lech, T. and Wienhofen, L. (2005). Ambie Agents: a scalable infrastructure for mobile and context-aware information services. pp.625--631.

Lin, J. (n.d.). PROJECT 23. ANDROID, SYSTEMJ AND REACTIVE INTELLIGENT APPLICATION.

Natchetoi, Y., Kaufman, V. and Shapiro, A. (2008). Service-oriented architecture for mobile applications. pp.27--32.

Offermann, P., Levina, O., Schönherr, M. and Bub, U. (2009). Outline of a design science research process. p.7.

Olaru, A. and Gratie, C. (2011). Agent-based, context-aware information sharing for ambient intelligence. International Journal on Artificial Intelligence Tools, 20(06), pp.985--1000.

Parker, J. (n.d.). Froid: Functional Reactive Android.

Gill, P.S., Kamath, A. and Gill, T.S. Distraction: an assessment of smartphone usage in health care work settings.

Ryan, N., Pascoe, J. and Morse, D. (1998). Enhanced reality fieldwork: the context-aware archaeological assistant.

Salvaneschi, G. and Mezini, M. (2013). Reactive behavior in object-oriented applications: an analysis and a research roadmap. pp.37--48.

Santi, A., Guidi, M. and Ricci, A. (2011). JaCa-Android: an agent-based platform for building smart mobile applications. Springer, pp.95--114.

Schilit, B., Adams, N. and Want, R. (1994). Context-aware computing applications. pp.85--90.

Siewiorek, D., Smailagic, A., Furukawa, J., Krause, A., Moraveji, N., Reiger, K., Shaffer, J. and Wong, F. (2003). Sensay: A context-aware mobile phone. pp.248--248.

von Alan, R., March, S., Park, J. and Ram, S. (2004). Design science in information systems research. MIS quarterly, 28(1),pp.75--105.

Wu, R., Rossos, P., Quan, S., et al. (2011). An evaluation of the use of smartphones to communicate between clinicians: a mixed-methods study. J Med Internet Res, 13(3). Available at: http://www.jmir.org/2011/3/e59/. [Accessed 8 October 2014].

Appendix A

Terms and Conditions for Android Developers

This License Agreement forms a legally binding contract between you and Google in relation to your use of the SDK.

You may not use the SDK and may not accept the License Agreement if you are a person barred from receiving the SDK under the laws of the United States or other countries including the country in which you are resident or from which you use the SDK.

SDK License from Google

3.1 Subject to the terms of this License Agreement, Google grants you a limited, worldwide, royalty-free, non-assignable and non-exclusive license to use the SDK solely to develop applications to run on the Android platform.

3.2 You agree that Google or third parties own all legal right, title and interest in and to the SDK, including any Intellectual Property Rights that subsist in the SDK. "Intellectual Property Rights" means any and all rights under patent law, copyright law, trade secret law, trademark law, and any and all other proprietary rights. Google reserves all rights not expressly granted to you.

3.3 You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.

3.6 You agree that the form and nature of the SDK that Google provides may change without prior notice to you and that future versions of the SDK may be incompatible with applications developed on previous versions of the SDK. You agree that Google may stop (permanently or temporarily) providing the SDK (or any features within the SDK) to you or to users generally at Google's sole discretion, without prior notice to you.

Use of the SDK by You

4.1 Google agrees that it obtains no right, title or interest from you (or your licensors) under this License Agreement in or to any software applications that you develop using the SDK, including any intellectual property rights that subsist in those applications.

4.2 You agree to use the SDK and write applications only for purposes that are permitted by (a) this License Agreement and (b) any applicable law, regulation or generally accepted practices or guidelines in the relevant jurisdictions (including any laws regarding the export of data or software to and from the United States or other relevant countries).

4.3 You agree that if you use the SDK to develop applications for general public users, you will protect the privacy and legal rights of those users. If the users provide you with user names, passwords, or other login information or personal information, you must make the users aware that the information will be available to your application, and you must provide legally adequate privacy notice and protection for those users. If your application stores personal or sensitive information provided by users, it must do so securely. If the user provides your application with Google Account information, your application may only use that information to access the user's Google Account when, and for the limited purposes for which, the user has given you permission to do so.

4.4 You agree that you will not engage in any activity with the SDK, including the development or distribution of an application, that interferes with, disrupts, damages, or accesses in an unauthorized manner the servers, networks, or other properties or services of any third party including, but not limited to, Google or any mobile communications carrier.

4.5 You agree that you are solely responsible for (and that Google has no responsibility to you or to any third party for) any data, content, or resources that you create, transmit or display through Android and/or applications for Android, and for the consequences of your actions (including any loss or damage which Google may suffer) by doing so.

4.6 You agree that you are solely responsible for (and that Google has no responsibility to you or to any third party for) any breach of your obligations under this License Agreement, any applicable third party contract or Terms of Service, or any applicable law or regulation, and for the consequences (including any loss or damage which Google or any third party may suffer) of any such breach.

Using Android APIs

8.1.1 If you use any API to retrieve data from Google, you acknowledge that the data may be protected by intellectual property rights which are owned by Google or those parties that provide the data (or by other persons or companies on their behalf). Your use of any such API may be subject to additional Terms of Service. You may not modify, rent, lease, loan, sell, distribute or create derivative works based on this data (either in whole or in part) unless allowed by the relevant Terms of Service.

8.1.2 If you use any API to retrieve a user's data from Google, you acknowledge and agree that you shall retrieve data only with the user's explicit consent and only when, and for the limited purposes for which, the user has given you permission to do so.