TREC: Platform-Neutral Input for Mobile Augmented Reality Applications

Jason Kurczak and T.C. Nicholas Graham
School of Computing, Queen’s University
Kingston, Canada K7L 3N6
{kurczak, graham}@cs.queensu.ca

ABSTRACT
Development of Augmented Reality (AR) applications can be time-consuming due to the effort required in accessing sensors for location and orientation tracking data. In this paper, we introduce the TREC framework, designed to handle sensor input and make AR development easier. It does this in three ways. First, TREC generates a high-level abstraction of user location and orientation, so that low-level sensor data need not be seen directly. TREC also automatically uses the best available sensors and fusion algorithms so that complex configuration is unnecessary. Finally, TREC enables extensions of the framework to add support for new devices or customized sensor fusion algorithms.

Author Keywords
augmented reality, tracking sensors, input framework, sensor fusion

ACM Classification Keywords
H.5.1 Information Interfaces and Presentation: Artificial, augmented and virtual realities

General Terms
Design

INTRODUCTION
Mobile Augmented Reality (mobile AR) is rapidly making inroads in the consumer market, with applications such as Car Finder [1] and Layar [2] being released for mobile phone platforms. These applications use the phone’s camera and screen to superimpose information on a video feed of the real world, and have served to demonstrate the potential of mobile AR to the general public.

Mobile AR applications use sensors such as GPS, compass, and inertial sensors to determine the physical location and orientation of the device. The problem for mobile AR developers is that such sensors may be unreliable and noisy. Applications often must fuse inputs from multiple sensors to determine accurate values.


Figure 1. Using the tourist application

Even when a framework is used to handle the input devices, it can require complex manual configuration and be hard to extend or modify. These problems force the developer to focus on low-level sensor programming rather than on the functionality and usability of the application.

In this paper, we present an architecture for input frameworks designed to help address these issues, which is implemented in the TREC (TRacking using Extensible Components) framework. The advantages of this architecture are illustrated by its use in the Noisy Planet application described in the Motivating Example and Applying TREC sections.

The main advantages provided by TREC’s architecture include giving programmers open access to a hierarchy of devices, transformative modules, and high-level abstracted interfaces; dynamically selecting sensors and sensor fusion algorithms thanks to multiple levels of abstraction; and allowing modification and extension at every level.

MOTIVATING EXAMPLE: AN AR TOURIST APPLICATION
To provide context for our description of TREC, we introduce Noisy Planet, a mobile tourist guide. This application, shown in figures 1 and 2, has been implemented using TREC. Noisy Planet allows a tourist staying in an unfamiliar city to navigate to nearby destinations on foot while also allowing serendipitous exploration of the area.

Figure 2. Overhead view

Noisy Planet uses 3D sound to convey to users the position and distance of nearby points of interest. For example, in figure 1, the user “hears” that the Stauffer Library is behind him and to his right. Each point of interest is represented by a subtle repeating tone: the sound of riffling pages represents the library, clinking glasses represent a restaurant, and jingling coins represent a bank. The application lays an aural landscape over the physical world. The sounds emanate from the correct location, even as the user walks or turns his head. Since the sounds are subtle and repeating, the tourist can easily choose to listen to them or tune them out.

The Noisy Planet implementation must track the user’s location and head orientation so that sounds appear to emanate from the correct direction. It is challenging to accurately determine this information using off-the-shelf sensor equipment, and often multiple sensors are required to accurately estimate location and orientation. We currently use GPS, compass, and gyroscope devices. Each device has its own data format, requiring programmers to write low-level interfacing code. Each device type has limitations that cause it to deliver highly inaccurate data under some circumstances. The programmer must therefore identify and code heuristics that determine which device to use when devices deliver conflicting information. As we shall see, our TREC framework helps with these difficulties.

INPUT HANDLING IN MOBILE AR APPLICATIONS
Before presenting the design of TREC, we first review current methods for obtaining input in mobile AR applications. We consider hardware support for location and orientation input, then review the state of the art in sensor fusion, and finally discuss existing input toolkits.

Hardware for Location and Pose Detection
A variety of off-the-shelf devices are available for estimating a user’s location and pose in mobile contexts.

GPS devices triangulate their position using signals from orbiting satellites. Consumer devices have an accuracy of about 5-10 m in outdoor environments, depending on overhead visibility [14].

Accelerometers can be used to track changes in position by calculating acceleration vectors based on the experienced forces, and integrating this data twice to obtain displacement [8]. Accelerometers provide faster, higher-resolution updates than a GPS, but lack an absolute frame of reference and are subject to drift, therefore requiring periodic checks against some other absolute measure of position.
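As an illustration of this double-integration step, the sketch below (not TREC code; all names are ours) integrates a single axis and omits the gravity compensation and frame rotation a real tracker needs:

```java
// Naive dead reckoning by double integration of acceleration.
// Illustrative only: a real implementation must remove gravity,
// rotate samples into a world frame, and correct the accumulating
// drift against an absolute reference such as GPS.
public final class DeadReckoner {
    private double velocity;  // m/s, single axis for simplicity
    private double position;  // m

    /** Integrate one accelerometer sample (m/s^2) over dt seconds. */
    public void update(double acceleration, double dt) {
        velocity += acceleration * dt;  // first integration
        position += velocity * dt;      // second integration
    }

    public double position() { return position; }
}
```

Because any constant bias in the samples grows quadratically in the position estimate, such dead reckoning is only usable between corrections from an absolute sensor.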

Other methods for tracking position include triangulation of signals from known locations, such as cellular tower signals, Wi-Fi signals, or ultrasonic transmitter systems [6]. These approaches suffer from limited coverage.

Digital compasses (or magnetometers) detect orientation relative to magnetic north, but suffer serious drawbacks in accuracy. With the help of a 3-axis accelerometer to track the direction of earth’s gravitational pull, a magnetometer can provide pitch, yaw, and roll data. Errors arise due to the lack of uniformity of the earth’s magnetic field and its susceptibility to magnetic materials and artificial magnetic fields. In addition, outside forces (e.g., from walking) disturb the accelerometer’s measurement of the gravitational vector and can cause large deviations in measured orientation.
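As a hedged sketch of how the accelerometer’s gravity estimate can tilt-compensate a magnetometer reading, the following uses one common axis convention; conventions and signs vary between devices, and none of these names come from TREC:

```java
// One common formulation of tilt-compensated heading from a 3-axis
// magnetometer (mx, my, mz) plus a 3-axis accelerometer (ax, ay, az)
// used as a gravity reference. Axis and sign conventions differ
// between devices; treat this as a sketch, not a drop-in routine.
public final class TiltCompensatedCompass {
    /** Returns heading in radians (0 = magnetic north, under these conventions). */
    public static double heading(double mx, double my, double mz,
                                 double ax, double ay, double az) {
        double roll  = Math.atan2(ay, az);
        double pitch = Math.atan2(-ax, Math.hypot(ay, az));
        // Rotate the magnetic field vector back into the horizontal plane.
        double mxH = mx * Math.cos(pitch) + mz * Math.sin(pitch);
        double myH = mx * Math.sin(roll) * Math.sin(pitch)
                   + my * Math.cos(roll)
                   - mz * Math.sin(roll) * Math.cos(pitch);
        return Math.atan2(-myH, mxH);
    }
}
```

The walking problem described above enters exactly here: any non-gravitational acceleration corrupts the roll and pitch estimates, and the error propagates into the compensated heading.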

Gyroscopes measure angular displacement relative to some initial orientation, and so cannot indicate absolute direction on their own. However, unlike magnetometers, they are not affected by magnetic anomalies or by outside forces. Errors accumulated over time, however, will cause drift from the true direction. At high speeds of rotation (e.g., due to rapid head movement), some gyroscopes may exceed their upper limit of measurement and return wildly inaccurate results.

Sensor Fusion Techniques
Sensor fusion improves accuracy by combining data from multiple sensors [3]. A simple form of sensor fusion is to average the input of multiple sensors that are measuring the same property, in order to average out the noise from individual sensors (e.g., averaging the measurements of multiple anemometers to ascertain wind speed). More complex techniques take advantage of known properties of different sensor types. For example, a gyroscope, magnetometer, and accelerometer might be used in tandem, where the magnetometer is used to calibrate the gyroscope whenever the sensor is at rest.

Another approach is to use a Kalman filter [13], which takes multiple noisy sensor measurements, estimates the error in these measurements, and then estimates the actual state of the system being measured [4].
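As a hedged illustration of this predict-and-correct cycle, the following minimal one-dimensional filter assumes a static state with Gaussian noise; real AR trackers such as the one in [4] use multidimensional state and sensor-specific models:

```java
// A minimal scalar Kalman filter for a static state with Gaussian
// process and measurement noise. It only illustrates the predict/update
// cycle; it is not TREC code and not a full tracking filter.
public final class ScalarKalmanFilter {
    private double x;        // state estimate
    private double p;        // estimate variance
    private final double q;  // process noise variance
    private final double r;  // measurement noise variance

    public ScalarKalmanFilter(double initialEstimate, double q, double r) {
        this.x = initialEstimate;
        this.p = 1.0;
        this.q = q;
        this.r = r;
    }

    /** Fold one noisy measurement into the running estimate. */
    public double update(double measurement) {
        p += q;                      // predict: uncertainty grows
        double k = p / (p + r);      // Kalman gain
        x += k * (measurement - x);  // correct toward the measurement
        p *= (1 - k);                // uncertainty shrinks
        return x;
    }
}
```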

Programming fusion algorithms requires iterative tuning based on deep knowledge of the properties of the underlying sensors.

Frameworks
TREC is far from the first framework used to process input from tracking devices.

VRPN provides an interface between input hardware and Virtual Reality (VR) applications. VRPN allows VR hardware peripherals to be shared across many computers on the same network, and simplifies development by providing a standardized interface for peripherals with the same functionality [12]. VRPN standardizes the sensor data being delivered to applications so that their code is not dependent on the sensors being used. It does not, however, dynamically choose which of the attached devices to use; this must be specified by the application developer. In terms of extensibility, VRPN permits the creation of new devices and device types, while also supporting layered devices that let the developer program higher-level behaviour based on input from one or more sensors.

OpenTracker supports tracking hardware with a flexible and customizable architecture [10]. It uses dataflow graphs to manage data being passed from sensors to applications. Here, “device drivers” act as source nodes that bring data into the system; “filter nodes” transform, merge, or otherwise modify data passed from source nodes; and “sink nodes” output the data to an application. OpenTracker has a high level of configurability, using an XML schema to define the dataflow graph and supporting custom nodes. Like VRPN, however, OpenTracker does not offer automatic configuration and choice of devices, requiring the developer to provide a configuration file that describes the exact dataflow graph and devices to use.

Ubitrack, on the other hand, is designed for automatic configuration [9]. It is meant to support large networks of sensors to provide AR tracking using all available resources, with completely dynamic configuration of dataflow networks based on Spatial Relationship Graphs (SRGs) of the sensors in the network. There does not appear to be any way to modify or extend the algorithm used to configure the dataflow networks, or to override this algorithm to use specific devices or configurations.

The OpenInterface Project [11] has interesting parallels in another area of HCI research. It is an open-source platform for rapidly prototyping multimodal input interfaces for computer programs, with the benefit of a GUI. OpenInterface is designed to transform hardware input into a format suitable for the client application using modular transformation components, to support a broad range of input devices, and to be easily extensible. Like VRPN and OpenTracker, OpenInterface does not support the automatic selection of sensors and fusion algorithms, requiring explicit configuration by the developer.

All of these AR frameworks support abstraction of sensor data to standard interfaces. VRPN, OpenTracker, and OpenInterface are extensible, allowing new sensor types to be easily added. Ubitrack provides automatic sensor configuration. None of these toolkits, however, addresses all three of these important goals. The TREC framework has been designed specifically to fill this gap.

THE TREC FRAMEWORK
TREC is a software framework for input handling in mobile AR applications. TREC’s goals are to reduce the time required to develop interfaces for hardware sensors, and to reduce the difficulty of adapting AR applications to work with a particular collection of sensors. Specifically, TREC addresses the problems identified in the last section, where: 1) writing low-level interfaces to sensors distracts developers from the core application design; 2) changing the set of available sensors may break applications, requiring extensive recoding; and 3) combining the input from different sensors is tricky, requiring experimentation and iteration.

First, in order to make the application programmer’s job easier, TREC abstracts all sensor data into a high-level representation of location and orientation. TREC provides the application programmer with simple interfaces for these two properties in its abstract input layer.

Second, TREC automatically configures the sensors by determining at runtime which of the connected sensors can be used to provide the best data to the application. Because the TREC layered architecture hides all differences between sensor hardware, it can provide the application with plug-and-play compatibility with different sensors.

Finally, TREC uses sensor fusion algorithms automatically when a supported configuration is available, and the architecture makes it easy to extend the framework to support new algorithms. It uses a three-layer hierarchy (see figure 3) to abstract device details. This allows newly added sensors in the device layer to work automatically with existing applications, and fusion algorithms in the abstract device layer can take advantage of the abstracted devices automatically. The framework is open and allows access and modification at any level, meaning low-level sensor data can always be accessed directly if necessary.

In summary, the 3-layered architecture of TREC allows it to provide open access to high and low levels of abstraction, makes it easier to support automatic configuration based on a hierarchy of device types, allows sensor fusion algorithms to be dynamically chosen in the same way as devices, and makes new devices and algorithms easily interoperable with existing code.

The TREC Architecture
The framework is structured around a three-layer architecture: the Abstract Input Layer, the Abstract Device Layer, and the Device Layer.

The lowest layer, the device layer, contains objects that expose the data provided directly by device sensors. Devices must implement one or more abstract device interfaces (see below). For example, the OceanServer USB Compass provides both compass and accelerometer functionality.

Figure 3. The TREC 3-layered architecture. [Class diagram: the abstract input layer defines the IOrientation interface (OrientationRadians(), OrientationDegrees()) and the ILocation interface (Location() and Origin(), returning vector2). The abstract device layer provides CompassOrientation, GyroscopeOrientation, FusedOrientation, and GPSLocation, built on the ICompass (Heading()), IGyroscope (AngularVelocity()), IAccelerometer (AccX(), AccY(), AccZ()), and IGPS (Time(), Status(), Latitude(), Longitude(), NewValue()) interfaces. The device layer wraps concrete hardware: OSCompass (OceanServer USB Compass), WTGyroscope (WiTilt v3.0 Gyroscope), and IBlueGPS (iBlue 737A+ GPS).]

To add a new device to the framework, a developer needs to create a device-layer class for that device.

The abstract device layer groups standard types of sensor devices, and also provides virtual sensors by fusing the data from multiple concrete devices. For example, the CompassOrientation class provides orientation data from compass-like devices, such as OSCompass, while the GyroscopeOrientation class provides orientation data from gyroscope-like devices. Meanwhile, FusedOrientation provides orientation information by fusing data from a number of devices.

Finally, the abstract input layer provides interfaces for specific input data types. In the current version of the framework, two types of input are defined: IOrientation provides head orientation data and ILocation provides position information. To access input, an application queries the TREC device manager for one of these types of input, and receives in return an object implementing the appropriate abstract input type. In this way, the TREC framework shields applications from the details of individual devices or fusion algorithms, and simplifies the work of the developer by providing a simple and uniform way to access input information.

APPLYING TREC
We now show, by example, how TREC helps both application programmers creating a mobile AR system, and systems programmers adding new functionality to TREC itself.

The Application Programmer’s Perspective
To illustrate how TREC helps in developing mobile AR applications, we examine the implementation of the Noisy Planet tourist application. Users of the application carry a mobile device containing a GPS, and wear a gyroscope and compass for head tracking. These sensors are hidden behind TREC’s IOrientation and ILocation abstract input types.

To access the abstracted sensor data, the Noisy Planet code requests an appropriate data source from TREC’s Device Manager. For example, when an orientation data source is requested, the Device Manager provides an object that adheres to the IOrientation interface, with data coming from either the OSCompass or the WTGyroscope device.

The application cannot tell which device is supplying this data, and does not need to know, since the incoming data is in a standardized format. This allows TREC to provide the best available sensor, and allows new devices to be added to TREC without impacting application code. If the programmer for some reason preferred a compass, he or she could access the CompassOrientation object in the abstract device layer to guarantee the use of a compass, or even directly access the OSCompass in the device layer to choose that specific device.
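A sketch of this application-side flow appears below; IOrientation and CompassOrientation come from figure 3, but the Device Manager’s method names are not given in the paper and are assumptions here:

```java
// Sketch of the application-side request described above, written
// against TREC's types from figure 3. The DeviceManager API is an
// assumption: the paper names the component but not its signatures.
public final class HeadingReader {
    /** Returns the current heading from whichever source TREC selected. */
    public static float currentHeadingDegrees(DeviceManager manager) {
        IOrientation orientation = manager.requestOrientation();
        return orientation.OrientationDegrees();
    }

    /** A developer who insists on a compass can bypass the automatic
        choice and target the abstract device layer directly. */
    public static float compassHeadingDegrees(CompassOrientation compass) {
        return compass.OrientationDegrees();
    }
}
```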

In addition to automatically choosing from available sensors, TREC can automatically use sensor fusion algorithms when sufficient sensors are available. Noisy Planet could actually receive a FusedOrientation object (from the abstract device layer) after the earlier request for an orientation object. This happens without any changes or configuration on the application side. TREC decides which device or combination of devices to use based on an internal rating mechanism, which is currently a hard-coded list of the known devices.
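A minimal sketch of such a rating mechanism appears below; the paper describes only the behaviour (a hard-coded ranked list), so the class name and the availability check are assumptions:

```java
import java.util.List;
import java.util.Optional;

// Sketch of a hard-coded ranked list of orientation sources: the
// first source whose hardware is actually present wins. The
// isAvailable() check is an assumed extension; figure 3 does not
// show one, and TREC's real mechanism may differ.
public final class RankedOrientationList {
    private final List<IOrientation> ranked; // best first, e.g. fused, gyro, compass

    public RankedOrientationList(List<IOrientation> ranked) {
        this.ranked = ranked;
    }

    /** Return the most preferred orientation source that is usable. */
    public Optional<IOrientation> best() {
        for (IOrientation candidate : ranked) {
            if (isAvailable(candidate)) {
                return Optional.of(candidate);
            }
        }
        return Optional.empty();
    }

    // Placeholder: a real manager would probe the underlying hardware.
    private boolean isAvailable(IOrientation candidate) {
        return candidate != null;
    }
}
```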

For comparison, we implemented a version of Noisy Planet without access to TREC. This implementation required approximately 250-350 lines of code per sensor (GPS, compass, gyroscope) to handle serial input from these devices, in addition to another 80 lines to manage these sensors and perform the sensor fusion. The version using TREC requires just two method calls: one to get the current orientation, and another for the current location.

The Toolkit Developer’s Perspective
TREC is an open framework, making it straightforward to add support for new devices. We now discuss our experience in adding a new hardware device and a new virtual device to TREC.

Adding a New Hardware Device
The custom code that connects to and processes data from a sensor is contained within a class at the device layer. Based on the type of sensor, accessor methods need to be created that adhere to the proper interface at the abstract device layer (e.g., IBlueGPS in figure 3 must implement the IGPS interface in order to work automatically with TREC applications).
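As a hedged sketch of the shape such a class takes, the following wraps a hypothetical serial GPS behind the IGPS interface of figure 3 (method names follow the figure’s conventions); the transport and parsing details are invented for illustration:

```java
// Sketch of a device-layer class in the style of figure 3: a serial
// GPS wrapper exposing the IGPS interface. Only the interface shape
// follows the figure; the device name, fields, and the onFix() hook
// are invented for illustration.
public final class ExampleSerialGps implements IGPS {
    private volatile int time;
    private volatile float latitude, longitude;
    private volatile boolean newValue;
    private volatile String status = "V"; // NMEA-style: "A" = valid fix

    /** Called by the device's read loop with each parsed position fix. */
    void onFix(int time, float lat, float lon, String status) {
        this.time = time;
        this.latitude = lat;
        this.longitude = lon;
        this.status = status;
        this.newValue = true;
    }

    @Override public int Time() { return time; }
    @Override public String Status() { return status; }
    @Override public float Latitude() { return latitude; }
    @Override public float Longitude() { return longitude; }
    @Override public boolean NewValue() {
        boolean v = newValue;
        newValue = false; // reading consumes the "new data" flag
        return v;
    }
}
```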

Adding a Simple Sensor Fusion Algorithm
TREC’s layered architecture allows easy integration of sensor fusion algorithms into applications, and allows the fusion code itself to be shielded from the details of the underlying devices. Sensor fusion algorithms are written as abstract device modules that adhere to a standard interface. Therefore, any program taking advantage of TREC’s abstract input modules (IOrientation and ILocation) can automatically use the new algorithm.

For example, to implement the FusedOrientation class of figure 3, a new abstract device class is created in the abstract device layer. This class implements the IOrientation interface, and therefore can be used whenever orientation information is requested by the application. Inside FusedOrientation, a gyroscope, compass, and accelerometer are used (each of which is an abstract device), along with some kind of location service (an abstract input). Specifically, the gyroscope is used to determine orientation; the compass is used to calibrate the gyroscope from time to time, but not when the compass is moving (as determined via the accelerometer or from changes in location).
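A minimal sketch of this fusion rule, written against the abstract device interfaces of figure 3, appears below. The update cycle, the units (degrees and degrees per second), and the rest threshold are assumptions, and the location-service check is omitted for brevity:

```java
// Sketch of the fusion rule described above: dead-reckon with the
// gyroscope for responsiveness, and re-calibrate against the compass
// only when the accelerometer suggests the user is still. Units,
// thresholds, and the step() cycle are assumptions; the published
// FusedOrientation also consults the location service, omitted here.
public final class FusedOrientationSketch implements IOrientation {
    private final IGyroscope gyro;
    private final ICompass compass;
    private final IAccelerometer accel;
    private double headingDeg; // current estimate, degrees

    public FusedOrientationSketch(IGyroscope g, ICompass c, IAccelerometer a) {
        this.gyro = g;
        this.compass = c;
        this.accel = a;
        this.headingDeg = c.Heading();
    }

    /** Advance the estimate by one time step of dt seconds. */
    public void step(double dt) {
        headingDeg += gyro.AngularVelocity() * dt; // integrate the gyro
        if (isAtRest()) {
            headingDeg = compass.Heading();        // drift correction at rest
        }
    }

    private boolean isAtRest() {
        // Acceleration magnitude near 1 g implies no outside forces.
        double g = Math.sqrt(accel.AccX() * accel.AccX()
                + accel.AccY() * accel.AccY()
                + accel.AccZ() * accel.AccZ());
        return Math.abs(g - 9.81) < 0.2; // threshold is an assumption
    }

    @Override public float OrientationDegrees() { return (float) headingDeg; }
    @Override public float OrientationRadians() { return (float) Math.toRadians(headingDeg); }
}
```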

This example shows how the fusion algorithm leverages TREC’s layered architecture. When necessary, the fusion algorithm knows the kind of device that it is using (gyroscope, compass, etc.), but does not need to know exactly which device is in use. Furthermore, when it is not necessary to know the kind of device (i.e., for the location service), the appropriate abstract input can be used.

DISCUSSION
We have shown how TREC helps developers by providing a single standardized view of sensor data, making sensor selection and sensor fusion automatic, and letting developers easily extend the framework to support new devices or custom fusion algorithms. We now discuss broader questions related to TREC’s design.

Platform Standardization: Smartphone platforms are making sensors suitable for mobile AR applications broadly available. Exciting future work would involve customizing TREC around the sensor packages in common phones. The easy extensibility of TREC is important, as the sensors change between generations of phones, requiring updates to the framework. One limitation to this approach is that current phones do not provide sensors suitable for head-tracking.

Computer vision for tracking: Computer vision is widely used for pose detection in AR applications (e.g., using ARToolkit [7]), but we have not discussed its use in TREC. Vision is less immediately useful in mobile applications, due to the difficulty of placing fiducial markers in the outside environment. However, there is no inherent reason why vision could not be included as a sensor type within the framework. This would integrate into the TREC hierarchy just as any other sensor does, though it would require a new abstract device interface to be defined for vision systems.

Extending to other kinds of AR: The framework as presented is heavily influenced by the ambient audio Noisy Planet application. In Noisy Planet, head tracking is important to overlay an audio soundscape onto the real world, and this head tracking need only be in a 2D plane. TREC can easily be extended to other forms of AR, however, simply by adding new device types.

For example, one common form of mobile AR involves tracking the position and orientation of a handheld device (e.g., a smartphone), so that its display can provide visual overlays onto the real world. Here, positioning information is detected for the device, not the head, using the sensors in the device. TREC’s IOrientation interface would need to be extended to provide 3D orientations. This change would require additional programming, but does not represent a fundamental change to TREC’s design.
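One hypothetical shape for such an extension (not part of the published framework) would be a richer orientation interface:

```java
// A hypothetical extension of figure 3's IOrientation for handheld
// 3D-overlay AR. This interface is our illustration, not part of the
// published framework.
public interface IOrientation3D extends IOrientation {
    float Pitch(); // degrees
    float Roll();  // degrees
    float Yaw();   // degrees; the 2D heading of OrientationDegrees()
}
```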

Sensor management: Future work with TREC includes improving the sophistication of its sensor management capabilities. Currently, TREC’s Device Manager determines which among the available set of sensors to use by traversing a ranked list of known devices. An improved device manager would automatically determine the best set of sensors to use, and would dynamically reconfigure the sensor set (e.g., in response to failure of one of the sensors). This issue of sensor management has been more thoroughly covered in the Ubitrack system [9] and in the hybrid tracking AR system of Hallaway et al. [5].

Sensor fusion: The layered architecture of TREC permits AR applications to use sensor fusion automatically. The last section showed how a simple sensor fusion algorithm can be implemented as an abstract device. More investigation is required to assess the difficulty of implementing fusion algorithms more general than the one presented in the example. The algorithm used there does not depend on specific devices and will work with any sensors of the same type. A Kalman filter, however, may require unique knowledge about each sensor to create a good model of the system. In future work, we hope to investigate whether TREC can support these advanced algorithms without restricting their use to predetermined hardware.

CONCLUSION
In this paper we have presented TREC, a new framework for handling sensor input in mobile augmented reality applications. TREC has been designed to provide high-level abstraction of the sensor data, automatic configuration, and ease of extension and manual configuration when desired. While many other frameworks address these issues, none provides all of these features together in a cohesive package.

The key to TREC’s success is its three-layer architecture. An abstract input layer provides high-level input types that completely abstract the underlying hardware. An abstract device layer collects classes of sensors (e.g., compass, accelerometer), as well as virtual devices implementing sensor fusion. Finally, a device layer provides interfaces to concrete devices. This architecture allows the abstraction of different hardware sensors into a uniform representation of location and orientation, and simplifies the automatic use of available sensors at runtime (including the automatic fusion of multiple devices). This hierarchy also makes the process of adding new devices straightforward, while providing compatibility with any applications already using TREC.

The next steps for the TREC framework include adding broader support for hardware sensors, investigating the implementation of complex sensor fusion algorithms using the abstract devices in TREC, and implementing a robust sensor-management algorithm to allow better dynamic configuration of the system.

ACKNOWLEDGEMENTS
This work was funded by NSERC Strategic Project #365040-08 and by the GRAND Network of Centres of Excellence.

We would like to thank Claire Joly for her feedback on using TREC, and Jonathan Segel for his contributions to the design of Noisy Planet.

REFERENCES
1. Intridea Car Finder. http://carfinderapp.com/.

2. Layar Augmented Reality Browser. http://www.layar.com/.

3. B. Dasarathy. Sensor fusion potential exploitation-innovative architectures and illustrative applications. Proceedings of the IEEE, 85(1):24-38, 1997.

4. E. Foxlin. Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter. In IEEE VRAIS, pages 185-194, 267, 1996.

5. D. Hallaway, S. Feiner, and T. Hollerer. Bridging the gaps: Hybrid tracking for adaptive mobile augmented reality. Applied Artificial Intelligence, 25:477-500, 2004.

6. J. Hightower and G. Borriello. Location systems for ubiquitous computing. Computer, 34(8):57-66, 2001.

7. H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana. Virtual object manipulation on a table-top AR environment. In ISAR, pages 111-119, 2000.

8. P. Lang, A. Kusej, A. Pinz, and G. Brasseur. Inertial tracking for mobile augmented reality. In IMTC, volume 2, pages 1583-1587, 2002.

9. D. Pustka, M. Huber, C. Waechter, F. Echtler, P. Keitler, J. Newman, D. Schmalstieg, and G. Klinker. Ubitrack: Automatic configuration of pervasive sensor networks for augmented reality. IEEE Pervasive Computing, preprint, June 2010.

10. G. Reitmayr and D. Schmalstieg. OpenTracker: A flexible software design for three-dimensional interaction. Virtual Reality, 9:79-92, 2005.

11. M. Serrano, L. Nigay, J.-Y. L. Lawson, A. Ramsay, R. Murray-Smith, and S. Denef. The OpenInterface framework: a tool for multimodal interaction. In CHI Extended Abstracts on Human Factors in Computing Systems, pages 3501-3506, 2008.

12. R. M. Taylor, II, T. C. Hudson, A. Seeger, H. Weber, J. Juliano, and A. T. Helser. VRPN: a device-independent, network-transparent VR peripheral system. In VRST, pages 55-61, 2001.

13. G. Welch and G. Bishop. An introduction to the Kalman filter. Technical Report TR 95-041, Department of Computer Science, University of North Carolina at Chapel Hill, 1995.

14. M. G. Wing. Consumer-grade global positioning system (GPS) accuracy and reliability. Journal of Forestry, 103:169-173, 2005.
