COMMUNICATIONS OF THE ACM, July 1993/Vol. 36, No. 7, pp. 39-49

No longer will we need to be tethered to a stationary computer workstation to browse electronic databases or synthetic 3D information spaces transformed onto a 2D display surface. Instead, we will browse, interact with, and manipulate electronic information within the context and situation in which the information originated and where it holds strong meaning. A small, portable, high-fidelity display and spatially aware palmtop computer can act as a window onto the 3D-situated information space, providing a bridge between the computer-synthesized data and physical objects. Our Chameleon prototype explores some of the combined input controller and output display paradigms needed to visualize and manipulate 3D-situated information spaces.

Electronic information spaces are encroaching on our everyday environment. We are increasingly carrying electronic information with us (e.g., floppy diskettes) and also tapping into reservoirs of information via access stations (e.g., automatic teller machines, telephones). Indeed, portable computing allows us not only to carry the information but also to access, modify, and interact with it in a matter of seconds.


Ubiquitous computing (see Weiser's article in this issue) will further these abilities and cause the generation of short-range and global electronic information spaces to appear throughout our everyday environments. How will this information be organized, and how will we interact with it?

Wherever possible, we should look for ways of associating electronic information with physical objects in our environment. This means that our information spaces will be 3D. The SemNet system [4] is an example of a tool that offers users access to large, complicated 3D information spaces. Our goal is to go a step further by grounding and situating the information in a physical context to provide additional understanding of the organization of the space and to improve user orientation.

As an example of ubiquitous computing and situated information spaces, consider a fax machine. The electronic data associated with a fax machine should be collected, associated, and co-located with the physical device (see Figure 1). This means that your personal electronic phone book, a log of your incoming and outgoing calls, and fax messages could be accessible by browsing a situated 3D electronic information space surrounding the fax machine.

The information would be organized by the layout of the physical device. Incoming calls would be located near the earpiece of the hand receiver, while outgoing calls would be situated near the mouthpiece. The phone book could be found near the keypad. A log of the outgoing fax messages would be found near the fax paper feeder, while a log of the incoming faxes would be located at the paper dispenser tray. These logical information hot spots on the physical device can be moved and customized by users according to their personal organizations. The key idea is that the physical object anchors the information, provides a logical means of partitioning and organizing the associated information space, and serves as a retrieval cue for users.
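The hot-spot organization described above can be sketched as a simple lookup structure keyed by regions of the device. This is an illustrative sketch only, not part of the Chameleon implementation; all names are invented.

```python
# Hypothetical sketch: anchoring information categories to named
# regions ("hot spots") of a physical fax machine.
from dataclasses import dataclass, field

@dataclass
class HotSpot:
    region: str                  # physical location on the device
    category: str                # kind of information anchored there
    items: list = field(default_factory=list)

# The physical object partitions and organizes its information space.
fax_hotspots = {
    "earpiece":   HotSpot("earpiece", "incoming-call log"),
    "mouthpiece": HotSpot("mouthpiece", "outgoing-call log"),
    "keypad":     HotSpot("keypad", "personal phone book"),
    "feeder":     HotSpot("feeder", "outgoing-fax log"),
    "tray":       HotSpot("tray", "incoming-fax log"),
}

def browse(region: str) -> list:
    """Retrieve the information anchored at a physical region."""
    return fax_hotspots[region].items

fax_hotspots["keypad"].items.append(("Jim", "555-0199"))
```

Because users can move and customize hot spots, the region keys themselves would be editable in a real system; a dictionary makes that trivial.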

A major design requirement of situated information spaces is the ability for users to visualize, browse, and manipulate the 3D space using a portable, palmtop computer. That is, instead of a large fixed display on a desk, we want a small, mobile display to act as a window onto the information space. Since the information spaces will consist of multimedia data, the display of the palmtop should be able to handle all forms of data, including text, graphics, video, and audio.

Moreover, the desire to merge the physical and electronic worlds requires that the palmtop computer and display have a spatial awareness and understanding of the physical environment, along with the ability to visually mimic these environments and individual objects. Thus, the combination of a powerful computer capable of understanding and generating 3D models, coupled with a high-fidelity mobile display, will serve to blur the boundaries of the physical and electronic worlds. Blurring this boundary, and therefore providing a seamless integration of the two worlds, will ease the way in which we interact with them concurrently.

We are investigating the use of an integrated input controller and output display unit to serve as a bridge or porthole between computer-synthesized information spaces and physical objects. The research is focused on improving the communication bandwidth and the ease with which users interact with physical and electronic objects throughout their physical environments. Because ubiquitous computing will offer situated information spaces everywhere in our daily environments, we are exploring designs that facilitate a seamless integration of computer-augmented data and physical objects in a highly portable tool.

Background

Computer-augmented environments [7] offer a synergistic merging of computers and common physical objects, which can radically change the way we define human-computer interfaces, construct software applications, and design hardware systems to fit this new model of computing. While this model is beginning to take shape, it is largely unspecified. At least two approaches, however, have begun to emerge: the notion of ubiquitous computing and overlay techniques. The ubiquitous-computing model advocates embedding many small, highly specialized computers within our everyday environment. Researchers investigating overlay techniques are dealing with issues in merging two or more media types into one integrated and composite medium that offers the strengths of the combined media.

Ubiquitous Computing

The ubiquitous-computing model [15] suggests that very small computational devices be embedded and integrated into our physical environment in such a way that they operate seamlessly and almost transparently. Not only does this model advocate miniaturizing computers, but it also suggests that these devices be physically aware of their surroundings. These devices emit information periodically or on request, either directly by a human or in response to queries made by companion devices. One application of ubiquitous devices is active badges [14], small electronic badges worn by people for automatic personal identification, which communicate via infrared transmitters and sensors. The badges can trigger automatic doors, forward telephone calls, and log a person's location and who that person is meeting [8, 11].

The ubiquitous-computing model requires that we examine the design of our existing computing systems. The new model, at the very least, will stretch our notion of distributed computing in terms of network and operating-system requirements.

In addition, ubiquitous computing suggests separate displays for each unit, varying from no display or an audio display to low- and high-fidelity displays. Users will need to continuously scan the environment for the appropriate display that corresponds to the information they are seeking. For example, in order to check whether new messages have been left on my answering machine, I must physically visit the machine and stand directly above the LCD panel to read the display. Perhaps we do not need a separate display for each computational device. Instead of relying on these environmental displays to be visible at all times, people can carry with them a personal display that could be used in conjunction with, or in the absence of, environmental displays. Users should not always have to find the embedded information themselves; the information could find the users via their personal display.

From a user's perspective, ubiquitous computers will cause a flood of information to be available, the majority of which will be useful immediately, while some information may be useful at a later point in time. The information spaces will be generated by many computational objects within a user's environment. Ideally, this information needs to be viewed and manipulated within the context of the originating situation. That is, the information spaces should not be abstracted and transported to a stationary computer fixed on a physical desktop. Instead, we need highly portable displays and protocols for visualizing and filtering the electronic spheres of information.

Overlay Techniques

Overlay techniques offer insight into how common physical objects can be augmented with computer-synthesized data to reduce the time needed to complete a task and ease the way in which we interact with the physical objects. The DigitalDesk [10] integrates traditional paper media with electronic media on a combined physical and digital desktop. A digitizing tablet and cordless pen are used for selecting and inputting data. Electronic ink and images are superimposed on paper documents and the desktop surface.

In the DigitalDesk, words in a paper document can be selected, looked up in a dictionary, and presented in an electronic window on the desktop surface. Another example allows users to select a column of numbers from a paper document and transfer them to a spreadsheet program or electronic calculator by gesturing. This design allows users to interact with paper and electronic media in a similar and consistent manner while adding computational functionality to paper documents.

Figure 1. Potential information "hot spots" surrounding a fax machine: (a) log of incoming calls, (b) log of outgoing calls, (c) personal phone book, (d) outgoing fax messages, (e) incoming fax messages.

Figure 2. Configuration of the Chameleon prototype.

Figure 3. Palmtop unit consisting of a video display, a response button at the top of the unit, and a 6D input sensor providing x, y, and z positional information and orientation (pitch, yaw, and roll).

Figure 4a. Palmtop unit before zooming, with a screen shot of what the user sees in the palmtop monitor. The user is browsing a 3D tree hierarchy.

Figure 4b. Palmtop unit after zooming, with a screen shot of what the user sees in the palmtop monitor.


The object-oriented video system [13] offers another example of overlaying media, in this case video data and graphical objects. Users are able to interact with graphical objects and controllers superimposed on live video data. The graphical buttons, sliders, and knob controls are mapped onto the physical devices so the user can remotely operate the controls. Selecting video objects invokes the corresponding graphical controller. The video shows the context and the immediate effect of manipulating the controls. In this sense, the graphical objects are tightly coupled with the physical objects. Users are not just annotating the video images; they are interacting with them. Note that these two systems have dealt with augmenting physical objects that are viewed in a 2D setting. Additional enhancements must be designed to work in a 3D world.

Prototype

Chameleon is a prototype system under development at the University of Toronto [5]. It is part of an investigation into how palmtop computers designed with a high-fidelity monitor can become spatially aware of their location and orientation and serve as bridges or portholes between computer-synthesized information spaces and physical objects. In this prototype design, a 3D input controller and an output display are combined into one integrated unit. This allows the palmtop unit to act as an information lens near physical objects. For example, consider a geographical wall map used in conjunction with the Chameleon. The palmtop is always aware of its own physical position and orientation relative to the map; the contents of the display can respond directly to the user's gestures and movements. That is, the user sees information about Toronto or Boston while the device is positioned over the respective city on the map. Varying levels of detail and classes of information could be viewed according to different gestures made by the user.

The design achieves 3D comprehension with a very small screen size. It does this through the use of movement as a compelling depth cue. In addition, the act of movement provides a great deal of 3D sensation, as suggested by the motor theory of space perception [6].

In the prototype configuration (Figure 2), a small 4-inch color LCD-based hand-held monitor acts as a palmtop computer with the capabilities of a Silicon Graphics 4D/310 Iris workstation. A video camera is currently being used to capture the contents of the large workstation screen, which is fed into the small hand unit. To facilitate input controls, a response button at the top of the device and a 6D input device (the Ascension bird [1]) are attached to the small monitor.

This design allows the system to detect user gestures (x, y, and z positional data as well as pitch, yaw, and roll orientation from the bird) and input selections (via the response button) for issuing commands (see Figure 3). The Silicon Graphics workstation is programmed to generate a variety of 3D models and information spaces that will ultimately be positioned near physical objects.

Translation (x and y axes) and zoom (z axis) controls are available on the prototype for navigating through a 3D workspace roughly equivalent to a 3-foot cube. The net effect is that the palmtop unit acts as a window onto the 3D workspace. The system is modeled after the "eye-in-hand" metaphor. For example, as the user translates the palmtop unit to the left, he or she moves toward the left wall of the 3D workspace.
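The eye-in-hand mapping can be sketched as follows. This is a hypothetical reconstruction: the gain factor, units, and clamping to the workspace cube are assumptions for illustration, not details taken from the prototype.

```python
# Hypothetical sketch of the "eye-in-hand" mapping: device motion from
# the 6D tracker moves the virtual camera (the user), not the scene.
WORKSPACE = 3.0  # navigable cube, roughly 3 feet on a side

def clamp(v: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, v))

def eye_in_hand(camera, device_delta, gain=1.0):
    """Translate the camera by the palmtop's physical displacement.

    camera:       (x, y, z) camera position in the 3D workspace
    device_delta: (dx, dy, dz) displacement reported by the tracker;
                  dz moves the camera along the zoom axis
    """
    return tuple(
        clamp(c + gain * d, 0.0, WORKSPACE)
        for c, d in zip(camera, device_delta)
    )

# Moving the palmtop left moves the user toward the left wall.
cam = (1.5, 1.5, 1.5)
cam = eye_in_hand(cam, (-0.5, 0.0, 0.0))
```

The key design choice this encodes is that camera position follows device position with the same sign, which is what makes the palmtop feel like a window rather than a handle on the scene.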

Figures 4a and 4b show a before-and-after view of the palmtop unit and the contents of the palmtop screen as the user zooms into the 3D model along the z-axis. The model consists of a 3D cam tree representing a hierarchy of information where the rectangles are nodes in the tree [2, 3, 12]. Note that as the palmtop is moved from the start to the end position (Figure 5), the user sees a smooth zooming animation during the traversal of the path.

In order to allow users to select objects within the virtual world, a cross-hair cursor is fixed at the center of the screen. Users line up target objects over the cross-hair and click (or, in some situations, double-click) the response button. This causes an imaginary ray, centered at the cross-hair, to emanate from the palmtop unit toward objects in the virtual world. The first object encountered is selected.
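The ray-selection mechanism might look like the following sketch, assuming (for illustration only) that objects are approximated by bounding spheres; the scene contents and names are invented.

```python
# Hypothetical sketch of cross-hair selection: a ray from the palmtop
# along its view direction selects the first (nearest) object it hits.
import math

def ray_pick(origin, direction, objects):
    """Return the nearest object whose bounding sphere the ray hits.

    objects: list of (name, center, radius) tuples.
    """
    dlen = math.sqrt(sum(d * d for d in direction))
    d = [c / dlen for c in direction]          # normalize
    best = None
    for name, center, radius in objects:
        oc = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(oc, d))  # projection onto the ray
        if t < 0:
            continue                           # behind the palmtop
        closest = [o + t * di for o, di in zip(origin, d)]
        miss = math.dist(closest, center)
        if miss <= radius and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

scene = [("far node", (0, 0, 5.0), 0.5), ("near node", (0, 0, 2.0), 0.5)]
ray_pick((0, 0, 0), (0, 0, 1), scene)  # selects "near node"
```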

Application-specific controls can be built into the system. For example, in the 3D cam tree, users are able to select a node and gesture downward in a tugging motion to cause the tree to rotate along the x-axis. Conversely, selecting a node and tugging upward causes the tree to rotate in the opposite direction.

Lessons Learned

Preliminary experimental evaluation of the Chameleon design suggests that the small palmtop screen size and gesturing-input combination offer depth perception equivalent to that of large (21-inch) static 3D displays [5]. Additional experiments will further characterize the performance of this combined input and output device. Nevertheless, the experiment revealed a few interesting observations.

When first interacting with the palmtop unit, approximately 25 percent of the users felt that the controls were completely backwards. That is, they had an object-centered view instead of an egocentric view of the input controls. Selecting an object and gesturing to the left was incorrectly believed to move the object or the entire scene to the left. Egocentric controls dictate that gesturing to the left causes the user to move to the left, keeping the object or entire scene in a relatively fixed location. To convert users from object-centered to egocentric controls, we first had them hold the palmtop at a fixed distance from their body. Then we asked them to rock forward and backward and then left to right. These physical actions directly corresponded to the navigation controls and caused the users to make a quick transition to the egocentric model.
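The difference between the two mental models can be captured in a few lines. This is a hypothetical one-dimensional sketch; the function and variable names are invented.

```python
# Hypothetical sketch contrasting the two mental models users brought
# to the controls in one dimension (x).
def egocentric(camera_x: float, gesture_dx: float) -> float:
    """Chameleon's model: gesturing left moves the *user* left."""
    return camera_x + gesture_dx

def object_centered(scene_x: float, gesture_dx: float) -> float:
    """The backwards reading: gesturing left drags the *scene* left."""
    return scene_x + gesture_dx

def view_offset(scene_x: float, camera_x: float) -> float:
    """Where the scene appears relative to the viewer."""
    return scene_x - camera_x

# A leftward gesture (dx = -1) produces opposite apparent motion under
# the two models: the scene appears to shift right (egocentric) or
# left (object-centered), which is why the controls felt "backwards".
```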

While using the device, we noticed that users seemed to have a maximum cruising speed while moving the palmtop to browse the 3D workspace. This cruising speed was slower than we anticipated. Moreover, none of the users complained about or observed a lag in the system. This leads us to consider alternative explanations. Specifically, we are looking more closely at the human visual processor and physical motor-skill limitations. One theory is that novice users of the system, who are unfamiliar with the contents of the 3D workspace, may be browsing in a continuous "focused path-tracking" mode. Expert users who are very familiar with the workspace may instead browse in a "focused endpoint-tracking" mode. That is, experts know their final destination and are only interested in the before-and-after views, while novices are interested in observing the entire traversal path to the final destination. Future research will investigate this cruising-speed phenomenon more closely.

In terms of palmtop display quality, the resolution of the small LCD monitor was considerably inferior to that of the large SGI monitor. While ghosting images are not a problem, the resolution of the LCD display does not easily support text. We are investigating the use of scalable fonts and alternative text-presentation styles. More sophisticated rendering models to preserve the 3D scene are also under investigation. McKenna [9] has begun to explore the benefits of tracking both the head and a mobile display surface. Nevertheless, we believe the display characteristics and quality will improve over time.

Improvements in tracking technology will allow the palmtop to operate in larger spaces. The current use of the Ascension bird provides detailed 6D tracking on the order of a 3-foot cube. Researchers are investigating methods for tracking on a much larger scale. The optoelectronic system being developed at UNC Chapel Hill is designed to provide tracking performance similar to the bird's, but on the order of 16 by 30 feet and scalable to larger dimensions (see Azuma's sidebar in this issue).

Notice that more technologically robust configurations can be used for the Chameleon prototype. For example, using an NTSC output channel on the SGI machine instead of the external camera would be an improvement. However, using the camera allows us to easily switch between host machines (e.g., developing on a Macintosh instead of an SGI) and provides a means of quickly altering the quality of the video image. The idea is to allow for rapid prototyping and rapid alterations to the prototype framework for improved design exploration.

Although the prototype palmtop device is currently tethered by cords (due to the video feed and 6D input device), it provides a rich environment for testing new situations, applications, and user interactions in a technology configuration we anticipate will be available in a few years in a highly portable form. Three applications are described below. The active map is currently being investigated and prototyped, while the other two are still on the drawing board.

Applications

Given the notion of situated information spaces and the design of the Chameleon prototype, new applications are being defined to explore and reveal issues in computer-augmented environments. The intention of describing the applications is to uncover issues, problems, and even styles of user interaction. Our goal, initially, is not to determine the feasibility of implementing each application but to identify interesting characteristics of the applications and how they may influence future system designs and prototypes.

Figure 5. The user sees a smooth animation on the palmtop screen while browsing an information space.


Active Maps and Paper

The Chameleon system can serve as an electronic information lens when used in conjunction with electronic or paper-based displays. A paper display (e.g., large posters, diagrams, or maps) or a computer monitor (e.g., a large 21-inch display or even a larger rear-projected screen) serves as a stationary object containing the dominant information source. As the hand unit is positioned closer to or further from the stationary artifact, varying levels of information or detail are shown within the display of the palmtop unit. For example, suppose we wanted to browse a map of Canada. A large poster or a computer monitor displays a map of the entire country. A user first selects a region by using the palmtop as a pointing device. Zoom controls are available to access more detailed information about a region, causing the palmtop display to change; the map being presented remains the same for contextual awareness and orientation (see Figure 6). A variety of information could be presented to the user depending on the orientation of the hand unit. For example, weather information, travel itineraries, and geographical points of interest could be easily accessed. More sophisticated models will allow the Chameleon and an electronic display surface to interact with and react to each other.
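The distance-dependent detail could be driven by a simple threshold table. The thresholds and level names below are invented for illustration; a real system would tune them to the map and tracker.

```python
# Hypothetical sketch: choosing the level of detail shown on the
# palmtop from its distance to the stationary map surface.
LEVELS = [
    (0.15, "street detail and points of interest"),
    (0.50, "regional detail (cities, weather)"),
    (float("inf"), "country overview"),
]

def detail_for(distance_m: float) -> str:
    """Closer to the map surface means finer detail on the palmtop."""
    for threshold, level in LEVELS:
        if distance_m <= threshold:
            return level
    return LEVELS[-1][1]
```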

A similar design concept could be investigated for individual sheets of paper. Suppose a paper document contains sensitive data such as a fiscal budget. The numbers in the spreadsheet would not be printed in ink but would appear in electronic ink only when a user with the proper authorization positions the Chameleon device over the columns containing the sensitive data.

Information spaces will be constructed not only with a single computational unit operating in isolation with physical artifacts but often with collections of computational units situated within a common environment. The computer-augmented library and portable surrogate office are two example applications that illustrate this idea by extending the model of computational objects into a computational environment consisting of cooperating objects.

Computer-Augmented Library

As an example of a more widespread electronic information space, a computer-augmented library could offer significant improvements over a traditional library. Suppose we were searching for books and information on 19th-century music composers. Searching in a traditional library often involves two separate procedures: (1) accessing and querying a card catalog or an electronic bibliographic database, and (2) tracking down books and browsing the bookshelves. In a computer-augmented library, these separate acts and procedures are combined into an integrated process. The electronic database of information is situated around the bookshelves. The shelves and the books emit navigational and semantic information. As we walk through the music section, books on the topic of interest as well as related material will be highlighted by indicator lights. Figure 7 shows a computer-augmented bookshelf with touch-sensitive LCD strips along each shelf. Regions of the strip are highlighted under books that match a user's query. Alternatively, our awareness may be directed by non-speech audio that we can hear through miniature headphones.
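The query-driven strip highlighting might be sketched as follows. The shelf coordinates, keywords, and titles are invented for illustration, and keyword matching stands in for whatever retrieval engine the library actually uses.

```python
# Hypothetical sketch: mapping a query's matching books to highlight
# regions on the touch-sensitive LCD strip under each shelf.
def strip_highlights(shelf_books, query_terms):
    """Return (start_cm, end_cm) strip regions under matching books.

    shelf_books: list of (title, keywords, start_cm, end_cm) giving
    each book's spine position along the shelf.
    """
    wanted = {t.lower() for t in query_terms}
    return [
        (start, end)
        for title, keywords, start, end in shelf_books
        if wanted & {k.lower() for k in keywords}
    ]

shelf = [
    ("Beethoven: A Life", {"composer", "19th-century"}, 0, 4),
    ("Organic Chemistry", {"chemistry"}, 4, 10),
    ("Chopin and His World", {"composer", "19th-century"}, 10, 13),
]
strip_highlights(shelf, ["19th-century", "composer"])  # two regions light up
```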

Figure 6. An active map which emits various layers of information is quickly accessed by the palmtop unit. A map of Canada is shown, and the user is requesting detailed weather information for a region on the map by moving over the region of interest.

To aid our search, selecting a book and tapping on the book spine or on the corresponding touch-sensitive LCD strip under the book (see Figure 7 inset) will cause the search engine to focus on material similar to our selected book. The LCD strips provide users with peripheral awareness of information regions, supplying very coarse levels or pieces of information (e.g., specifying a location or displaying a word).

The Chameleon unit serves as the main focus of high-fidelity information exchange. Textual and graphical data will be presented to further enhance our experience. When requested, the table of contents of individual books will be rapidly presented to us, or a dynamically merged table of contents could be formed from a group of selected books. While many such possibilities exist, note that this computer-augmented environment does not prevent us from browsing the bookshelf in the traditional fashion. Our experience is enhanced through the situated electronic information spaces and the embedded-computer infrastructure, without sacrificing the beneficial aspects of existing technology.

Portable Surrogate Office

The Chameleon system can be used to offer remote access to a physical environment as well as provide additional functionality. The envisioned design allows users to electronically annotate all objects in the physical environment, as well as access and control "mediator objects" that have compatible interfaces working between both the physical and virtual environments.

To capture the contents of an environment, in this case an office environment, a camera is placed in the center of the room. A 360-degree panoramic video image is taken to capture a visual representation of the office (Figure 8). This image, stored on the palmtop unit, allows users to browse the contents of their office environment while not physically present. Note that a more robust design would support real-time video access to the environment, given access to high-capacity communication channels.

Figure 7. Computer-augmented library. The electronic database of information is situated around the bookshelves; the shelves and books emit navigational and semantic information. Touch-sensitive LCD strips run along the shelves to select books of interest and refine the search.

Figure 8. A proposed office environment with a camera mounted on the ceiling for periodically capturing the contents of the office to be stored in the palmtop. The office contains touch-sensitive LCD strips (appearing in green) for accessing electronic annotations.

To facilitate accessing the contents of the video images, a spatial mapping is introduced in the retrieval mechanism. Since users are very familiar with the spatial layout of their office, the proper portions of the video images are accessed by positioning the palmtop in the direction of the target objects in the physical environment. For example, in an office, a desk may be found in the center of the room, with a bookcase on the right-hand wall, a whiteboard on the left-hand wall, and a calendar on the front wall (Figure 8).

To view the contents of your wall calendar while you are away from the office, you would first imagine sitting in the office chair. Since the calendar is located directly in front of you, you raise the device until you see the calendar (Figure 9). To see the bookshelf, you would swing the device 90 degrees to your right; the palmtop provides a window into the office environment. This design takes advantage of the user's persistent mental model of the office environment and provides a constant analogy to the physical interface for accessing or viewing objects.
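The orientation-based retrieval can be sketched as a mapping from device yaw to a window of panorama columns. The image width, field of view, and the convention that the calendar sits at 0 degrees are all assumptions for illustration.

```python
# Hypothetical sketch: selecting the slice of the stored 360-degree
# panorama that corresponds to the palmtop's current yaw.
PANO_WIDTH = 3600          # panorama width in pixels; 10 px per degree
FOV_DEG = 60               # assumed horizontal field of view

def panorama_window(yaw_deg: float):
    """Return (left_px, right_px) columns of the panorama to display.

    yaw_deg: device heading, 0-360, relative to the room's reference
    direction (here, the wall calendar at 0 degrees). The window may
    wrap around the panorama's seam, so left_px can exceed right_px.
    """
    px_per_deg = PANO_WIDTH / 360.0
    left = (yaw_deg - FOV_DEG / 2) % 360 * px_per_deg
    right = (yaw_deg + FOV_DEG / 2) % 360 * px_per_deg
    return int(left), int(right)

panorama_window(0)    # looking at the calendar (wraps around the seam)
panorama_window(90)   # swung 90 degrees right, toward the bookcase
```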

Not only can users remotely browse the contents of the office, but additional functionality can be offered, such as voice and graphical annotations. While browsing the office, users select objects from the video images by lining up the target object with the center cross-hair on the palmtop unit and clicking the response button. At this point the user can attach a voice annotation to the selected object. A graphical "post-it" note is superimposed on the video data to remind the user of the presence of a voice annotation. The note consists of a voice-annotation icon, the date, and a timestamp. For example, one could select a day in the wall calendar and leave a reminder message, "Department budget meeting at 2:00" (Figure 10a), or select a book from the bookshelf and leave a voice annotation such as "Return this to Jim" (Figure 10b).

The graphical notes can also serve as anchors for making hypermedia links or associations among objects within the environment. Clicking and holding on a note defines the beginning anchor for a link. Once the device is positioned at the target end anchor, a graphical link line is superimposed on the video images, producing the appearance of a piece of string physically connecting the two anchor points. Figure 10a shows a link made between a day in the wall calendar and the telephone.
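The note and link records described above might be represented as follows. The field names are illustrative, not taken from the prototype; the voice clip is stood in for by a file name.

```python
# Hypothetical sketch of the annotation and hypermedia-link records a
# surrogate office might keep.
from dataclasses import dataclass

@dataclass
class PostItNote:
    target: str        # selected object, e.g. "wall calendar: May 12"
    voice_clip: str    # recorded voice annotation (here, a file name)
    date: str
    time: str

@dataclass
class Link:
    start: PostItNote  # clicking and holding a note sets the start anchor
    end_target: str    # object the device points at for the end anchor

note = PostItNote("wall calendar: May 12", "budget.wav", "1993-07-12", "14:00")
link = Link(start=note, end_target="telephone")
```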

Extending the design of the Chameleon system to support pen input allows users to scribble electronic notes or annotations on the office walls or on objects in the physical environment. A common practice may be to leave notes on the office whiteboard.

Figure 9. Browsing the contents of your office environment while at home. The palmtop unit acts as a small window into your remote environment and makes use of spatial organization and memory for quick access.

Figure 10. Sample views of the palmtop unit showing the combined physical and electronic information. The blue rectangles represent electronic post-it notes with voice annotations attached. (a) shows two notes left on the desk and a hypermedia link between the phone and the wall calendar, (b) shows three notes left on the bookshelf, and (c) shows the contents of the electronic whiteboard.

When the whiteboard is promoted to an electronic "mediator object," the notes made on the palmtop unit can automatically be transferred to and appear on the electronic whiteboard. That is, when the palmtop unit is positioned on top of the virtual whiteboard, the palmtop surface acts the same as the electronic whiteboard surface in the physical environment (Figure 10c).

Light switches, thermostats, and even the telephone can be promoted to mediator objects. Users can turn their lights on or off, raise or lower the office temperature, and check whether any telephone voice messages have been left on their machine.

In short, mediator objects can react to and interface between the physical and computational environments. They offer the best example of integrating computers and sensors with familiar objects in a way that is compatible with the real world and current work practice.
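A mediator-object interface could be sketched as a small class hierarchy in which every state change is meant to apply in both worlds. This is a speculative sketch with invented class and method names; the real devices would of course need hardware drivers behind `set`.

```python
# Hypothetical sketch of a "mediator object" interface: a physical
# device exposing the same state to both environments.
class MediatorObject:
    """Base class: state changes apply in both environments."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def set(self, key, value):
        self.state[key] = value   # would also drive the real device
        return self.state

class LightSwitch(MediatorObject):
    def turn(self, on: bool):
        return self.set("power", "on" if on else "off")

class Thermostat(MediatorObject):
    def set_temperature(self, celsius: float):
        return self.set("target_c", celsius)

office = [LightSwitch("desk lamp"), Thermostat("office")]
office[0].turn(False)             # lights off, remotely
office[1].set_temperature(21.5)
```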

While working in the physical office environment, users need mechanisms for remembering and accessing their electronic annotations and links. Indicator lights similar to those found on telephone units can serve this purpose. A more complete solution involves touch-sensitive LCD strips placed along the office walls, the desk, and the shelves of the bookcase. The LCD indicators highlight regions near annotations or hypermedia link anchors. Pressing on an LCD indicator accesses the voice or graphical annotation or follows the link to the endpoint anchor. In addition, the Chameleon system can be used within the physical office as a scanner unit to locate and display annotations.

In the surrogate-office description, many complications arise when one begins to think about actually implementing some of the ideas. For example, how can we reconstruct a complete spatial map of the office with only a 360-degree panoramic video image? The resolution of the image will be crucial: text at oblique angles may be unreadable given the video resolution. Another issue is how we can properly align the physical objects with the video data and annotations. Many of these and other issues should be explored to gain a better understanding of the requirements for situated information spaces and computer-augmented environments.

Conclusions

Ubiquitous computing requires a qualitative change in the way we think about and interact with computers. Instead of viewing and manipulating a computerized world through a large stationary computer and display, we want to shift to a new model in which we carry around a very small palmtop computer that acts as our personal display onto information spaces. These displays are aware of their surroundings and change depending on the situation in which they are immersed.

Electronic information will be available everywhere. In order to avoid being flooded and overwhelmed by the quantity of information, we need to adopt the notion of situated information spaces. The electronic information associated with physical objects should be collected, associated, and collocated with those objects. The physical objects anchor the information, provide a logical means of partitioning and organizing the associated information space into "hot spots," and serve as retrieval cues for users.
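The anchoring idea can be made concrete with a minimal sketch: each physical object stores its own annotations at a position in the room, and only annotations near the user's current position are surfaced. The store, object names, and radius below are invented for illustration, not taken from the prototype.

```python
import math

# Hypothetical situated information space: each physical object anchors
# (collocates) its own electronic annotations at a position in the room.
space = {
    "telephone": {"pos": (2.0, 1.0, 0.5), "notes": ["voice memo for Mark"]},
    "bookshelf": {"pos": (0.0, 1.5, 3.0), "notes": ["note 1", "note 2", "note 3"]},
    "calendar":  {"pos": (2.2, 1.6, 0.4), "notes": ["link: telephone -> calendar"]},
}

def notes_near(space, point, radius=0.5):
    """Retrieve annotations whose anchoring object lies within radius of point.

    The anchors partition the space into hot spots: only annotations
    situated near the user's current position are retrieved.
    """
    found = []
    for obj in space.values():
        if math.dist(obj["pos"], point) <= radius:
            found.extend(obj["notes"])
    return found

print(notes_near(space, (2.0, 1.1, 0.5)))  # -> ['voice memo for Mark']
```

Standing at the telephone retrieves only the telephone's annotations; the bookshelf's three notes stay out of view until the user (or the palmtop) moves to that hot spot.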

Since we are constantly viewing and interacting with a 3D physical world, the devices used to support computer-augmented environments need to be just as responsive and aware of the 3D worlds in which they participate.

Toward this end, the Chameleon prototype begins to explore how we can access and manipulate situated 3D information spaces throughout our environment. Because we are interacting concurrently with physical objects and computer-generated data, our input controls and output display are tightly coupled to provide a seamless integration between the two worlds.

This article has explored and uncovered a wide range of issues surrounding computer-augmented environments. The Chameleon prototype and a set of computer-augmented applications were described. Future research will further explore the concepts of situated information spaces, mediator objects, and user interaction styles using the Chameleon design.

Acknowledgments

The author greatly appreciates the contributions of Mark Chignell, Shumin Zhai, Ferdie Poblete, Sarah Zuberec, Ron Baecker, Bill Buxton, Marilyn Mantei, and Paul Milgram for their suggestions and support. Special thanks are due also to Robert Gilbertson for his initial artwork and photography, which aided in visualizing the concepts presented in this article.

We gratefully acknowledge the support to our laboratory from the Natural Sciences and Engineering Research Council of Canada, the Information Technology Research Centre of Ontario, Digital Equipment Corp., Apple Computer, and Xerox PARC.

References

1. Ascension Technology Corp. The Ascension Bird. Burlington, Vt., 1992.

2. Card, S.K., Robertson, G.G. and Mackinlay, J.D. The Information Visualizer: An information workspace. In Proceedings of CHI '91 Human Factors in Computing Systems (New Orleans, La.). ACM, New York, 1991, pp. 181-188.

3. Chignell, M., Zuberec, S. and Poblete, F. An exploration in the design space of three-dimensional hierarchies. Dept. of Industrial Engineering, Univ. of Toronto, 1993. Unpublished manuscript.

4. Fairchild, K.M., Poltrock, S.E. and Furnas, G.W. SemNet: Three-dimensional graphic representation of large knowledge bases. In Cognitive Science and its Application for Human-Computer Interface. Erlbaum, Hillsdale, N.J., pp. 201-233.

5. Fitzmaurice, G.W., Zhai, S. and Chignell, M. Virtual reality for palmtop computers. To be published in ACM Trans. Inf. Syst. (July 1993), special issue on Virtual Worlds.

6. Gibson, J.J. The Perception of the Visual World. Houghton Mifflin, Boston, Mass., 1950.

7. Krueger, M. Artificial Reality II. Addison-Wesley, Reading, Mass., 1991.

8. Lamming, M.G. and Newman, W.M. Activity-based information retrieval: Technology in support of human memory. Tech. Rep., Rank Xerox EuroPARC, 1991.

9. McKenna, M. Interactive viewpoint control and three-dimensional operations. In Proceedings of the 1992 ACM Symposium on Interactive 3D Graphics. Comput. Graph. (1992), 53-56.

10. Newman, W. and Wellner, P. A desk supporting computer-based interaction with paper documents. In Proceedings of CHI '92 Human Factors in Computing Systems. ACM, New York, 1992, pp. 587-592.

11. Newman, W.M., Eldridge, M.A. and Lamming, M.G. PEPYS: Generating autobiographies by automatic tracking. In Proceedings of the 2nd European Conference on CSCW, 1991, pp. 175-187.

12. Robertson, G.G., Mackinlay, J.D. and Card, S.K. Cone trees: Animated 3D visualizations of hierarchical information. In Proceedings of CHI '91 Human Factors in Computing Systems. ACM, New York, 1991, pp. 189-202.

13. Tani, M., Yamaashi, K., Tanikoshi, K., Futakawa, M. and Tanaka, H. Object-oriented video: Interaction with real-world objects through live video. In Proceedings of CHI '92 Human Factors in Computing Systems. ACM, New York, 1992, pp. 593-598.

14. Want, R., Hopper, A., Falcao, V. and Gibbons, J. The Active Badge Location System. ACM Trans. Inf. Syst. 10, 1 (Jan. 1992), 91-102.

15. Weiser, M. The computer for the 21st century. Sci. Am. 265, 3 (Sept. 1991), 94-104.

CR Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—retrieval models; H.5.2 [Information Interfaces and Presentation]: User Interfaces—input devices and strategies, interaction styles; I.3.1 [Computer Graphics]: Hardware Architecture—three-dimensional displays; I.3.6 [Computer Graphics]: Methodology and Techniques—interaction techniques

General Terms: Design, Human Factors

Additional Key Words and Phrases: Information access, palmtop computers, 3D control and displays

About the Author:
GEORGE W. FITZMAURICE is a Ph.D. student in computer science (HCI) in the Dynamic Graphics Project at the University of Toronto. His research interests include computer-augmented environments, ubiquitous computing, 3D input devices, and interactive 3D information visualizations. Author's Present Address: University of Toronto, CSRI, 6 King's College Rd., Toronto, Ontario, Canada, M5S 1A4; email: [email protected]

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© ACM 0002-0782/93/0700-038 $1.50