Robotic Haptic Proxies for Collaborative Virtual Reality


Zhenyi He, Fengyuan Zhu, Aaron Gaudette, Ken Perlin

Future Reality Lab, New York University

ABSTRACT
We propose a new approach for interaction in Virtual Reality (VR) using mobile robots as proxies for haptic feedback. This approach allows VR users to have the experience of sharing and manipulating tangible physical objects with remote collaborators. Because participants do not directly observe the robotic proxies, the mapping between them and the virtual objects is not required to be direct. In this paper, we describe our implementation, various scenarios for interaction, and a preliminary user study.

ACM Classification Keywords
H.5.1. Multimedia Information Systems: Artificial, augmented, and virtual realities; H.5.2. User Interfaces: Interaction styles

Author Keywords
Tangible User Interface; Physical presence; Virtual Reality; Remote collaboration; Haptics

INTRODUCTION
In the past decade, virtual reality (VR) has reached a greater level of popularity and familiarity than ever before. VR is gaining traction in the consumer market, and integrating physical and virtual environments seamlessly is a well-recognized challenge. Haptic feedback can be a powerful component of immersion and can be used to improve the user experience in VR to great effect.

In this paper, we present a generic, extensible system for providing haptic feedback to multiple users in virtual reality by intelligently directing one or more mobile robots. These robots act as proxies for real, physical objects or forces, enriching the environment by providing a deeper level of immersion at a lower cost. Using the Holojam Software Development Kit (SDK) along with OptiTrack tracking technology, we are able to determine the absolute position and rotation of various real objects in the environment, such as a user's wrists or a workstation/table. With this knowledge of global state, we utilize an outside-in approach to control lightweight robots (proxies) in realtime with a high degree of accuracy. The system is physically concise and simple to set up.

We implemented multiple prototype applications that address some of the many possibilities this technology enables. Because the Holojam SDK primarily supports multi-user VR experiences, we chose to focus on collaborative, shared experiences. We implemented prototypes involving users in the same room (shared-space) as well as prototypes involving users in different physical locations (remote-space), with support for varying spatial configurations.

We defined three possible mappings between the representation of physical proxies, which are "invisible" when wearing a VR headset, and virtual objects: one-to-one, many-to-one, and one-to-many. Multiple virtual objects may be represented physically by a single robot, or vice versa, depending on the number and distribution of the robots in the scene, as well as user proximity (shared-space versus remote-space).

In remote-space, our system allows multiple users to "share" touch on the same virtual object, enabling new forms of collaboration for remote teamwork. In shared-space, one proxy or several distributed proxies can be synthesized via gesture recognition and prediction to facilitate swift haptic response for a large number of virtual objects, as well as "commanding" of physical objects to move without touching them. We executed all of these ideas as prototype applications.

Our main contributions:

• Remote synchronizable robotic proxies for VR. With robotic assistants, people can collaborate physically while working remotely.

• Mappings between virtual objects and invisible physical robots. We extend the design space by implementing different mapping styles: one virtual object can be controlled by multiple robots, or multiple virtual objects by a single robot.

• Augmenting the experience by combining physical feedback with the virtual scene.










RELATED WORK
Several methods have been advanced that simulate the sense of touch. However, many are either not as mobile or not as lightweight as our system.

Haptic Interfaces
Recently, more and more research has focused on the intersection between haptic feedback and collaboration. InForm [6] proposed utilizing shape displays in multiple ways to manipulate by actuating physical objects. Tangible Bits was proposed in 1997 to empower collaboration through the manipulation of physical objects [9], and the idea was extended in 2008 [8]. PSyBench, a physical shared workspace, presented a new approach to enhancing remote collaboration and communication in 1998, based on the idea of Tangible Interfaces [3]. The concept of synchronized distributed physical objects was mentioned in PSyBench [3], which demonstrated the potential of physical remote collaboration. One contribution of this paper is to show how people can experience consistent physical feedback over distance, regardless of the physical configuration of the corresponding remote space. PSyBench [15] supported only one-to-one mapping, while we extend both the mapping styles and the kinds of scenarios; moreover, its objects could not be lifted and merely mirrored each other's movement, without VR support. InForm [6] did not support collaboration and its materials are fixed to the table, while we offer a more lightweight approach. SnakeCharmer [1] had similar ideas about one-to-many mapping; however, we additionally support collaboration and wireless operation.

Haptic Feedback
Haptic feedback has been mentioned frequently in VR training, especially in the medical field. The sense of touch is the earliest developed in human embryology and is believed to be essential for practice [5, 4]. Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon [14]. Haptic feedback has potential benefits not only in training, but in other interactions as well. Reality-based interaction was proposed for post-WIMP interfaces [11]. Tangible interactions are observable with both visual and haptic modalities, which could help people utilize basic knowledge about the behavior of our physical world [18].

Haptic Re-Targeting
Manipulating multiple virtual objects is always a challenge, in that precisely located haptic proxy objects are required. [2] proposed multiple approaches to align physical and virtual objects. Redirected touching [12] considered the inflexibility of passive haptic displays and introduced a deliberate inconsistency between real hands and virtual hands. In redirected touching, a single real object could provide haptic feedback for virtual objects of various shapes to enrich the mapping between virtual objects and physical proxies.

Telepresence
C-Slate presented a new vision-based system, which combined bimanual and tangible interaction with the sharing of remote gestures and physical objects as a new approach to remote collaboration [10]. [7] explored an augmented reality approach to controlling distant objects, but without feedback. Also, with shared workspaces that can capture and remotely render the shapes of people and objects, users can experience haptic feedback based on shape displays [13].

MOBILE PHYSICAL PROXIES
To support the illusion of physical sharing between collaborators, our system requires one or more mobile robots, which must be intelligently controlled in a manner that respects the constraints of the robots and the environment. We accomplish this with an outside-in approach, managing global state and sending robot control messages from a single server machine.

Using the ad-hoc server model in the Holojam SDK, we first ascertain global state by reading realtime tracking data from the OptiTrack system, then calculate the target positions for any robots that are currently active in the environment, based on the application requirements. Our ad-hoc server maintains as many rooms as necessary for remote-space applications (all of the examples in this paper use either one or two rooms) and commands the connected robots to move when needed. Because both the robots and the OptiTrack cameras are small, and the ad-hoc server can run on a midrange computer, the entire setup is fairly lightweight and portable.
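The per-frame server logic described above can be sketched as follows. This is a minimal illustration, not the actual Holojam or OptiTrack API: the pose format (x, y, heading) and both function names are assumptions made for the example.

```python
import math

def heading_and_distance(robot_pose, target):
    """Turn angle (radians) and distance from a robot's tracked pose
    (x, y, theta) to a target point (x, y). Pose format is assumed."""
    dx = target[0] - robot_pose[0]
    dy = target[1] - robot_pose[1]
    distance = math.hypot(dx, dy)
    # Angle the robot must turn toward the target, wrapped to [-pi, pi]
    turn = math.atan2(dy, dx) - robot_pose[2]
    turn = (turn + math.pi) % (2 * math.pi) - math.pi
    return turn, distance

def control_step(robot_pose, target, arrive_eps=0.02):
    """One server tick: decide a (turn, forward) command for one robot.
    A real server would run this per robot per frame and send the
    command over the wireless link."""
    turn, distance = heading_and_distance(robot_pose, target)
    if distance < arrive_eps:
        return (0.0, 0.0)  # close enough to the target; stop the robot
    return (turn, distance)
```

In practice the server would also clamp the forward command to the robot's speed limit and re-plan every frame as fresh tracking data arrives.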

    Figure 1. Robots with tracked configuration

Robot Design
Beginning with several m3pi robots, we attached a configuration of tracked markers to each one in order to determine its absolute position and orientation via the OptiTrack system. We created a cylinder-shaped acrylic cover for the robots (Figure 1) in order to facilitate easy gripping. This grip was standardized across all the robots, with a minimalist, scale-matched virtual representation available for when interaction was necessary.

Mappings Between Physical and Virtual Space
One challenge of physically representing a virtual environment is the method used to map virtual objects to their physical counterparts (or vice versa). Conventionally, virtual objects have no physical representation and can only be controlled indirectly or through the use of a standard input device (such as a game controller). Also, the set of physically controlled objects that are visually represented in VR has traditionally been limited to the input devices themselves. In contrast, our system provides three different mapping mechanisms to augment the relationship between physical and virtual representation.
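The three mechanisms can be summarized by the cardinality of each side of the mapping. The sketch below is illustrative only; the names are ours, not part of the Holojam SDK.

```python
# Hypothetical tags for the three proxy/virtual-object mapping styles.
ONE_TO_ONE = "one-to-one"
MANY_TO_ONE = "many-to-one"   # several robots back one virtual object
ONE_TO_MANY = "one-to-many"   # one robot serves several virtual objects

def classify_mapping(proxy_ids, virtual_ids):
    """Classify a mapping by how many proxies serve how many virtual objects."""
    if len(proxy_ids) == 1 and len(virtual_ids) == 1:
        return ONE_TO_ONE
    if len(proxy_ids) > 1 and len(virtual_ids) == 1:
        return MANY_TO_ONE
    if len(proxy_ids) == 1 and len(virtual_ids) > 1:
        return ONE_TO_MANY
    raise ValueError("unsupported mapping cardinality")
```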

One-to-One Mapping
This is the standard and most straightforward option. When users interact with a virtual object in the scene, they simultaneously interact with a physical proxy at the same location.
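One way the server's bookkeeping for this case might look, as an assumption on our part rather than the paper's actual code: the virtual object follows the tracked proxy, except when the user moves the virtual object, in which case the robot is driven to match it.

```python
def sync_one_to_one(proxy_pose, virtual_pose, user_moved_virtual):
    """Return (new_virtual_pose, robot_target) for one server tick.
    Poses are opaque tuples; a robot_target of None means "do not move"."""
    if user_moved_virtual:
        # The user dragged the virtual object: command the robot to catch up.
        return virtual_pose, virtual_pose
    # Otherwise the virtual object simply mirrors the tracked proxy.
    return proxy_pose, None
```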

Many-to-One Mapping
When multiple proxies represent one virtual object, we define the