A Raycast Approach to Hybrid Touch / Motion Capture Virtual Reality User Experience

Ryan P. Spicer, Rhys Yahata, Mark Bolas and Evan Suma
USC Institute for Creative Technologies

{spicer, ryahata, bolas, suma}@ict.usc.edu

Figure 1: Left: Our virtual environment uses a tracked HMD and a touch screen. Right: Our approach, illustrated in a one-dimensional case. The physical user is indicated in green; the self-avatar as seen in the HMD in blue.

Author Keywords
Virtual Reality; 3D User Interfaces; Touch Screens

ACM Classification Keywords
H.5.2 User Interfaces: Interaction Styles; I.3.7 Three-Dimensional Graphics and Realism: Virtual Reality

INTRODUCTION
We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands. In our system, each of the user’s hands is tracked by a plastic rigid body with motion capture markers, strapped to the back of the hand. This design is convenient, but lacks fingertip tracking.

Our interaction uses an infrared multitouch overlay on an LCD display. This overlay is used for user input both inside and outside of the HMD experience (Figure 1, left).

Since the user’s fingertips are not tracked, they may not align with the self-avatar’s fingertips. This error is amplified because users have different finger lengths, and the hand rigid bodies may not be placed identically on every user.

As a result of these errors, the user’s self-avatar fingertip, as viewed through the HMD, does not consistently align with the real-world user’s fingertip as sensed by the touch surface. This causes the user to perceive frustrating, inaccurate responses from the user interface.

APPROACH
We developed a raycast-based approach to determine corrected touch locations that are accurate from the user’s perspective in the virtual environment. When a touch event is reported, we determine the self-avatar index fingertip closest to the touch point, based on the rigid body attached to each hand. We then check whether the touch point is within a threshold distance, experimentally set at 10 cm, of that self-avatar fingertip.
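
As an illustration of this step, a short Python sketch (using numpy; the function and variable names are ours and not taken from the actual system) finds the self-avatar fingertip nearest to a reported touch point and returns its distance for comparison against the threshold.

import numpy as np

# 10 cm threshold, experimentally chosen as described above
TOUCH_THRESHOLD_M = 0.10

def nearest_fingertip(touch_point, fingertip_positions):
    # Return (index, distance) of the self-avatar fingertip closest to the
    # reported touch point. All points are 3D positions in world space; the
    # fingertip positions are derived from the rigid bodies on each hand.
    touch = np.asarray(touch_point, dtype=float)
    distances = [float(np.linalg.norm(np.asarray(tip, dtype=float) - touch))
                 for tip in fingertip_positions]
    best = int(np.argmin(distances))
    return best, distances[best]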

If the fingertip is within the threshold, we raycast from the user’s eye point through the index fingertip of the user’s self-avatar hand. We use the intersection of this ray with the virtual screen to compute a perspective-correct touch point.
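
The perspective correction itself reduces to a standard ray-plane intersection. The sketch below, again with illustrative names, assumes the virtual screen can be approximated by a plane given by a point on the screen and its normal.

def perspective_correct_touch(eye_point, fingertip, screen_point, screen_normal):
    # Cast a ray from the eye through the self-avatar fingertip and intersect
    # it with the plane of the virtual screen. Returns the intersection point
    # in world space, or None if the ray is parallel to the plane or the
    # plane lies behind the eye.
    eye = np.asarray(eye_point, dtype=float)
    direction = np.asarray(fingertip, dtype=float) - eye
    normal = np.asarray(screen_normal, dtype=float)

    denom = float(np.dot(normal, direction))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the screen plane
    t = float(np.dot(normal, np.asarray(screen_point, dtype=float) - eye)) / denom
    if t < 0.0:
        return None  # intersection is behind the eye
    return eye + t * direction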

If neither self-avatar fingertip is nearby, we assume that the user is interacting without the HMD and hand trackers, and we use the original touch point. When a correction is applied, the corrected touch point is computed relative to the self-avatar fingertip as viewed through the HMD, so differences in hand size or finger length do not affect the result. Likewise, differences in hand tracker placement are reflected in the position and orientation of the self-avatar fingertip (Figure 1, right).
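
Putting the pieces together, a hedged sketch of the overall decision logic, building on the two helpers above, might look like the following; the fallback branch corresponds to interaction without the HMD and hand trackers.

def corrected_touch_point(raw_touch, eye_point, fingertip_positions,
                          screen_point, screen_normal,
                          threshold=TOUCH_THRESHOLD_M):
    # If a tracked self-avatar fingertip is within the threshold of the raw
    # touch, replace the touch with the perspective-correct intersection;
    # otherwise keep the raw touch point reported by the overlay.
    if len(fingertip_positions) == 0:
        return np.asarray(raw_touch, dtype=float)
    idx, dist = nearest_fingertip(raw_touch, fingertip_positions)
    if dist > threshold:
        return np.asarray(raw_touch, dtype=float)
    corrected = perspective_correct_touch(eye_point, fingertip_positions[idx],
                                          screen_point, screen_normal)
    return corrected if corrected is not None else np.asarray(raw_touch, dtype=float)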

This approach does not require an active touch screen. We have used the system to create interactive surfaces in midair, or on blank wood and glass surfaces that provide haptic feedback. In this use case, we detect a touch when the self-avatar fingertip collides with an object in the virtual environment that represents the screen.
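
For such passive surfaces, a minimal sketch of the touch test could approximate the collision between the self-avatar fingertip and the virtual screen object by a point-to-plane distance check; the 1 cm contact distance is our illustrative choice, not a value from the system.

def passive_surface_touch(fingertip, surface_point, surface_normal,
                          contact_distance=0.01):
    # The self-avatar fingertip counts as touching the passive surface when
    # it is within contact_distance of the surface plane. The 1 cm default
    # is illustrative only.
    offset = np.asarray(fingertip, dtype=float) - np.asarray(surface_point, dtype=float)
    unit_normal = np.asarray(surface_normal, dtype=float)
    unit_normal = unit_normal / np.linalg.norm(unit_normal)
    return abs(float(np.dot(offset, unit_normal))) <= contact_distance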

LIMITATIONS AND FUTURE WORK
This approach has several limitations that may be addressed in future work. The system does not afford multi-touch gestures: since the fingertips are fixed in the user’s self-avatar hand, the same approach is not useful for multi-finger gestures. This challenge could be addressed by, for example, determining the transforms of the other touch points relative to the index-finger touch point, based on the orientation of the tracked hand.
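
One possible sketch of that idea, assuming the tracked rigid body provides the hand orientation as a rotation matrix (the names and conventions are ours, not the system’s):

def secondary_touch_offsets(index_touch, other_touches, hand_rotation):
    # Express additional touch points as offsets from the index-finger touch
    # point in the hand's local frame, so they could later be re-applied in
    # the corrected (self-avatar) frame. hand_rotation is assumed to be a
    # 3x3 matrix mapping hand-local directions to world space.
    anchor = np.asarray(index_touch, dtype=float)
    to_local = np.asarray(hand_rotation, dtype=float).T  # world -> hand-local
    return [to_local @ (np.asarray(t, dtype=float) - anchor) for t in other_touches]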

ACKNOWLEDGEMENTS
This work was supported by the Office of Naval Research through Award No. W911NF-04-D-0005-0041. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
