Interaction with Mobile Augmented Reality Environments
Jong Weon Lee and Han Kyu Yoo
Department of Digital Contents, Sejong University, Seoul, South Korea
Synonyms
AR; Mediated reality; Mixed reality; MR
Definition
Augmented reality is a technology that combines virtual and real worlds in real time to help users complete their work or to provide them with new experiences.
Introduction
Augmented reality technologies have been widely applied in military, industrial, medical, and entertainment domains. The rapid spread of smart mobile devices such as smartphones and smart pads has made it possible to experience AR on such devices. Various AR applications, including games, have been developed on mobile devices using sensors such as a camera, a GPS, and an inertial sensor, yet most of them provide only simple interaction for users. Better 3D interaction techniques are needed to extend the usability of mobile AR applications.
In this article, we introduce a 3D interaction technique suitable for mobile AR applications, developed recently at the Mixed Reality and Interaction (MRI) Laboratory. The technique was developed with a focus on object manipulation.
State-of-the-Art Work
3D Interaction in AR Environments
There has been little research on interaction for mobile AR systems with a small display. Anders Henrysson et al. developed two interaction techniques that use an AR-enabled mobile phone as a tangible interaction device. In Henrysson et al. (2005), the mobile phone itself was manipulated to control an object after selecting it in a 3D AR environment. In Henrysson and Billinghurst (2007), they extended the interaction technique developed in 2005 to mesh editing: multiple points were selected on a mesh, and the selected vertices were locked relative to the camera. A user could then move the mobile phone to translate and rotate the selected object or points after choosing the motion type.
Touch-Based Interaction for 3D Manipulation
Touch-based interaction techniques have been applied to manipulate 3D objects in a few virtual reality systems. These interaction techniques are categorized into two types: constrained and unconstrained. Constrained interaction techniques can manipulate 3D objects precisely. They separate the control of degrees of freedom (DOF) to restrict the movements of 3D objects. A widget, which acts as visual guidance for the predefined constraints, is typically used to restrict the movements of 3D objects in constrained interaction techniques. Figure 1 shows a standard 3D transformation widget. A user can select one of three arrows in the widget to set a translation direction or one of three circles to set a rotation axis. The user's motions are then applied along the selected direction or the selected rotation axis.

(Fig. 1: A standard 3D transformation widget (Cohé et al. 2011).)
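To make the constrained behavior concrete, the following is a minimal sketch of axis-constrained translation (our illustration, not code from the cited papers; the gain constant and the screen-space axis direction are assumptions): only the component of the drag along the arrow's on-screen direction is kept, and it moves the object along the corresponding 3D axis.

```python
import numpy as np

def axis_constrained_translate(obj_pos, axis_world, drag_2d, axis_screen_dir):
    """Translate obj_pos along the widget axis selected by the user.

    obj_pos: (3,) object position in world coordinates
    axis_world: (3,) unit direction of the selected arrow in world space
    drag_2d: (2,) drag vector on the screen (pixels)
    axis_screen_dir: (2,) unit direction of the arrow as drawn on screen
    """
    # The constraint: only drag motion along the arrow's screen
    # direction counts; perpendicular motion is ignored.
    amount = float(np.dot(drag_2d, axis_screen_dir))
    gain = 0.01  # assumed pixels-to-world-units factor, tuned per application
    return obj_pos + amount * gain * axis_world

# Example: a 50-px drag mostly along the x-arrow moves the object in x only.
pos = axis_constrained_translate(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                 np.array([50.0, 10.0]), np.array([1.0, 0.0]))
```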
A boxlike widget, tBox, was developed in Cohé et al. (2011). The edges and the faces of tBox were used for translation and rotation of the selected object, respectively. Users can easily select and manipulate the edges and faces of tBox with a fingertip. The widgets were designed to be more tolerant of imprecise touch inputs, even though careful touch positioning was still necessary. Schmidt et al. developed a single-touch interaction technique with transient 3D widgets (Schmidt et al. 2008). Stroke-based gestures were used to create translation and rotation widgets, and the standard click-and-drag interaction was used for manipulation.
A few constrained interaction techniques have been developed for multi-touch input without a widget. Oscar K.C. Au et al. introduced a widgetless constrained multi-touch interaction on a 10.1-inch display (Au et al. 2012). A user selected a constraint without directly touching the constraint mark: the orientation of the two touching fingers was compared with the predefined axes to select the constraint, and the constraint marks were displayed only as visual guidance. This avoided the fat-finger problem, in which touch input becomes error-prone on a device whose screen elements are small compared to a finger.
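A minimal sketch of this selection idea (our reconstruction; function and variable names are illustrative, not from Au et al.): the direction between the two touch points is compared with the screen-space direction of each candidate axis, and the best-aligned axis becomes the active constraint, so no mark has to be hit.

```python
import numpy as np

def select_constraint_axis(touch_a, touch_b, axes_screen):
    """Pick the predefined axis best aligned with the two-finger direction.

    touch_a, touch_b: (2,) screen positions of the two touches
    axes_screen: dict of axis name -> (2,) screen-space axis direction
    """
    finger_dir = np.asarray(touch_b, float) - np.asarray(touch_a, float)
    finger_dir /= np.linalg.norm(finger_dir)

    def alignment(name):
        axis = axes_screen[name] / np.linalg.norm(axes_screen[name])
        return abs(np.dot(finger_dir, axis))  # sign-insensitive alignment

    return max(axes_screen, key=alignment)

# Two roughly horizontal touches select the x-axis constraint.
axes = {"x": np.array([1.0, 0.1]), "y": np.array([0.0, 1.0]),
        "z": np.array([-0.7, 0.7])}
print(select_constraint_axis((100, 100), (200, 110), axes))  # -> "x"
```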
Unconstrained interaction techniques do not use a 3D transformation widget to visually guide the possible motions of a 3D object. With unconstrained techniques, users can transform an object along an arbitrary direction or axis and can translate and rotate a 3D object simultaneously, so they are typically useful for fast and coarse manipulations.
M. Hancock et al. introduced the Sticky Tools technique in Hancock et al. (2009) to control the full 6DOF of objects. Users select a virtual object by touching it with two fingers and then move the fingers, or rotate them relative to one another, to manipulate the object. While manipulating a virtual object, the user's two fingers must stay in contact with it.
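As a rough illustration of deriving motion from two touch points (a simplified screen-space sketch, not the force-based formulation Hancock et al. actually use): the midpoint displacement of the fingers yields a translation, and the change in the angle of the segment between them yields a rotation.

```python
import math

def two_finger_delta(prev_a, prev_b, cur_a, cur_b):
    """Translation and rotation from two touch points across one frame.

    Returns (dx, dy, dtheta): midpoint displacement in pixels and the
    change (radians) in the orientation of the finger-to-finger segment.
    """
    dx = (cur_a[0] + cur_b[0]) / 2 - (prev_a[0] + prev_b[0]) / 2
    dy = (cur_a[1] + cur_b[1]) / 2 - (prev_a[1] + prev_b[1]) / 2
    prev_angle = math.atan2(prev_b[1] - prev_a[1], prev_b[0] - prev_a[0])
    cur_angle = math.atan2(cur_b[1] - cur_a[1], cur_b[0] - cur_a[0])
    return dx, dy, cur_angle - prev_angle

# Both fingers move 10 px right while twisting slightly.
print(two_finger_delta((0, 0), (100, 0), (10, 0), (110, 5)))
```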
Anthony Martinet et al. developed the DS3 (Depth-Separated Screen-Space) interaction technique to manipulate 3D objects on a multi-touch device (Martinet et al. 2012). They combined constrained and unconstrained approaches and applied different techniques to translation and rotation. The selected object was translated along the axis or the plane defined with one or two fingers, and it was rotated freely using the constraint solver introduced by Reisman et al. (2009). Translation and rotation were clearly separated by the number of fingers
directly in contact with the object. Nicholas Katzakis et al. used a mobile device as a game controller in Katzakis et al. (2011). They developed an interaction technique that could control a 3D cursor on a large display without directly touching the display: the plane defined by the orientation of the mobile device was cast onto the large display, and the user could move the cursor on the cast plane using touch inputs on the display of the mobile device.
The last three interaction techniques are good solutions for a virtual environment with a touch-based display, but they cannot be directly applied to mobile AR environments with a small display. The Sticky Tools and DS3 interaction techniques require direct contact with an object; this requirement is not suitable for a mobile AR system because fingers would occlude too much of the small display. The constraint solver could also be burdensome for the processor of a mobile device, which has limited processing power. The interaction technique proposed by Oscar K.C. Au et al. could be applied to a device with a small display because it does not require direct contact with the constraint marks; its possible problems are the clutter caused by the visual guidance and the requirement of two touching fingers. The plane-casting interaction developed by Nicholas Katzakis could be adapted to a mobile AR environment since the position and orientation of the mobile device are tracked in real time, and this tracked information could be used to constrain the motion of a 3D object in the mobile AR environment. We adapted this plane-casting interaction to the proposed interaction technique.
Overview
We developed a new interaction technique for mobile AR systems with the following three characteristics: (1) combining constrained and unconstrained interaction techniques, (2) using relations between real objects and a smart mobile device, and (3) combining the way real objects are manipulated with the touch interface of a smart mobile device. The proposed interaction technique aims to provide intuitive and effective interaction when a user manipulates virtual objects in a mobile AR world.
3D Interaction in Mobile AR Environments
We designed a new interaction technique for mobile AR systems with the three characteristics described above. The interaction technique uses the movements of a mobile device to change constraints and a mapping ratio dynamically, as shown in Figs. 2 and 3. After the mobile device moves, the plane defined by the orientation of the mobile device is projected onto the coordinate system of the selected virtual object in the AR world. For example, the mobile devices A and B in Fig. 2 are projected onto the coordinate system of a cube object as plane A′ and plane B′, each passing through the origin of the selected object's coordinate system. A user can translate the object along the projected plane, which is the constraint plane, with a simple drag motion as shown in Fig. 4. By changing the constraint plane, a user can translate the object to any location with simple drag motions on the display. Figure 5 shows the mapping between translations in the AR world and motions on the display. The 2D motion E on the display is projected onto the constraint plane D as E′. A user can move the selected object B along the E′ direction using the 2D motion E on the display.
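A minimal sketch of this projection step, under our reading of the description (the rotation-matrix convention and helper names are assumptions, not the authors' code): the tracked device orientation supplies two in-plane axes and a normal, the constraint plane passes through the selected object's origin, and a 2D drag becomes a 3D displacement inside that plane.

```python
import numpy as np

def drag_to_world_motion(drag_2d, device_rotation):
    """Map a 2D drag on the display to a 3D motion on the constraint plane.

    drag_2d: (2,) drag vector on the device display (pixels)
    device_rotation: (3, 3) tracked device orientation; columns are
        assumed to be the device's right, up, and forward axes in world
        coordinates. The constraint plane passes through the selected
        object's origin with the forward axis as its normal.
    """
    right, up = device_rotation[:, 0], device_rotation[:, 1]
    # A linear combination of the two in-plane axes stays inside the
    # constraint plane by construction.
    return drag_2d[0] * right + drag_2d[1] * up

# With the device unrotated, screen axes map straight to world x/y.
motion = drag_to_world_motion(np.array([30.0, -10.0]), np.eye(3))
print(motion)  # [ 30. -10.   0.]
```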
The moving distance of the object depends on the distance between the mobile device and the object, as shown in Fig. 3. When the mobile device is located at location A, a drag motion translates the virtual object C to the location C_A. The same drag motion on the display of the mobile device at B translates C to the location C_B. The distance between C and C_A is twice as long as the distance between C and C_B, since the distance between C and A is twice as long as the distance between C and B. This mapping is represented in Eq. 1, where l is the distance between the mobile device and the object and a is the mapping ratio between d_p, the distance of the drag motion, and d_o, the translated distance of the virtual object C:

d_o = d_p × l × a    (1)

(Fig. 2: Dynamic constraints. Fig. 3: Dynamic mapping distance.)
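In code, Eq. 1 is a single scaling of the drag length (a sketch; the value of the mapping ratio a is an application-tuned assumption):

```python
def translated_distance(d_p, l, a):
    """Eq. 1: object displacement d_o from drag length d_p (pixels),
    device-to-object distance l, and mapping ratio a."""
    return d_p * l * a

# The same 100-px drag moves the object twice as far when the device
# is twice as far from the object, matching the Fig. 3 example.
assert translated_distance(100, 2.0, 0.005) == 2 * translated_distance(100, 1.0, 0.005)
```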
Tapping a mode-changing button switches the interaction mode between translation and rotation. In the rotation mode, the rotation axis is defined as the axis orthogonal to the direction of the drag motion on the constraint plane created by the orientation of the mobile device; the axis b is orthogonal to the drag motion a. Scaling is done with pinch and spread motions and is also constrained by the projection plane defined by the orientation of the mobile device. The scaling ratio is determined dynamically based on the distance between the mobile device and the selected object, similar to the translation.
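The rotation-axis construction can be sketched as follows (our reconstruction; the cross-product formulation is an assumption consistent with the description): the drag is first mapped into the constraint plane, and the axis is the in-plane direction orthogonal to it.

```python
import numpy as np

def rotation_axis_from_drag(drag_2d, device_rotation):
    """Rotation axis: in the constraint plane, orthogonal to the drag.

    device_rotation: (3, 3) tracked device orientation; columns assumed
    to be the right, up, and forward (plane normal) axes in world space.
    """
    right = device_rotation[:, 0]
    up = device_rotation[:, 1]
    normal = device_rotation[:, 2]
    drag_world = drag_2d[0] * right + drag_2d[1] * up  # drag in the plane
    axis = np.cross(normal, drag_world)  # in-plane, perpendicular to drag
    return axis / np.linalg.norm(axis)

# A horizontal drag on an unrotated device gives the world y-axis.
print(rotation_axis_from_drag(np.array([1.0, 0.0]), np.eye(3)))  # [0. 1. 0.]
```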
Experiments
We designed and performed a user study to evaluate the presented interaction technique, examining subjective qualities such as ease of use, ease of learning, naturalness, preference, and fun.
We developed a docking task in which participants manipulated virtual objects (indicated by the dotted lines) and arranged them on the real objects (indicated by the filled rectangles) on table T (Fig. 4). We asked participants to put five virtual characters on top of the matching real characters, as shown in Fig. 4. The five virtual characters appeared in random order at the starting location, the lower center of T. To enforce 3D manipulation, the position, orientation, and size of each virtual character were randomly assigned. When a virtual object was posed close to the corresponding real object with a similar size, it was considered successfully docked; the virtual object then disappeared, and the next one appeared at the starting location (see the right part of Fig. 4). The rectangle marked with the character M was the location of a pattern used for tracking the camera of the smartphone.
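A minimal sketch of such a docking test (the tolerance values and function signature are illustrative assumptions; the paper does not state its exact thresholds): a dock succeeds when position, scale, and orientation are each within a tolerance of the target.

```python
import numpy as np

def docked(pos, target_pos, scale, target_scale, angle_err_deg,
           pos_tol=0.02, scale_tol=0.1, angle_tol_deg=10.0):
    """True when the virtual object is close enough to its real target.

    angle_err_deg: orientation error in degrees, e.g. the angle of the
    relative rotation between the object and the target.
    """
    close = np.linalg.norm(np.asarray(pos) - np.asarray(target_pos)) < pos_tol
    sized = abs(scale - target_scale) / target_scale < scale_tol
    aligned = angle_err_deg < angle_tol_deg
    return close and sized and aligned

print(docked([0.01, 0.0, 0.0], [0.0, 0.0, 0.0], 1.05, 1.0, 4.0))  # True
```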
The usability test consisted of two periods: a training period and a final test period. Participants were trained until their performance improvements saturated or they felt comfortable with the task; training generally took 30–45 min. The number of trials and the learning time were measured during the training period, and the numbers of translation, rotation, and scaling operations and the task completion time were measured for each trial. Before the usability test, we asked participants to fill out questionnaires about their backgrounds. The numbers of translation, rotation, and scaling operations and the task completion time were also measured during the final test. After the training and final test periods, participants were asked to fill out the questionnaires shown in Table 1 to measure their preferences for, and opinions about, the interaction technique.

Table 1. Questionnaire used to measure the participants' preferences for the interaction technique (7-point Likert scale):

Q1 The interaction technique was easy to use
Q2 The interaction technique was easy to learn
Q3 The interaction technique was natural to use
Q4 The interaction technique was easy to remember
Q5 It was easy to view the pattern required for using the augmented reality system
Q6 The augmented object was lost a few times, but the losses did not cause a big problem in completing the given task
Q7 The interaction technique was generally satisfactory
Q8 The interaction technique was fun
Q9 It was easy to move the augmented object to the target location
Q10 It was easy to rotate the augmented object to the target orientation
Q11 There wasn't a major problem in completing the given task
Q12 The size of the display was suitable for the interaction technique
Q13 It was easy to use one hand for the interaction technique
Ten participants (four males and six females) with normal or corrected vision took part in the experiment. They were volunteers and received a small gift for participating. All participants owned smartphones, seven had heard of AR, and three had used AR apps before, but only a few times. We selected young participants for the experiment since they were generally more familiar with, and more willing to learn, new technologies.
Average ratings are summarized in Fig. 5. Overall, the presented interaction technique achieved good ratings on all questions except Q10 and Q13. The interaction technique was considered easy to learn, easy to remember, and fun. Users had difficulty applying rotation motions to the selected object and using the mobile device with one hand.
(Fig. 5: User preference. Average ratings for questions Q1–Q13 on a 7-point Likert scale.)
(Fig. 4: The setting of the usability test.)
Conclusion and Discussion
Understanding the characteristics of mobile AR systems can lead to the development of more effective 3D interaction schemes for mobile AR applications. The important findings from the usability study of the presented interaction technique can be summarized as follows:
1. The hybrid touch-based interface, combining constrained and unconstrained interaction techniques, is easy to learn and easy to remember for the given task. The participants' familiarity with touch-based interfaces could have affected this result.
2. Users have to view the given pattern through their cameras for AR applications that use computer vision techniques. Participants were not bothered much by this requirement with the presented interface. This is an encouraging result because computer vision techniques are often used to create mobile AR applications. Participants also responded that occasional losses of augmented objects due to tracking failures were not a major problem.
3. Users do not yet want to move around the AR environment. The geometric relations between augmented virtual objects and real objects are important in an AR environment, so users have to move around it. In the experiment, however, participants preferred to rotate the real environment, that is, the board containing all real objects used in the experiment. We will fix all real objects in place for the next user experiment to better understand participants' behavior in an AR environment.
In addition, our experience suggests that the rotation interaction of the presented technique has to be modified to provide better user interaction: participants had the most difficulty when they had to rotate the augmented objects in the desired direction. Participants also provided useful comments. During the training period, they complained about discomfort in their arms caused by holding the smartphone for a long period of time. This discomfort should also be considered when developing mobile AR applications if they are to be truly user-friendly.
Cross-Reference
▶ Interactive Virtual Reality Navigation Using Cave Automatic Virtual Environment Technology
▶ Virtual Reality and User Interface
References and Further Reading
Au, O.K., Tai, C.L., Fu, H.: Multitouch gestures for constrained transformation of 3D objects. Comput. Graph. Forum 31(2), 651–660 (2012)
Cohé, A., Decle, F., Hachet, M.: tBox: A 3D transformation widget designed for touch-screens. In: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, pp. 3005–3008 (2011)
Hancock, M., ten Cate, T., Carpendale, S.: Sticky tools: Full 6DOF force-based interaction for multi-touch tables. In: Proceedings of ITS '09, pp. 145–152 (2009)
Henrysson, A., Billinghurst, M.: Using a mobile phone for
6 DOF mesh editing. In: Proceedings of CHINZ 2007,
pp. 9–16 (2007)
Henrysson, A., Billinghurst, M., Ollila, M.: Virtual object
manipulation using a mobile phone. In: Proceedings of
the 2005 International Conference on Augmented
Tele-Existence (ICAT’05), pp. 164–171 (2005)
Katzakis, N., Hori, M., Kiyokawa, K., Takemura, H.:
Smartphone game controller. In: Proceedings of 75th
HIS SigVR Workshop, pp. 55–60 (2011)
Martinet, A., Casiez, G., Grisoni, L.: Integrality and separability of multi-touch interaction techniques in 3D manipulation tasks. IEEE Trans. Vis. Comput. Graph. 18(3), 369–380 (2012)
Reisman, J., Davidson, P.L., Han, J.Y.: A screen-space
formulation for 2D and 3D direct manipulation. In:
Proceedings of UIST’09, pp. 69–78 (2009)
Schmidt, R., Singh, K., Balakrishnan, R.: Sketching and composing widgets for 3D manipulation. Comput. Graph. Forum 27(2), 301–310 (2008)