

The 4th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2007)
Mobile Haptic Interface for Large Immersive Virtual Environments: PoMHI v0.5
1 Chaehyun Lee, 2 Min Sik Hong, 1 In Lee, 2 Oh Kyu Choi, 2 Kyung-Lyong Han, 2 Yoo Yeon Kim, 1 Seungmoon Choi, and 2 Jin S. Lee
1: Haptics and Virtual Reality Laboratory, POSTECH, Republic of Korea
2: Robotics and Automation Laboratory, POSTECH, Republic of Korea
{xlos, mshong, inism, hyh1004, sidabari, yooyeoni, choism, jsoo}@postech.ac.kr
Abstract - We present initial results of research toward a novel Mobile Haptic Interface (MHI) that provides an unlimited haptic workspace in large immersive virtual environments. When a user explores a large virtual environment, the MHI senses the configuration of the user, moves itself to an appropriate configuration, and provides force feedback for haptic display, thereby enabling a virtually limitless workspace. Our MHI (PoMHI v0.5) features omni-directional mobility, a collision-free motion planning algorithm, and force feedback for general environment models. We also provide experimental results that demonstrate the fidelity of our mobile haptic interface.

Keywords - mobile haptic interface, mobile robot, omni-directional wheel, PID control with fuzzy logic control, haptic rendering
1. Introduction
A force-feedback haptic interface has an inherent limit on its workspace and cannot render virtual objects larger than that workspace. This is tolerable in desktop applications, where the size of a virtual environment is comparable to the workspace of a desktop haptic interface, but it can be a severe limitation in large virtual environments such as the CAVE™. The traditional solution to this problem has been to use a large haptic interface of a manipulator type [1] or of a string type (SPIDAR) [2][3]. However, their workspaces are still limited and cannot easily be scaled to virtual environments of different sizes.
A promising alternative is a Mobile Haptic Interface (MHI), a force-feedback haptic interface mounted on a mobile base. The mobile base can move the haptic interface to an adequate place for rendering large virtual objects and thus provide an unlimited workspace. Compared to other large haptic interfaces, the MHI is easier to install and remove owing to its mobility, considerably smaller, and safer. Furthermore, the MHI can provide high-fidelity force feedback since it employs a desktop haptic interface with greater precision.
The concept of the MHI was first proposed by Nitzsche et al. [4][5]. Although their MHI, Walkii, had omni-directional mobility, it used a simple motion planning algorithm and could render only primitive objects. Barbagli et al. presented two new MHIs in 2004 [6]. Both used a commercial robot as the mobile base and adopted a more complex motion planning algorithm than Walkii's. More recently, an MHI with two desktop haptic interfaces was proposed for two-point manipulation [7].
In this paper, we present an initial version of the POSTECH Mobile Haptic Interface (PoMHI v0.5), designed especially for use with large visual environments such as the CAVE™. The PoMHI can move in any direction using three omni-directional wheels while avoiding collisions with the user, load general virtual environment models, and provide acceptable force feedback. The mobile base with the three omni-directional wheels is controlled via PID control augmented with fuzzy logic control. A motion planning algorithm guiding the movement of the mobile base was also developed. We also confirmed through experiments that the dynamics of the mobile base introduces little interference into the forces delivered to the user.
The remainder of this paper is organized as follows. Section 2 briefly reviews the overall architecture of the PoMHI. Section 3 describes the hardware components and the control methods used in the system. In Section 4, we describe the software structure and the motion planning strategy; the effect of the mobile base dynamics on the final rendering force is also examined. Applications of the PoMHI are discussed in Section 5. Finally, we conclude in Section 6.
2. System Architecture
The overall system architecture is shown in Figure 1. The IS-900 Tracking System (a processor and two trackers; InterSense Inc., USA) is responsible for tracking the current configurations (position and orientation) of both the PoMHI and the user. The tracked information is sent to the
Fig. 1. Architecture of the PoMHI.
tracker server, and the server forwards it to the laptop inside the PoMHI via UDP. Based on the user's configuration, the laptop determines an appropriate configuration of the mobile base and controls the base to move it there. The graphic and haptic rendering modules also run on the laptop. The user wears a head-mounted display (HMD) and sees stereoscopic scenes of a virtual environment. The relevant communication and rendering rates are specified in the figure.
3. MHI Hardware
3.1 MHI Hardware Design

A. Overall Structure of the Hardware
The hardware structure of the PoMHI is shown in Figure 2. It consists of four main parts: omni-directional wheels with geared DC motors, a DSP control board with power amplifiers, a laptop, and a desktop 3-DoF haptic interface (PHANToM Premium 1.5A; SensAble Inc., USA). The mobile base is responsible for moving the whole PoMHI to face the user, and the PHANToM is responsible for providing appropriate force feedback to the user.

B. Omni-directional Structure
Figure 3 shows the mobile-base actuation design using three omni-directional wheels. Our design follows the Y-shaped structure for holonomic motion of the mobile base, which is better suited than a bidirectional mobile robot for following possibly abrupt motions of a user. Each wheel is connected to a motor by a timing belt and a pulley.
The kinematics of the mobile base is derived from the relations among the parameters represented in Figure 4. Here, the origin of the local coordinate frame is set to the center of the base, and the parameters are defined as:

v_i: linear velocity of each wheel (i = 1, 2, 3),
v = [ẋ, ẏ]ᵀ: linear velocity of the mobile base,
φ̇: angular velocity of the mobile base,
L: distance between a wheel and the origin.

Then, with θ_i denoting the orientation angle of wheel i shown in Figure 4, the mobile base kinematics can be derived as follows [8][9]:

[v₁]   [−sin θ₁  cos θ₁  L] [ẋ]
[v₂] = [−sin θ₂  cos θ₂  L] [ẏ] ,  (1)
[v₃]   [−sin θ₃  cos θ₃  L] [φ̇]

where Ṡ = [ẋ, ẏ, φ̇]ᵀ = [vᵀ, φ̇]ᵀ.
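As a concrete check of Eq. (1), the mapping from base velocity to wheel speeds can be evaluated numerically. The wheel orientation angles and L below are illustrative assumptions, not the actual PoMHI geometry:

```python
import math

# Assumed Y-shaped layout: wheel drive directions spaced 120 degrees apart.
WHEEL_ANGLES = (math.pi / 2, 7 * math.pi / 6, 11 * math.pi / 6)
L = 0.3  # assumed wheel-to-center distance [m]

def wheel_velocities(x_dot, y_dot, phi_dot):
    """Eq. (1): map the base velocity (x_dot, y_dot, phi_dot) to wheel speeds v_i."""
    return [-x_dot * math.sin(t) + y_dot * math.cos(t) + L * phi_dot
            for t in WHEEL_ANGLES]
```

For a pure rotation (ẋ = ẏ = 0) all three wheels spin at Lφ̇, as expected from the symmetric layout.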
3.2 MHI Control
A. Mobile Base Control Overview
The motion of the mobile base is governed by PID control of the desired linear and angular velocities of the base. As shown in Figure 3, each omni-directional wheel has six free-rolling sub-wheels that may cause slip. Moreover, we observed that the mobile base exhibits inherent nonlinearity induced by the wheel and belt-pulley structure. We therefore also use a fuzzy logic control (FLC) and other supplementary control techniques to overcome the problem. The overall control equation for motor i is:
U_i(k+1) = U_i(k) + ΔU_i(k) + Γ_i(k),  (2)

where U_i(k) is the command sent to the motor amplifier, ΔU_i(k) is the velocity-control update, and Γ_i(k) is the output of the supplementary control. The sampling rate of the control loop is set to 50 Hz.

B. PID-based Velocity Control
Once a desired trajectory of the mobile base is determined by the motion planning algorithm (explained later), the desired velocity of each wheel is calculated from the kinematics in Eq. (1). Using these
Fig. 2. Hardware structure of the PoMHI.
Fig. 3. An omni-directional wheel (left) and the placement of three wheels in the mobile base (right).
Fig. 4. Omni-directional structure on local coordinates.
velocities, each motor is controlled by the PID control method. The velocity of each wheel is estimated using the exponential filter:

v_{i,filtered}(k) = (1 − α) v_{i,filtered}(k−1) + α v_i(k),  (3)

where 0 ≤ α ≤ 1 is the parameter of the filter.
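The filter of Eq. (3) is straightforward to implement; this minimal sketch (class name and initial value are our own) smooths a raw wheel-velocity measurement:

```python
class ExpFilter:
    """Exponential filter of Eq. (3): v_f(k) = (1 - alpha) * v_f(k-1) + alpha * v(k)."""
    def __init__(self, alpha, v0=0.0):
        assert 0.0 <= alpha <= 1.0
        self.alpha = alpha
        self.v = v0  # v_{i,filtered}(k-1)

    def update(self, v_raw):
        self.v = (1.0 - self.alpha) * self.v + self.alpha * v_raw
        return self.v
```

A smaller α smooths more strongly but adds lag, which matters at the 50 Hz control rate.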
To obtain the desired motor velocity, we use the PID control equation:

ΔU_i(k) = K_p e_i(k) + K_i Σ_{j=1}^{k} e_i(j) + K_d Δe_i(k),  (4)

where e_i(k) = v_{i,desired}(k) − v_{i,filtered}(k) is the velocity error of the motor, Δe_i(k) = e_i(k) − e_i(k−1), and K_p, K_i, and K_d are the PID gains.

Using only the PID control did not yield sufficient performance in our system. This is illustrated in Figure 5, where the red lines represent references and the blue lines measured values. The left graph shows the position of the PoMHI, and the right three graphs show the velocities of the wheels. The system nonlinearity is visible in the black circles, where the measured velocity decreases despite an increase of the reference velocity. The nonlinearity seems to be caused by increased friction when the contact point between a sub-wheel and the floor changes. Because this nonlinearity is an inherent characteristic of the wheel structure, eliminating it completely may not be possible. We therefore minimize its effect through the additional control methods described in the next subsections.
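Under the stated equations, the per-wheel velocity loop can be sketched as follows; the gains and symbol names are our own placeholders, with Eq. (4) producing the update and Eq. (2) accumulating it into the amplifier command:

```python
class WheelVelocityPID:
    """Per-wheel velocity PID: Eq. (4) gives dU_i(k), Eq. (2) accumulates it."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_sum = 0.0   # running error sum for the K_i term
        self.e_prev = 0.0  # previous error for the K_d term
        self.U = 0.0       # U_i(k), the command sent to the amplifier

    def step(self, v_desired, v_filtered, gamma=0.0):
        e = v_desired - v_filtered            # e_i(k)
        self.e_sum += e
        dU = (self.kp * e + self.ki * self.e_sum
              + self.kd * (e - self.e_prev))  # Eq. (4)
        self.e_prev = e
        self.U += dU + gamma                  # Eq. (2); gamma from supplementary control
        return self.U
```

In use, `step` is called once per 50 Hz control tick with the filtered wheel velocity from Eq. (3).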
C. Fuzzy Logic Control
We use a singleton fuzzifier, a product inference engine, and a center average defuzzifier in the FLC of the mobile base. The inputs are the velocity error e_i(k) and its increment Δe_i(k). We use triangular membership functions (μ_e and μ_Δe) as shown in Figure 6. The membership functions consist of five sections: negative big (NB), negative (N), zero (ZO), positive (P), and positive big (PB). These sections were determined experimentally.
Table 1 shows the rule set of the FLC derived from the input membership functions. Each element of the table, y^(m,n), represents the output of the corresponding rule. Using the rules and input variables, the updating output of the FLC is [10]:

ΔU_i(k) = ( Σ_{m=1}^{5} Σ_{n=1}^{3} y^(m,n) μ_e^m(e_i(k)) μ_Δe^n(Δe_i(k)) ) / ( Σ_{m=1}^{5} Σ_{n=1}^{3} μ_e^m(e_i(k)) μ_Δe^n(Δe_i(k)) ).  (5)
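The defuzzification step above can be sketched as follows; the membership breakpoints and rule outputs y^(m,n) are placeholders, since the paper determined the actual sections experimentally:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Placeholder sections for e_i(k) (NB, N, ZO, P, PB) and its increment (N, ZO, P).
E_SETS  = [(-2.0, -1.5, -1.0), (-1.5, -0.5, 0.0), (-0.5, 0.0, 0.5),
           (0.0, 0.5, 1.5), (1.0, 1.5, 2.0)]
DE_SETS = [(-1.0, -0.5, 0.0), (-0.5, 0.0, 0.5), (0.0, 0.5, 1.0)]
Y = [[-2.0, -1.0, 0.0, 1.0, 2.0],   # rule outputs y^(m,n) for de = N (placeholders)
     [-1.5, -0.5, 0.0, 0.5, 1.5],   # de = ZO
     [-1.0,  0.0, 0.0, 0.0, 1.0]]   # de = P

def flc_output(e, de):
    """Product inference plus center-average defuzzification."""
    num = den = 0.0
    for n, dset in enumerate(DE_SETS):
        for m, eset in enumerate(E_SETS):
            w = tri(e, *eset) * tri(de, *dset)  # firing strength of rule (m, n)
            num += Y[n][m] * w
            den += w
    return num / den if den else 0.0
```

With the singleton fuzzifier, each rule's contribution is just its output value weighted by the product of the two input memberships, which is what the double sum computes.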
D. Supplementary Control

We adopt additional supplementary control when the following condition is satisfied:

U_i(k) > U_i(k−1)  and  v_{i,filtered}(k) < v_{i,filtered}(k−1).  (6)

This condition detects when the contact point between a sub-wheel and the floor changes. While this occurs, excessive errors accumulate in the PID control, which causes an overshoot in position control. An algorithm to minimize this behavior is described below.
If Eq. (6) is satisfied at time index k, the controller stores the running sum of the updating outputs ΔU_i(j) and the current velocity v_e = v_{i,filtered}(k). At each control time l after k, the controller checks whether the condition v_e < v_{i,filtered}(l) is met. If it is, the controller compensates the excessive error by:

U_{i,excess}(l) = Σ_{j=k}^{l} ΔU_i(j),  (7)

Γ_i(l) = −U_{i,excess}(l).  (8)
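A sketch of this detection-and-compensation logic, with Eq. (6) as the trigger and the updates accumulated per Eq. (7) cancelled once the filtered velocity recovers (class and variable names are our own):

```python
class SlipCompensator:
    """Detect the sub-wheel contact transition (Eq. 6), accumulate the command
    updates issued during it (Eq. 7), and cancel them once the filtered
    velocity recovers past the stored value."""
    def __init__(self):
        self.active = False
        self.acc = 0.0  # running sum of dU since detection
        self.v_e = 0.0  # filtered velocity stored at detection time k

    def gamma(self, U, U_prev, dU, v_f, v_f_prev):
        if not self.active and U > U_prev and v_f < v_f_prev:  # Eq. (6)
            self.active, self.acc, self.v_e = True, 0.0, v_f
        if self.active:
            self.acc += dU            # Eq. (7): accumulate excess updates
            if v_f > self.v_e:        # velocity has recovered
                self.active = False
                return -self.acc      # cancel the accumulated excess
        return 0.0
```

The returned value plays the role of Γ_i in the overall command update of Eq. (2).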
Fig. 5. Experimental results using the PID-based velocity control only (ẋ = 0.0 m/s, ẏ = 0.2 m/s, φ̇ = 0.0 rad/s).
Fig. 6. Membership functions of velocity error, μ_e, and its variation, μ_Δe.
Table 1. Rule set of the FLC.

Δe_i(k) \ e_i(k) |   NB    |    N    |   ZO    |    P    |   PB
        N        | y^(1,1) | y^(2,1) | y^(3,1) | y^(4,1) | y^(5,1)
        ZO       | y^(1,2) | y^(2,2) | y^(3,2) | y^(4,2) | y^(5,2)
        P        | y^(1,3) | y^(2,3) | y^(3,3) | y^(4,3) | y^(5,3)
E. Experimental Results
Figure 7 presents the results with the adopted FLC and the supplementary control. In the velocity graph of each wheel, small oscillations due to the change of contact points still appear, but the effect of the inherent nonlinearity has decreased significantly. Moreover, in the position graph, the measured position values almost perfectly follow the reference values (a straight line).
4. MHI Software
The software in the MHI system consists of three parts. The tracker server program, executed on the tracker server, receives the configuration of each IS-900 tracker and sends this information to the MHI via wireless LAN. Using this information, the software running on the laptop in the MHI controls all devices in the MHI (except the motors, which are controlled by the DSP board) and renders visual and haptic information.
The tracker server program and the MHI communicate through wireless LAN. Owing to the update rate of the IS-900 tracker, the network update rate is set to 190 Hz. To maintain this speed, UDP is used instead of TCP. TCP, though more reliable than UDP, is sometimes too slow and even exhibits severe jitter: whenever a packet is lost, TCP retransmits all packets after the missed one, causing a long delay that is unacceptable in our application. UDP is not as complex as TCP and is therefore faster. Although some packets can be lost or reordered in UDP, the effect can be made transparent to the user.
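A minimal sketch of such a UDP exchange; the packet layout and sequence-number filtering below are our own illustration, not the actual tracker-server format:

```python
import socket
import struct

PACKET = struct.Struct("<I6f")  # sequence number + (x, y, z, roll, pitch, yaw)

def send_pose(sock, addr, seq, pose):
    """Fire-and-forget datagram: no handshake, no retransmission."""
    sock.sendto(PACKET.pack(seq, *pose), addr)

def recv_pose(sock, last_seq):
    """Drop stale or reordered packets by sequence number."""
    data, _ = sock.recvfrom(PACKET.size)
    seq, *pose = PACKET.unpack(data)
    return (seq, pose) if seq > last_seq else (last_seq, None)
```

Because every packet carries the full pose, a lost packet is simply superseded by the next one, which is what makes losses transparent to the user.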
The haptic server program is composed of networking, motion planning, and haptic rendering parts. First, the networking part receives the tracker information from the tracker server and transforms it from the IS-900 coordinate system to the world coordinate frame. It then updates the configurations of the user, the mobile base, and the haptic interface point (HIP; a point modeling the haptic tool tip) and sends the new information to the visual server over the network. Note that since the haptic and visual server programs communicate via the network, the two servers can run on separate machines for purposes such as higher performance or improved convenience. Second, the motion planning part calculates the next proper configuration of the mobile base and commands angular and linear velocities to it. Finally, the haptic rendering part detects collisions between the HIP and virtual objects and calculates the haptic feedback force. These parts run at different rates, so the haptic server is designed as a multithreaded program.
The other server program that runs on the laptop is the visual server. It receives the configuration information from the haptic server and renders the visual scenes. In our current system, an HMD is used for visual rendering, so the visual server runs on the laptop together with the haptic server. If another visual system is used (e.g., the CAVE™), the visual server can easily be moved to a machine that controls the visual display and communicate with the haptic server over the network, as is done in [11].
Fig. 7. Experimental results with the modified velocity control (ẋ = 0.0 m/s, ẏ = 0.2 m/s, φ̇ = 0.0 rad/s).
Fig. 8. Software structure of PoMHI.
Fig. 9. Configuration space of the MHI.
Fig. 10. Motion planning algorithm.
4.2 Motion Planning Algorithm
Basically, the mobile base has to place the haptic device in a position where the device can deliver force feedback to the user most effectively while avoiding collisions with the user. Considering these constraints, we set the goal position of the MHI on the line that passes through the user and the user's hand grasping the haptic tool. The distance between the user and the MHI is kept within the typical arm length of an adult. Moreover, the MHI always faces the user so that the haptic interface tool is positioned in front of the user.
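The goal placement rule above can be written down directly; the arm-length value below is an assumed placeholder:

```python
import math

ARM_LENGTH = 0.7  # assumed typical adult arm length [m]

def goal_configuration(user_xy, hand_xy, distance=ARM_LENGTH):
    """Goal pose on the ray from the user through the hand, at arm's length,
    with the base heading turned back toward the user."""
    ux, uy = user_xy
    hx, hy = hand_xy
    d = math.hypot(hx - ux, hy - uy)
    gx = ux + (hx - ux) / d * distance
    gy = uy + (hy - uy) / d * distance
    heading = math.atan2(uy - gy, ux - gx)  # face the user
    return gx, gy, heading
```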
To represent the motion of the MHI, we introduce a 3D configuration space whose x and z axes represent the 2D position of the mobile base and whose y axis represents the orientation of the MHI in the workspace. In this configuration space, a configuration of the MHI is represented as a point, and the user is represented as a cylindrical obstacle (see Figure 9).
To move the MHI to a goal configuration without colliding with the user, we developed the following motion planning strategy consisting of three cases. First, when the MHI is too close to the user, it moves away from the user as quickly as possible to avoid a collision. Second, when the goal position is close to the current position, the MHI moves directly toward the goal. Finally, when the goal position cannot be reached directly (e.g., when the user is on the path), the MHI sets sub-goals between the goal and its current position and moves toward the nearest sub-goal. The implemented algorithm for this strategy is shown in Figure 10.
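The three-case strategy can be sketched as a single decision step; the clearance and distance thresholds are assumptions for illustration:

```python
import math

SAFE_RADIUS = 0.5  # assumed clearance around the user [m]
NEAR_GOAL = 1.0    # assumed "go straight" distance threshold [m]

def segment_hits_user(a, b, user, radius=SAFE_RADIUS):
    """Distance from the user to segment a-b, compared with the clearance."""
    ax, ay = a; bx, by = b; ux, uy = user
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((ux - ax) * dx + (uy - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(ax + t * dx - ux, ay + t * dy - uy) < radius

def plan_step(mhi, goal, user):
    """One decision of the three-case strategy (a sketch)."""
    to_user = math.hypot(user[0] - mhi[0], user[1] - mhi[1])
    if to_user < SAFE_RADIUS:                     # case 1: back away from the user
        ang = math.atan2(mhi[1] - user[1], mhi[0] - user[0])
        return (mhi[0] + math.cos(ang), mhi[1] + math.sin(ang))
    if math.hypot(goal[0] - mhi[0], goal[1] - mhi[1]) < NEAR_GOAL or \
       not segment_hits_user(mhi, goal, user):    # case 2: go straight to the goal
        return goal
    ang = math.atan2(goal[1] - mhi[1], goal[0] - mhi[0]) + math.pi / 2
    return (user[0] + 2 * SAFE_RADIUS * math.cos(ang),  # case 3: detour sub-goal
            user[1] + 2 * SAFE_RADIUS * math.sin(ang))
```

Calling `plan_step` once per control cycle yields either the goal itself or an intermediate waypoint that skirts the cylindrical user obstacle.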
To evaluate the performance of the motion planning algorithm, we simulated the motion of the mobile base and the user and plotted the results in Figures 11 and 12. To simplify the simulation, we assumed the mobile base could always move at its maximum velocity. In each graph, the red point represents the current position of the MHI, and the blue region represents the next positions of the user that the mobile base can catch up with in 1 second. The results indicate that changing the maximum angular velocity has little effect on the MHI, while changing the maximum linear velocity has a significant one.
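The reachability test underlying the simulation reduces to a simple check; v_max below matches the limit reported in Section 5, while the rest is our simplification (angular motion ignored):

```python
import math

def catchable(mhi_xy, target_xy, v_max=0.2, horizon=1.0):
    """True if the base, moving straight at v_max, reaches the target
    within the time horizon."""
    return math.hypot(target_xy[0] - mhi_xy[0],
                      target_xy[1] - mhi_xy[1]) <= v_max * horizon
```

Sweeping `target_xy` over a grid of candidate user positions reproduces the blue regions of Figures 11 and 12 for given velocity limits.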
4.3 Force Rendering Algorithm
To calculate the feedback force for a user given the HIP position and a virtual environment model, we use the virtual proxy algorithm, which is widely used for haptic rendering with static objects. In an MHI, the movement of the mobile base can cause unwanted forces that may be perceived by the user. However, if the mobile base is tightly position-controlled and significantly heavier than the haptic interface on top of it, we can ignore the effect of the mobile base dynamics. To verify this, we conducted an experiment with a force sensor attached between the last link of the PHANToM and the puck held by the user.
The results of the experiment are shown in Figures 13 and 14, where the red lines represent the force commanded to the haptic device and the blue lines the force recorded by the force sensor. According to the results, the differences between the commanded and recorded forces with the mobile base moving are no greater than those with the mobile base stationary. This means that the mobile base dynamics does not significantly affect the feedback force delivered to the user. Therefore, we calculate the torque commands sent to the haptic interface considering only the static torque-force relationship of the desktop haptic interface.
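For a single static surface, the virtual proxy algorithm mentioned above reduces to constraining a proxy to the surface and pulling the HIP toward it with a spring; the plane and stiffness below are illustrative assumptions:

```python
def proxy_force(hip, plane_z=0.0, k=500.0):
    """Virtual-proxy force against a horizontal plane at z = plane_z.
    Outside the surface the proxy coincides with the HIP (zero force);
    inside, the proxy stays on the surface and a spring acts on the HIP."""
    x, y, z = hip
    if z >= plane_z:
        return (0.0, 0.0, 0.0)
    proxy = (x, y, plane_z)  # proxy constrained to the surface
    return (k * (proxy[0] - x), k * (proxy[1] - y), k * (proxy[2] - z))
```

For general triangle meshes the proxy is instead constrained to slide over the nearest surface point, but the force law (a spring between proxy and HIP) is the same.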
Fig. 11. Regions that the mobile base can reach in 1 second with different maximum angular velocities.
Fig. 12. Regions that the mobile base can reach in 1 second with different maximum linear velocities.
Fig. 13. Comparison between commanded and measured forces when the mobile base was stationary.
Fig. 14. Comparison between commanded and measured forces when the mobile base was moving.
4.4 Virtual Environment Model
Not only primitive models such as a plane and a cylinder but also complex triangular mesh models can be loaded and rendered in the PoMHI. The visual server also shows the current position of the HIP, letting the user know where s/he is interacting, and a wall that warns of the physical workspace limit of the room. The model of the mobile base is also loaded and visually rendered via the HMD to help the user avoid colliding with the mobile base, for safety.
5. Applications
The current MHI system can provide the user with visual and haptic feedback about large virtual environments. One example is shown in Figure 15. A complex dolphin model consisting of 4488 faces is loaded, and the user sees and touches the full-sized dolphin model (1258 mm wide, 426 mm high, and 352 mm deep). For the stability of the mobile base, we set the maximum linear and angular velocities to 0.2 m/s and 40 degree/s, respectively. With these maximum velocities, the MHI can follow the user in most cases, unless the user moves abruptly.
6. Conclusions and Future Work
We have developed an initial version of a new mobile haptic interface, named PoMHI, and its applications. The PoMHI can place itself at an appropriate configuration to provide boundaryless haptic feedback while avoiding collisions with the user, and it can handle general virtual environment models. We adopted several motion control methods to control the motion of the mobile base stably and accurately. We also examined the fidelity of force display in the PoMHI by comparing actual force outputs with desired values.
We are currently working on the next version of the PoMHI. This version has four omni-directional wheels with an advanced design and a lift for the desktop haptic interface to extend the workspace in the height direction. We are also upgrading the software with a more sophisticated motion planning algorithm, more precise kinematic calibration, and a force computation algorithm that considers the effect of the mobile base dynamics. Once all of these are completed, we will integrate the PoMHI into the CAVE™, the most immersive large virtual environment platform available at present.
Acknowledgements
This work was supported by research grant No. R01-2006-000-10808-0 from the Korea Science and Engineering Foundation (KOSEF) funded by the Korea government (MOST).
References
[1] F. P. Brooks, M. Ouh-Young, J. J. Batter, and P. J. Kilpatrick, "Project GROPE - Haptic Displays for Scientific Visualization," In Proc. SIGGRAPH 90, pp. 177-185, 1990.
[2] L. Bouguila, M. Ishii, and M. Sato, "Multi-Modal Haptic Device for Large-Scale Virtual Environment," In Proc. 8th ACM International Conference on Multimedia, pp. 277-283, 2000.
[3] N. Hashimoto, S. Jeong, Y. Takeyama, and M. Sato, “Immersive Multi-Projector Display on Hybrid Screens with Human-Scale Haptic and Locomotion Interfaces,” In Proc. International Conference on Cyberworlds, pp. 361-368, 2004.
[4] N. Nitzsche, U. D. Hanebeck, and G. Schmidt, “Mobile haptic interaction with extended real or virtual environments,” In Proc. RO-MAN 2001, pp. 313-318, 2001.
[5] N. Nitzsche, U. D. Hanebeck, and G. Schmidt, "Design Issues of Mobile Haptic Interfaces," Journal of Robotic Systems, Vol. 20, No. 9, pp. 549-556, 2003.
[6] F. Barbagli, A. Formaglio, M. Franzini, A. Giannitrapani, and D. Prattichizzo, "An experimental study of the limitations of mobile haptic interfaces," In Proc. ISER 2004, 2004.
[7] A. Formaglio, M. D. Pascale, and D. Prattichizzo, "A mobile platform for haptic grasping in large environments," Virtual Reality, Vol. 10, No. 1, pp. 11-23, 2006.
[8] Y. P. Leow, K. H. Low, and W. K. Loh, "Kinematic modeling and analysis of mobile robots with omni-directional wheels," In Proc. ICARCV 2002, Vol. 2, pp. 820-825, 2002.
[9] Y. Liu, X. Wu, J. J. Zhu, and J. Lew, "Omni-Directional Mobile Robot Controller Design by Trajectory Linearization," In Proc. American Control Conference, 2003.
[10] L. X. Wang, A Course in Fuzzy Systems and Control, Prentice-Hall, Inc., 1997.
[11] E. Dorjgotov, S. Choi, S. R. Dunlop, and G. R. Bertoline, "Portable Haptic Display for Large Immersive Virtual Environments," In Proc. HAPTICS 2006, pp. 321-327, 2006.
Fig. 15. A user playing with the virtual dolphin using the PoMHI (left) and the visual scenes displayed to the user via the HMD (right). The red sphere represents the HIP.
Main
Information
Plenary Talk I : Symbolic AI + Embodied AI = Dependable Robot Intelligence?
Plenary Talk II : Visual SLAM in large environments
Plenary Talk III : Maximal Information Systems
Plenary Talk IV : Telehaptics - Principle, Design and Implementation
Plenary Talk V : Mobiligence: Emergence of Adaptive Motor Function through Interaction among the Body, Brain and Environment
Plenary Talk VI : Force and Visual Control for Physical Human-Robot Interaction
Oral Session
TS1: Dependable manipulation
TS1-1 Design of a Humanoids Specific Anthropomorphic Robot Hand by Imitating Human Finger’s Motion
TS1-2 Robot Skill Learning Strategy for Contact Task
TS1-3 Design of the Safe and Speedy Robot Arm with Variable Stiffness Joint
TS2: Mobile Robot & Application
TS2-1 A User Study of a Mobile Robot Teleoperation
TS2-2 An approach for Modularization Design of Mobile Robot
TS2-3 Controller design of the propulsion system used in the autonomous vehicle
TS2-4 Inspection of Insulators on High Voltage Power Transmission Line
TS3: Trends in the North American Personal, Service and Mobile Robotics Market
TS3-1 Trends in the North American Personal, Service and Mobile Robotics Market
TS4: Humanoid & Application
TS4-1 Adaptive Communication Selection for Multi-Robot Interaction
TS4-2 Structural Optimization of the Pelvis in a Humanoid Considering Dynamic Characteristics
TS4-3 Grasp Planning for Three-Fingered Robot Hands using Taxonomy-Based Preformed Grasps and Object Primitives
TS4-4 Inertial Sensor Alignment with Human Arm Orientation for Humanoid User Interface
TS5: Ambient Intelligence
TS5-3 Unified S/W Platform for Ubiquitous Robot, AnyRobot Studio
TS5-4 Mobile Haptic Interface for Large Immersive Virtual Environments: PoMHI v0.5
TS5-5 Modeling of Indoor Space and Environment Sensor
TS5-6 Context Aware Service Agents for Ubiquitous RobotsApplications
TS6: Navigation & Localization
TS6-2 A Simultaneous Localization And Mapping Algorithm for Topological Maps with Dynamics
TS6-3 Square Root Iterated Kalman Filter for Bearing-Only SLAM
TS6-4 Artificial Landmark Design and Recognition for Localization
TS6-5 Range Sensor Scan Matching for Localization and Map Building
TS7: Swarm & Robotics
TS7-1 Neighbor-Referenced Formation Control for a Team of Mobile Robots
TS7-2 Kalman Filter based ZMP Estimation Scheme for Balance Control of a Biped Robot
TS7-3 Predicting the Size of Spring Network Swarm in Quadratic Potential Fields
TS8: Autonomous Vehicle
TS8-1 Fault Diagnosis of Aircraft Engine Sensor Based on Sequential Probability Ratio Test and Kalman filter
TS8-2 A Shared Fate Approach to Improve UAV Pilot Performance and Minimize Accidents
TS8-3 Unmanned Aerial Vehicle Rotorcraft: Identifying Landing Zones in the Presence of Obscurants
TS8-4 Backward Parking Control of a Car with Passive Trailers
TS9: Bio-mimetic Mechanism
TS9-1 A Development of Wall-Climbing Robot with Pneumatic System
TS9-2 Determination of Joint Angles for Fitting a Serpentine Robot to a Helical Backbone Curve
TS9-3 An Insect-Like Flapping-wing Device Actuated by a Compressed Unimorph Piezoelectric Composite
TS9-4 An Undulatory Locomotion Controller for Snake-like Robots Based on Adaptive Central Pattern Generators
TS10: Sensor & Actuator
TS10-2 A New Adjustable Actuator for Vehicle Engine
TS10-3 Fusion Method for Multi-sensor Dynamic Data
TS10-4 Personal Information Extraction Using A Microphone Array
TS11: Vision & Application
TS11-1 Recognition of Moving Objects in Mobile Robot with an Omnidirectional Camera
TS11-2 Recognition of Objects with Legs Using Vision-based Candidate Generation and Evaluation
TS11-3 Recognition of Human action in Dynamic image using Entropy measurement based on Frequency of Motion region
TS12: Bio-Signal Processing
TS12-1 Traffic Sign Recognition for Intelligent Vehicle/Driver Assistance System Using Neural Network on OpenCV
TS12-2 Design a Reject Output for Pattern Recognition Neural Network by Using a Dynamic Radial Basis Function Network
TS12-3 Improvement of Real-Time Object Tracking Performance of The ROBOKER Head by RBF Network Compensation
TS12-4 Pattern Recognition of Typical Grasping Operations Using a Wavelet Feature of EEG Signal
TS13: Design and Synthesis of Mechanical System
TS13-1 Visualization of Cornea Dynamics
TS13-2 Scenario Analysis of Sustainability in Global Resource Circulation
TS13-3 Preliminary Study on Flexible Product Family Deployment by Optimal Anticipatory Design of Common Components
TS13-4 Development of Quasi-3D-Rehabilitation-System, “Hybrid PLEMO”
TS14: Industry Session I
TS14-1 Advanced CMOS Image Sensor for Robot Eye
TS14-2 The Development Concepts and Business Model for Intelligent Service Robot
TS14-3 Development of the Humanoid Research Platform, URIA(Ubiquitous Robotics Information Assistant)
TS15: Industry Session II
TS15-2 Improvement of iGS Positioning Sensor for Ubiquitous Robot Companion
TS15-3 EDS-EDRS(Exciting & Dynamic Robot control Simulation software)
TS16: Towards New Trends of Robotics
TS16-1 Collaborative Monitoring Using UFAM and Robot - 2nd Report: Development of Integrated Management System -
TS16-2 Architecture of Navigation System Using CBD Method and UPnP Middleware
TS16-3 Basic Consideration on Robotic Food Handling by Using Burger Model
TS16-4 Robot Town Project: Sensory Data Management and Interaction with Robot of Intelligent Environment for Daily Life
TS16-5 Minimally Invasive Orthopedic Surgery System Based on Biologically Compatible Processes
TS17: Interactive Human-Space Design and Intelligence
TS17-1 Intelligent Ambience-Robot Cooperation - Closing Door with Humanoid Robot -
TS17-2 Multiple Objects Localization in Intelligent Space - Utilizing User Hands Position Information from Position Server -
TS17-3 Information Recommendation Module for Human Robot Communication Support under TV Watching Situation
TS17-4 Evaluation of Mental Stress by Analyzing Accelerated Plethysmogram Applied Welfare Space based on
TS17-5 A Common Platform for Robot Services through Robot Services Initiative (RSi) Activities
TS17-6 Development of the Home Appliance Component using RTC-Lite
Poster Session
PO-1 Development of Intelligent BioRobot Platform for Integrated Clinical Test
PO-2 Development of Flexible Mobile Agents for BioRobot System in the Clinical Laboratory
PO-3 Parameter Identification of Dynamic Systems using Hamiltonian Regressor
PO-4 Conflict Evaluation Method for Building Grid Maps with Sonar Sensors
PO-5 Active Resampling for Rao-Blackwellized Particle Filter
PO-6 Multivariate Fuzzy Decision Tree for Hand Motion Recognition
PO-7 A Map Merging Technique using the Fingerprint Matching Method in Multi-Robot FastSLAM
PO-8 A New Motion Planning Algorithm for a Biped Robot Using Kinematic Redundancy and ZMP Constraint Equation
PO-9 Example-based Spoken Dialog Processing for Guidance Robots
PO-10 Robust Localization for Mobile Robots in Dynamic Environment
PO-11 Real-time Simultaneous Localization and Mapping using Omnidirectional Vision
PO-12 Systematic Error Calibration of Mobile Robot using Home Positioning Function
PO-13 Expression-Invariant Face Recognition Using Tensor Based Image Transformation
PO-14 Development of Omnidirectional Human Detection System for Mobile Robot
PO-15 Realization of a Service Robot HSR-I for Environment Security
PO-16 Laser Scanner-based Navigation for Indoor Service Robot
PO-17 A Digitally Trimmable Continuous-Time Capacitive Readout ASIC for MEMS vibratory gyroscope for Robot Applications
PO-18 State Estimation with Delayed Observations Considering Uncertainty in Time
PO-19 Robust Data Association for the EKF-based SLAM
PO-20 Development of a scheduling algorithm for efficient tests on Bio Robot
PO-21 Navigation Strategy for the Robot in the Elevator Environment
PO-22 A High-Performance, Low-Cost INS for Autonomous Mobile Robots
PO-23 Robust Multidimensional Scaling for Multi-Robot Localization
PO-24 Detection of Moving Objects by Optical Flow Matching in Mobile Robots using an Omnidirectional Camera
PO-25 Probabilistic Grid Matching Method for Grid-Based SLAM Using Sonar Sensors
PO-26 Robust Control of Coordinated Motion for Underwater Vehicle-Manipulator System with Minimizing Restoring Moments
PO-27 Efficient Feature Tracking for Visual SLAM by Pairwise Constraints
PO-28 The Binary Recognition Algorithm Using Point Correlation Template
PO-29 Flexible Docking Mechanism with Error-Compensation Capability for Auto Recharging System
PO-30 Terrain Classification Using Texture Recognition for autonomous robot
PO-31 Efficient Area Coverage Method for a Mobile Robot in Indoor Environments
PO-32 The implementation of URC Robot Management Server
PO-33 Landing Platform for Experiment of Stable Humanoid Walking
PO-34 Redundancy-Based Navigation Algorithm for Swarm Robot Systems
PO-35 Gomy: The Baby Bear-like Robot for Emotional Human-Robot Interaction
PO-36 KOBIE: A Pet-type Robot with Emotional Behaviors
PO-37 Multi-Arm Path Generation Method for Humanoid Robots
PO-38 Position Detection of KHST(Korean High Speed Train) using GPS System
PO-39 Temperature Characteristics of Main Transformer for KHST using Sensors
PO-40 A Study on Propulsion System Characteristics of KHST(Korean High Speed Train)
PO-41 Traction System Characteristics of TTX(Tilting Train eXpress) Using Measurement System
PO-42 The Study of Tilting System Combination Test of EMU Tilting Train
PO-43 Robot Arm Tele-operation using Wearable Electronic Device
PO-44 A Study on Motor Temperature Characteristic of TTX(Tilting Train eXpress)
PO-45 Study on the URC robot simulation that used intelligent robot simulator
PO-46 Contact Stability Analysis and Coordinative Manipulation of a Dextrous Robotic Finger Mechanism
PO-47 Cosegmentation as a Generalized Correspondence Problem - A Motion Model Driven Approach