Robotics and Autonomous Systems 47 (2004) 153–161

Leveraging on a virtual environment for robot programming by demonstration

J. Aleotti, S. Caselli, M. Reggiani∗
Dipartimento di Ingegneria dell'Informazione, University of Parma, Parco Area delle Scienze 181A, 43100 Parma, Italy

Available online 14 May 2004

Abstract

The Programming by Demonstration paradigm promises to reduce the complexity of robot programming. Its aim is to let robot systems learn new behaviors from a human operator demonstration. In this paper, we argue that while providing demonstrations in the real environment enables teaching of general tasks, for tasks whose essential features are known a priori demonstrating in a virtual environment may improve efficiency and reduce trainer's fatigue. We next describe a prototype system supporting Programming by Demonstration in a virtual environment and we report results obtained exploiting simple virtual tactile fixtures in pick-and-place tasks.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Robot programming; Programming by demonstration; Virtual fixtures; Virtual environments

1. Introduction

Programming by Demonstration (PbD) aims at solving the persistent problem of programming robot applications [1–5]. Robot programming is known to be a complex endeavor even for robotics experts. Simplifying robot programming has become of prominent importance in the current context of service robotics, where end users with little or no specific expertise might be required to program robot tasks.

The PbD tenet is to make robots acquire their behaviors by providing to the system a demonstration of how to solve a certain task, along with some initial knowledge. A PbD interface then automatically interprets what is to be done from the observed task, thus eliminating the need for alternative, explicit programming techniques. Providing a demonstration of a task

∗ Corresponding author. E-mail address: [email protected] (M. Reggiani).

to be reproduced by others is an effective means of communication and knowledge transfer between people. However, while PbD for computer programming has achieved some success [6], teaching tasks involving motion of physical systems, possibly in dynamic environments, directly addresses the well-known difficulties of embodied and situated systems [7]. Hence, further research is required to fulfill the goals of PbD in robotics.

The most straightforward way to put the PbD concept into practice is by letting the user demonstrate the task in the real world, while taxing the system with the requirement to understand and replicate it. Recent examples of PbD systems involving demonstration in the real world are [5,8,9]. This is also the most general approach to programming by demonstration, but the complexity of the underlying recognition and interpretation techniques strongly constrains its applicability. To circumvent this problem, a PbD system might restrict the objects

0921-8890/$ – see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.robot.2004.03.009


and actions involved in the task to be demonstrated to a predefined set, or set up a highly engineered demonstration environment. However, if the objects and actions occurring in the task are constrained in number and type, the same a priori knowledge can be transferred into a virtual environment.

Based on this observation, we have begun an investigation into a virtual environment for PbD, focusing on the approach to PbD based on task-level program acquisition [1,2,5,10]. The few previous works evaluating PbD in virtual environments seem to have exploited its potential only in a limited way [11–13]. Performing the demonstration in a virtual environment provides functional advantages which can decrease the time and fatigue required for demonstration and improve overall safety by preventing execution of incorrectly learned tasks:

• tracking user’s actions within a simulated environ-ment is simpler than in a real environment and thereis no need for object recognition, since the state ofthe manipulated objects is known in advance;

• human hand and grasped object positions do not have to be estimated using error-prone sensors like cameras;

• multiple virtual cameras and viewpoint control are available to the user during the demonstration;

• the virtual environment can be augmented with operator aids such as graphical or other synthetic fixtures [14], and force feedback;

• a virtual environment enables task simulation prior to execution for task validation.

In service robotics applications the simulation feature is even more important, since it has been pointed out that, in many cases, it would be almost impossible to ask the user for multiple demonstrations [8].

Of course, these functional advantages should be weighed against the main drawback of a virtual environment, namely its need for advance explicit encoding of a priori knowledge about the task, which restricts the applicability of the approach. An additional disadvantage of PbD in a virtual environment is the limited feedback provided to the user, which may prevent demonstration of complex sensorimotor patterns.

The remainder of this paper presents a prototype PbD system that we have set up for simple pick-and-place tasks, along with an experimental

investigation on the exploitation of virtual fixtures to simplify task demonstration. The general architecture of the PbD system, exploiting a dataglove and a tracker for manipulation data acquisition and segmentation, and vision for object recognition and localization, is indebted to the work of Dillmann and coworkers [4,8,10] and Ikeuchi and coworkers [3,5]. However, these researchers target more general and complex tasks with demonstration in real environments, whereas our specific research focuses on task presentation in a virtual environment and explores the role of tactile feedback and synthetic fixtures during demonstration.

2. System overview

The PbD system described hereafter handles basic manipulation operations in a 3D 'block world'. As mentioned, the system targets task-level program acquisition. We assume that trajectories will eventually be computed by path planning based on the actual location of objects and status of the working environment.

In the proposed robot teaching method, an operator, wearing a dataglove with a 3D tracker, demonstrates the tasks in a virtual environment. The virtual scene simulates the actual workspace and displays the relevant assembly components. The system recognizes, from the user's hand movements, a sequence of high-level actions and translates them into a sequence of commands for a robot manipulator. The recognized task is then performed in a simulated environment for validation. Finally, if the operator agrees with the simulation, the task is executed in the real environment referring to actual object locations in the workspace. A library of simple assembly operations has been developed. It allows picking and placing objects on a working plane, stacking objects, and performing peg-in-hole tasks.

The architecture of the PbD system (Fig. 1) follows the canonical structure of the 'teaching by showing' method, which consists of three major phases. The first phase is task presentation, where the user wearing the dataglove executes the intended task in a virtual environment. In the second phase the system analyzes the task and extracts a sequence of high-level operations, taken from a set of rules defined in advance.


Fig. 1. PbD system architecture.

In the final stage the synthesized task is mapped into basic operations and executed, first in a 3D simulated environment and then by the robotic platform.

Fig. 1 shows the main components of the PbD testbed. The actual robot controlled by the PbD application is a six d.o.f. Puma560 manipulator. A vision system (currently operating in 2D) is exploited to recognize the objects in the real workspace and detect their initial configurations.

2.1. Demonstration interface

The demonstration interface includes an 18-sensor CyberTouch (a virtual reality glove integrating tactile feedback devices, from Immersion Corporation, Inc.) and a six d.o.f. Polhemus tracker. The human operator uses the glove as an input device. For demonstration purposes, operator's gestures are directly mapped to an anthropomorphic 3D model of the hand in the simulated workspace.

In the developed demonstration setup, the virtual environment is built upon the Virtual Hand Toolkit (VHT) provided by Immersion Corporation. To deal with geometrical information in a formal way, VHT

uses a scene graph data structure (Haptic Scene Graph – HSG) containing high-level descriptions of environment geometries. VRML models can be easily imported in VHT through a parser included in the library. To grant dynamic interaction between the virtual hand and the objects in the scene, VHT allows objects to be grasped. A collision detection algorithm (V-Clip) generates collision information between the hand and the objects, including the surface normals at the collision points. A grasp state is estimated based on contact normals and related information. If a valid grasp state exists, the object is attached to the hand in the virtual space. When the grasp condition is no longer satisfied, the object is released.
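
As a rough illustration of this grasp logic, a minimal sketch follows; all types, thresholds, and function names are assumptions made for the sketch, not the actual VHT API. A grasp is treated as valid when at least two contact normals roughly oppose each other, and the object node is then re-parented to the hand node in the scene graph.

    #include <vector>

    // Minimal stand-ins for the scene graph and collision data (hypothetical types).
    struct Vec3 { double x, y, z; };
    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Contact { Vec3 point; Vec3 normal; };       // contact point and surface normal from collision detection
    struct SceneNode { SceneNode* parent = nullptr; };
    static void attach(SceneNode& child, SceneNode& newParent) { child.parent = &newParent; }

    // A grasp is assumed valid when two contacts press the object along
    // roughly opposing (unit) normals, e.g. the thumb against another finger.
    static bool isValidGrasp(const std::vector<Contact>& contacts) {
        for (size_t i = 0; i < contacts.size(); ++i)
            for (size_t j = i + 1; j < contacts.size(); ++j)
                if (dot(contacts[i].normal, contacts[j].normal) < -0.5) return true;
        return false;
    }

    // Attach the object node to the hand while the grasp holds; re-parent it
    // to the world node as soon as the grasp condition is no longer satisfied.
    static void updateGrasp(SceneNode& world, SceneNode& hand, SceneNode& object,
                            const std::vector<Contact>& contacts) {
        if (isValidGrasp(contacts))        attach(object, hand);
        else if (object.parent == &hand)   attach(object, world);
    }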

The user interface also provides vibratory feedback using the CyberTouch actuators. Vibrations convey proximity information that helps the operator to grasp the virtual objects and release them at proper locations. In the experiments described in this paper, the vibration amplitude has been empirically set to an intermediate value clearly perceived by the user, without investigating the additional possibility of vibration amplitude modulation. Vibration feedback is activated with a slight delay due to the CyberTouch actuators' dynamics.


This delay is not critical, since vibration is only activated to suggest potential areas for placing or picking up objects.

The current implementation of the virtual environment for assembly tasks in the 'block world' consists of a plane, a set of 3D colored blocks on it, and possibly one or more containers (holes). This scene includes the same objects as the real workspace configuration, although, in general, the actual locations of the blocks will be different.

2.2. Task recognition

The task planner analyzes the demonstration provided by the human operator and segments it into a sequence of high-level primitives that should describe the user actions. To segment the human action into high-level operations, a simple algorithm based on changes in the grasping state is exploited: a new operation is generated whenever a grasped object is released in a valid face-to-face configuration. The effect of the operation is determined by evaluating the achieved object configuration in the workspace.
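
The segmentation rule can be sketched as the following loop over simulation frames; the names are hypothetical, and the check that the release happens in a valid face-to-face configuration is reduced to a placeholder.

    #include <vector>

    enum class OpType { PickAndPlaceOnTable, PickAndPlaceOnObj, PegInHole };
    struct Operation { OpType type; int objectId; };

    class Segmenter {
    public:
        // Called once per frame with the current grasp state. A high-level operation
        // is emitted only on the transition from "grasping" to "released".
        std::vector<Operation> step(bool grasping, int objectId) {
            std::vector<Operation> ops;
            if (wasGrasping_ && !grasping && graspedObject_ >= 0) {
                // Release detected: classify the achieved configuration; if it is not a
                // valid face-to-face configuration, no operation should be emitted.
                ops.push_back({classifyPlacement(graspedObject_), graspedObject_});
            }
            wasGrasping_ = grasping;
            graspedObject_ = grasping ? objectId : -1;
            return ops;
        }
    private:
        OpType classifyPlacement(int /*objectId*/) {
            // Placeholder: the real system evaluates contact relations between the
            // released object and the plane, another block, or a hole.
            return OpType::PickAndPlaceOnTable;
        }
        bool wasGrasping_ = false;
        int graspedObject_ = -1;
    };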

It should be noted that segmenting actions in this way in the virtual environment is much simpler than it would be in the real one, since both object locations and grasp state are known to the system. Furthermore, for pick-and-place tasks this approach seems to scale to reasonable numbers of objects and actions.

Three high-level tasks have been identified to describe assembly operations in the simple pick-and-place domain. The first task picks an object and places it onto a support plane (PickAndPlaceOnTable), the second task stacks an object onto another (PickAndPlaceOnObj), and the third task inserts a small object in the hole of a container lying on the working plane (PegInHole). The high-level tasks have been implemented in C++ as subclasses of a HighLevelTask abstract class (Fig. 2). Information about the recognized high-level task is passed to the constructor when the HighLevelTask class is instantiated.
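
By way of illustration, the high-level side of the hierarchy of Fig. 2 could be declared roughly as follows; the constructor arguments are assumptions, since only the fact that the recognized task information is passed to the constructor is stated above.

    // Illustrative task hierarchy (argument names and members are assumed).
    class HighLevelTask {
    public:
        explicit HighLevelTask(int objectId) : objectId_(objectId) {}
        virtual ~HighLevelTask() = default;
    protected:
        int objectId_;   // object manipulated by the recognized operation
    };

    class PickAndPlaceOnTable : public HighLevelTask {
    public:
        PickAndPlaceOnTable(int objectId, double x, double y)
            : HighLevelTask(objectId), x_(x), y_(y) {}    // target position on the plane
    private:
        double x_, y_;
    };

    class PickAndPlaceOnObj : public HighLevelTask {
    public:
        PickAndPlaceOnObj(int objectId, int supportId)
            : HighLevelTask(objectId), supportId_(supportId) {}   // object to stack onto
    private:
        int supportId_;
    };

    class PegInHole : public HighLevelTask {
    public:
        PegInHole(int pegId, int containerId)
            : HighLevelTask(pegId), containerId_(containerId) {}  // container with the hole
    private:
        int containerId_;
    };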

2.3. Task generation

A set of BasicTask classes has been implemented for the basic movements of the real robot. The available concrete classes (Fig. 2) include basic straight-line movements of the end effector, such as translations in the XY plane, parallel to the workspace table, and translation and rotation along the z-axis. Two classes describe the basic operations of object pick up and release, obtained by simply closing and opening the on-off gripper of the manipulator.

Fig. 2. Task hierarchy.

The high-level tasks identified in the task recognition phase are then decomposed into a sequence of BasicTask objects describing their behavior. In this simple domain, the decomposition is straightforward and the three tasks only differ in the height zf of the release operation. Since the available manipulator has no force sensor, zf is computed in the virtual demonstration environment based on contact relations. For the peg-in-hole task the grasped object must be released after its initial insertion in the hole.

Each concrete class of the task tree provides two methods to perform the operation, one in the simulated environment and one in the real workspace. Once the entire task has been planned, the task performer (Fig. 1) manages execution in both the simulated and real workspaces.
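
A minimal sketch of the decomposition step is given below; for brevity the basic motions are flattened into a plain operation list, whereas in the actual system each BasicTask is a concrete class. All names and the transfer height are assumptions; the point is that the three high-level tasks share the same structure and differ essentially in the release height zf.

    #include <vector>

    struct Pose { double x, y, z; };
    enum class BasicOp { MoveXY, MoveZ, CloseGripper, OpenGripper };
    struct BasicTask { BasicOp op; Pose target; };

    // zf is the only value that differs among the three high-level tasks; for
    // PegInHole the gripper opens only after the peg's initial insertion in the hole.
    std::vector<BasicTask> decomposePickAndPlace(Pose pick, Pose place, double zf) {
        const double safeZ = 0.20;   // assumed transfer height above the table (metres)
        return {
            {BasicOp::MoveXY,       {pick.x,  pick.y,  safeZ}},    // reach above the object
            {BasicOp::MoveZ,        {pick.x,  pick.y,  pick.z}},   // descend to grasp height
            {BasicOp::CloseGripper, {}},                           // on-off gripper: close
            {BasicOp::MoveZ,        {pick.x,  pick.y,  safeZ}},    // lift the object
            {BasicOp::MoveXY,       {place.x, place.y, safeZ}},    // transfer to the release location
            {BasicOp::MoveZ,        {place.x, place.y, zf}},       // descend to the release height zf
            {BasicOp::OpenGripper,  {}},                           // release the object
            {BasicOp::MoveZ,        {place.x, place.y, safeZ}},    // retract
        };
    }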

2.4. Task simulation

After the recognition phase, the system displays to the human operator a graphical simulation of the


generated task. This simulation improves safety, since the user can check the correctness of the interpreted task. If the user is not satisfied after the simulation, the task can be discarded without execution in the real environment.

Simulation is non-interactive and takes place in a virtual environment exploiting the same scene graph used for workspace representation in the demonstration phase. The only difference is that the virtual hand node in the HSG is replaced by a VRML model of the Puma560 manipulator. Movement of the VRML model is obtained by applying an inverse kinematics algorithm for the specific robot and is updated at every frame. In the simulation, picking and releasing operations are achieved by attaching and detaching the HSG nodes representing the objects to the last link of the VRML model of the manipulator.
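
The per-frame update of the simulation can be sketched as follows; the function names are assumptions (the VHT/VRML interfaces are not detailed here), and the inverse kinematics and scene graph calls are reduced to stubs.

    #include <array>

    struct EndEffectorPose { double x, y, z, roll, pitch, yaw; };

    class RobotSimulation {
    public:
        // Executed once per rendered frame: solve the robot-specific inverse
        // kinematics for the commanded pose and apply the joint angles to the
        // VRML model; picked objects follow the last link of the manipulator.
        void renderFrame(const EndEffectorPose& target, bool holdingObject) {
            std::array<double, 6> joints = solvePuma560IK(target);
            setJointAngles(joints);
            if (holdingObject) attachObjectToLastLink();
            else               detachObjectFromLastLink();
        }
    private:
        std::array<double, 6> solvePuma560IK(const EndEffectorPose&) { return {}; }  // stub
        void setJointAngles(const std::array<double, 6>&) {}                         // stub
        void attachObjectToLastLink() {}     // stub: re-parent the object's HSG node to the last link
        void detachObjectFromLastLink() {}   // stub: re-parent the object's HSG node to the world
    };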

2.5. Task execution

Execution in the real workspace exploits a C++ framework [15] based on CORBA. The PbD system builds a client–server CORBA connection using a Fast Ethernet switch. The client side runs on MS Windows 2000, whereas the server controlling the manipulator runs on Solaris 8 and the vision server on Linux. The methods of the concrete classes in the task list invoke blocking remote calls of the servant manipulator object, which transforms them into manipulator commands.

Fig. 3. PbD and execution of a task involving peg-in-hole and stacking operations.

2.6. Sample experiment

Fig. 3 shows the demonstration, simulation and execution steps of a pick-and-place task (only initial and final frames are shown). In this experiment the workspace contains four objects: two colored boxes, a cylinder and a container (a cylinder with a hole). The user demonstration consists of a sequence of three steps. The user first picks up the cylinder and puts it in the container, then puts the yellow box in a different position on the table, and finally grasps the blue box and stacks it on top of the yellow one. While performing the demonstration, the user can dynamically adjust the point of view of the virtual scene. This feature, typical of demonstration in virtual environments, can help in picking partially occluded objects, releasing them on the plane or on other boxes, and inserting them in the container. Movies of this and other PbD experiments are available at the web page: http://rimlab.ce.unipr.it/Projects/PbD/pbd.html.

2.7. Discussion

Since a large amount of information is readily available in the virtual environment (object locations, hand pose) and since we target pick-and-place tasks, in


the proposed PbD system tasks are learned at an abstract level and a single demonstration usually suffices. Hence, the demonstration phase is less demanding than with alternative approaches, even though performing a single demonstration would be simpler for the user in the real environment than in the virtual one.

Task segmentation of pick-and-place tasks in a virtual environment is easy and robust, and the approach seems to scale well to a larger number of subtasks. Of course, the approach would require substantial extensions to be adapted to tasks involving complex sensorimotor patterns, which cannot be decomposed in terms of simple pick-and-place steps.

Task simulation has proven an effective tool to prevent some erroneous executions in the real world. The operator can check whether the learned task is correct and whether it can be executed by the target robot, taking into account also its reachability and kinematic constraints.

Task execution by the real robot requires the availability of a sensory system to locate objects in the workspace and of adequate path planning and robot control capabilities. Once the task has been correctly learned, successful task execution depends on the quality of the robot controller and the accuracy of the vision system. So far, we have not stressed these aspects in our PbD system, although we have successfully executed peg-in-hole tasks with a clearance of 3 mm.

3. Exploiting virtual fixtures

One of the potential advantages of a virtual demonstration environment is the ability to incorporate virtual fixtures in a simpler way, i.e. artificial cues that help the operator in performing the task. Virtual fixtures have been introduced as a general concept in robot teleoperation [14,16]. We argue here that they can play an important role in simplifying task demonstration in PbD as well. The simple PbD system described in the previous section allows some analysis of the impact of synthetic fixturing, as described in the following.

Our PbD system incorporates virtual fixturing in two ways. First, demonstration in the virtual environment is somewhat easier than it would be with an accurate representation of real world constraints, since

we accept some error in the positioning of the grasped object. For example, with the default parameter setting, attempting to deposit an object 1 cm below the plane results in a valid PickAndPlaceOnTable operation. Likewise, in Fig. 3 the clearance of the peg-in-hole task in the virtual environment is about three times the actual clearance in the physical world. Thresholds specifying an acceptability zone are defined for each action, the tradeoff being between the degree of assistance provided to the operator and the ability to discriminate between the contact relations to be established and to achieve the required accuracy in positioning. More cluttered environments would thus require stricter thresholds.

Whenever the object is released within the acceptability zone, its location is corrected and re-aligned in the virtual environment. As described, this feature is appropriate only for simple domains like the block world above, yet the underlying concept extends to more general applications. That is, if sufficient a priori knowledge is available, a virtual environment can guide the user performing the demonstration toward semantically significant actions, rather than simply record user actions. Clearly, if the application demands accurate, free positioning, a different task hierarchy must be defined, along with proper acceptability thresholds.
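
This first fixture can be sketched as a simple acceptance-and-snap rule; the threshold names and the tolerance semantics below are assumptions used only for illustration.

    #include <cmath>

    struct Pose { double x, y, z; };

    struct ReleaseFixture {
        Pose target;          // ideal placement (table surface, top of a block, hole axis)
        double xyTolerance;   // lateral extent of the acceptability zone
        double zTolerance;    // vertical tolerance (e.g. a release slightly below the plane is still accepted)

        bool accepts(const Pose& released) const {
            double dx = released.x - target.x, dy = released.y - target.y;
            return std::hypot(dx, dy) <= xyTolerance
                && std::fabs(released.z - target.z) <= zTolerance;
        }

        // Snap the released object onto the semantically intended configuration.
        Pose snap(const Pose&) const { return target; }
    };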

A second type of virtual fixture is implemented in the PbD system by exploiting the vibrotactile feedback available in the CyberTouch glove. The underlying idea is that exploiting multimodality reduces the perceptual overload of the operator's visual channel [17]. In the current implementation, vibration is activated whenever the object lies within the acceptability zone previously discussed for a release operation. The operator can take advantage of this explicit information by immediately releasing the object, or decline it by moving the object to another location (a sketch of this activation logic is given after the list below). Possible variations in this scheme include:

• activating vibrotactile feedback for a short amount of time, so that the user can decide whether to take advantage of the predefined object alignment (by immediately releasing the object), or override it in favor of fine manual positioning (by holding the object until the end of the vibration);

• providing vibrotactile feedback (which in principle could also be modulated in amplitude) in a wider


volume than the acceptability zone, so as to provide a hint guiding user motion.
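
The basic activation logic can be sketched as follows; the actuator call and the amplitude value are assumptions, since the actual CyberTouch interface is not shown here.

    #include <cmath>
    #include <utility>
    #include <vector>

    struct Pose { double x, y, z; };

    struct ReleaseFixture {
        Pose target; double xyTol, zTol;
        bool accepts(const Pose& p) const {
            return std::hypot(p.x - target.x, p.y - target.y) <= xyTol
                && std::fabs(p.z - target.z) <= zTol;
        }
    };

    class TactileFixture {
    public:
        explicit TactileFixture(std::vector<ReleaseFixture> fixtures)
            : fixtures_(std::move(fixtures)) {}

        // Called once per frame with the pose of the grasped object (if any):
        // vibration stays on while the object lies inside some acceptability zone.
        void update(bool grasping, const Pose& objectPose) {
            bool inZone = false;
            if (grasping)
                for (const ReleaseFixture& f : fixtures_)
                    inZone = inZone || f.accepts(objectPose);
            setVibration(inZone ? 0.5 : 0.0);   // assumed intermediate, clearly perceivable amplitude
        }
    private:
        void setVibration(double /*amplitude*/) { /* drive the CyberTouch actuators here */ }
        std::vector<ReleaseFixture> fixtures_;
    };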

3.1. Evaluation

We have asked five subjects, two females and three males, to demonstrate three elementary tasks and one composite task in the virtual environment. Prior to the actual data collection experiment, each subject was asked to play for 5 min with the virtual environment, picking and releasing objects. The elementary tasks to be demonstrated were the displacement of a cubic object on the table, the stacking of a cubic object on top of another one, and the insertion of a cylindrical peg into a cylindrical hole. Each task also included approach motion, object grasping, and object transportation phases. The composite task was a routine comprising the three elementary tasks in sequence, although with a different object arrangement. For each task, time to completion (the time required to perform the demonstration) was measured and the average value and standard deviation computed. Finally, each subject performed the experiment five times using only the graphical output of the environment, and then five times with the virtual tactile fixture turned on. Task completion time was measured by an external supervisor and triggered when the system reported a successful recognition of the last required HighLevelTask.

Fig. 4. Average and standard deviation of task completion times for five subjects performing tasks with and without vibration assistance: object displacement task (top left); object stacking task (top right); peg-in-hole task (bottom left); composite task (bottom right). Vertical axis: time in seconds; horizontal axis: subject index.

Fig. 4 shows the average and standard deviation of task completion times without and with the virtual tactile fixture in the four experiments. Task completion times in both modalities are clearly influenced by the different difficulty of the various tasks. According to Fitts' law [18], a Difficulty Index can be defined for each task, and its correlation with average task completion time established. In general, the additional tactile fixture helps in decreasing average demonstration times, even though for each elementary task one subject performed slightly worse with the tactile fixture on. For the composite task, the tactile fixture improved execution performance for all subjects. It should be mentioned that in the current implementation the demonstration environment is somewhat slower when tactile feedback is active. A higher latency is perceived by the user, who therefore might tend to perform the demonstration more cautiously. The resulting delay might play a role in the outlier data. Moreover, due to differences in object arrangement and initial operator pose, completion times for the composite task cannot be compared with completion times of the elementary tasks.

Fig. 5. Assessing the improvement in task completion time using vibration for the four tasks: ratio between average completion times with and without vibration (vertical axis). Each dot represents a subject, whereas the dashed line connects to the average improvement across all subjects.
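
For reference, the Difficulty Index mentioned above follows Fitts' original formulation, where A is the movement amplitude, W is the target width, and the movement time MT is predicted linearly from the index through empirically fitted constants a and b:

    ID = \log_2\!\left(\frac{2A}{W}\right), \qquad MT = a + b \cdot ID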

Fig. 5 compares results across the four tasks by scaling each subject's performance by the value obtained without the tactile fixture. Dashed lines refer to the average task completion time across all subjects. Qualitatively, virtual tactile fixturing appears to play a more important role for more complex tasks. For the composite task the average degree of improvement is smaller, although all subjects improve their completion times when tactile virtual fixturing is active. This

is due to the fact that the task includes multiple transfer phases where virtual tactile fixturing plays no role.

4. Conclusion

We have described an ongoing investigation into exploiting a virtual environment to assist the user in PbD of robot tasks. We have developed a prototype PbD system that uses a dataglove and a virtual reality teaching interface to program pick-and-place tasks in a block world. The PbD system has proven quite effective in simplifying task demonstration, even though its applicability to more complex tasks remains to be assessed.

In this context, we are investigating the potential of virtual fixtures, both visual and tactile, and their effect on task recognition performance. Preliminary results on five subjects seem to confirm the advantage provided by virtual fixtures. The ability to easily integrate such virtual fixtures is one of the major motivations behind the adoption of a virtual demonstration environment.

Acknowledgements

This research is partially supported by MIUR (Italian Ministry of Education, University and Research) under project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components).

References

[1] K. Ikeuchi, T. Suehiro, Towards an assembly plan from observation, part I: task recognition with polyhedral objects, IEEE Transactions on Robotics and Automation 10 (3) (1994) 368–385.

[2] Y. Kuniyoshi, M. Inaba, H. Inoue, Learning by watching: extracting reusable task knowledge from visual observation of human performance, IEEE Transactions on Robotics and Automation 10 (6) (1994) 799–822.

[3] S. Kang, K. Ikeuchi, Toward automatic robot instruction from perception—temporal segmentation of tasks from human hand motion, IEEE Transactions on Robotics and Automation 11 (5) (1995) 670–681.

[4] H. Friedrich, S. Münch, R. Dillmann, S. Bocionek, M. Sassin, Robot programming by demonstration: supporting the induction by human interaction, Machine Learning 23 (2–3) (1996) 163–189.


[5] K. Ogawara, J. Takamatsu, H. Kimura, K. Ikeuchi, Extraction of essential interactions through multiple observations of human demonstrations, IEEE Transactions on Industrial Electronics 50 (4) (2003) 667–675.

[6] A. Cypher (Ed.), Watch What I Do: Programming by Demonstration, The MIT Press, 1993.

[7] R. Brooks, Intelligence without reason, in: Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, Australia, 1991, pp. 569–595.

[8] M. Ehrenmann, R. Zöllner, O. Rogalla, R. Dillmann, Programming service tasks in household environments by human demonstration, in: Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication, Berlin, Germany, 2002, pp. 460–467.

[9] P. McGuire, J. Fritsch, J. Steil, F. Röthling, G.A. Fink, S. Wachsmuth, G. Sagerer, H. Ritter, Multimodal human–machine communication for instructing robot grasping tasks, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 2002, pp. 1082–1089.

[10] R. Dillmann, O. Rogalla, M. Ehrenmann, R. Zöllner, M. Bordegoni, Learning robot behaviour and skills based on human demonstration and advice: the machine learning paradigm, in: Proceedings of the Ninth International Symposium of Robotics Research, Snowbird, UT, USA, 1999, pp. 229–238.

[11] T. Takahashi, T. Sakai, Teaching robot's movement in virtual reality, in: Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems, Osaka, Japan, 1991, pp. 1538–1587.

[12] H. Ogata, T. Takahashi, Robotic assembly operation teaching in a virtual environment, IEEE Transactions on Robotics and Automation 10 (3) (1994) 391–399.

[13] J.E. Lloyd, J.S. Beis, D.K. Pai, D.G. Lowe, Programming contact tasks using a reality-based virtual environment integrated with vision, IEEE Transactions on Robotics and Automation 15 (3) (1999) 423–434.

[14] C. Sayers, R. Paul, An operator interface for teleprogramming employing synthetic fixtures, Presence 3 (4) (1994) 309–320.

[15] M. Amoretti, S. Bottazzi, M. Reggiani, S. Caselli, Designing telerobotic systems as distributed CORBA-based applications, in: Proceedings of the International Symposium on Distributed Objects and Applications, Catania, Italy, 2003, pp. 1063–1080.

[16] L. Rosenberg, Virtual fixtures: perceptual tools for telerobotic manipulation, in: Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 1993, pp. 76–82.

[17] J. Aleotti, S. Caselli, M. Reggiani, Multimodal user interface for remote object exploration with sparse sensory data,

in: Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication, Berlin, Germany, 2002, pp. 41–46.

[18] P. Fitts, The information capacity of the human motor system in controlling the amplitude of movement, Journal of Experimental Psychology 47 (6) (1954) 381–391.

Jacopo Aleotti received the M.S. degree in electronics engineering from the University of Parma, Italy, in 2002. He is currently working toward a Ph.D. degree in information technology at the same University. In 2003 he was Marie Curie fellow at the Learning System Laboratory, Örebro University, Sweden. His current research interests are robot programming by demonstration, virtual reality

technologies, human–robot interfaces and telemanipulation.

Stefano Caselli received a Laurea degree in electronic engineering in 1982 and the Ph.D. degree in computer and electronic engineering in 1987, both from the University of Bologna, Italy. In 1989–1990 he was a visiting scholar at the University of Florida. From 1990 to 1999 he held research fellow and associate professor positions at the University of Parma, Italy. He is now professor of

computer engineering at the University of Parma, where he is also director of the Laboratory of Robotics and Intelligent Machines (RIMLab). His current research interests include development of autonomous and remotely operated robot systems, service robotics, and real-time systems.

Monica Reggiani graduated from the University of Parma in 1997 and obtained the Ph.D. in computer engineering in 2001 from the same University. In 1999–2000 she was a visiting scholar at the Los Alamos National Laboratory working on parallel and distributed computing. Currently, she is a research associate at the Dipartimento di Ingegneria dell'Informazione, University of Parma.

Her areas of interest include motion planning, programming by demonstration, autonomous robots, and grid computing.