
A PATH PLANNER BASED ON EMOTIONAL INTELLIGENCE FOR TOPOLOGICAL NAVIGATION

V. Egido2, R. Barber1, M. Malfaz, M.A. Salichs1

1 Carlos III University, Madrid, Spain 2 European University of Madrid, Madrid, Spain

[email protected], [email protected], [email protected], [email protected]

Abstract: In this paper a task planning approach based on previous experiences is described. This planner will be used for task planning in a topological navigation system called EDN. The planner has been implemented as one of the deliberative skills of the mobile robot architecture named AD. AD is a two-level architecture: deliberative and automatic. In a previous step, the environment's information is stored autonomously by the robot as a topological map named Navigation Chart. After that, the task planner is able to use that information to generate different plans. The generated plans are sequences of actions and events. Two criteria have been established for the plan generation: one that minimizes travel time, based on the Dijkstra algorithm, and another that tries to imitate a common human behaviour, minimizing the possible risk by using emotions. The latter uses a modified version of the Dijkstra algorithm developed for this work, called "Guided Dijkstra".

Keywords: Mobile robots, robot skills, planners, robot navigation, emotions.

1. INTRODUCTION

As part of everyday life, people navigate from one place to another using their knowledge about the environment. This is a natural process that people learn as small children (Inhelder, et al., 1967) and develop as they grow up. But how do people find their ways? Most computer models do not simulate the behaviour of human wayfinders. Research in topological navigation in recent years tends to separate two problems that were previously considered as one: path planning and wayfinding. The first groups together the problems related to obstacle avoidance until a defined point is reached. The second is the task sequencing at a higher level, and can lead to a final event that does not necessarily have to be a specific geometric point.

The path planning problem, in a first approximation, can be divided into two tendencies. The first one, considered the classic one, assumes total geometric knowledge of the environment. The second one, based on sensorial information, considers the limitations linked to the real information obtained from the surroundings, such as partial knowledge, sensor failures, obstacles that prevent total vision, unexpected situations, etc. Within this field all the classical avoidance techniques are found, such as those based on potential fields or on occupancy grid maps. With respect to the wayfinding problem, planning movements from an initial place to a final one while trying to imitate human behaviour, researchers in the field of spatial knowledge share an overall layered vision of the problem. Timpf and Volta

(Timpf, et al., 1992) establish three levels: the planning level, the instruction level and the guidance behaviour level. Based on a navigation model that focuses on the mental or cognitive map, these researchers consider that it is through this mental map that the following tasks are accomplished: giving verbal instructions to indicate paths, navigating and describing the environment. Other research lines include the work of Chatila (Chatila, et al., 1992) and Firby (Firby, 1994). Both consider planning at a high, deliberative level. Chatila considers a more extended planning hierarchy, in which the planning concept is divided into path and movement planning, perception planning, navigation planning, manipulation planning, communication planning and task planning. In the present work, two basic elements are used: movement planning over a topological representation named Navigation Chart (which will be described in further sections), and navigation as the process in which those planned movements are carried out by the robot. This last process is in charge of sequencing and supervising other processes existing at a lower level, although the present work is focused on movement planning. The task planner finds the way from one event to another, establishing the sequence of nodes and edges from the initial node to the final one. Two criteria are considered: an objective one, applying the Dijkstra algorithm, and a subjective one, which tries to imitate the common human behaviour of taking longer but more familiar routes. In the latter, modifications are applied to the cost function of the Dijkstra algorithm, yielding the "Guided Dijkstra" algorithm. Unlike other similar research, such as that carried out by Roy (Roy, et al., 1999), the path chosen here will depend on the previous navigation actions and on which actions the robot carried out successfully or unsuccessfully, so paths are not established a priori.

2. EMOTIONS IN ROBOTICS

One of the main objectives in robotics and artificial intelligence research is to imitate the human mind and behaviour. For this purpose, the studies of psychologists on the working of the mind and the factors involved in decision making are used. In fact, it has been shown that two highly cognitive activities depend not only on rules and laws, but also on emotions: decision making and perception (Picard, 1998). Some authors affirm that emotions are generated through cognitive processes; therefore emotions depend on one's interpretation, i.e. the same situation can produce different emotions in each agent, as in a football match (Ortony, 1988). Moreover, emotions can be considered as part

of a provision for ensuring the satisfaction of the system's major goals (Frijda, 1987). Emotions play a very important role in human behaviour, communication and social interaction. Emotions also influence cognitive processes, particularly problem solving and decision making (Damasio, 1994). In recent years, emotion has increasingly been used in interface and robot design, primarily in recognition of the fact that people tend to treat computers as they treat other people.

2.1 Fear. Fear is produced when there is a possibility of something bad happening. To cope with fear, the action that produces the negative effect is considered. We distinguish between actions executed by the robot and exogenous actions carried out by other elements of the environment, such as other robots. In this case, fear affects the topological navigation through the implementation of a daring factor. This factor helps the robot to avoid unsafe paths: depending on its value, the robot will select a well-known path rather than an unknown one.

3. NAVIGATION CHART

The environment representation used in this paper is the Navigation Chart. The Navigation Chart is not formed by a succession of places of the environment but by a succession of elementary skills (Barber, et al., 2001). It is represented by a simple directed parameterised graph G(v,e) formed by nodes v and edges e. Nodes are events regarding sensorial perceptions (being in front of a door) and edges represent sensorimotor skills which lead the robot to the next event (moving towards a door). When an event is detected, a new situation is described in the Navigation Chart: the robot finds itself in the specific situation or place in which that sensorial event is sensed. An example of a Navigation Chart is shown in figure 1.

[Figure 1: a small Navigation Chart linking events such as "A on the left", "Intersection" and "B on the right" through actions such as "Go straight" and "Turn on the right", from a start node A to a finish node B.]

Fig. 1. Example of Navigation Chart.

3.1 Nodes. The Navigation Chart nodes have the structure shown in figure 2:

[Figure 2: an example node with the fields Id: Node 0; Event: left door; Parameters: d, α, ...; Sentinel: (0,1); Weight: [-2...3].]

Fig. 2. Node structure in the Navigation Chart.

Node identification: the number that identifies each node. Several nodes may be assigned the same sensorial event (the event left door, for example, may correspond to different doors), so this identifier must be unique.

Sensorial event: depending on the capacities implemented on the robot, a set of detectable events is obtained. In the present work these events are generated by robot skills. These skills, as will be explained later, are implemented in the automatic level of a two-level control architecture.

Sentinel: a variable taking the values 0 or 1, which indicates whether the node has been previously visited or not.

Weight: a variable that keeps a history of the sensorial events that have already been tested and of those that failed. Its value is of great importance to establish new plans considering past experiences.

Parameters: the node can contain data and additional information, obtained from the data object of the sensorial skill that generates the event.

3.2 Edges. Transitions between nodes are performed by robot actions or robot movements such as turn right, follow a corridor, move towards a door. The Navigation Chart transitions have the structure shown in figure 3:

[Figure 3: an example edge with the fields Skill: Corridor follow; Planning parameters: 40 s, 10 m; Activation parameters: vmax, dmin, ...; Sentinel: (0,1).]

Fig. 3. Edge structure in the Navigation Chart.

Sensorimotor skill: each transition is identified by a name, which indicates the sensorimotor skill that allows the robot to go from one event to another.

Planning parameters: the planner considers them when trying to establish a plan; examples are the estimated distance between points or the time needed to reach the following event.

Activation parameters: additional information may be required to activate the motor actions, and it is stored in these parameters.

Sentinel: this parameter is different from the one explained for the nodes. It indicates whether an error has occurred during the execution of the sensorimotor skill.
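As an illustration, the node and edge structures described above could be represented roughly as follows in C++, the language used for the implementation (see Section 6). The field names and types are assumptions made for this sketch, not the actual data structures of the planner:

```cpp
#include <string>
#include <vector>

// Illustrative sketch of a Navigation Chart node (see Fig. 2).
struct Node {
    int id;                          // unique node identification
    std::string event;               // sensorial event, e.g. "left door"
    std::vector<double> parameters;  // extra data from the sensorial skill (d, alpha, ...)
    bool sentinel;                   // true if the node has been visited successfully
    int weight;                      // success/failure history, kept in [-2, 3]
};

// Illustrative sketch of a Navigation Chart edge (see Fig. 3).
struct Edge {
    int from, to;                    // nodes linked by this transition
    std::string skill;               // sensorimotor skill, e.g. "corridor follow"
    double estimatedTime;            // planning parameter: expected duration (s)
    double estimatedDistance;        // planning parameter: expected length (m)
    std::vector<double> activation;  // activation parameters (vmax, dmin, ...)
    bool errorSentinel;              // true if an error occurred while executing the skill
};

// The Navigation Chart itself is a directed parameterised graph G(v, e).
struct NavigationChart {
    std::vector<Node> nodes;
    std::vector<Edge> edges;
};
```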

4. PLANNING IN THE AD CONTROL ARCHITECTURE

The control architecture used in this work is the AD architecture (Barber, et al., 2001b). The architecture is made up of two levels: a deliberative level, which contains the reflexive processes, and an automatic level, where the processes with sensor and motor capacities are found. Both levels share the same characteristic: they are made up of skills. Skills are the different capacities for reasoning or for carrying out an action, and they return data and events to the skills or sequencers which have activated them.
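A minimal sketch of this skill notion, assuming a simple callback-style interface (the class and method names are illustrative, not the actual AD interfaces):

```cpp
#include <functional>
#include <string>

// Illustrative common interface for AD skills: a skill can be
// activated and deactivated, and it reports events back to the
// skill or sequencer that activated it.
class Skill {
public:
    using EventCallback = std::function<void(const std::string& event)>;

    virtual ~Skill() = default;

    // Activate the skill; events (e.g. "left door detected",
    // "event failure") are reported through the callback.
    virtual void activate(EventCallback onEvent) = 0;

    // Deactivate the skill when it is no longer needed.
    virtual void deactivate() = 0;
};
```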

[Figure 4: the deliberative level of the AD architecture, containing the Principal Sequencer, the Planner, Sequencer and Navigator skills, and the Long Term and Short Term Memories. Arrows show the data, events and performance-order flows towards the automatic level, including "new plan for the navigator", "new plan needed / event failure" and "Navigation Chart with modifications".]

Fig. 4. Deliberative level in the AD architecture.

In the higher level (see figure 4) the skills related to the capacities of reasoning and decision-making can be found. The planner, described in this paper, and the navigator are examples of deliberative skills. The planner uses the Navigation Chart and establishes the plan as a subgraph of that representation. The navigation skill takes this subgraph and converts it into the sequence of events to be detected (nodes) and sensorimotor actions to be carried out (edges). Once the process is sequenced, this skill is the one that modifies the Navigation Chart, storing the parameter changes such as sentinels and node weights. In the automatic level there are skills which interact with sensors and actuators. Examples of automatic skills used in the present work are: door detection, corridor travelling, door crossing, contour following and location identification (figure 5). These skills are associated with the nodes and edges of the Navigation Chart.

[Figure 5: the automatic level of the AD architecture, with the skills Corridor Travelling, Cross Door, Contour Following, Door Detection and Identify Location running over virtual sensors and actuators (Virtual Laser, Virtual Sonar, Virtual Odometry, Virtual Base) and the physical laser, sonar, odometry and base. A Short Term Memory and the data, events and performance-order flows connect it with the deliberative level.]

Fig. 5. Automatic Level in the AD architecture.

There are two more elements that have been used in this work: the Long Term Memory and the Short Term Memory. The Long Term Memory is used exclusively by the deliberative level skills and contains information which needs to be stored for long periods of time; in this case it contains the environment representation as a Navigation Chart. The Short Term Memory, on the other hand, stores temporary data and is used by both the deliberative and the automatic levels; in this work it contains the plan generated by the planning skill.

5. DELIBERATIVE PLANNER SKILL

The planner has been implemented as a deliberative skill in the AD control architecture. It is considered a deliberative skill because it has to make decisions about the most suitable path according to some criteria. Its mission is to generate the sequence of nodes and edges which will be used by the Navigator skill to take the robot from an initial event v0 to a final event vf, considering the parameters from previous planning experiences. The basic structure of the Planner deliberative skill is shown in figure 6.

Fig. 6. The planning deliberative skill.

The skill is made up of an active object and a data object. The active object receives as activation parameters the initial and final nodes (v0, vf), which indicate the first and last planning events, and as data input the Navigation Chart from the Long-Term Memory. Its outputs are the events sent to the sequencer of the Deliberative Level and the plan extracted from the Navigation Chart, which is stored in the data object, that is, in the Short-Term Memory, for further use by the navigation deliberative skill. The navigator will execute the plan and carry out its supervision, possibly changing weights and sentinels of the Navigation Chart held in the Long-Term Memory.

5.1 Simple planning with Dijkstra algorithm. In a first approximation the Dijkstra algorithm is used to carry out the planning. Used by other authors such as (Car, et al., 2001), it is considered the most adequate algorithm to establish the path which minimizes a parameter in a graph of these characteristics. It searches for a sequence of nodes of the Navigation Chart which optimises a certain parameter. Thinking about the way human beings decide which way to take, it was considered that one of the factors that influences the decision is the time needed to get from one place to another.

5.2 Planning with Guided Dijkstra algorithm. Based on previous work on algorithmics (Voudouris, et al., 1999) and observing the human behaviour reflected in similar studies (Ruddle, et al., 1997), the algorithm was modified to give the robot a behaviour that takes human traits into account. The psychologist R. Zajonc demonstrated in 1980 at the University of Michigan that human beings prefer stimuli that they have already experienced over new ones; Zajonc (Zajonc, 1980) described a clear tendency towards what is familiar (more pleasant) over the unknown. The effect of the Guided Dijkstra algorithm is the following: in an initial situation in which no previous planning has been done, all the nodes have their sentinels set to 0 and the planner will choose the path considered the shortest in terms of time (a parameter related to the edges), applying the Dijkstra algorithm without modification. The optimisation parameter is the time it takes to accomplish the motor action designated by an edge e. That time is cost[e(i,j)], the time needed to perform the motor action that takes the robot from event vi to event vj. The algorithm modification consists of changing that parameter and transforming it into:

guidedcost[e(i,j)] = cost[e(i,j)] - λ·r(vj,e(i,j))·S(vj)     (1)


where r(vj,e(i,j)) is a reinforcement parameter if its value is greater than 0 and a penalisation parameter if its value is lower than 0, S(vj) is the sentinel of node j and λ is the daring factor. The sentinel S(vj) takes the value 1 if the node has been previously visited and 0 if it has not been visited or if a problem occurred while detecting the event. If the sentinel is 0, the planner will not consider the previous reinforcements of the node in the following planning. Once the event is detected, the sentinel takes the value 1 again, keeping the previous weight modifications. The r value is set by the following formula:

r(vj,e(i,j)) = w(vj)·cost[e(i,j)]     (2)

where w(vj) is the weight vector. Its value begins at 0 and increases or decreases by one according to the following criterion: if the node or event is detected during the navigation at the instant set by the Navigation Chart, the node is considered as visited, its sentinel is set to 1 and w(vj) is increased by one unit. On the other hand, if the event does not occur, w(vj) is decreased by one unit and its sentinel is set to 0. The values of the weight vector w(vj) have been chosen empirically and lie in the range [-2, 3]. These values are the numbers of failure and success situations that the planner will consider as limits for its decisions. Therefore, with a λ value of 0.1, paths that save up to 30% of the time consumed with respect to a known path will not be excluded by the planner. On the other hand, if the event related to a node has failed at least twice, the navigator will consider the possibility of eliminating that node, studying the value of the vector. By modifying the daring factor λ, robots with different personalities (adventurous, conservative, fearful, ...) can be obtained. If its value increases we will have robots that choose known zones over unknown ones (fearful robot), while decreasing the value gives the opposite effect.
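As a rough sketch of equations (1) and (2) and of the weight and sentinel update described above, the guided cost could be computed as follows. The code reuses the illustrative Node and Edge structures sketched in Section 3, and all names are assumptions rather than the actual implementation:

```cpp
#include <algorithm>

// Guided cost of an edge e(i,j) leading to node vj, following
// equations (1) and (2):
//   r(vj, e)      = w(vj) * cost[e(i,j)]
//   guidedcost(e) = cost[e(i,j)] - lambda * r(vj, e) * S(vj)
// 'lambda' is the daring factor; larger values favour known zones.
double guidedCost(const Edge& e, const Node& vj, double lambda) {
    double cost = e.estimatedTime;        // optimisation parameter: travel time
    double r = vj.weight * cost;          // reinforcement (>0) or penalisation (<0)
    double s = vj.sentinel ? 1.0 : 0.0;   // 1 only if the node was visited successfully
    return cost - lambda * r * s;
}

// Weight and sentinel update after trying to reach node vj:
// +1 and sentinel=1 when the event is detected at the expected
// instant, -1 and sentinel=0 otherwise. Weights stay in the
// empirically chosen range [-2, 3].
void updateAfterNavigation(Node& vj, bool eventDetected) {
    if (eventDetected) {
        vj.weight = std::min(vj.weight + 1, 3);
        vj.sentinel = true;
    } else {
        vj.weight = std::max(vj.weight - 1, -2);
        vj.sentinel = false;
    }
}
```

Running the standard Dijkstra algorithm with guidedCost in place of the plain edge cost gives the Guided Dijkstra behaviour: with all sentinels at zero it reduces to the ordinary time-minimising plan, and as nodes accumulate positive weights their guided cost drops, so the planner starts preferring the familiar zones, which is the behaviour reported in the experiments below.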

6. EXPERIMENTAL RESULTS

The planning skill has been implemented in the C++ language using CORBA to perform the integration in the AD architecture. To test the skill, an indoor environment formed by corridors and rooms has been considered. Before the planning experiments were carried out, the robot generated autonomously its own model of the environment, in this case the Navigation Chart. The process of autonomous generation of this Navigation Chart by the robot is explained in (Egido et al. 2002). The environment layout during the planning experiments is shown in figure 7, where the doors which belong to rooms C8, C4, C12 and C13, and the one which communicates C12 with C13, are open. In this environment several plans are carried out, showing how the planning criterion is modified as a consequence of the weight and sentinel changes during the navigation.

[Figure 7: layout of the test environment, with rooms C1 to C11 along the corridor and rooms C12 and C13.]

Fig. 7. Work environment.

In the beginning the robot is commanded to reach, from its current position (end of corridor), the event that identifies room C13. Since this navigation mission is the first one carried out in this environment, the robot chooses the sequence of nodes and edges that takes the least travel time. This sequence is drawn with a thicker line in figure 8, over part of the Navigation Chart. The path followed by the robot, obtained through its odometry, is shown in figure 9.

[Figure 8: part of the Navigation Chart, with event nodes numbered 0 to 33 for the corridor and rooms C4, C8, C12 and C13, connected by sensorimotor skills. Abbreviations: Corr.F. Corridor Following; T. Right Turn Right; T. Left Turn Left; I.Ub. Identify Location; C.F. Contour Following; C. Door Cross Door; R.D.Det. Right Door Detected; L.D.Det. Left Door Detected. The plan chosen for the first mission is drawn with a thicker line.]

Fig. 8. Plan obtained from the Navigation Chart.

Afterwards, starting with the robot without previous navigation experience (sentinels set to zero), several navigation processes involving corridor travelling were carried out. As a result, the nodes associated to events placed in the corridor were reinforced compared with those in unknown zones. The generated plans of these missions and the accomplished navigation processes are shown in figure 10.

Fig. 9. Path followed by the robot during the navigation.

During the different navigation processes, the navigation skill modified the weights in the Navigation Chart.


After these modifications, the robot is commanded to reach room C13 again. In this case, because the corridor zone was reinforced through navigation processes carried out successfully, the robot chose a sequence different from the one initially established without previous navigation. The process is shown in figure 11. It can be observed that the sequence of events and actions has been replaced by another plan that passes through the reinforced zone of the Navigation Chart, the corridor. The path followed by the robot is shown in figure 12.

Fig. 11. Plan obtained after some previous navigation processes.

Fig. 12. Path followed by the robot.

7. CONCLUSIONS

In this paper a planner for a topological map (the Navigation Chart) has been presented. The planner has been implemented as a deliberative skill in the AD control architecture. Two criteria have been established for the plan generation: one that only minimizes travel time (Dijkstra) and a modification of the cost function that considers a human behaviour, minimizing the chance of getting lost by taking longer but known routes (Guided Dijkstra). To develop this system, changes in the Navigation Chart have been made so that past experiences during navigation are taken into account. As a consequence, when planning, the previous experiences make the robot choose a known zone instead of a shorter but unknown path. The robot navigation then becomes safer.

In future research, a study of how humans use their intuition to navigate safely will be considered. The daring factor will then change depending on past experiences.


[Figure 10: plans drawn over the Navigation Chart and the corresponding odometry plots for several missions accomplished successfully in the corridor area, including "End of Corridor – Room C8", "Corridor Following" and "Room C8 – Room C4".]

Fig. 10. Different navigation processes carried out successfully by the robot.



8. ACKNOWLEDGMENT

The authors gratefully acknowledge the funds provided by the Spanish Government through the MCYT projects TAP1999-214 and DPI2002-00188.

REFERENCES

Barber, R. and M.A. Salichs (2001). Mobile Robot Navigation Based on Event Maps. Field and Service Robotics, pp. 61-66.

Barber, R. and M.A. Salichs, (2001b). A new human based architecture for intelligent autonomous robots. The Fourth IFAC Symposium on Intelligent Autonomous Vehicles, pp. 85-90.

Car, A., G. Taylor and C. Brunsdon (2001). An analysis of the performance of a hierarchical wayfinding computational model using synthetic graphs. Computers, Environment and Urban Systems, 25, pp. 69-88.

Chatila, R., R. Alami and R. Prayoux (1992). An Architecture Integrating Task Planning and Reactive Execution Control. Workshop on Architectures for Intelligent Control Systems, IEEE Nice, France.

Egido, V., R. Barber, M.J.L. Boada and M.A. Salichs (2002). Self-Generation by a Mobile Robot of Topological Maps of Corridors. Proceedings IEEE International Conference on Robotics and Automation (ICRA), pp. 2662-2667.

Firby, J. (1994). Task networks for controlling continuous processes. Proceedings of the Second International Conference on AI Planning Systems. (K. Hammond) pp.49-54.

Inhelder, B. and J. Piaget (1967). The Child’s Concept of Space. Norton, New York.

Roy, N., W. Burgard, D. Fox and S. Thrun (1999). Coastal Navigation – Robot Motion with Uncertainty. Proceedings IEEE Conf. Robotics and Automation (ICRA) pp. 35-40.

Ruddle, R.A., S.J. Payne and D.M. Jones (1997). Navigating buildings in 'desk-top' virtual environments: experimental investigations using extended navigational experience. Journal of Experimental Psychology: Applied, pp. 143-159.

Timpf, S., F.S. Volta, D.W. Pollock and M.J. Egenhofer (1992). A Conceptual Model of Wayfinding Using Multiple Levels of Abstraction. In Theories and Methods of Spatio-Temporal Reasoning in Geographic Space, pp 348-367.

Voudouris, C. and E. Tsang (1999). Guided local search and its application to the travelling salesman problem. European Journal of Operational Research. Volume 113, issue 2. pp. 469-499.

Zajonc, R. (1980). Feeling and Thinking: Preferences need no inferences. American Psychologist, 35, pp. 151-175.