
Visual interaction for real-time navigation of autonomous mobile robots

Source: 2009 International Conference on Cyberworlds
Authors: Marco L. Della Vedova, Tullio Facchinetti, Antonella Ferrara, Alessandro Martinelli
Speaker: 余俊瑩
Advisor: Prof. 洪國寶
Date: 99.05.04

1

2

To limit the latency introduced by the acquisition system, image processing must be performed in a real-time fashion.

Robotics applications can often be modelled as cyberworlds, which can be defined as worlds created in a cyberspace: the space of circuits and electronic components that allows the communication and elaboration of system data.

The camera itself and the image processing unit are important elements of the cyberspace.

3

4

5

The robots are a set of similar objects, with the same shape, that must be tracked within the workspace, i.e., their positions must be suitably detected.

This is performed by evaluating both the extrinsic and intrinsic camera parameters. The former refer to the orientation of the camera with respect to the world coordinate system, while the latter concern how the camera maps points in its own reference system onto the final grabbed image.
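
As a rough illustration of the role of the two parameter sets, here is a minimal pinhole-projection sketch in Python (this is not the degenerated camera model used in the paper; the values of K, R and t below are placeholders, not calibrated for the actual setup):

import numpy as np

# Intrinsic matrix K: how the camera maps points from its own reference
# frame onto the image plane (focal lengths and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic parameters: rotation R and translation t of the camera with
# respect to the world coordinate system (placeholders).
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(p_world):
    """Project a 3-D world point to pixel coordinates."""
    p_cam = R @ p_world + t          # world frame -> camera frame (extrinsics)
    u, v, w = K @ p_cam              # camera frame -> image plane (intrinsics)
    return np.array([u / w, v / w])  # perspective division -> pixel coordinates

print(project(np.array([0.1, 0.2, 1.0])))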

6

7

8

Homogeneous coordinates use four elements, (x, y, z, w); to convert back to three-dimensional coordinates the relation is (x/w, y/w, z/w). Here w is a depth/scale parameter, usually set to 1; when it is used to express depth it is set to the reciprocal of the distance (1/distance), so that a point at infinite distance is represented by setting w to 0.
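
A tiny sketch of this conversion (plain NumPy; assumes w != 0 for finite points):

import numpy as np

def to_homogeneous(p3):
    """(x, y, z) -> (x, y, z, 1)."""
    return np.append(p3, 1.0)

def from_homogeneous(p4):
    """(x, y, z, w) -> (x/w, y/w, z/w); w == 0 denotes a point at infinity."""
    x, y, z, w = p4
    if w == 0:
        raise ValueError("a point at infinity has no finite 3-D representation")
    return np.array([x / w, y / w, z / w])

print(from_homogeneous(to_homogeneous(np.array([1.0, 2.0, 3.0]))))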

9

10

Def.1:

11

Def.2:

12

Def.3:

13

Def.4:

14

Def.5:

15

Def.6:

16

Def.7:

17

Def.8:

18

Def.9:

19

Def.10:

20

21

22

23

24

25

26

Two independent DC motors actuate two non-steering wheels; thus, two low-friction supports are required to guarantee the equilibrium of the robot.
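
For reference, the standard differential-drive relation between the robot body velocities and the two wheel speeds (a textbook model; the wheel radius and axle length below are illustrative values, not the robot's actual dimensions):

WHEEL_RADIUS = 0.03  # m, illustrative value
AXLE_LENGTH = 0.15   # m, illustrative distance between the two driven wheels

def wheel_speeds(v, omega):
    """Map the robot's linear velocity v [m/s] and angular velocity
    omega [rad/s] to (left, right) wheel angular speeds [rad/s]."""
    w_left = (v - omega * AXLE_LENGTH / 2.0) / WHEEL_RADIUS
    w_right = (v + omega * AXLE_LENGTH / 2.0) / WHEEL_RADIUS
    return w_left, w_right

print(wheel_speeds(0.2, 0.5))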

The robot hosts a FLEX board, whose on-board micro-controller is interfaced with the motors and the wireless transceiver. The software is built on top of the ERIKA Real-Time Operating System.

1) to receive information about the objects' positions in the workspace from the wireless interface
2) to plan the trajectory
3) to drive the motors according to the planned trajectory

Trajectory tracking: a feedback control algorithm based on a PID and sliding-mode cascade scheme.

Trajectory planning: a modified gradient-tracking algorithm.
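
As a rough sketch of how those three on-board activities could be arranged in a periodic loop (the names radio, planner, controller and motors and their methods are hypothetical placeholders; the actual PID/sliding-mode and gradient-tracking internals are not reproduced here):

import time

PERIOD_S = 0.05  # hypothetical control period (20 Hz)

def control_loop(radio, planner, controller, motors):
    """Periodic loop mirroring the three on-board activities."""
    while True:
        start = time.monotonic()
        # 1) receive the object positions broadcast over the wireless interface
        positions = radio.receive_positions()
        # 2) plan the trajectory towards the goal, avoiding obstacles
        trajectory = planner.plan(positions)
        # 3) track the planned trajectory and drive the motors with the
        #    resulting wheel commands
        left_cmd, right_cmd = controller.track(trajectory, positions)
        motors.drive(left_cmd, right_cmd)
        # keep the loop periodic
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))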

27

The main modules are:
• Acquisition: retrieves a stream of JPEG images from the network camera using the HTTP protocol and decompresses the image
• Recognition: reads the decompressed image and applies the segmentation technique to recognize the markers
• Remapping: transforms the markers' coordinates from the image space to the real space, by applying the degenerated camera model presented in Section IV
• Interpretation: associates marker colors to robots and obstacles
• Transmission: composes and sends the message to the wireless radio transceiver, to actually broadcast the information to the robots

Auxiliary modules are:
• Calibration: performs the calibration routine, which sets the parameters used by the Remapping module
• UI module: allows an operator to set the goal-point position and to monitor the system
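
A minimal sketch of how the five main modules could be chained (the camera URL and the function names passed in are hypothetical placeholders standing in for the actual segmentation, remapping, interpretation and radio code):

import urllib.request

# Hypothetical placeholder URL for the network camera's JPEG endpoint.
CAMERA_URL = "http://192.168.0.10/image.jpg"

def acquisition():
    """Acquisition: retrieve one JPEG frame over HTTP (decompression would follow)."""
    with urllib.request.urlopen(CAMERA_URL) as resp:
        return resp.read()

def pipeline_step(recognize, remap, interpret, transmit):
    """One pass through the tracking pipeline."""
    frame = acquisition()        # Acquisition
    markers = recognize(frame)   # Recognition (marker segmentation)
    points = remap(markers)      # Remapping (image space -> real space)
    objects = interpret(points)  # Interpretation (markers -> robots / obstacles)
    transmit(objects)            # Transmission (broadcast over the radio link)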

28

29

The navigation algorithm is conceived such that the ideal robot trajectory can be at most tangent to the obstacle perimeter enlarged by the security range.

An interesting result derived from this set of experiments is that the distance between the ideal and real trajectories is always less than the security range: this means that the imprecision of the real trajectory is never large enough to lead to a collision.
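
To make the collision argument concrete, a small sketch under the simplifying assumption of circular obstacles (the security-range value is illustrative, not taken from the paper):

import math

SECURITY_RANGE = 0.15  # metres, illustrative value

def clears_obstacle(point, obstacle_centre, obstacle_radius):
    """True if a trajectory point stays outside the enlarged perimeter."""
    return math.dist(point, obstacle_centre) >= obstacle_radius + SECURITY_RANGE

# If the ideal trajectory is at most tangent to the enlarged perimeter
# (distance >= radius + SECURITY_RANGE) and the real trajectory deviates
# from the ideal one by less than SECURITY_RANGE, then the real trajectory
# still keeps a distance greater than the obstacle radius, so it cannot
# hit the obstacle.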

30

31

The main contribution of the paper is the definition of the camera model, which allows a single camera to be used to detect the motion of relevant objects in the considered robotics application.

This evaluation takes into account the inaccuracies produced by the visual tracking infrastructure as well as the approximations made for building the dynamic system that models the robots.

32

Thanks for your attention

33