Efficient Path Determining Robot
RIT Computer Engineering Senior Design Project
Jamie Greenberg and Jason Torre
October 26, 2004
Objective

A motorized robot will navigate from a specified starting position to a specified ending position on a grid, using a set of commands sent through a serial cable from a PC. The PC creates these commands based on data from a webcam mounted above the grid: it runs Dijkstra’s least-cost-path algorithm to find the best path between the two specified coordinates, converts this path to commands, and sends the commands to the robot.
Image Analysis
Every position in the grid is analyzed to determine whether it contains a wall or an empty space. All walls are painted white, while the floor is dark. Using this contrast, the pixel data from the image is examined: if the majority of the pixels in the area corresponding to a grid position are white, the position is classified as a wall; otherwise it is classified as open floor.
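The majority-pixel test described above can be sketched as follows; the cell size (40 px), the brightness threshold (128), and the function names are illustrative assumptions, not the project's actual values.

```python
# Sketch of the majority-pixel wall test: a grid cell is a wall if
# most of its pixels are brighter than a threshold. Cell geometry and
# threshold are assumed values for illustration.
import numpy as np

def classify_cell(gray, row, col, cell_px=40, threshold=128):
    """Return True (wall) if most pixels in the cell are bright."""
    y0, x0 = row * cell_px, col * cell_px
    cell = gray[y0:y0 + cell_px, x0:x0 + cell_px]
    white = np.count_nonzero(cell > threshold)
    return white > cell.size / 2

# Example: a 2x2-cell image, left column bright (wall), right column dark.
img = np.zeros((80, 80), dtype=np.uint8)
img[:, :40] = 255
print(classify_cell(img, 0, 0))  # wall cell -> True
print(classify_cell(img, 0, 1))  # open cell -> False
```

A per-cell majority vote like this is robust to a few stray bright or dark pixels from noise or uneven lighting.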
Sample wall segment analysis
Sample grid setup
Sample grid setup with nodes
Graphical User Interface
Main User Interface Window
The PC will guide the user through the process. In addition to the window above, there will be graphical notification of the status of the serial connection test, of when the image is being acquired, and of whether the coordinates or path are invalid. The user will also be instructed to attach and detach the serial cable at the appropriate times.
Dijkstra's Algorithm
Dijkstra’s Algorithm determines the least-cost path from the starting node to every other node. It works incrementally: it first records the costs of the nodes directly connected to the starting node, then repeatedly removes the unvisited node with the smallest known cost and updates the costs of that node's neighbors whenever a cheaper path through it is found.
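The incremental process above can be sketched in a few lines; the adjacency-dictionary graph representation and the node names are illustrative, not the project's actual code.

```python
# Minimal sketch of Dijkstra's algorithm using a priority queue.
# The maze is modeled as {node: {neighbor: edge_cost}}.
import heapq

def dijkstra(graph, start):
    """Return (least cost, previous node) maps for paths from start."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return dist, prev

# Tiny maze: four grid cells, unit cost per move.
maze = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1},
    "D": {"B": 1},
}
dist, prev = dijkstra(maze, "A")
print(dist["D"])  # 2
```

Following the `prev` map backwards from the ending node reconstructs the best path, which the PC would then convert into robot movement commands.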
[Schematic labels, top level of TankRobot: HCS12 (ports A, B, T; PWM and ATD; VCC, GND), status LEDs D1–D4, power button, go button, distance sensors, 1 kΩ and 2.2 kΩ resistors (R1–R7), 22 kΩ resistors (R8), 9.6 V supplies.]
[Schematic labels, bottom level and underside: two H-bridges driving the DC motors; left-tread, right-tread, and underside opto-reflectors (OPB outputs to the ATD); 4N25 stages U6–U8; PWM inputs.]
Overhead view of robot
Front view of robot
Underside of robot with view of opto-reflector
The robot receives the movement commands through the serial interface with the PC and breaks them down into simple instructions it can execute. The robot has five sensors that it uses to detect the walls of the maze and to track how far along its path it is. Three opto-reflectors let it "see" which grid space it has passed into and allow precision turning, ensuring that every turn is executed the same way each time. Two distance sensors, one on each side of the robot (not clearly visible in the picture), keep it on course: they monitor the robot's distance from the walls, and if it veers too close to one side, the sensors detect this and the robot compensates in the other direction. The robot always faces toward the left in the maze, so a move to the left is executed as a forward motion and a move to the right as a backward motion.
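The side-sensor compensation described above amounts to a simple differential correction; the sensor readings, gain, and sign convention here are illustrative assumptions, not values from the actual robot firmware.

```python
# Sketch of wall-following compensation from two side distance sensors.
# Positive result: steer right (away from the left wall); negative:
# steer left; zero when the robot is centered. Gain is an assumed value.
def steering_correction(left_cm, right_cm, gain=0.1):
    """Differential drive adjustment proportional to the centering error."""
    return gain * (right_cm - left_cm)

print(steering_correction(5.0, 15.0))   # 1.0 -> too close to left wall, steer right
print(steering_correction(10.0, 10.0))  # 0.0 -> centered, no correction
```

In practice the correction would be applied as a small bias between the two tread speeds each control cycle.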
Schematic: top level of robot
The top level of the robot can be seen above. The user interface consists of output lines running from port A of the HCS12 to AND gates, which source current for the status LEDs (D1 = Download Complete, D2 = Running, D3 = Finished, D4 = Error). There are also power and go buttons, which turn the robot on and off and start it through the maze, respectively.
The lower-level schematic can be seen below. It consists of two H-bridges that drive the DC motors, and the opto-reflectors that support precise turning and position recognition. The H-bridge can reverse the direction of each motor, which allows the robot to move backwards without any new connections.
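The direction reversal provided by an H-bridge can be modeled as a small truth table over its two input lines. The input names and the both-low/both-high behavior shown here are generic assumptions about a typical H-bridge driver, not the specific part used on this robot.

```python
# Illustrative model of H-bridge direction control: two logic inputs
# select forward, reverse, coast (both low), or brake (both high).
def motor_direction(in_a, in_b):
    """Map the two H-bridge inputs to the resulting motor behavior."""
    if in_a and not in_b:
        return "forward"
    if in_b and not in_a:
        return "reverse"
    if not in_a and not in_b:
        return "coast"
    return "brake"

print(motor_direction(True, False))   # forward
print(motor_direction(False, True))   # reverse
```

Reversing a tread therefore needs only a swap of the two input levels, which is why no extra wiring is required for backward motion.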
Schematic: lower level of robot