
System Document (Revised)


SYSTEM DOCUMENT

Project: DPM Design Project 2015
Task: Construct an autonomous robot capable of identifying its position, navigating to specific points when placed within a 12’ x 12’ enclosure containing several obstacles, and shooting Ping-Pong balls at specified targets.

Document Version Number: 7.
Date: April 12th, 2015.

Authors: Bahar Demirli, Hernan Gatta.

1 TABLE OF CONTENTS

2 System Model
3 Hardware Available and Capabilities
 3.1 Lego Mindstorms Kits
 3.2 Processor
 3.3 I/O
 3.4 Power
4 Software Available and Capabilities
 4.1 Software Available
 4.2 Software Development Kits
 4.3 Available Code
5 Compatibility
6 Reusability
7 Structures
 7.1 Software Structures
 7.2 Mechanical and Electrical Structures
8 Methodologies
 8.1 Software Methodologies
  8.1.1 Coordinate System
  8.1.2 Ultrasonic Localization
  8.1.3 Navigation
  8.1.4 Gridline Detection
  8.1.5 Odometer Correction
9 Tools
 9.1 Software Tools
 9.2 Hardware Tools
10 Glossary of Terms


2 SYSTEM MODEL

The system decomposes into the following functional blocks (the original block diagram is summarized here as an outline):

- Robot
  - Navigation
    - Localization: Ultrasonic, Light
    - Odometry and Odometry Correction
    - Path Finding: Obstacle Map, Precomputed Paths
  - Targeting
    - Aiming
    - Shooting: Cradle, Gun
  - Servo Control: Motors, Actuators
  - Sensor Input Filters: Mean, Average, Differential


3 HARDWARE AVAILABLE AND CAPABILITIES

3.1 LEGO MINDSTORMS KITS
Three Lego Mindstorms kits are available. Each kit includes the following parts:

- 1 NXT brick;
- 1 Touch sensor;
- 1 Ultrasonic sensor;
- 2 Light sensors (detect red, green, blue, yellow, black and white/light levels);
- 2 Servo motors;
- Structural and connecting parts of various shapes and sizes.

3.2 PROCESSOR
Each NXT brick ships with the following processor specifications:

- Atmel AT91SAM7S256 CPU:
  - 32-bit ARM7TDMI-core processor;
  - 256KB of FLASH;
  - 64KB of RAM.
- 8-bit Atmel AVR ATmega48 microcontroller:
  - 4KB of FLASH;
  - 512B of RAM.

3.3 I/O
Each NXT brick ships with the following I/O specifications:

- 4 sensor ports and 3 motor ports over RJ12 connections;
- Bluetooth Class II V2.0;
- 1 100x64 monochrome display;
- 1 speaker (sampling between 2-16kHz);
- 1 USB port.

3.4 POWER
Each NXT brick ships with the following power specifications:

- 6 AA batteries @ 1.5V.

4 SOFTWARE AVAILABLE AND CAPABILITIES

4.1 SOFTWARE AVAILABLE

- Lego Digital Designer (LDD): robot modelling;
- Lego Mindstorms Software: LabVIEW modeler and utilities;
- Eclipse + leJOS plugin: Java IDE and uploader.

4.2 SOFTWARE DEVELOPMENT KITS

- leJOS NXT: Java-based runtime and libraries:
  - Java-based;
  - Powerful API;
  - Possibly slower, bulkier code, as Java is a high-level language;
  - Lego-supported.
- NXT-G: LabVIEW-based runtime:
  - Easy to use;
  - Difficult to implement complex programs;
  - Lego-supported.
- ROBOTC: C-based language and libraries:
  - C-based;
  - Possibly faster, lighter binaries;
  - 3rd party;
  - Not free.
- Microsoft Robotics Studio:
  - .NET-based (C#, VB.NET, etc.);
  - Possibly faster than Java, though it is unclear whether code is compiled to IL or to native binaries;
  - 3rd party;
  - Free.

4.3 AVAILABLE CODE
Code is available from the R&D labs carried out by members of the team prior to assembling the final group; three such software sets are available for use. This existing software will, naturally, require modifications to fit the final robot design.

5 COMPATIBILITY
There are no plans to integrate 3rd-party blocks into the robot, and Lego’s own libraries are assumed to be internally compatible.

6 REUSABILITY
The following elements from previous labs are available for reuse:

- Odometry and odometry correction (software), developed in labs 2 and 3;
- Navigation (software), developed in lab 3;
- Localization (software), developed in lab 4;
- Launching mechanism (hardware), derived in lab 5;
- Falling- and rising-edge detection techniques for obstacle detection, from lab 1.

7 STRUCTURES

7.1 SOFTWARE STRUCTURES
The software controllers for the various functional blocks should be separate and communicate only through defined interfaces. This will allow multiple subsystems to be built, modified and tested independently.

7.2 MECHANICAL AND ELECTRICAL STRUCTURES
The following lists the structures used for the final mechanical design:

- 1 ultrasonic sensor located at the front-center of the robot (for localization);
- 3 light sensors, located directly behind each wheel and at the center of the robot, for improved odometry correction and localization;
- A launching mechanism (driven by a motor), constructed with Lego pieces;
- One NXT brick (3 motor ports and 4 sensor ports required);
- Rubber elastic bands wrapped around the launcher.


8 METHODOLOGIES

8.1 SOFTWARE METHODOLOGIES

8.1.1 Coordinate System
The software adopts the coordinate system of the odometer developed in lab 1. This coordinate system is defined as follows:

1. The Y-axis crosses the robot back to front;
2. The X-axis crosses the robot from port to starboard;
3. The positive Y-axis points in the direction of 0°;
4. Heading increases clockwise.

The rationale behind this choice is that every other software component developed in further labs assumes this coordinate system.

8.1.1.1 Notes on the built-in odometer
The leJOS API ships with an odometer whose functionality is equivalent to that developed in lab 1. However, this odometer works in a coordinate system with two wrap-around points, as opposed to one: the (180°, 359°) range in the lab’s odometer maps to (180°, 1°), and the (0°, 179°) range maps to (0°, −179°). An attempt was made to integrate this odometer into the controlling software in order to use other components provided by the leJOS API. However, mapping the existing localization components to this coordinate system proved quite difficult, given ambiguities in the sign of the angles. While adopting the built-in odometer would have enabled seamless use of other provided classes, the effort was abandoned due to these mapping problems.
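The two conventions can be reconciled programmatically. The sketch below (a hypothetical helper, not part of the leJOS API) converts between the lab odometer’s [0°, 360°) clockwise-positive headings and the built-in odometer’s (−180°, 180°] opposite-sign convention, as implied by the mapping above:

```java
/** Illustrative conversion between the lab odometer's heading convention
 *  ([0, 360) degrees, clockwise-positive) and the built-in odometer's
 *  convention ((-180, 180], opposite sign), as implied by the mapping
 *  described above. Hypothetical helper, not part of the leJOS API. */
public class HeadingConvention {
    /** Lab heading -> built-in heading: negate, then wrap into (-180, 180]. */
    public static double labToBuiltIn(double labDeg) {
        double h = -labDeg % 360.0;
        if (h <= -180.0) h += 360.0;
        if (h > 180.0) h -= 360.0;
        return h;
    }

    /** Built-in heading -> lab heading: negate, then wrap into [0, 360). */
    public static double builtInToLab(double builtInDeg) {
        double h = -builtInDeg % 360.0;
        if (h < 0.0) h += 360.0;
        return h;
    }

    public static void main(String[] args) {
        // Lab 200 deg falls in the (180, 359) range, so it maps into (180, 1).
        System.out.println(labToBuiltIn(200.0)); // 160.0
        System.out.println(labToBuiltIn(359.0)); // 1.0
    }
}
```

The sign ambiguity mentioned above shows up precisely in the negation step: every existing localization constant would have to be negated and re-wrapped.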

8.1.2 Ultrasonic Localization
During the development of the control software, two ultrasonic localization routines were designed, implemented and tested. While only one of these is in use, both are described and discussed below.

8.1.2.1 Overview
As per the project guidelines, the robot will be placed anywhere within a two-square by two-square area, at an arbitrary heading. Two issues arise with this setting. First, if the robot is not placed on the imaginary diagonal that joins the vertices of the starting area, beginning at the corner of the terrain, the distances from the robot to the back and left walls will not be equal. For instance, were the robot placed in the middle of tile (1, 2), the distance to the left wall would be half that to the back wall. Therefore, the preset distance thresholds used to latch angles, as proposed in lab 4, are bound to fail. Second, at the farthest position from the corner of the terrain, i.e. tile (2, 2), distances to obstacles outside the starting area are comparable to the distances to the back and left walls. Some mechanism must therefore be put in place to avoid interpreting obstacles beyond the starting area as walls.

8.1.2.2 First Method
The first method considered for ultrasonic localization consists of rotating the robot on itself for a complete turn while collecting distance samples. These samples are then analyzed to find certain features of the resulting distance distribution whose associated information yields, after processing, both the position and heading of the robot.


Shape of the Distance Distribution

Given the geometry of the starting area in which the robot is placed, the full clockwise turn distribution always has a shape similar to the following (a value of -1 indicates an incomplete collection buffer):

[Figure 1: Single Ultrasonic Sensor Samples — raw and filtered distance (cm) vs. heading (degrees). Median filter, 5-sample window, 50Hz polling rate, started facing north, collected over USB RConsole.]

This distribution is primarily characterized by two sets of minima to the left and right of a small peak in the valley. The central minima on either side of the peak correspond to the lowest distances to the back and left walls, respectively. For localization, since heading information is collected along with every sample in this distribution, these distance-heading pairs can be compared against expected values. That is, the lowest distances to both walls are expected to occur at well-defined headings with respect to the robot’s coordinate system. By computing the difference between the recorded and expected values, it is straightforward to estimate the true heading of the robot. Furthermore, the distance information can be used to estimate the robot’s (X, Y) position as well.

Distribution Shape Changes

It is important to note that, depending on the starting position and heading of the robot in the starting area, the distribution will change shape:

1. Different starting headings will shift the distribution to either side along the X-axis:
   a. If the robot starts facing a wall, the valley will be split into two discontinuous portions.
2. If the distances to the left and back walls are not equal:
   a. The samples in the valley to the left and right of the peak will shift in either direction along the Y-axis;
   b. One set of samples, to either the left or right of the peak, will expand along the X-axis, since a longer arc is covered for the same change in heading;
   c. The other set of samples will compress along the X-axis, for the converse reason.


In other words, if the robot is not placed along the 45° diagonal of tile (1, 1) from the corner, the distribution is asymmetric in both axes.

Algorithm

The procedure for ultrasonic localization will be as follows (the values chosen below were decided on by analyzing test data; see companion Excel spreadsheets):

1. Rotate clockwise until the open field is in view of the ultrasonic sensor;
2. Reset the odometer (i.e. set (X, Y, Heading) equal to (0, 0, 0°));
3. Perform one full rotation clockwise at 30°/s:
   a. Poll distance samples every 50ms:
      i. Use a median filter on the raw data;
      ii. The window is 5 samples in width:
         1. This corresponds to 250ms of data.
   b. Associate with every distance polled the current heading as reported by the odometer;
   c. The resulting sample distribution consists of a collection of distance-heading pairs, in the order in which they were collected.
4. Clip the resulting distribution by distance, keeping only those pairs whose distance value is between 1cm and 60cm inclusively;
5. Classify the samples by distance differentially:
   a. Assemble contiguous samples whose distance values are within a 7cm band of one another into clusters;
   b. Each cluster must contain at least 25 samples.
6. Given the normal form of the distance distribution for this geometry, the result of this operation can be either:
   a. A single cluster:
      i. The bump in distance values at the corner is less than the clustering threshold;
      ii. The cluster thus covers both the back and left walls.
   b. Two clusters:
      i. The clusters correspond to the back and left walls, respectively.
7. If no cluster is found, or more than two are detected, ultrasonic localization fails;
8. If only a single cluster is detected:
   a. Find the local maximum within a central band whose width equals a third of the number of samples in the cluster;
   b. Split the cluster into two clusters at said local maximum;
   c. Proceed as in the next point.
9. If two clusters are detected:
   a. In each cluster, find the minimum closest to the center of that cluster;
   b. The distance-heading pair of the first cluster corresponds to the smallest distance to the back wall;
   c. The distance-heading pair of the second cluster corresponds to the smallest distance to the left wall.
10. The headings at which the minimum distances to the back and left walls occur are known, as per the definition of the coordinate system. Hence:
    a. Compute the signed difference between the expected heading for the back wall and the recorded heading;
    b. Do the same for the left wall;
    c. Compute the average of the differences;
    d. Add this average to the reported odometer heading;
    e. Adjust the new heading to be within the 0° to 360° range;
    f. This final corrected heading is the true heading of the robot.
11. Use the minimum distances collected, together with the measured (X, Y) offsets from the center of the ultrasonic sensor to the center of the robot, to set the robot’s (X, Y) position in the obvious way;
12. Move the robot to (0, 0, 45°).

Note: To compute the headings and distances for the back and left walls, average over three samples around the minima.
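The median filter called for in step 3 can be sketched as follows. The pass-through handling of the first few samples, before the window fills, is an assumption, since the document does not specify edge behaviour:

```java
import java.util.Arrays;

/** Sliding-window median filter for raw ultrasonic distance samples,
 *  as used in step 3 of the localization routine. Passing samples
 *  through until the window is full is an assumed edge policy. */
public class MedianFilter {
    public static int[] filter(int[] raw, int window) {
        int[] out = new int[raw.length];
        for (int i = 0; i < raw.length; i++) {
            if (i < window - 1) {          // window not yet full
                out[i] = raw[i];
                continue;
            }
            int[] w = Arrays.copyOfRange(raw, i - window + 1, i + 1);
            Arrays.sort(w);
            out[i] = w[window / 2];        // middle element = median
        }
        return out;
    }

    public static void main(String[] args) {
        // A 255cm spike (a common ultrasonic misreading) is rejected
        // once the 5-sample window covers it.
        int[] raw = {40, 41, 255, 42, 43, 44, 45};
        System.out.println(Arrays.toString(filter(raw, 5)));
        // [40, 41, 255, 42, 42, 43, 44]
    }
}
```

With a 5-sample window, no single spurious reading can survive the median, which is why the routine tolerates the sensor’s occasional 255cm misreadings.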

[Figure 2: Sample Clustering — filtered distance (cm) vs. heading (degrees); not the same distribution as above. Lower clip bound = 1, upper clip bound = 60, differential classifier threshold = 7, minimum cluster size = 25.]
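The differential classification of steps 4 and 5 can be sketched as below. The band and minimum-size parameters are illustrative (smaller than the real 7cm/25-sample values so the example stays short), and reading “within a band of one another” as a bound on successive differences is an interpretation:

```java
import java.util.ArrayList;
import java.util.List;

/** Differential clustering of distance samples (steps 4-5 of the first
 *  method): contiguous samples whose successive distances stay within a
 *  band are grouped, and clusters below a minimum size are discarded.
 *  The parameters used in main() are illustrative, not the real ones. */
public class DistanceClusterer {
    /** Returns [startIndex, endIndex] pairs for each accepted cluster. */
    public static List<int[]> cluster(int[] dist, int band, int minSize) {
        List<int[]> clusters = new ArrayList<>();
        int start = 0;
        for (int i = 1; i <= dist.length; i++) {
            // Close the current cluster at the end of the data, or at a
            // jump larger than the clustering band.
            if (i == dist.length || Math.abs(dist[i] - dist[i - 1]) > band) {
                if (i - start >= minSize) clusters.add(new int[]{start, i - 1});
                start = i;
            }
        }
        return clusters;
    }

    public static void main(String[] args) {
        // Two walls separated by a distance jump yield two clusters.
        int[] d = {30, 31, 32, 31, 30, 55, 56, 57, 56, 55};
        for (int[] c : cluster(d, 7, 3))
            System.out.println(c[0] + ".." + c[1]); // 0..4 then 5..9
    }
}
```

The minimum-size rule is what rejects short runs caused by obstacles glimpsed beyond the starting area, as discussed in the overview.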

8.1.2.3 Second Method
The second method considered for ultrasonic localization is a simplified version of the first. Instead of attempting to split the distance distribution into two contiguous clusters, this method only looks for the cluster with the deepest minimum. Given the geometry of the problem, finding this point is guaranteed, whereas finding a proper cluster split is not.

Algorithm

Notation: the distance of sample W is D(W); the heading of sample W is H(W).

1. Rotate clockwise until the open field is in view of the ultrasonic sensor;
2. Reset the odometer (i.e. set (X, Y, Heading) equal to (0, 0, 0°));
3. Perform one full rotation clockwise at 30°/s:
   a. Poll distance samples every 50ms:
      i. Use a median filter on the raw data;
      ii. The window is 3 samples in width:
         1. This corresponds to 150ms of data.
   b. Associate with every distance polled the current heading as reported by the odometer;
   c. The resulting sample distribution consists of a collection of distance-heading pairs, in the order in which they were collected.
4. Find the pair with the minimum distance;
5. Associate with it all adjacent distance samples whose difference from the minimum is at most one;
6. Find the sample in the middle of this set:
   a. This sample’s heading corresponds to the wall which is closest to the robot;
   b. Let this sample be W1.
7. Rotate to H(W1);
8. Rotate 90° clockwise;
9. If there is no wall, rotate 180° clockwise;
10. The robot is now facing the other wall:
    a. Collect a distance-heading pair;
    b. Let this sample be W2.
11. If H(W1) > H(W2):
    a. Sample W1 corresponds to the left wall Wl;
    b. Sample W2 corresponds to the right wall Wr.
12. Otherwise:
    a. Sample W1 corresponds to the right wall Wr;
    b. Sample W2 corresponds to the left wall Wl.
13. Correct the (X, Y) position:
    a. The corrected X position is: D(Wl) − SquareWidth + UltrasonicSensorOffset;
    b. The corrected Y position is: D(Wr) − SquareWidth + UltrasonicSensorOffset.
14. Correct the heading:
    a. Compute the difference ∆1 = 270° − H(Wl);
    b. Compute the difference ∆2 = 180° − H(Wr);
    c. Compute the average of the differences: ∆ = (∆1 + ∆2) / 2;
    d. This value corresponds to the error in the odometer;
    e. Compute the corrected heading value: Hnew = ReportedHeading + ∆;
    f. If the value is larger than 360°, subtract 360° from it;
    g. If the value is negative, add 360° to it;
    h. This final value is the true current heading.
15. Move the robot to (0, 0, 45°).
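Steps 13 and 14 amount to a few lines of arithmetic. In the sketch below, SQUARE_WIDTH and SENSOR_OFFSET are placeholder values, not the calibrated constants from the actual robot:

```java
/** Position and heading correction for the second localization method
 *  (steps 13-14). SQUARE_WIDTH and SENSOR_OFFSET are placeholder
 *  values for illustration only. */
public class UltrasonicCorrection {
    static final double SQUARE_WIDTH = 30.48;  // one tile, cm (assumed)
    static final double SENSOR_OFFSET = 4.0;   // sensor-to-center, cm (assumed)

    /** Corrected X from the distance to the left wall (step 13a). */
    public static double correctedX(double dLeft) {
        return dLeft - SQUARE_WIDTH + SENSOR_OFFSET;
    }

    /** Corrected Y from the distance to the other wall (step 13b). */
    public static double correctedY(double dRight) {
        return dRight - SQUARE_WIDTH + SENSOR_OFFSET;
    }

    /** Corrected heading (step 14): average the two heading errors,
     *  add to the reported heading, and wrap into [0, 360). */
    public static double correctedHeading(double reported, double hLeft, double hRight) {
        double d1 = 270.0 - hLeft;   // error relative to the left wall
        double d2 = 180.0 - hRight;  // error relative to the other wall
        double h = reported + (d1 + d2) / 2.0;
        if (h >= 360.0) h -= 360.0;
        if (h < 0.0) h += 360.0;
        return h;
    }

    public static void main(String[] args) {
        // Walls seen at 265 deg and 175 deg: the odometer lags by 5 deg.
        System.out.println(correctedHeading(45.0, 265.0, 175.0)); // 50.0
    }
}
```

Averaging the two wall errors halves the influence of a single noisy distance reading on the final heading.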

8.1.2.4 Comparison
The second method is clearly a much simpler version of the first in terms of calculation: computation is traded for movement. Instead of finding both minima directly in the distance distribution, only one minimum is found computationally and the second is located by manipulating the robot. While this requires more time and energy, it is less susceptible to the changes in the distribution outlined previously. In addition, the software is significantly less complex. Therefore, the second method is the one used in the controlling software.


8.1.3 Navigation
As with ultrasonic localization, two different approaches will be presented.

8.1.3.1 First Method
As per the project description, a map will be provided ahead of time. Using this information, a path consisting of multiple waypoints can be incorporated into the source code for each possible map before starting. Once it is known which map will be used, this information will be provided to the robot before the start of the run. The robot will, in turn, follow the preset path.

8.1.3.2 Second Method
The second method consists of providing the known obstacle map to the robot in a defined format and computing paths on the fly. As per the requirements and project description, a path from the starting point to the shooting area is guaranteed to exist, and all the information necessary to compute it is provided to the robot ahead of time. As such, the following methodology can be adopted for navigation:

- Upload the known maps to the robot ahead of time;
- Inform the robot via its buttons which map to use before its run;
- Transform the map into a suitable internal format for pathfinding;
- Compute a path from the starting point to the center of the shooting area using A*;
- Travel through the waypoints in the path in cardinal directions, rotating in place to turn.

Map Format

A map consists of the following information:

- A binary matrix whose entries correspond to subdivisions in the grid of the floor:
  - A zero entry corresponds to an empty subdivision, and a one otherwise.
- The number of horizontal and vertical subdivisions;
- The physical width and height of each subdivision.

Path Computation

The leJOS API provides an implementation of the A* algorithm. This implementation uses a graph-based representation of the terrain’s topography. While a binary matrix is the easiest encoding, it must therefore be transformed into a graph. To that end, the following algorithm suffices:

- Create a node for each zero entry in the matrix and store the nodes by their index into said matrix;
- Connect adjacent nodes if their corresponding matrix entries are zero;
- Do not connect diagonally adjacent nodes, since travel is in cardinal directions only.

Once the graph has been created, it is fed to an instance of the class that implements the A* algorithm. The latter then returns an ordered collection of nodes that can be used for navigation.
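The matrix-to-graph transformation and search can be sketched as below. Since the leJOS pathfinding classes are not reproduced here, the sketch substitutes a plain breadth-first search, which returns the same shortest paths as A* when every move has unit cost:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

/** Treats a binary obstacle matrix (1 = blocked) as a 4-connected graph
 *  and finds a shortest path with BFS. Equivalent to A* under unit move
 *  costs; the actual software hands the graph to the leJOS A* classes. */
public class GridPath {
    /** Returns the waypoints from start to goal, or null if unreachable. */
    public static List<int[]> shortestPath(int[][] grid, int[] start, int[] goal) {
        int rows = grid.length, cols = grid[0].length;
        int[][] prev = new int[rows][cols];        // predecessor as flat index
        for (int[] row : prev) java.util.Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(start);
        prev[start[0]][start[1]] = start[0] * cols + start[1]; // mark visited
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};    // cardinal only
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == goal[0] && cur[1] == goal[1]) break;
            for (int[] m : moves) {
                int r = cur[0] + m[0], c = cur[1] + m[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && grid[r][c] == 0 && prev[r][c] == -1) {
                    prev[r][c] = cur[0] * cols + cur[1];
                    queue.add(new int[]{r, c});
                }
            }
        }
        if (prev[goal[0]][goal[1]] == -1) return null;         // no path
        List<int[]> path = new ArrayList<>();
        int[] cur = goal;
        while (!(cur[0] == start[0] && cur[1] == start[1])) {
            path.add(0, cur);
            int p = prev[cur[0]][cur[1]];
            cur = new int[]{p / cols, p % cols};
        }
        path.add(0, start);
        return path;
    }

    public static void main(String[] args) {
        int[][] grid = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        List<int[]> path = shortestPath(grid, new int[]{0, 0}, new int[]{2, 2});
        System.out.println(path.size() + " waypoints"); // 5 waypoints
    }
}
```

Diagonal moves are deliberately excluded, mirroring the rule above that travel is in cardinal directions only.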

Notes

For this navigation approach to function, it is necessary that the odometer not yield errors in position greater than half of the grid subdivision’s dimensions minus the width of the robot. Otherwise, without obstacle detection, the robot may eventually run into an obstacle. Furthermore, such errors in the odometer will undermine the positioning of the robot for shooting.

8.1.3.3 Comparison
While the second method offers more flexibility, the first is much simpler to implement. For the same reasons as with ultrasonic localization, the first method is currently used.

8.1.4 Gridline Detection
Three light sensors are mounted on the robot in order to detect gridlines printed on the terrain’s floor. Detection of these gridlines is crucial for the robot to be able to perform light localization and to correct cumulative deviations in the odometer’s values. This section presents the overall shape of the light sensor value distribution at gridline crossings, the issues involved in their detection, and methods of recognizing their presence that take these factors into consideration.

Sampling and Polling Rates

The internal sampling rate of the light sensors appears to be 1kHz. At a forward speed of 8cm/s and a software polling rate of 200Hz, given that the gridlines are black and approximately 4.5mm in width, it is possible to obtain up to 10 samples per gridline. Data collected during tests (shown below) indicate that 10 samples are amply sufficient to detect a line. However, it should be noted that it is up to the firmware to schedule the polling threads on time, which in practice means that the true polling rate will never be exactly 200Hz.

Shape of the Light Value Distribution

[Figure 3: Light Sensor Samples — raw value vs. sample number for the left, center and right sensors. Unfiltered, full fluorescent lab lighting, red floodlight, 200Hz polling rate, 2 gridline crossings, collected over USB RConsole.]

The distributions shown in Figure 3 clearly display a number of interesting phenomena that must be factored in when attempting to detect gridlines.

First, it is evident that the average value for each sensor is different, even though these samples were all collected simultaneously and under the same environmental conditions. Each sensor therefore introduces its own constant bias into the readings.

Second, the dips representing the same gridline are not of the same magnitude across sensors. This can be explained in two different, and not necessarily mutually exclusive, ways. On the one hand, the gridlines on the floor have been washed away over time: not every section of every gridline has the same strength of black, and on certain parts of the flooring some lines have been almost completely worn off. On the other hand, different sensors may have different sensitivities to the lack of light, attributable to a multitude of factors such as scratches and erosion of the light intakes from contact with the floor, dust inside the sensors, and so on. The fundamental take-away is that the same gridline may cause dips of varying magnitudes across the sensors.

Third, varying ambient lighting is expected to cause vertical shifts of the distribution; that is, the constant bias may increase or decrease with the amount of external light shining on the terrain. It is known that the competition will take place in an area exposed to direct sunlight, which itself changes with the weather.

In addition, it may be noted from the distributions above that some points fall partway between the average distribution value and the bottom of the dips. Since the gridlines are not smeared, it is sensible to presume that an averaging mechanism is built into the sensors. Moreover, a change in the vertical elevation of the sensors with respect to the ground will change the average distribution value, since less light falls on the detectors. Lastly, and perhaps most importantly, gridline values are significantly offset from the average; in other words, noise is statistically insignificant with respect to the values at gridline crossings.

The following list summarizes these conclusions:

1. Sensors introduce their own constant vertical bias;
2. Sensors detect the same gridline with different dip magnitudes;
3. Ambient light introduces an external, potentially varying vertical bias;
4. The fall from the average distribution value to the bottom of each dip may be gradual;
5. Changes in vertical sensor elevation shift the average distribution value;
6. Noise is negligible with respect to gridline dips.

Differential Filtering

In order to counteract vertical biases in the light sensor value distributions, the best choice is a differential filter. For the controlling software, a first-order differential filter appears to successfully detect most gridlines. The filter is defined as follows:

∆[n] = x[n] − α·x[n − 1]

where n ∈ ℕ is the sample number, x[n] is the raw value of the n-th light sensor sample, ∆[n] is the differentially filtered value for sample n, and α = 1 in this case. The resulting distribution is shown below:

[Figure 4: First-Order Numerical Derivative of Raw Light Sensor Values (from the distribution above) — filtered value vs. sample number for the left, center and right sensors.]

It is clear from the distributions in Figure 4 that a differential filter effectively does away with constant biases, as is to be expected. Additionally, their shape further corroborates the overall insignificance of noise (though not entirely, as seen at the very beginning of the graph). Therefore, the average offset factors introduced by points 1, 3, 5 and 6 above are effectively eliminated.

However, points 2 and 4 indicate a remaining piece of the puzzle: given the distributions shown here, a threshold must be found such that a meaning can be ascribed to the spikes. First, it is not functionally necessary to distinguish between floor-to-line and line-to-floor crossings (i.e. average-to-peak and peak-to-average, respectively); therefore, the only magnitude in question is the absolute value of the numerical derivative¹. Second, point 4 implies that an overall minimum for gridline detection ought to be found: given a differential distribution, any value above such a minimum can be considered a sample above a gridline. Lastly, point 2 suggests that such gridline thresholds are unique to each sensor, as is clear from the figure above. Testing has therefore been performed to determine these thresholds; see the companion tests for reference.
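Combining the filter definition with a per-sensor threshold, a detector along these lines is plausible (the threshold value in the example is illustrative, not one of the calibrated per-sensor thresholds):

```java
import java.util.ArrayList;
import java.util.List;

/** First-order differential filter for gridline detection:
 *  delta[n] = x[n] - x[n-1] (alpha = 1). Returns the indices of samples
 *  whose filtered magnitude exceeds a per-sensor threshold; the
 *  threshold used in main() is illustrative only. */
public class DifferentialFilter {
    public static List<Integer> detect(int[] raw, int threshold) {
        List<Integer> hits = new ArrayList<>();
        for (int n = 1; n < raw.length; n++) {
            int delta = raw[n] - raw[n - 1];
            // Sign is ignored: floor-to-line and line-to-floor crossings
            // are not distinguished.
            if (Math.abs(delta) >= threshold) hits.add(n);
        }
        return hits;
    }

    public static void main(String[] args) {
        // The constant bias (around 500) cancels; only the entry into
        // and exit from the dip at the line survive.
        int[] raw = {500, 501, 499, 380, 382, 502, 500};
        System.out.println(detect(raw, 100)); // [3, 5]
    }
}
```

Because the filter compares consecutive samples, the constant per-sensor and ambient biases of points 1 and 3 never reach the threshold comparison.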

Implementation

Considering the points brought forward by the preceding discussion, gridline detection in the controlling software is performed in the following way. First, three independent threads are launched, each polling one light sensor every 5ms. These threads push the raw light values into circular buffers. Then, on every possible occasion, the buffers are pulled, locked, and the differential filter described above is applied to each one. Each filter instance is given a reference threshold, found during testing, which it uses to determine which samples correspond to gridlines. The filters then return a list of indexes into the buffer pointing to those samples. What happens next depends on what the information is used for.

Notes: The size of the circular buffers and the time between successive checks depend on the use case. Also, because the buffers are locked for processing, the polling threads are stalled in the meantime. Since buffer sizes are relatively small and computing numerical derivatives is an O(n) operation, this does not appear to be a problem. Another way to accomplish processing with a possibly shorter locking time is to copy the buffer first; the leJOS API provides a fast, firmware-level method to do this.
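A minimal version of the shared buffer could look as follows; the class and its API are illustrative sketches, not the leJOS firmware buffer (and chronological reordering of a wrapped buffer is omitted for brevity):

```java
import java.util.Arrays;

/** Minimal circular sample buffer shared between a polling thread and
 *  the filter, illustrating the lock-then-copy approach described in
 *  the notes above. Illustrative only, not the leJOS classes. */
public class SampleBuffer {
    private final int[] buf;
    private int count = 0;   // total samples ever pushed

    public SampleBuffer(int capacity) { buf = new int[capacity]; }

    /** Called by the polling thread every 5ms; wraps around when full. */
    public synchronized void push(int sample) {
        buf[count % buf.length] = sample;
        count++;
    }

    /** Copy the contents out under the lock so that filtering can run
     *  unlocked, keeping any stall of the polling thread short. */
    public synchronized int[] snapshot() {
        return Arrays.copyOf(buf, Math.min(count, buf.length));
    }
}
```

Since `snapshot()` holds the lock only for the duration of an array copy, the polling thread can rarely miss its 5ms deadline because of filtering work.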

8.1.5 Odometer Correction
The model used in the odometer to calculate the position of the robot assumes perfect physical mechanisms. However, no part of the robot behaves ideally, so the values provided by the odometer accrue errors as the robot moves. It is thus necessary to implement a correction system to cancel these errors before they grow too large for the odometer to be of use.

To this end,

9 TOOLS

9.1 SOFTWARE TOOLS

- Eclipse and leJOS: for development;
- LDD: for robot modeling and building;
- git: for revision control, and GitHub for storage;
- Microsoft PowerPoint: for weekly meetings with the client;
- Dropbox: for file sharing among team members and the client.

¹ A more sophisticated line detection technique could make use of this information to eliminate potential false positives; two negative peaks in succession, say, could be considered a false positive. However, this idea could introduce other problems, especially when working at limiting forward speeds and therefore with few samples per line.


9.2 HARDWARE TOOLS

- Motors: electrically powered mechanisms that rotate at a desired speed by a chosen angle. In our design, two motors are attached to the wheels and make the robot move, and one drives the launching mechanism. The motors are fairly accurate and are essential to the robot design.
- Light Sensor: a device that detects light levels. Its purpose is to correct the robot’s odometer and orientation (for localization) by detecting gridlines. Three light sensors are used in our design: one directly behind each wheel, to ensure more accurate correction of the robot’s orientation and odometer, and one at the back center for localization. Issues arise when detecting the gridlines under varying ambient lighting conditions; these can be dealt with using differential filters.
- Ultrasonic Sensor: sends out ultrasonic signals and detects their return when they bounce off an object. The sensor measures the time the signal took to return and thus produces a distance reading. It frequently produces both false negatives and false positives and is thus rather inconsistent. In addition, its detection range is limited: it cannot detect objects closer than 5cm. One ultrasonic sensor is used at the front of the robot, for localization.
- Touch Sensor: detects an object on contact. This sensor is available for use; however, it is not necessary for completing the task and is therefore not used in our design.

10 GLOSSARY OF TERMS

- LabVIEW: visual programming language from National Instruments;
- leJOS: firmware for Lego Mindstorms bricks;
- LDD: Lego Digital Designer;
- False negative: failure of the ultrasonic sensor to detect an object;
- False positive: detection by the ultrasonic sensor of a non-existent object;
- Flash: persistent storage;
- Eclipse: modular integrated development environment.
