
COMPUTER SCIENCE EDUCATION 7, 2 (1996)

Designing and Building Autonomous Minirobots

Maria Gini
University of Minnesota

The paper describes the organization and technical details of an undergraduate project that culminated at the IJCAI robot competition in August 1995. As part of the project, a team of five undergraduate students designed and built an autonomous minirobot that was able to detect cups and cans, pick them up, and bring them to the appropriate trash or recycling bin. The robot was named Walleye, after the state fish of Minnesota that is known for its voracious appetite.

The limited size of the memory and the limited speed of the microcontroller dictated most of the design choices. Walleye was, by far, the cheapest of all the entries in the competition, and performed well, obtaining third place. More importantly, working on the project has been a tremendous educational experience for the students on the team.

This project is part of a larger effort aimed at exposing undergraduate students to a variety of projects in robotics, computer vision, and 3D modeling. We have chosen these topics as the sources of projects because of their interdisciplinary nature and because they provide a wide variety of problems where system integration, communication, and cooperation are important, and where the "fun" of building and programming a robot is a highly motivating force for the learning process.

1. INTRODUCTION

The main purpose of this paper is to describe the technical details of an undergraduate project that culminated at the IJCAI robot competition in August 1995 [1], and to describe how we are integrating a variety of robotics projects into our undergraduate curriculum.

As part of the project we describe here, a team of five undergraduate students designed and built an autonomous minirobot that was able to detect cups and cans, pick them up, and bring them to the appropriate trash or recycling bin. The robot was named Walleye, after the state fish of Minnesota that is known for its voracious appetite.

John Fischer originally proposed the idea of building minirobots, designed all the microcomputer boards described in this paper, and wrote a large part of the software. Without him this project would have never started.

Correspondence and requests for reprints should be sent to Maria Gini, Department of Computer Science, University of Minnesota, 4-192 EE/CSci Building, 200 Union Street SE, Minneapolis, MN 55455; [email protected].

The project required a variety of skills, from mechanical design (for the gripper of the robot), to hardware design (all the boards used to control the minirobot were designed and built by the students), to programming (at all levels, from the low-level code used to control the motors and read the infrared detectors, to the high-level image processing routines), to software and hardware testing. The project has been a tremendous educational experience for the students on the team.

2. WALLEYE

Walleye, shown in Figure 1, was built using mostly off-the-shelf components at a cost of approximately five hundred dollars. Walleye was built explicitly for the IJCAI robot competition, even though we had already designed the computer boards and used them for other minirobots before. However, this was the first project in which we needed to do vision processing on minirobots, and vision turned out to be the most challenging part of the project.

We decided to build a minirobot, as opposed to using the larger robot we have, because we were interested in exploring how much could be accomplished with ingenuity and engineering rather than with a general-purpose approach to the task. The design choices were dictated by the need to keep the robot as simple and inexpensive as possible but, at the same time, make it robust enough to survive and achieve its task in an unstructured environment. Walleye was, by far, the cheapest of all the entries at the IJCAI competition, and performed well, obtaining third place.

Figure 1. Walleye, the trash-collecting mini-robot

The body of Walleye is made up of a small radio-controlled car. The original electronics have been replaced by specially designed boards. The boards are small (4 inches by 2 3/4 inches) and are stacked. The connections are made using wires plugged into connectors. We use four different types of boards:

1. Micro-computer board: the micro-computer board is built around a 68HC11 microcontroller, with 16K of ROM and 32K of RAM. The board can be connected to a workstation through the asynchronous serial port to download programs and upload results. A second microcomputer board is used for the vision system. Instead of having its own RAM chip, this second board is connected to the frame grabber board by a connector; the RAM chip resides on the frame grabber board.

2. Motor board: this board is also built around a 68HC11 microcontroller, with 16K of EPROM and an L293E dual-motor-driver chip. This board operates as a slave, and communicates with the micro-computer board through a synchronous serial port (with a transfer rate of 62.5 kilobits/second). The main tasks of this board are to keep the car at a constant velocity, to control the opening and closing of the fingers, and to control a sound board. The sound board (taken from a small key-chain) generates different sounds depending on what Walleye is doing (found a cup, found a can, lost an object, found a bin, etc.). The sound is useful for debugging and helps in following what is going on. The motor board keeps track of the robot's movements by using a rudimentary position sensor we built. The position sensor uses two infrared sensors placed slightly less than half way around the wheel. A paper disk, with white and black sectors, is mounted on the wheel. As the car moves, the disk on the wheel rotates and the infrared sensors sense a different amount of infrared light, depending on whether the white part or the black part of the disk is presented to them (a sketch of how such an encoder might be read is given after this list).

3. Interface board: the interface board synchronizes the synchronous serial communication between the micro-computer and the motor board. It has 8 LEDs used for debugging.

4. Frame grabber board: this board is connected to a small charge-coupled device (CCD) camera and to a microcomputer board. Basically, the frame grabber contains a RAM chip that can be accessed from the microprocessor board and from the camera. The microprocessor runs all the vision processing software and enables/disables the frame grabber. The camera is a WL-1070 chip with digital output and a wide-angle lens. The chip communicates with the frame grabber through a serial port and can transmit up to 24 images/second. Each image contains 160 x 160 pixels with 256 gray levels.
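
To make the position sensor concrete, here is a minimal sketch of the kind of tick-counting loop the motor board firmware might run. The threshold, the sensor-reading helper, and the canned readings are all invented for this illustration; they are not the actual 68HC11 code.

    /* Sketch of a tick-counting loop for the black/white paper-disk encoder.
     * read_ir_sensor() is a stand-in for the real port read; the threshold
     * and the simulated readings are illustrative only. */
    #include <stdio.h>

    #define WHITE_THRESHOLD 128       /* assumed cut between black and white sectors */

    static long odometer_ticks = 0;   /* black/white edges counted so far */

    /* Hypothetical hardware access: here it just replays canned readings. */
    static unsigned char read_ir_sensor(int which, int t)
    {
        static const unsigned char sample[2][8] = {
            { 200, 200, 40, 40, 210, 210, 30, 30 },   /* sensor 0 */
            {  50, 220, 220, 45, 45, 215, 215, 35 },  /* sensor 1, offset on the disk */
        };
        return sample[which][t % 8];
    }

    static void poll_encoder(int t)
    {
        static int last[2] = { -1, -1 };
        for (int i = 0; i < 2; i++) {
            int white = read_ir_sensor(i, t) > WHITE_THRESHOLD;  /* classify sector */
            if (last[i] >= 0 && white != last[i])                /* edge => one tick */
                odometer_ticks++;
            last[i] = white;
        }
    }

    int main(void)
    {
        for (int t = 0; t < 8; t++)
            poll_encoder(t);
        printf("ticks counted: %ld\n", odometer_ticks);
        return 0;
    }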

Two 7.2 volt rechargeable batteries are used, one for the motors and one for the computer boards. The batteries last approximately 15-20 minutes.

All software is written in C, with the exception of a few routines written in assembly. The software is developed under Linux and cross-compiled using the "gcc" cross compiler. A variety of tools have been developed to download programs and upload results through the serial port of the workstation or PC.

The robot is equipped with a specially designed mechanical gripper. The design of the gripper was modeled after the design Georgia Tech used for the competition in the previous year [2]. The gripper was designed to allow the capture of cups and cans, yet be able to release itself whenever an immovable object is grasped. An infrared beam in the gripper's hand alerts the robot when an object is in grasping range so the fingers can be closed.

2.1 Control Strategy

The strategy used to achieve the task is based on decomposing the task into several stages. The main idea is to keep the stages as simple as possible, very much like the strategy described in [2]. The stages are listed below; a minimal sketch of the resulting top-level loop follows the list.

Walleye does not keep track of its own position and does not even attempt to construct a map of the world. The position sensor we use is too crude for a map to be of any use. This allows objects to be moved around, and trash bins to be rearranged dynamically during the execution of the task. The presence of other obstacles, such as chairs or furniture, does not affect Walleye.

• Start: in this stage the robot is ready to start. To move to the next stage the infrared beam must be broken.

• Sit-And-Spin: the robot takes a picture of the scene and searches for cups or cans. If nothing is found, the robot backs up at an angle, then moves forward a short distance and takes a new picture. The movement ensures that a new part of the room comes into the field of view of the camera. If, after repeating this sequence a certain number of times, the robot has not yet detected any trash, the robot executes a random move and then returns to the sit-and-spin cycle.

• Track-Trash: once a piece of trash has been detected, the robot starts tracking it. This is accomplished by moving a short distance and then taking another picture. Depending on the scene, the robot determines what adjustments need to be made in its direction. It then moves forward again and repeats the cycle until it has reached the trash.

• Grab-Trash: when the infrared beam is broken, the robot stops and closes the grippers. After completing this stage, the robot moves into the backup stage.

• Backup: when the grippers are successfully closed, the robot backs up a certain distance and stops. Then it opens the grippers again and checks to see if the infrared beam is still broken. The reason for this maneuver is to be certain that the robot did not grasp an immovable object. If an immovable object was grasped, the infrared beam will no longer be broken and the robot will return to the sit-and-spin stage. If the infrared beam remains broken, the robot assumes that the object picked up was legitimate trash. At this point the grippers close again, and the robot moves to the next stage.

• Look-For-Bin: this stage is similar to the sit-and-spin stage. The robot takes a picture and examines it to find the appropriate bin (trash or recycling). If nothing is found, the robot backs up at an angle, then moves forward and takes another picture. If no bin is located after repeating this maneuver a number of times, a random move is executed and the process is repeated. Once the appropriate bin has been found, the robot starts tracking it.

Figure 2. The control strategy used by Walleye

• Track-Bin: once a trash bin has been detected, the robot tracks it. This is accomplished by moving a short distance and taking a new picture. Depending on the location of the bin in the current image, the robot makes an appropriate adjustment to its direction, then moves forward and repeats the cycle. This process continues until the trash bin has been reached.

• Drop-Trash: when the trash bin has been reached, the grippers open up to allow the trash to be deposited next to the trash bin. Since Walleye cannot reach into the bins, the trash is deposited close to them.

• Long-Backup: the robot backs up for a longer period of time in order to move away from the trash area.

• Random-Move: the robot turns by a random angle and drives for a random distance.
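
The stage decomposition maps naturally onto a small state machine. The sketch below shows one way the top-level loop could be organized; the state names follow the text, but the sensing and motion helpers are stubs standing in for the real Walleye routines, so the program simply walks through the states for a fixed number of steps.

    /* One possible shape for the top-level control loop, following the stages
     * described above.  All helpers are hypothetical stand-ins. */
    #include <stdio.h>

    enum stage { START, SIT_AND_SPIN, TRACK_TRASH, GRAB_TRASH, BACKUP,
                 LOOK_FOR_BIN, TRACK_BIN, DROP_TRASH, LONG_BACKUP, RANDOM_MOVE };

    static int beam_broken(void)       { return 1; }  /* IR beam in the gripper     */
    static int trash_in_image(void)    { return 1; }  /* vision: cup or can found?  */
    static int bin_in_image(void)      { return 1; }  /* vision: right bin sign?    */
    static int object_still_held(void) { return 1; }  /* beam re-check after backup */
    static void move_a_little(void)    {}
    static void back_up(int far)       { (void)far; }
    static void close_gripper(void)    {}
    static void open_gripper(void)     {}
    static void turn_random(void)      {}

    int main(void)
    {
        enum stage s = START;
        for (int step = 0; step < 20; step++) {
            switch (s) {
            case START:        if (beam_broken()) s = SIT_AND_SPIN;              break;
            case SIT_AND_SPIN: s = trash_in_image() ? TRACK_TRASH : RANDOM_MOVE; break;
            case TRACK_TRASH:  move_a_little();
                               if (beam_broken()) s = GRAB_TRASH;                break;
            case GRAB_TRASH:   close_gripper(); s = BACKUP;                      break;
            case BACKUP:       back_up(0); open_gripper();
                               /* immovable-object check: is the beam still broken? */
                               if (object_still_held()) { close_gripper(); s = LOOK_FOR_BIN; }
                               else                     { s = SIT_AND_SPIN; }    break;
            case LOOK_FOR_BIN: s = bin_in_image() ? TRACK_BIN : RANDOM_MOVE;     break;
            case TRACK_BIN:    move_a_little();
                               if (bin_in_image()) s = DROP_TRASH;               break;
            case DROP_TRASH:   open_gripper(); s = LONG_BACKUP;                  break;
            case LONG_BACKUP:  back_up(1); s = RANDOM_MOVE;                      break;
            case RANDOM_MOVE:  turn_random(); s = SIT_AND_SPIN;                  break;
            }
            printf("step %d -> state %d\n", step, (int)s);
        }
        return 0;
    }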

2.2 Vision

The most important part of Walleye is its vision system [3]. We designed the vision software to be fast (which was a major challenge considering the limited computing power of the microprocessor we use) and robust. The other major problem we had to face was the limited amount of memory available. With only 32K of RAM and an image of 160 x 160 pixels (25,600 bytes at one byte per pixel), there is not much memory to spare for intermediate results.

So, instead of transferring the image to the main microcomputer board, we do all the vision processing on the microprocessor connected to the frame grabber board. This saves transfer time, allows us to use the memory on the main microcomputer for the control stages, and keeps the vision software independent of the control part of the program. To the program running on the main microcomputer, the vision software appears as a collection of routines that perform operations such as grabbing an image, finding a cup in the image and reporting its x and y coordinates in the image plane, or finding a trash bin.
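
One way to picture this split is to treat the vision board as a simple request/reply service, as in the sketch below. The message codes, the reply layout, and the function names are hypothetical, invented for illustration rather than taken from the actual Walleye interface.

    /* Hypothetical request/reply view of the vision board from the main board. */
    #include <stdint.h>
    #include <stdio.h>

    enum vision_request {          /* one-byte commands sent over the serial link */
        VISION_GRAB_IMAGE       = 0x01,
        VISION_FIND_CUP         = 0x02,
        VISION_FIND_CAN         = 0x03,
        VISION_FIND_TRASH_BIN   = 0x04,
        VISION_FIND_RECYCLE_BIN = 0x05
    };

    struct vision_reply {
        uint8_t found;             /* 1 if the requested object was located */
        uint8_t x, y;              /* position in the 160 x 160 image plane */
    };

    /* Stand-in for the real exchange: send one request byte, wait for a reply. */
    static struct vision_reply vision_query(enum vision_request what)
    {
        struct vision_reply r = { 0, 0, 0 };
        if (what == VISION_FIND_CUP) {   /* canned answer for the demo */
            r.found = 1;
            r.x = 52;
            r.y = 110;
        }
        return r;
    }

    int main(void)
    {
        struct vision_reply cup = vision_query(VISION_FIND_CUP);
        if (cup.found)
            printf("cup at (%d, %d) in the image plane\n", cup.x, cup.y);
        return 0;
    }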

The camera height on Walleye was chosen to make the markers of the trash bins appear in the upper half of the image. No matter what the distance from the robot, the markers never fall below the mid-point of the image. Similarly, since cups and cans are lower than the position of the trash markers, they always appear in the bottom part of the image. So, if the robot is searching for cups or cans, only the bottom half of the image is utilized, and the top half becomes a work area. If the robot is searching for trash bins, then only the top half of the image is searched. This is extremely important because, first, we do not need to process the whole image (only the half that contains what we are looking for), and, second, we can use the half of the image that is not needed as a work area (and so we have some memory in which to store the intermediate results of image processing).

All of the cans used are of a dark color, and all of the styrofoam cups are white. The trash bins are designated by a large black "T" and the recycling bins by a large black circle, as shown later in Figures 5 and 6. The assumption we make is that the color of the floor is sufficiently different from the color of both the cups and the cans, so that cups or cans can be found by performing different threshold operations on the image, as explained below.

The initial operation is to transform the image into a binary image, by using a threshold that changes all pixels to either black or white, depending on each pixel's brightness intensity. Since we need to distinguish white and dark objects, we use two different types of threshold on the image: a white threshold to search for cups, and a black threshold to search for cans and bins, as shown in Figures 3 and 4.
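
As a concrete illustration, the fragment below sketches how the bottom half of a 160 x 160 frame could be binarized into the top-half work area. The buffer layout and the two cut-off values are assumptions made for the sketch, not the actual Walleye constants.

    /* Binarize the bottom half of a frame into its top half (the work area). */
    #include <stdint.h>
    #include <string.h>

    #define W 160
    #define H 160
    #define WHITE_CUT 180   /* assumed: pixels brighter than this count as "white" */
    #define BLACK_CUT  70   /* assumed: pixels darker than this count as "dark"   */

    /* want_bright = 1 marks bright pixels (cups); 0 marks dark pixels (cans, signs). */
    static void threshold_bottom_half(uint8_t img[H][W], int want_bright)
    {
        for (int y = H / 2; y < H; y++) {
            for (int x = 0; x < W; x++) {
                int hit = want_bright ? (img[y][x] > WHITE_CUT)
                                      : (img[y][x] < BLACK_CUT);
                /* pixels of interest become 255, everything else 0,
                   written into the matching row of the upper half */
                img[y - H / 2][x] = hit ? 255 : 0;
            }
        }
    }

    int main(void)
    {
        static uint8_t img[H][W];
        memset(img, 100, sizeof img);    /* mid-grey "floor" */
        img[120][80] = 230;              /* one bright pixel standing in for a cup */
        threshold_bottom_half(img, 1);
        return img[120 - H / 2][80] == 255 ? 0 : 1;
    }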

The robot looks for cups and cans while in the sit-and-spin stage. The image collected by the frame grabber is divided into two halves. The top half is used as a work area, since when looking for cups or cans we need to look only at the bottom half of the image. The result of thresholding is placed in the top half of the image.

Figure 3. An image of cups and cans. A cup is barely visible on the left side of the image, a dark can is in the center, and a lighter can on the right side.

Figure 4. The image shown in Figure 3, after thresholding the bottom half of the image and writing the result in the top half. The thresholded image clearly shows a cup on the left side. Notice the noise at the bottom of the cup caused by specularities and reflections off the floor.

First, we do a white threshold to find out if there are any cups. The threshold changes all pixels that fall below a certain brightness value to black, and all pixels above it to white. The system then searches through this image looking for white pixels. Once a white pixel is found, the pixels around it are checked to see if they are white. If they are not, the white pixel is changed to black, and the search continues. This removes from consideration very small white areas that are likely to be noise, or objects so far away that they are not recognizable. If a white pixel with neighboring white pixels is found, then the neighbors are examined, until the whole white area is determined.

After finding a white area, the area is analyzed. If its height-to-width ratio is according to specifications, the white area is assumed to be a cup, and the robot begins to track it. If not, the area is deleted and searching continues.

After the whole image has been searched and no cup has been found, a black threshold is taken to look for cans. In this threshold, all pixels below a certain brightness value are changed to white, and all pixels above the same value are changed to black. In effect, all dark items in the original picture are changed to white, and all light items are changed to black.

When the new threshold is completed, the program searches the work area in the same way it did for cups, except that now if a white area is found, it is a can. In this way we reuse the same code used to search for cups, with a considerable saving in programming and in memory usage. If no can is found, the robot executes a random move and starts its search again.
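
The search-and-expand step just described is essentially a small flood fill over the binary work area followed by a bounding-box test. The sketch below is one way to write it; the connectivity, the noise cut-off, and the height-to-width window are illustrative values, not the figures used on Walleye.

    /* Region growing over the binary work area, plus a cup-shaped test. */
    #include <stdint.h>
    #include <stdio.h>

    #define W 160
    #define HALF 80          /* the work area: one half of the 160 x 160 frame */

    typedef struct { int minx, maxx, miny, maxy, area; } region;

    /* Grow the white region containing (sx, sy), erasing it as we go so the
     * same code can be reused for cups, cans, and bin signs.  The explicit
     * stack is host-sized; an on-board version would have to be far smaller. */
    static region grow(uint8_t img[HALF][W], int sx, int sy)
    {
        static int stack[HALF * W][2];
        static const int d[4][2] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
        int top = 1;
        region r = { sx, sx, sy, sy, 0 };
        stack[0][0] = sx;
        stack[0][1] = sy;
        img[sy][sx] = 0;
        while (top > 0) {
            top--;
            int x = stack[top][0];
            int y = stack[top][1];
            r.area++;
            if (x < r.minx) r.minx = x;
            if (x > r.maxx) r.maxx = x;
            if (y < r.miny) r.miny = y;
            if (y > r.maxy) r.maxy = y;
            for (int k = 0; k < 4; k++) {          /* 4-connected neighbours */
                int nx = x + d[k][0], ny = y + d[k][1];
                if (nx >= 0 && nx < W && ny >= 0 && ny < HALF && img[ny][nx]) {
                    img[ny][nx] = 0;
                    stack[top][0] = nx;
                    stack[top][1] = ny;
                    top++;
                }
            }
        }
        return r;
    }

    /* Accept the region as a cup if its size and height-to-width ratio fall
     * inside an assumed window; tiny regions are rejected as noise. */
    static int looks_like_cup(region r)
    {
        double h = r.maxy - r.miny + 1;
        double w = r.maxx - r.minx + 1;
        return r.area > 20 && h / w > 0.8 && h / w < 2.5;
    }

    int main(void)
    {
        static uint8_t img[HALF][W];
        for (int y = 30; y < 60; y++)     /* synthetic 20-wide, 30-tall white blob */
            for (int x = 70; x < 90; x++)
                img[y][x] = 255;
        region r = grow(img, 70, 30);
        printf("area=%d looks_like_cup=%d\n", r.area, looks_like_cup(r));
        return 0;
    }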

This overall approach makes the system quite robust. As long as there are cups and cans on the floor, at some point this strategy will succeed. Even if objects are often missed, by continuing to search around, Walleye ends up finding objects, and sometimes stumbles into them.

The strategy used to search for the trash and recycling bins is similar. Since we know that bin signs can appear only in the top half of the image, we process only the top half of the image. When the picture is taken, the program thresholds the top half of the image using the same method used for cans (i.e., dark areas are changed to white, and light areas are changed to black). Figures 5 and 6 show examples of images of bins after thresholding.

Since we know that the signs are black symbols on a white background, we need to use only one type of threshold (what we called earlier a black threshold). The fact that we need only one type of threshold allows us to store the thresholded image in the same memory location used for the original image. In this way we always use only the upper half of the image for the thresholded images.

The work area is then searched to find the sign for a recycling or trash bin (depending on the type of object grabbed by the robot). To save time when searching for a "T" or a circle, only two lines in the image are examined. These lines are near the middle of the image, since the bottom of the symbol is always near the middle of the image. If a white pixel is found on these lines, the process of expanding it by examining its neighbors starts, and proceeds in the same way as when looking for cups and cans. The only difference here is that if the expanded area falls below the middle of the image, then the area is deleted. This is because none of the bin markers can be below the middle of the image, so if the white area found goes below the middle of the image it is considered to be noise or some other vertical line, and it is discarded. When the expansion is completed, the white area is checked to see if it is a "T" or a circle. If not, the area is deleted, and searching resumes in the same way. After the desired bin sign is found, the robot moves into the tracking stage. Tracking is done very much in the same way as the tracking of a cup or a can.
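
The two-scan-line trick can be pictured with the cut-down sketch below: it probes only the column of the first white pixel it meets, whereas the real code expands the whole region as it does for cups. The scan-row positions and the rejection rule are assumptions made for the sketch.

    /* Simplified bin-sign search: two scan rows near the middle of the frame. */
    #include <stdint.h>
    #include <stdio.h>

    #define W 160
    #define H 160
    #define HALF (H / 2)     /* the thresholded work area: rows 0..HALF-1 */

    /* Return the x position of a candidate sign, or -1 if none is found. */
    static int find_bin_sign(uint8_t top[HALF][W])
    {
        const int scan_rows[2] = { HALF - 6, HALF - 2 };   /* assumed row positions */
        for (int i = 0; i < 2; i++) {
            int y = scan_rows[i];
            for (int x = 0; x < W; x++) {
                if (!top[y][x])
                    continue;
                int lo = y, hi = y;
                while (lo > 0 && top[lo - 1][x]) lo--;          /* extend upward   */
                while (hi < HALF - 1 && top[hi + 1][x]) hi++;   /* extend downward */
                if (hi >= HALF - 1)    /* reaches below the middle: treat as noise */
                    continue;
                return x;              /* candidate for the T / circle shape test  */
            }
        }
        return -1;
    }

    int main(void)
    {
        static uint8_t top[HALF][W];
        for (int y = 40; y < HALF - 4; y++)   /* synthetic vertical stroke of a "T" */
            top[y][100] = 255;
        printf("candidate at x=%d\n", find_bin_sign(top));
        return 0;
    }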

Figure 5. The recycling symbol on a bin, shown after thresholding. Since the symbol appears in the top half of the image, thresholding is done only on the top half of the image.

The vision algorithms we developed are fast, especially when considering the limited computing power of the microprocessor. Processing time for tracking is less than a second. Finding something for the first time takes a little longer, averaging one to three seconds.

Figure 6. The trash symbol on a bin, shown after thresholding.

3. THE BIG PICTURE

This project is part of a larger effort aimed at exposing undergraduate students to a variety of projects in robotics, computer vision, and 3D modeling. We have chosen these topics as the sources of projects because of their interdisciplinary nature and because they provide a wide variety of problems where system integration, communication, and cooperation are important. This effort is intended to:

• Motivate the study of advanced applied mathematics by demonstrating its importance to solving real-world problems;

• Teach students how to build complete systems (as opposed to writing isolated programs);

• Allow them to experiment with the object-oriented programming paradigms they learn in class on a variety of complex problems;

• Expose students to application areas (Virtual Reality, 3D modeling for manufacturing, graphical interfaces, real-time operating systems, etc.) where new opportunities for employment or product development might exist;

• Provide hands-on experience with distributed systems and with fundamental issues in communication and real-time control;

• Familiarize students with hardware interfaces, low-level input/output, device drivers, and basic electronics, through hands-on experience;

• Show students how to develop their own independent projects, and foster development of leadership skills among students in project teams.

Robotics, computer vision, graphics, and 3D modeling have been selected as application areas because of their interdisciplinary nature and because they provide a wide variety of problems where system integration, communication, and cooperation are important [4]. We have had positive experiences over the years with a number of undergraduates working with graduate students on research projects in the Robotics and Vision Laboratory, and we have observed how much easier it becomes for students to assimilate their course knowledge around projects in robotics.

3.1 Sample Short Projects

Here we describe briefly a few of the short projects we have used in some of our courses.

3.1.1 Object-Oriented Programming. In the introductory courses in Algorithms and Data Structures, students learn the fundamentals of programming complex tasks, using object-oriented programming, modular design, and complex data structures to store and retrieve data. In this project, a limited set of primitive procedures is used to control the movements of a Lego robot to carry out the simple task of following a line on the floor made out of reflective tape. The project exposes students to the simple protocols involved in verifying that a task was performed as programmed, and to handling error conditions. Various data structures need to be used to save information about the environment, such as the locations of objects. The hardware used is a set of Lego Technic with a hardware interface from Hyperbots that plugs into the serial port of a workstation or PC. After experimenting with the simple control interface directly from the Scheme language to the hardware (available from the MIT Scheme software repository), a student developed a similar program in C++, thereby learning about a different style of object-oriented programming.

3.1.2 Contour Following Using Doubly-Linked Lists. In a more advanced Algorithms and Data Structures class, students learn about algorithms and data structures using C++. One of their complaints is that they do not understand the usefulness of trees and linked lists. As part of this course, students are provided with digital images of several simple objects. Students use the data structures they learn in the course, in particular doubly-linked lists and trees, to represent the edges computed from the images.

3.1.3 Building Device Drivers. Students in the course Structure and Programming of Software Systems learn machine architecture issues such as timing, interrupt handling, and I/O system programming. Their knowledge can be put to work designing device drivers for physical devices, such as the serial ports used to connect external devices such as microprocessors, a GPS system, a conveyor belt, a rotary table, and other computer-controlled devices.

3.1.4 Cooperative Control of Equipment. The purpose of this project is for students to develop and test software that controls multiple pieces of equipment. In Introduction to Operating Systems, students study process scheduling and interprocess communication. Problems such as writing software for interprocess communication among the various processor boards used in Walleye expose students to the subtleties of communication and synchronization.

3.1.5 Object Recognition for Pick-and-Place Tasks. In the course Computer Vision, students learn about basic image processing, 2-D region processing, object recognition systems, techniques for the derivation of shape from motion, texture, and stereo, and the different representations of solids and surfaces. As part of this course, students use cameras to obtain images of objects and have to recognize different objects (possibly overlapping). Students may use an arm to retrieve recognized objects. The relative pose of the object with respect to the camera has to be determined.

3.2 Sample Large Projects

We believe that students greatly benefit from the ability to complete large projects. At the same time, we recognize that most large projects require skills that students acquire in several courses, and far more time than can be justified within the framework of existing courses. To address this need, we have allowed students to work on long-term projects that span multiple quarters. Students typically propose a project to a faculty advisor, and work with the advisor to develop a plan of study and a plan for completing the project over the course of three or more quarters. Some students receive support for this type of work from the University as part of the Undergraduate Research Opportunities Program. These grants require students to submit a formal proposal, and are awarded competitively.

3.3 Cooperation and Collaboration

This project allows students to apply material learned in data structures, systems software, and operating systems. The complete project is to control a pair of robotic arms, along with associated sensors, to keep a set of objects flowing evenly on a conveyor belt and a rotary table. The goal is to detect the objects and control both arms to prevent the objects from falling off of the conveyor belt. During the early stages of the project, the students wrote software to control one arm to pick up a stationary object from a rotary table [5], put it down on a conveyor belt, and turn on the conveyor belt. Students then worked on the problem of controlling a pair of robots, communicating between them, and synchronizing them. By completing this lab, students gained experience with designing and implementing device driver routines, building object-oriented abstractions for complex interfaces, integrating real-time inputs into an application, programming systems with real-time constraints, communication between cooperating processes, and controlling multiple real-time processes [6].

3.4 Learning How to Balance a Pole

Another project, which involved building a minirobot with the same boards used in Walleye, focused on learning the classical control task of balancing a pole. Pole-balancing grew out of a classic dynamics problem (see, e.g., [7]) and is a well-studied control-learning problem. In the control-learning form of the problem, the system must learn to keep a long rod or pole, hinged to a cart and free to fall in a plane, in a roughly vertical orientation by applying forces to the cart. If the pole tilts beyond a certain angle from the vertical (most authors define the same fixed threshold, and we have followed suit), or the cart goes beyond a certain fixed distance from its starting point, a failure signal is generated.

The robot we built, PBMin (Pole-Balancing Mini-robot), and the method used for learning are described at length in [8]. Since this is a much more advanced research project, it was done by a team including a graduate student and two undergraduates. PBMin uses two sensor inputs, the pole angle and the car position. The pole-angle sensor is a variable resistor attached to a common yardstick. The pole is limited to move in a plane, up to a fixed angle in either direction. The car position is sensed using the same system described earlier for Walleye.

4. CONCLUSIONS

We have described a major robotics project that a team of undergraduate students completed last summer, and a few other robotics-related projects. These projects are designed to give students a basic understanding of real-time programming, distributed systems, and system integration. They are part of a larger effort aimed at exposing undergraduate students to a variety of real-world problems that have robotics as the common theme. The "fun" of building and programming a robot, and the challenge of competing with other teams in a robotics competition, are highly motivating forces for the learning process.

NOTES

1. Notice the noise at the bottom of the cup, created by specularities and reflections off the floor.

2. We would also like to acknowledge the Undergraduate Research Opportunities Program of the University of Minnesota, AAAI, the AT&T Foundation, and NIH (through the Summer REU program at the University of Minnesota) for their financial support to attend the IJCAI competition. I wish to thank the other members of the competition team, Elena Beltran, Luis Ortiz, Maxim Tsvetovatyy, and Paul Zobitz, for their enthusiasm and hard work on the Walleye project, and all the other students who have worked on projects in robotics.

This work was funded in part by the NSF under grant NSF/DUE-9351513.

REFERENCES

[1] D. Miller, D. Hinkle, and D. Kortenkamp, "The 1995 robot competition and exhibition", AI Magazine, Vol. 17, No. 1, Spring 1996, pp. 31-45.

[2] T. Balch, G. Boone, T. Collins, H. Forbes, D. MacKenzie, and J. C. Santamaria, "Io, Ganymede, and Callisto: A multiagent robot trash-collecting team", AI Magazine, Vol. 16, No. 2, Summer 1995, pp. 39-51.

[3] John Fischer and Maria Gini, "Vision-based mini-robots", The Robotics Practitioner, Vol. 2, No. 2, Spring 1996, pp. 40-46.

[4] Joseph L. Jones and Anita M. Flynn, Mobile Robots: Inspiration to Implementation, A.K. Peters, Wellesley, MA, 1993.

[5] R. P. Paul, Robot Manipulators: Mathematics, Programming and Control, The MIT Press, Cambridge, MA, 1981.

[6] S. T. Levi and A. K. Agrawala, Real Time System Design, McGraw-Hill, New York, 1990.

[7] R. Cannon, Dynamics of Physical Systems, McGraw-Hill, New York, 1967.

[8] Dean Hougen, John Fischer, and Deva Johnam, "A neural network pole-balancer that learns and operates on a real robot in real time", Proceedings of the MLC-COLT Workshop on Robot Learning, 1994, pp. 73-80.
