


71 Robotics

Frank L. Lewis, University of Texas at Arlington
John M. Fitzgerald, Adept Technology
Kai Liu, Alcatel Telecom

71.1 Introduction
71.2 Robot Workcells
71.3 Workcell Command and Information Organization
Intelligent Control Architectures • Behaviors and Hybrid Systems Design • Workcell Planning, Coordination, and Control Structure
71.4 Commercial Robot Configurations and Types
Manipulator Performance • Common Kinematic Configurations • Drive Types of Commercial Robots • Commercial Robot Controllers
71.5 Robot Kinematics, Dynamics, and Servo-Level Control
Kinematics and Jacobians • Robot Dynamics and Properties • Robot Servo-level Motion Control • Robot Force/Torque Servocontrol • Motion Trajectory Generation
71.6 End Effectors and End-of-Arm Tooling
Part Fixtures and Robot Tooling • Grippers and Fingers • Robot Wrist Mechanisms • Robot/Tooling Process Integration and Coordination
71.7 Sensors
The Philosophy of Robotic Workcell Sensors • Types of Sensors • Sensor Data Processing • Vision for Robotics
71.8 Workcell Planning
Workcell Behaviors and Agents • Task Decomposition and Planning • Task Matrix Approach to Workcell Planning • Path Planning
71.9 Job and Activity Coordination
Matrix Rule-Based Job Coordination Controller • Process Integration, Digital I/O, and Job Coordination Controller Implementation • Coordination of Multiple Robots
71.10 Error Detection and Recovery
Error Detection • Error Recovery
71.11 Human Operator Interfaces
Levels of User Interface • Mechanisms for User Interface
71.12 Robot Workcell Programming
Robot Programming Languages • V+, A Representative Robot Language
71.13 Mobile Robots and Automated Guided Vehicles
Mobile Robots • Automated Guided Vehicle Systems

F.L. Lewis, M. Fitzgerald, and K. Liu, "Robotics," in The Computer Science Handbook, Allen B. Tucker, Jr., Ed., Chapter 71, CRC Press, 2011.


    71.1 Introduction

The word robot was introduced by the Czech playwright Karel Čapek in his 1920 play Rossum's Universal Robots. The word robota in Czech means simply work. In spite of such practical beginnings, science fiction writers and early Hollywood movies have given us a romantic notion of robots and expectations that they will revolutionize several walks of life, including industry. However, many of the more far-fetched expectations for robots have failed to materialize. For instance, in underwater assembly and oil mining, teleoperated robots are very difficult to manipulate due to sea currents and low visibility, and have largely been replaced or augmented by automated smart quick-fit couplings that simplify the assembly task. However, through good design practices and painstaking attention to detail, engineers have succeeded in applying robotic systems to a wide variety of industrial and manufacturing situations where the environment is structured or predictable. Thus, the first successful commercial implementation of process robotics was in the U.S. automobile industry; the word automation was coined in the 1940s at Ford Motor Company, as a contraction of automatic motivation.

As machines, robots have precise motion capabilities, repeatability, and endurance. On a practical level, robots are distinguished from other electromechanical motion equipment by their dexterous manipulation capability, in that robots can work, position, and move tools and other objects with far greater dexterity than other machines found in the factory. The capabilities of robots are extended by using them as a basis for robotic workcells. Process robotic workcells are integrated functional systems with grippers, end effectors, sensors, and process equipment organized to perform a controlled sequence of jobs to execute a process. Robots must coordinate with other devices in the workcell such as machine tools, conveyors, part feeders, cameras, and so on. Sequencing jobs to correctly perform automated tasks in such circumstances is not a trivial matter, and robotic workcells require sophisticated planning, sequencing, and control systems.

Today, through developments in computers and artificial intelligence (AI) techniques (and often motivated by the space program), we are on the verge of another breakthrough in robotics that will afford some levels of autonomy in unstructured environments. For applications requiring increased autonomy it is particularly important to focus on the design of the data structures and command-and-control information flow in the robotic system. Therefore, this chapter focuses on the design of robotic workcell systems. A distinguishing feature of robotics is its multidisciplinary nature: to successfully design robotic systems one must have a grasp of electrical, mechanical, industrial, and computer engineering, as well as economics and business practices. The purpose of this chapter is to provide a background in these areas so that design of robotic systems may be approached from a position of rigor, insight, and confidence.

The chapter begins by discussing layouts and architectures for robotic workcell design. Then, components of the workcell are discussed from the bottom up, beginning with robots, sensors, and conveyors/part feeders, and progressing upwards in abstraction through task coordination, job sequencing, and resource dispatching, to task planning, assignment, and decomposition. Concepts of user interface and exception handling/fault recovery are included.

    71.2 Robot Workcells

In factory automation and elsewhere it was once common to use layouts such as the one in Figure 71.1, which shows an assembly line with distinct workstations, each performing a dedicated function. Robots have been used at the workstation level to perform operations such as assembly, drilling, surface finishing, welding, palletizing, and so on. In the assembly line, parts are routed sequentially to the workstations by conveyors. Such systems are very expensive to install, require a cadre of engineering experts to design and program, and are extremely difficult to modify or reprogram as needs change. In today's high-mix low-volume (HMLV) manufacturing scenario, these characteristics tolled the death knell for such rigid antiquated designs.

In the assembly line, the robot is restricted by placing it into a rigid sequential system. Robots are versatile machines with many capabilities, and their potential can be significantly increased by using them as a basis


FIGURE 71.1 Antiquated sequential assembly line with dedicated workstations. (Courtesy of Edkins, M. 1983. Linking industrial robots and machine tools. In Robotic Technology. A. Pugh, Ed. Peregrinus, London.)

FIGURE 71.2 Robot workcell. (Courtesy of Edkins, M. 1983. Linking industrial robots and machine tools. In Robotic Technology. A. Pugh, Ed. Peregrinus, London.)

for robotic workcells such as the one in Figure 71.2 [Decelle 1988, Jamshidi et al. 1992, Pugh 1983]. In the robotic workcell, robots are used for part handling, assembly, and other process operations. The workcell is designed to make full use of the workspace of the robots, and components such as milling machines, drilling machines, vibratory part feeders, and so on are placed within the robots' workspaces to allow servicing by the robots. Contrary to the assembly line, the physical layout does not impose a priori a


fixed sequencing of the operations or jobs. Thus, as product requirements change, all that is required is to reprogram the workcell in software. The workcell is ideally suited to emerging HMLV conditions in manufacturing and elsewhere.

The rising popularity of robotic workcells has taken emphasis away from hardware design and placed new emphasis on innovative software techniques and architectures that include planning, coordination, and control (PC&C) functions. Research into individual robotic devices is becoming less useful; what is needed are rigorous design and analysis techniques for integrated multirobotic systems.

    71.3 Workcell Command and Information Organization

In this section we define some terms, discuss the design of intelligent control systems, and specify a planning, coordination, and control structure for robotic workcells. The remainder of the chapter is organized around that structure. The various architectures used for modeling AI systems are relevant to this discussion, although here we specialize the discussion to intelligent control architecture.

    71.3.1 Intelligent Control Architectures

Many structures have been proposed under the general aegis of the so-called intelligent control (IC) architectures [Antsaklis and Passino 1992]. Despite frequent heated philosophical discussions, it is now becoming clear that most of the architectures have much in common, with apparent major differences due to the fact that different architectures focus on different aspects of intelligent control or different levels of abstraction. A general IC architecture based on work by Saridis is given in Figure 71.3, which illustrates the principle of decreasing precision with increasing abstraction [Saridis 1996]. In this figure, the organization level performs as a manager that schedules and assigns tasks, performs task decomposition and planning, does path planning, and determines for each task the required job sequencing and assignment of resources. The coordination level performs the prescribed job sequencing, coordinating the workcell agents or resources; in the case of shared resources it must execute dispatching and conflict resolution.

The agents or resources of the workcell include robot manipulators, grippers and tools, conveyors and part feeders, sensors (e.g., cameras), mobile robots, and so on. The execution level contains a closed-loop controller for each agent that is responsible for the real-time performance of that resource, including trajectory generation, motion and force feedback servo-level control, and so on. Some permanent built-in motion sequencing may be included (e.g., stop robot motion prior to opening the gripper).

At each level of this hierarchical IC architecture, there may be several systems or nodes. That is, the architecture is not strictly hierarchical. For instance, at the execution level there is a real-time controller for each workcell agent. Several of these may be coordinated by the coordination level to sequence the jobs needed for a given task. At each level, each node is required to sense conditions, make decisions, and give commands or status signals. This is captured in the sense/world-model/execute (SWE) paradigm of Albus [1992], shown in the NASREM configuration in Figure 71.4; each node has the SWE structure.

    FIGURE 71.3 Three-level intelligent control architecture from work by Saridis.


    FIGURE 71.4 Three-element structure at all levels of the IC architecture: the NASREM paradigm.
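As an illustration of the SWE paradigm, the following minimal Python sketch (not from the chapter; all names are illustrative) shows a single node that senses conditions, folds them into its local world model, and then issues commands or status signals:

```python
# A minimal sketch of the sense/world-model/execute (SWE) structure that
# every node in the NASREM-style hierarchy shares. Illustrative names only.

class SWENode:
    """One node of the IC hierarchy: sense conditions, update a world
    model, then emit commands or status signals."""

    def __init__(self, name):
        self.name = name
        self.world_model = {}          # the node's local knowledge base

    def sense(self, observations):
        """Fold new sensor readings or status reports into the model."""
        self.world_model.update(observations)

    def execute(self):
        """Decide and emit commands/status based on the world model.
        A real node would run planning or control logic here."""
        return {"node": self.name, "status": "ok",
                "model": dict(self.world_model)}

# One cycle of a hypothetical node at the coordination level:
node = SWENode("job_coordinator")
node.sense({"robot_busy": False, "part_present": True})
command = node.execute()
```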

    FIGURE 71.5 Behavior-based design after the subsumption technique of Brooks.

    71.3.2 Behaviors and Hybrid Systems Design

In any properly designed IC system, the supervisory levels should not destroy the capabilities of the systems supervised. Thus, design should proceed in the manner specified by Brooks [1986], where behaviors are built in at lower levels, then selected, activated, or modified by upper-level supervisors. From the point of view of still higher level nodes, the composite performance appears in terms of new, more complex, or emergent behaviors. Such subsumption design proceeds in the manner of adding layers to an onion, as depicted loosely in Figure 71.5.

Near or slightly below the interfaces between the coordination level and the execution level one must face the transition between two fundamentally distinct worlds. Real-time servo-level controller design and control may be accomplished in terms of state-space systems, which are time-varying dynamical systems (either continuous time or discrete time) having continuous-valued states such as temperatures, pressures, motions, velocities, forces, and so on. On the other hand, the coordinator is not concerned about such details, but speaks in terms of discrete events such as "perform this job" or "check this condition."


    FIGURE 71.6 Hybrid systems approach to defining and sequencing the plant behaviors.

The theory of hybrid systems is concerned with the interface between continuous-state systems and discrete-event systems.

These concepts are conveniently illustrated by figures such as Figure 71.6, where a closed-loop real-time feedback controller for the plant having dynamics $\dot{x} = f(x, u)$ is shown at the execution level. The function of the coordinator is to select the details of this real-time feedback control structure; that is, the outputs z(t), control inputs u(t), prescribed reference trajectories r(t), and controllers K to be switched in at the low level. Selecting the outputs amounts to selecting which sensors to read; selecting the control inputs amounts to selecting to which actuators the command signals computed by the controller should be sent. The controller K is selected from a library of stored predesigned controllers.

A specific combination of (z, u, r, K) defines a behavior of the closed-loop system. For instance, in a mobile robot, for path-following behavior one may select: as outputs, the vehicle speed and heading; as controls, the speed and steering inputs; as the controller, an adaptive proportional-integral-derivative (PID) controller; and as reference input, the prescribed path. For wall-following behavior, for instance, one simply selects as output the sonar distance from the wall, as input the steering command, and as reference input the prescribed distance to be maintained. These distinct closed-loop behaviors are sequenced by the coordinator to perform the prescribed job sequence.
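As a concrete illustration, here is a hedged Python sketch (names, gains, and the simple proportional controllers are hypothetical stand-ins, not from the chapter) that stores the two mobile-robot behaviors just described as (z, u, r, K) combinations in a library and lets the coordinator switch between them:

```python
# A sketch of the hybrid-systems view of Figure 71.6: a behavior is a
# specific combination (z, u, r, K); the coordinator selects one and runs
# the closed loop. The controller library here uses simple proportional
# laws purely for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Behavior:
    outputs: tuple        # z: which sensors to read
    inputs: tuple         # u: which actuators to command
    reference: Callable   # r(t): prescribed reference trajectory
    controller: Callable  # K: feedback law u = K(z, r)

behaviors = {
    "path_following": Behavior(
        outputs=("speed", "heading"),
        inputs=("drive", "steer"),
        reference=lambda t: (1.0, 0.0),   # desired speed and heading
        controller=lambda z, r: tuple(0.5 * (ri - zi)
                                      for zi, ri in zip(z, r)),
    ),
    "wall_following": Behavior(
        outputs=("sonar_distance",),
        inputs=("steer",),
        reference=lambda t: (0.3,),       # distance to maintain [m]
        controller=lambda z, r: (2.0 * (r[0] - z[0]),),
    ),
}

def coordinator_step(job, sensor_read, t):
    """Coordinator selects the behavior for `job` and runs one cycle."""
    b = behaviors[job]
    z = sensor_read(b.outputs)            # read only the selected sensors
    return dict(zip(b.inputs, b.controller(z, b.reference(t))))
```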

    71.3.3 Workcell Planning, Coordination, and Control Structure

A convenient planning, coordination, and control structure for robotic workcell design and operation is given in Figure 71.7, which is modified from the next generation controller (NGC) paradigm. This is an operational PC&C architecture fully consistent with the previous IC structures. In this figure, the term virtual agent denotes the agent plus its low-level servocontroller and any required built-in sequencing coordinators. For instance, a virtual robot includes the manipulator, its commercial controller with servo-level joint controllers and trajectory generator, and in some applications the gripper controller plus an agent internal coordinator to sequence manipulator and gripper activities. A virtual camera might include the camera(s) and framegrabber board, plus software algorithms to perform basic vision processing such as edge detection, segmentation, and so on; thus, the virtual camera could include a data abstraction, which is a set of data plus manipulations on that data.


    FIGURE 71.7 Robotic workcell planning, coordination, and control operational architecture.

The remainder of the chapter is structured after this PC&C architecture, beginning at the execution level to discuss robot manipulator kinematics, dynamics, and control; end effectors and tooling; sensors; and other workcell components such as conveyors and part feeders. Next considered is the coordination level, including sequencing control and dispatching of resources. Finally, the organization level is treated, including task planning, path planning, workcell management, task assignment, and scheduling.

Three areas are particularly problematic. At each level there may be human operator interfaces; this complex topic is discussed in a separate section. An equally complex topic is error detection and recovery, also allotted a separate section, which occurs at several levels in the hierarchy. Finally, the strict NGC architecture has a component known as the information or knowledge base; however, in view of the fact that all nodes in the architecture have the SWE structure shown in Figure 71.4, it is clear that the knowledge base is distributed throughout the system in the world models of the nodes. Thus, a separate discussion on this component is not included.

    71.4 Commercial Robot Configurations and Types

Robots are highly reliable, dependable, and technologically advanced factory equipment. The majority of the world's robots are supplied by established companies using reliable off-the-shelf component technologies. All commercial industrial robots have two physically separate basic elements, the manipulator


arm and the controller. The basic architecture of all commercial robots is fundamentally the same, and consists of digital servocontrolled electrical motor drives on serial-link kinematic machines, usually with no more than six axes (degrees of freedom). All are supplied with a proprietary controller. Virtually all robot applications require significant design and implementation effort by engineers and technicians. What makes each robot unique is how the components are put together to achieve performance that yields a competitive product. The most important considerations in the application of an industrial robot center on two issues: manipulation and integration.

    71.4.1 Manipulator Performance

The combined effects of kinematic structure, axis drive mechanism design, and real-time motion control determine the major manipulation performance characteristics: reach and dexterity, payload, quickness, and precision. Caution must be used when making decisions and comparisons based on manufacturers' published performance specifications because the methods for measuring and reporting them are not standardized across the industry. Usually motion testing, simulations, or other analysis techniques are used to verify performance for each application.

Reach is characterized by measuring the extent of the workspace described by the robot motion, and dexterity by the angular displacement of the individual joints. Some robots will have unusable spaces such as dead zones, singular poses, and wrist-wrap poses inside the boundaries of their reach.

Payload weight is specified by the manufacturers of all industrial robots. Some manufacturers also specify inertial loading for rotational wrist axes. It is common for the payload to be given for extreme velocity and reach conditions. Weight and inertia of all tooling, workpieces, cables, and hoses must be included as part of the payload.

Quickness is critical in determining throughput but difficult to determine from published robot specifications. Most manufacturers will specify a maximum speed of either individual joints or for a specific kinematic tool point. However, average speed in a working cycle is the quickness characteristic of interest.

Precision is usually characterized by measuring repeatability. Virtually all robot manufacturers specify static position repeatability. Accuracy is rarely specified, but it is likely to be at least four times larger than repeatability. Dynamic precision, or the repeatability and accuracy in tracking position, velocity, and acceleration over a continuous path, is not usually specified.

    71.4.2 Common Kinematic Configurations

All common commercial industrial robots are serial-link manipulators, usually with no more than six kinematically coupled axes of motion. By convention, the axes of motion are numbered in sequence as they are encountered from the base on out to the wrist. The first three axes account for the spatial positioning motion of the robot; their configuration determines the shape of the space through which the robot can be positioned. Any subsequent axes in the kinematic chain generally provide rotational motions to orient the end of the robot arm and are referred to as wrist axes. There are two primary types of motion that a robot axis can produce in its driven link: either revolute or prismatic. It is often useful to classify robots according to the orientation and type of their first three axes. There are four very common commercial robot configurations: articulated, type I selectively compliant assembly robot arm (SCARA), type II SCARA, and Cartesian. Two other configurations, cylindrical and spherical, are now much less common.

    71.4.2.1 Articulated Arms

The variety of commercial articulated arms, most of which have six axes, is very large (Figure 71.8). All of these robots' axes are revolute. The second and third axes are parallel and work together to produce motion in a vertical plane. The first axis in the base is vertical and revolves the arm to sweep out a large work volume. Many different types of drive mechanisms have been devised to allow wrist and forearm drive motors and gearboxes to be mounted close to the first and second axes of rotation, thus minimizing the extended mass of the arm. The workspace efficiency of well-designed articulated arms, which is the degree


    FIGURE 71.8 Articulated arm; six-axis arm grinding from a casting. (Courtesy of Staubli Unimation, Inc.)

of quick dexterous reach with respect to arm size, is unsurpassed by other arm configurations when five or more degrees of freedom are needed. A major limiting factor in articulated arm performance is that the second axis has to work to lift both the subsequent arm structure and the payload. Historically, articulated arms have not been capable of achieving accuracy as well as other arm configurations, as all axes have joint angle position errors which are multiplied by link radius and accumulated for the entire arm.

    71.4.2.2 Type I SCARA

The type I SCARA (selectively compliant assembly robot arm) uses two parallel revolute joints to produce motion in the horizontal plane (Figure 71.9). The arm structure is weight-bearing but the first and second axes do no lifting. The third axis of the type I SCARA provides work volume by adding a vertical or z axis. A fourth revolute axis will add rotation about the z axis to control orientation in the horizontal plane. This type of robot is rarely found with more than four axes. The type I SCARA is used extensively in the assembly of electronic components and devices, and it is used broadly for the assembly of small- and medium-sized mechanical assemblies.

    71.4.2.3 Type II SCARA

The type II SCARA, also a four-axis configuration, differs from the type I in that the first axis is a long vertical prismatic z stroke, which lifts the two parallel revolute axes and their links (Figure 71.10). For quickly moving heavier loads (over approximately 75 lb) over longer distances (more than about 3 ft), the type II SCARA configuration is more efficient than the type I.

    71.4.2.4 Cartesian Coordinate Robots

Cartesian coordinate robots use orthogonal prismatic axes, usually referred to as x, y, and z, to translate their end effector or payload through their rectangular workspace (Figure 71.11). One, two, or three revolute wrist axes may be added for orientation. Commercial robot companies supply several types of Cartesian coordinate robots with workspace sizes ranging from a few cubic inches to tens of thousands of cubic feet, and payloads ranging to several hundred pounds. Gantry robots, which have an elevated bridge structure, are the most common Cartesian style and are well suited to material handling applications where large areas and/or large loads must be serviced. They are particularly useful in applications such as arc welding, waterjet cutting, and inspection of large complex precision parts. Modular Cartesian robots are


FIGURE 71.9 Type I SCARA arm. High-precision, high-speed midsized SCARA I. (Courtesy of Adept Technologies, Inc.)

    FIGURE 71.10 Type II SCARA. (Courtesy of Adept Technologies, Inc.)


FIGURE 71.11 Cartesian robot. Three-axis robot constructed from modular single-axis motion modules. (Courtesy of Adept Technologies, Inc.)

also commonly available from several commercial sources. Each module is a self-contained, completely functional single-axis actuator; the modules may be custom assembled for special-purpose applications.

    71.4.2.5 Spherical and Cylindrical Coordinate Robots

The first two axes of the spherical coordinate robot are revolute and orthogonal to one another, and the third axis provides prismatic radial extension. The result is a natural spherical coordinate system with a spherical work volume. The first axis of cylindrical coordinate robots is a revolute base rotation. The second and third are prismatic, resulting in a natural cylindrical motion. Commercial models of spherical and cylindrical robots (Figure 71.12) were originally very common and popular in machine tending and material handling applications. Hundreds are still in use, but now there are only a few commercially available models. The decline in use of these two configurations is attributed to problems arising from use of the prismatic link for radial extension/retraction motion; a solid boom requires clearance to fully retract.

    71.4.3 Drive Types of Commercial Robots

The vast majority of commercial industrial robots use electric servomotor drives with speed-reducing transmissions. Both ac and dc motors are popular. Some servohydraulic articulated arm robots are available now for painting applications. It is rare to find robots with servopneumatic drive axes. All types of mechanical transmissions are used, but the tendency is toward low- and zero-backlash type drives. Some robots use direct drive methods to eliminate the amplification of inertia and mechanical backlash associated with other drives. Joint angle position sensors, required for real-time servo-level control, are generally considered an important part of the drive train. Less often, velocity feedback sensors are provided.

    71.4.4 Commercial Robot Controllers

Commercial robot controllers are specialized multiprocessor computing systems that provide four basic processes allowing integration of the robot into an automation system: motion trajectory generation and following, motion/process integration and sequencing, human user integration, and information integration.

    71.4.4.1 Motion Trajectory Generation and Following

There are two important controller-related aspects of industrial robot motion generation. One is the extent of manipulation that can be programmed, the other is the ability to execute controlled programmed motion. A unique aspect of each robot system is its real-time servo-level motion control. The details of real-time control are typically not revealed to the user due to safety and proprietary information secrecy reasons.



FIGURE 71.12 Spherical and cylindrical robots. (a) Hydraulic powered spherical robot. (Source: Courtesy of Kohol Systems, Inc. With permission.) (b) Cylindrical arm using scissor mechanism for radial prismatic motion. (Courtesy of Yamaha Robotics.)

Each robot controller, through its operating system programs, converts digital data from higher level coordinators into coordinated arm motion through precise computation and high-speed distribution and communication of the individual axis motion commands, which are executed by individual joint servocontrollers. Most commercial robot controllers operate at a sample period of 16 ms. The real-time motion controller invariably uses classical independent-joint proportional-integral-derivative (PID) control or simple modifications of PID. This makes commercially available controllers suitable for point-to-point motion, but most are not suitable for following continuous position/velocity profiles or exerting prescribed forces without considerable programming effort, if at all.

    71.4.4.2 Motion/Process Integration and Sequencing

Motion/process integration involves coordinating manipulator motion with process sensors or other process controller devices. The most primitive process integration is through discrete digital input/output (I/O). For example, a machine controller external to the robot controller might send a 1-bit signal indicating that it is ready to be loaded by the robot. The robot controller must have the ability to read the signal and to perform logical operations (if then, wait until, do until, etc.) using the signal. Coordination with sensors (e.g., vision) is also often provided.
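As an illustration of this wait-until logic, the following Python sketch (read_input, the channel numbering, and robot_load are hypothetical stubs, not a real controller API) polls a 1-bit ready signal before allowing a load job to proceed:

```python
# A sketch of primitive discrete I/O process integration: the robot
# controller gates a load job on a 1-bit "ready to load" input signal.

import time

def wait_until(read_input, channel, value, timeout=10.0, period=0.01):
    """Block until digital input `channel` reads `value` (wait-until
    logic); return False on timeout so an error can be raised upstream."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_input(channel) == value:
            return True
        time.sleep(period)
    return False

# Hypothetical usage: if the machine signals ready on input 3, load it.
# if wait_until(read_input, channel=3, value=1):
#     robot_load()
```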

    71.4.4.3 Human Integration

The controller's human interfaces are critical to the expeditious setup and programming of robot systems. Most robot controllers have two types of human interface available: computer-style CRT/keyboard terminals


for writing and editing program code off line, and teach pendants, which are portable manual input terminals used to command motion in a telerobotic fashion via touch keys or joysticks. Teach pendants are usually the most efficient means available for positioning the robot, and a memory in the controller makes it possible to play back the taught positions to execute motion trajectories. With practice, human operators can quickly teach a series of points, which are chained together in playback mode. Most robot applications currently depend on the integration of human expertise during the programming phase for the successful planning and coordination of robot motion. These interface mechanisms are effective in unobstructed workspaces where no changes occur between programming and execution. They do not allow human interface during execution or adaptation to changing environments.

    71.4.4.4 Information Integration

Information integration is becoming more important as the trend toward increasing flexibility and agility impacts robotics. Many commercial robot controllers now support information integration functions by employing integrated personal computer (PC) interfaces through the communications ports (e.g., RS-232), or in some cases through direct connections to the robot controller data bus.

    71.5 Robot Kinematics, Dynamics, and Servo-Level Control

In this section we shall study the kinematics, dynamics, and servocontrol of robot manipulators; for more details see Lewis et al. [1993]. The objective is to turn the manipulator, by proper design of the control system and trajectory generator, into an agent with desirable behaviors, which can then be selected by the job coordinator to perform specific jobs to achieve some assigned task. This agent, composed of the robot plus servo-level control system and trajectory generator, is the virtual robot in Figure 71.7; this philosophy goes along with the subsumption approach of Brooks (Figure 71.5).

    71.5.1 Kinematics and Jacobians

    71.5.1.1 Kinematics of Rigid Serial-Link Manipulators

The kinematics of the robot manipulator are concerned only with relative positioning and not with motion effects.

71.5.1.1.1 Link A Matrices

Fixed-base serial-link rigid robot manipulators can be considered as a sequence of joints held together by links. Each joint i has a joint variable $q_i$, which is an angle for revolute joints (units of degrees) and a length for prismatic or extensible joints (units of length). The joint vector of an n-link robot is defined as $q = [q_1\ q_2\ \cdots\ q_n]^T \in \mathbb{R}^n$; the joints are traditionally numbered from the base to the end effector, with link 0 being the fixed base. A robot with n joints has n degrees of freedom, so that for complete freedom of positioning and orientation in our 3-D space $\mathbb{R}^3$ one needs a six-link arm.

For analysis purposes, it is considered that to each link is affixed a coordinate frame. The base frame is attached to the manipulator base, link 0. The location of the coordinate frame on the link is often selected according to the Denavit–Hartenberg (DH) convention [Lewis et al. 1993]. The relation between the links is given by the A matrix for link i, which has the form

$$A_i(q_i) = \begin{bmatrix} R_i & p_i \\ 0 & 1 \end{bmatrix} \qquad (71.1)$$

where $R_i(q_i)$ is a 3 × 3 rotation matrix ($R_i^{-1} = R_i^T$) and $p_i(q_i) = [x_i\ y_i\ z_i]^T \in \mathbb{R}^3$ is a translation vector. $R_i$ specifies the rotation of the coordinate frame on link i with respect to the coordinate frame on link i − 1; $p_i$ specifies the translation of the coordinate frame on link i with respect to the coordinate frame on link i − 1. The 4 × 4 homogeneous transformation $A_i$ thus specifies completely the orientation and translation of link i with respect to link i − 1.


The A matrix $A_i(q_i)$ is a function of the joint variable, so that as $q_i$ changes with robot motion, $A_i$ changes correspondingly. $A_i$ is also dependent on the parameters link twist and link length, which are fixed for each link. The A matrices are often given for a specific robot in the manufacturer's handbook.

71.5.1.1.2 Robot T Matrix

The position of the end effector is given in terms of the base coordinate frame by the arm T matrix, defined as the concatenation of A matrices

$$T(q) = A_1(q_1)A_2(q_2)\cdots A_n(q_n) \equiv \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix} \qquad (71.2)$$

This 4 × 4 homogeneous transformation matrix is a function of the joint variable vector q. The 3 × 3 cumulative rotation matrix is given by $R(q) = R_1(q_1)R_2(q_2)\cdots R_n(q_n)$.
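As a worked example, the following Python sketch (a planar two-link special case with illustrative link lengths, not a general DH implementation) builds link A matrices in the form of Equation 71.1 and chains them into T(q) per Equation 71.2:

```python
# Forward kinematics for a planar two-link revolute arm: each A matrix
# rotates about z by the joint angle and translates along the rotated x
# axis by the link length (a planar special case of the DH convention).

import numpy as np

def a_matrix(theta, length):
    """4x4 homogeneous transform of one planar revolute link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, length * c],
                     [s,  c, 0, length * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def forward_kinematics(q, lengths):
    """T(q) = A1(q1) A2(q2) ... An(qn), Equation 71.2."""
    T = np.eye(4)
    for qi, ai in zip(q, lengths):
        T = T @ a_matrix(qi, ai)
    return T

T = forward_kinematics([np.pi / 4, -np.pi / 6], lengths=[0.5, 0.3])
p = T[:3, 3]    # Cartesian translation p(q) = [x y z]^T
R = T[:3, :3]   # cumulative rotation R(q)
```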

71.5.1.1.3 Joint Space vs. Cartesian Space

An n-link manipulator has n degrees of freedom, and the position of the end effector is completely fixed once the joint variables $q_i$ are prescribed. This position may be described either in joint coordinates or in Cartesian coordinates. The joint coordinates position of the end effector is simply given by the value of the n-vector q. The Cartesian position of the end effector is given in terms of the base frame by specifying the orientation and translation of a coordinate frame affixed to the end effector in terms of the base frame; this is exactly the meaning of T(q). That is, T(q) gives the Cartesian position of the end effector.

The Cartesian position of the end effector may be completely specified in our 3-D space by a six vector; three coordinates are needed for translation and three for orientation. The representation of Cartesian translation by the arm T(q) matrix is suitable, as it is simply given by $p(q) = [x\ y\ z]^T$. Unfortunately, the representation of Cartesian orientation by the arm T matrix is inefficient in that R(q) has nine elements. More efficient representations are given in terms of quaternions or the tool configuration vector.

71.5.1.1.4 Kinematics and Inverse Kinematics Problems

The robot kinematics problem is to determine the Cartesian position of the end effector once the joint variables are given. This is accomplished simply by computing T(q) for a given value of q.

The inverse kinematics problem is to determine the required joint angles $q_i$ to position the end effector at a prescribed Cartesian position. This corresponds to solving Equation 71.2 for $q \in \mathbb{R}^n$ given a desired orientation R and translation p of the end effector. This is not an easy problem, and may have more than one solution (e.g., think of picking up a coffee cup; one may reach with elbow up, elbow down, etc.). There are various efficient techniques for accomplishing this. One should avoid the functions arcsin and arccos, and use where possible the numerically well-conditioned arctan function.
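For the planar two-link arm of the previous example, the inverse kinematics can be solved in closed form. The sketch below (illustrative only, assuming that two-link geometry) exposes the elbow-up/elbow-down solution pair and uses the well-conditioned atan2 rather than arccos alone:

```python
# Closed-form inverse kinematics for a planar two-link arm with link
# lengths a1, a2: place the end effector at (x, y).

import numpy as np

def ik_two_link(x, y, a1, a2, elbow_up=True):
    """Return (q1, q2) reaching (x, y); raises if the target is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - a1 * a1 - a2 * a2) / (2 * a1 * a2)  # cos(q2), law of cosines
    if abs(c2) > 1:
        raise ValueError("target outside workspace")
    s2 = np.sqrt(1 - c2 * c2)
    if not elbow_up:
        s2 = -s2                                    # second solution branch
    q2 = np.arctan2(s2, c2)                         # atan2, not arccos
    q1 = np.arctan2(y, x) - np.arctan2(a2 * s2, a1 + a2 * c2)
    return q1, q2
```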

71.5.1.2 Robot Jacobians

71.5.1.2.1 Transformation of Velocity and Acceleration

When the manipulator moves, the joint variable becomes a function of time t. Suppose there is prescribed a generally nonlinear transformation from the joint variable $q(t) \in \mathbb{R}^n$ to another variable $y(t) \in \mathbb{R}^p$ given by

$$y(t) = h(q(t)) \qquad (71.3)$$

An example is provided by the equation y = T(q), where y(t) is the Cartesian position. Taking partial derivatives one obtains

$$\dot{y} = \frac{\partial h}{\partial q}\,\dot{q} \equiv J(q)\dot{q} \qquad (71.4)$$

where J(q) is the Jacobian associated with h(q). This equation tells how the joint velocities are transformed to the velocity $\dot{y}$.


If y = T(q), the Cartesian end effector position, then the associated Jacobian J(q) is known as the manipulator Jacobian. There are several techniques for efficiently computing this particular Jacobian; there are some complications arising from the fact that the representation of orientation in the homogeneous transformation T(q) is a 3 × 3 rotation matrix and not a three vector. If the arm has n links, then the Jacobian is a 6 × n matrix; if n is less than 6 (e.g., a SCARA arm), then J(q) is not square and there is not full positioning freedom of the end effector in 3-D space. The singularities of J(q) (where it loses rank) define the limits of the robot workspace; singularities may occur within the workspace for some arms.

Another example of interest is when y(t) is the position in a camera coordinate frame. Then J(q) reveals the relationships between manipulator joint velocities (e.g., joint incremental motions) and incremental motions in the camera image. This affords a technique, for instance, for moving the arm to cause desired relative motion of a camera and a workpiece. Note that, according to the velocity transformation 71.4, one has that incremental motions are transformed according to $\Delta y = J(q)\Delta q$.

    Differentiating Equation 71.4 one obtains the acceleration transformation

$$\ddot{y} = J\ddot{q} + \dot{J}\dot{q} \qquad (71.5)$$

71.5.1.2.2 Force Transformation

Using the notion of virtual work, it can be shown that forces in terms of q may be transformed to forces in terms of y using

$$\tau = J^T(q)F \qquad (71.6)$$

where $\tau(t)$ is the force in joint space (given as an n-vector of torques for a revolute robot), and F is the force vector in y space. If y is the Cartesian position, then F is a vector of three forces $[f_x\ f_y\ f_z]^T$ and three torques $[\tau_x\ \tau_y\ \tau_z]^T$. When J(q) loses rank, the arm cannot exert forces in all directions that may be specified.
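The velocity map of Equation 71.4 and the force map of Equation 71.6 are easy to exercise numerically. The sketch below (again the planar two-link arm, with its analytically derived Jacobian; all numbers are illustrative) computes both transformations:

```python
# Velocity and force transformations for the planar two-link arm:
# ydot = J(q) qdot (Eq. 71.4) and tau = J^T(q) F (Eq. 71.6).

import numpy as np

def jacobian_two_link(q, a1, a2):
    """Analytic Jacobian of the end-effector position (x, y)."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-a1 * s1 - a2 * s12, -a2 * s12],
                     [ a1 * c1 + a2 * c12,  a2 * c12]])

q = np.array([0.6, -0.4])
J = jacobian_two_link(q, a1=0.5, a2=0.3)

qdot = np.array([0.1, 0.2])
ydot = J @ qdot            # Cartesian end-effector velocity, Eq. 71.4

F = np.array([0.0, 5.0])   # 5 N upward at the end effector
tau = J.T @ F              # joint torques required to exert F, Eq. 71.6
```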

    71.5.2 Robot Dynamics and Properties

The robot dynamics considers motion effects due to the control inputs and inertias, Coriolis forces, gravity, disturbances, and other effects. It reveals the relation between the control inputs and the joint variable motion q(t), which is required for the purpose of servocontrol system design.

    71.5.2.1 Robot Dynamics

The dynamics of a rigid robot arm with joint variable $q(t) \in \mathbb{R}^n$ are given by

$$M(q)\ddot{q} + V_m(q, \dot{q})\dot{q} + F(q, \dot{q}) + G(q) + \tau_d = \tau \qquad (71.7)$$

where M is an inertia matrix, $V_m$ is a matrix of Coriolis and centripetal terms, F is a friction vector, G is a gravity vector, and $\tau_d$ is a vector of disturbances. The n-vector $\tau(t)$ is the control input. The dynamics for a specific robot arm are not usually given in the manufacturer specifications, but may be computed from the kinematics A matrices using principles of Lagrangian mechanics.

The dynamics of any actuators can be included in the robot dynamics. For instance, the electric or hydraulic motors that move the joints can be included, along with any gearing. Then, as long as the gearing and drive shafts are noncompliant, the equation with arm-plus-actuator dynamics has the same form as Equation 71.7. If the actuators are not included, the control $\tau$ is a torque input vector for the joints. If joint dynamics are included, then $\tau$ might be, for example, a vector of voltage inputs to the joint actuator motors.

The dynamics may be expressed in Cartesian coordinates. The Cartesian dynamics have the same form as Equation 71.7, but appearances there of q(t) are replaced by the Cartesian position y(t). The matrices are modified, with the manipulator Jacobian J(q) becoming involved. In the Cartesian dynamics, the control input is a six vector of forces: three linear forces and three torques.


    71.5.2.2 Robot Dynamics Properties

Being a Lagrangian system, the robot dynamics satisfy many physical properties that can be used to simplify the design of servo-level controllers. For instance, the inertia matrix M(q) is symmetric positive definite, and bounded above and below by some known bounds. The gravity terms are bounded above by known bounds. The Coriolis/centripetal matrix $V_m$ is linear in $\dot{q}$, and is bounded above by known bounds. An important property is the skew-symmetric property of rigid-link robot arms, which says that the matrix $(\dot{M} - 2V_m)$ is always skew symmetric.

This is a statement of the fact that the fictitious forces do no work, and is related in an intimate fashion to the passivity properties of Lagrangian systems, which can be used to simplify control system design. Ignoring passivity can lead to unacceptable servocontrol system design and serious degradations in performance, especially in teleoperation systems with transmission delays.

    71.5.2.3 State-Space Formulations and Computer Simulation

Many commercially available controls design software packages, including MATLAB, allow the simulation of state-space systems of the form $\dot{x} = f(x, u)$ using, for instance, Runge–Kutta integration. The robot dynamics can be written in state-space form in several different ways. One state-space formulation is the position/velocity form

$$\begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= -M^{-1}(x_1)[V_m(x_1, x_2)x_2 + F(x_1, x_2) + G(x_1) + \tau_d] + M^{-1}(x_1)\tau \end{aligned} \qquad (71.8)$$

where the control input is $u = M^{-1}(x_1)\tau$, and the state is $x = [x_1^T\ x_2^T]^T$, with $x_1 = q$ and $x_2 = \dot{q}$ both n-vectors. In computation, one should not invert M(q); one should either obtain an analytic expression for $M^{-1}$ or use least-squares techniques to determine $\dot{x}_2$.
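A minimal simulation sketch in this spirit is given below, using the standard two-link planar arm with point masses at the link tips as the example dynamics (the inertia, Coriolis, and gravity expressions are textbook results for that arm; the masses, lengths, and zero input torque are illustrative). Note that M(q) is never inverted explicitly; a linear solve is used at each evaluation:

```python
# Runge-Kutta simulation of the position/velocity form (71.8) for a
# two-link planar arm with unit point masses at the link tips.

import numpy as np
from scipy.integrate import solve_ivp

a1 = a2 = 1.0   # link lengths [m]
m1 = m2 = 1.0   # point masses at link tips [kg]
g = 9.81

def dynamics(t, x, tau):
    q, qd = x[:2], x[2:]
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[(m1 + m2) * a1**2 + m2 * a2**2 + 2 * m2 * a1 * a2 * c2,
                   m2 * a2**2 + m2 * a1 * a2 * c2],
                  [m2 * a2**2 + m2 * a1 * a2 * c2,
                   m2 * a2**2]])
    V = np.array([-m2 * a1 * a2 * s2 * (2 * qd[0] * qd[1] + qd[1]**2),
                   m2 * a1 * a2 * s2 * qd[0]**2])        # V_m(q, qd) qd
    G = np.array([(m1 + m2) * g * a1 * np.cos(q[0])
                  + m2 * g * a2 * np.cos(q[0] + q[1]),
                  m2 * g * a2 * np.cos(q[0] + q[1])])
    qdd = np.linalg.solve(M, tau - V - G)  # never form M^{-1} explicitly
    return np.concatenate([qd, qdd])

x0 = np.array([0.1, 0.0, 0.0, 0.0])        # [q1 q2 qd1 qd2]
sol = solve_ivp(dynamics, (0.0, 2.0), x0, args=(np.zeros(2),), rtol=1e-8)
```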

    71.5.3 Robot Servo-level Motion Control

The objective in robot servo-level motion control is to cause the manipulator end effector to follow a prescribed trajectory. This can be accomplished as follows for any system having the dynamics Equation 71.7, including robots, robots with actuators included, and robots with motion described in Cartesian coordinates. Generally, design is accomplished for robots including actuators, but with motion described in joint space. In this case, first, solve the inverse kinematics problem to convert the desired end effector motion $y_d(t)$ (usually specified in Cartesian coordinates) into a desired joint-space trajectory $q_d(t) \in \mathbb{R}^n$ (discussed subsequently). Then, to achieve tracking motion so that the actual joint variables q(t) follow the prescribed trajectory $q_d(t)$, define the tracking error e(t) and filtered tracking error r(t) as

$$e(t) = q_d(t) - q(t) \qquad (71.9)$$
$$r(t) = \dot{e} + \Lambda e(t) \qquad (71.10)$$

with $\Lambda$ a positive definite design parameter matrix; it is common to select $\Lambda$ diagonal with positive elements.

    71.5.3.1 Computed Torque Control

One may differentiate Equation 71.10 to write the robot dynamics Equation 71.7 in terms of the filtered tracking error as

$$M\dot{r} = -V_m r + f(x) + \tau_d - \tau \qquad (71.11)$$

where the nonlinear robot function is given by

$$f(x) = M(q)(\ddot{q}_d + \Lambda\dot{e}) + V_m(q, \dot{q})(\dot{q}_d + \Lambda e) + F(q, \dot{q}) + G(q) \qquad (71.12)$$


    FIGURE 71.13 Robot servo-level tracking controller.

Vector x contains all of the time signals needed to compute $f(\cdot)$, and may be defined for instance as $x \equiv [e^T\ \dot{e}^T\ q_d^T\ \dot{q}_d^T\ \ddot{q}_d^T]^T$. It is important to note that f(x) contains all the potentially unknown robot arm parameters, including payload masses, friction coefficients, and Coriolis/centripetal terms that may simply be too complicated to compute.

    A general sort of servo-level tracking controller is now obtained by selecting the control input as

$$\tau = \hat{f} + K_v r - v(t) \qquad (71.13)$$

with $\hat{f}$ an estimate of the nonlinear terms f(x), $K_v r = K_v\dot{e} + K_v\Lambda e$ an outer proportional-plus-derivative (PD) tracking loop, and v(t) an auxiliary signal to provide robustness in the face of disturbances and modeling errors. The multiloop control structure implied by this scheme is shown in Figure 71.13. The nonlinear inner loop that computes $\hat{f}(x)$ provides feedforward compensation terms that improve the tracking capabilities of the PD outer loop, including an acceleration feedforward term $M(q)\ddot{q}_d$, friction compensation $F(q, \dot{q})$, and a gravity compensation term G(q).

This controller is a variant of computed-torque control, since the torque required for trajectory following is computed in terms of the tracking error and the additional nonlinear robot terms in f(x). An integrator may be added in the outer tracking loop to ensure zero steady-state error, obtaining a PID outer loop.
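A sketch of the control law of Equation 71.13 is given below (the gains are illustrative, the robustifying term v(t) is omitted, friction F is neglected, and model is a hypothetical user-supplied routine returning estimates of M, the Coriolis/centripetal matrix V_m, and G):

```python
# One evaluation of the computed-torque variant (71.13), with the
# nonlinear inner-loop term f_hat built per Equation 71.12.

import numpy as np

Lam = np.diag([5.0, 5.0])    # Lambda: filtered-error design matrix
Kv = np.diag([20.0, 20.0])   # outer PD gain

def computed_torque(q, qd, q_des, qd_des, qdd_des, model):
    """model(q, qd) -> (M, Vm, G) estimates, with Vm the
    Coriolis/centripetal matrix; friction is neglected here."""
    e, edot = q_des - q, qd_des - qd
    r = edot + Lam @ e                       # filtered tracking error (71.10)
    M, Vm, G = model(q, qd)
    # f_hat of Eq. 71.12: inertia/Coriolis feedforward plus gravity comp.
    f_hat = M @ (qdd_des + Lam @ edot) + Vm @ (qd_des + Lam @ e) + G
    return f_hat + Kv @ r                    # Eq. 71.13 with v(t) = 0
```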

    71.5.3.2 Commercial Robot Controllers

Commercial robot controllers do not implement the entire computed-torque law. Most available controllers simply use a PD or PID control loop around each joint, dispensing entirely with the inner nonlinear compensation loop $\hat{f}(x)$. It is not clear exactly what is going on in most commercially available controllers, as they are proprietary and the user has no way to modify the joint tracking loops. However, in some controllers (e.g., Adept Hyperdrive), there appears to be some inner-loop compensation, where some of the terms in f(x) are included in $\tau(t)$. For instance, acceleration feedforward may be included. To implement nonlinear feedback terms that are not already built in on commercial controllers, it is usually necessary to perform hardware modifications of the controller.

    71.5.3.3 Adaptive and Robust Control

There are by now many advanced control techniques for robot manipulators that either estimate the nonlinear robot function or compensate otherwise for uncertainties in f(x). In adaptive control the


estimate $\hat{f}$ of the nonlinear terms is updated online in real time using additional internal controller dynamics, and in robust control the robustifying signal v(t) is selected to overbound the system modeling uncertainties. In learning control, the nonlinearity correction term is improved over each repetition of the trajectory using the tracking error over the previous repetition (this is useful in repetitive motion applications, including spray painting). Neural networks (NN) or fuzzy logic (FL) systems can be used in the inner control loop to manufacture the nonlinear estimate $\hat{f}(x)$ [Lewis et al. 1995]. Since both NN and FL systems have a universal approximation property, the restrictive linear-in-the-parameters assumption required in standard adaptive control techniques is not needed, and no regression matrix need be computed. FL systems may also be used in the outer PID tracking loop to provide additional robustness.

Though these advanced techniques significantly improve the tracking performance of robot manipulators, they cannot be implemented on existing commercial robot controllers without hardware modifications.

    71.5.4 Robot Force/Torque Servocontrol

In many industrial applications it is desired for the robot to exert a prescribed force normal to a given surface while following a prescribed motion trajectory tangential to the surface. This is the case in surface finishing, etc. A hybrid position/force controller can be designed by extension of the principles just presented.

    The robot dynamics with environmental contact can be described by

$$M(q)\ddot{q} + V_m(q, \dot{q})\dot{q} + F(q, \dot{q}) + G(q) + \tau_d = \tau + J^T(q)\lambda \qquad (71.14)$$

where J(q) is a constraint Jacobian matrix associated with the contact surface geometry and $\lambda$ (the so-called Lagrange multiplier) is a vector of contact forces exerted normal to the surface, described in coordinates relative to the surface.

The hybrid position/force control problem is to follow a prescribed motion trajectory $q_{1d}(t)$ tangential to the surface while exerting a prescribed contact force $\lambda_d(t)$ normal to the surface. Define the filtered motion error $r_m = \dot{e}_m + \Lambda e_m$, where $e_m = q_{1d} - q_1$ represents the motion error in the plane of the surface and $\Lambda$ is a positive diagonal design matrix. Define the force error as $\tilde{\lambda} = \lambda_d - \lambda$, where $\lambda(t)$ is the normal force measured in a coordinate frame attached to the surface. Then a hybrid position/force controller has the structure

$$\tau = \hat{f} + K_v L(q_1) r_m + J^T[\lambda_d + K_f \tilde{\lambda}] - v \qquad (71.15)$$

where $\hat{f}$ is an estimate of the nonlinear robot function 71.12 and $L(\cdot)$ is an extended Jacobian determined from the surface geometry using the implicit function theorem.

This controller has the basic structure of Figure 71.13, but with an additional inner force control loop. In the hybrid position/force controller, the nonlinear function estimate inner loop $\hat{f}$ and the robustifying term v(t) can be selected using adaptive, robust, learning, neural, or fuzzy techniques. A simplified controller that may work in some applications is obtained by setting $\hat{f} = 0$, v(t) = 0, and increasing the PD motion gains $K_v\Lambda$ and $K_v$ and the force gain $K_f$.
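The simplified version is easy to sketch. The following hedged Python fragment implements Equation 71.15 with $\hat{f} = 0$ and v = 0; the matrices L and J must really come from the surface geometry, and the identity-like placeholders, gains, and dimensions below are purely illustrative:

```python
# Simplified hybrid position/force law: PD action on the filtered motion
# error in the tangential direction plus force feedback in the normal
# direction, for a two-joint arm against a surface.

import numpy as np

Lam = np.diag([4.0])     # motion-error filter gain (tangential, 1-D here)
Kv = np.diag([30.0, 30.0])  # PD motion gain (one entry per joint)
Kf = 0.5                 # force feedback gain

def hybrid_pf(em, emdot, lam, lam_d, L, J):
    """em: tangential motion error; lam: measured normal force."""
    rm = emdot + Lam @ em            # filtered motion error
    lam_err = lam_d - lam            # force error, lambda_tilde
    return Kv @ (L @ rm) + J.T @ (lam_d + Kf * lam_err)  # f_hat = v = 0

# Placeholder geometry: tangential motion maps to joint 1, normal force
# to joint 2 (in reality L and J come from the constraint surface).
L = np.array([[1.0], [0.0]])
J = np.array([[0.0, 1.0]])
tau = hybrid_pf(em=np.array([0.01]), emdot=np.array([0.0]),
                lam=np.array([4.0]), lam_d=np.array([5.0]), L=L, J=J)
```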

It is generally not possible to implement force control on existing commercial robot controllers without hardware modification and extensive low-level programming.

    71.5.5 Motion Trajectory Generation

In the section on servo-level motion control it was shown how to design real-time servo-level control loops for the robot joint actuators to cause the manipulator to follow a prescribed joint-space trajectory $q_d(t)$ and, if required by the job, to exert forces normal to a surface specified by a prescribed force trajectory $\lambda_d(t)$. Unfortunately, the higher level path planner and job coordinator in Figure 71.7 do not specify the position and force trajectories in the detail required by the servo-level controllers. Most commercial robot controllers operate at a sampling period of 16 ms, so that they require specific desired motion trajectories $q_d(t)$ sampled every 16 ms. On the other hand, the path planner wishes to be concerned at the


level of abstraction only with general path descriptions sufficient to avoid obstacles or accomplish desired high-level jobs (e.g., move to prescribed final position, then insert pin in hole).

71.5.5.1 Path Transformation and Trajectory Interpolation

71.5.5.1.1 Joint Space vs. Cartesian Space Prescribed Trajectories

The job coordinator in Figure 71.7 passes required path-following commands to the virtual robot in the form of discrete events to be accomplished, which could be in the form of commands to "move to a specified final position passing through prescribed via points." These prescribed path via points are given in Cartesian coordinates y, are usually not regularly spaced in time, and may or may not have required times of transit associated with them. Via points are given in the form $(y_i, \dot{y}_i, t_i)$, with $y_i$ the required Cartesian position at point i and $\dot{y}_i$ the required velocity. The time of transit $t_i$ may or may not be specified. The irregularly spaced Cartesian-space via points must be interpolated to produce joint-space trajectory points regularly spaced at every sampling instant, often every 16 ms. It should be clearly understood that the path and the joint trajectory are both prescribed for each coordinate: the path for three Cartesian position coordinates and three Cartesian orientation coordinates, and the trajectory for each of the n manipulator joints. If n is not equal to 6, there could be problems in that the manipulator might not be able to exactly reach the prescribed via points. Thus, in its planning process the path planner must take into account the limitations of the individual robots.

Two procedures may be used to convert prescribed Cartesian path via points into desired joint-space trajectory points specified every 16 ms. One may either: (1) use the arm inverse kinematics to compute the via points in joint-space coordinates and then perform trajectory interpolation in joint space, or (2) perform interpolation on the via points to obtain a Cartesian trajectory specified every 16 ms, and then perform the inverse kinematics transformation to yield the joint-space trajectory $q_d(t)$ for the servo-level controller. The main disadvantage of the latter procedure is that the full inverse kinematics transformation must be performed every 16 ms. The main disadvantage of the former procedure is that interpolation in joint space often has strange effects, such as unexpected motions or curvilinear swings when viewed from the point of view of Cartesian space; one should recall that the path planner selects via points in Cartesian space, e.g., to avoid obstacles, often assuming linear Cartesian motion between the via points. The latter problem may be mitigated by spacing the Cartesian path via points more closely together. Thus, procedure 1 is usually selected in robotic workcell applications.

71.5.5.1.2 Trajectory Interpolation

A trajectory specified in terms of via points, either in joint space or Cartesian space, may be interpolated to obtain connecting points every 16 ms by many techniques, including interpolation by cubic polynomials, second- or third-order splines, minimum-time techniques, etc. The interpolation must be performed separately for each coordinate of the trajectory (e.g., n interpolations if done in joint space). Cubic interpolation is not recommended, as it can result in unexpected swings or overshoots in the computed trajectory.

The most popular technique for trajectory interpolation may be linear functions with parabolic blends (LFPB). Let us assume that the path via points are specified in joint space, so that the inverse kinematics transformation from the Cartesian path via points obtained from the path planner has already been performed. Then, the path is specified in terms of the via points $(q(t_i), \dot{q}(t_i), t_i)$; note that the time of transit $t_i$ of point i is specified; the transit times need not be uniformly spaced. Within each path segment connecting two via points, one uses constant acceleration or deceleration to obtain the required transit velocity, then zero acceleration during the transit, then constant acceleration or deceleration to obtain the prescribed final position and velocity at the next via point. Sample LFPB trajectories are given in Figure 71.14. Note that LFPB results in quadratic motion, followed by linear motion, followed by quadratic motion. The maximum acceleration/deceleration is selected taking into account the joint actuator torque limits.

There are standard formulas available to compute the LFPB trajectory passing through two prescribed via points, for instance the following. In Figure 71.14 two design parameters are selected: the blend time t_b and the maximum velocity v_M. Then the joint-space trajectory passing through via points i and (i + 1),


    FIGURE 71.14 LFPB trajectory: (a) acceleration profile, (b) velocity profile, and (c) position profile.

    shown in Figure 71.14(c), is given by

$$
q_d(t) =
\begin{cases}
a + (t - t_i)\,b + (t - t_i)^2 c, & t_i \le t < t_i + t_b \\
d + v_M t, & t_i + t_b \le t < t_{i+1} - t_b \\
e + (t - t_{i+1})\,f + (t - t_{i+1})^2 g, & t_{i+1} - t_b \le t < t_{i+1}
\end{cases}
\tag{71.16}
$$

It is not difficult to determine that the coefficients required to pass through the ith and (i + 1)st via points are given by

$$
\begin{aligned}
a &= q(t_i), \qquad b = \dot{q}(t_i), \qquad c = \frac{v_M - \dot{q}(t_i)}{2 t_b}, \\
d &= \frac{q(t_i) + q(t_{i+1}) - v_M t_{i+1}}{2}, \\
e &= q(t_{i+1}), \qquad f = \dot{q}(t_{i+1}), \\
g &= \frac{v_M t_{i+1} + q(t_i) - q(t_{i+1}) + 2 t_b\,[\dot{q}(t_{i+1}) - v_M]}{2 t_b^2}.
\end{aligned}
\tag{71.17}
$$

One must realize that this interpolation must be performed for each of the n joints of the robot. Then, the resulting trajectory n-vector is passed as a prescribed trajectory to the servo-level controller, which functions as described in the Robot Servo-level Motion Control subsection to cause trajectory-following arm motion.
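For concreteness, Equations 71.16 and 71.17 translate directly into code. The sketch below (illustrative Python, not vendor controller code) evaluates the LFPB trajectory for a single joint over one segment; it measures time from the segment start and, for the default zero via velocities, chooses v_M so that the three pieces join continuously:

```python
def lfpb_segment(q0, q1, t0, t1, tb, qd0=0.0, qd1=0.0, v_max=None):
    """LFPB trajectory through via points (q0, t0) and (q1, t1), Eq. 71.16-71.17.

    Returns qd(t) for one joint. Assumes tb < (t1 - t0)/2 and zero via
    velocities by default; v_max is then fixed by the segment length so
    that the three pieces join continuously.
    """
    T = t1 - t0
    if v_max is None:
        v_max = (q1 - q0) / (T - tb)   # consistent choice for qd0 = qd1 = 0
    # Coefficients of Eq. 71.17, with times measured from the segment start
    a, b = q0, qd0
    c = (v_max - qd0) / (2.0 * tb)
    d = (q0 + q1 - v_max * T) / 2.0
    e, f = q1, qd1
    g = (v_max * T + q0 - q1 + 2.0 * tb * (qd1 - v_max)) / (2.0 * tb ** 2)

    def qd(t):
        s = t - t0                      # time from segment start
        if s < tb:                      # parabolic blend (accelerate)
            return a + s * b + s * s * c
        elif s < T - tb:                # linear segment at v_max
            return d + v_max * s
        else:                           # parabolic blend (decelerate)
            return e + (s - T) * f + (s - T) ** 2 * g
    return qd

# Sample the trajectory for one joint every 16 ms:
qd = lfpb_segment(q0=0.0, q1=1.0, t0=0.0, t1=2.0, tb=0.25)
setpoints = [qd(0.016 * k) for k in range(int(2.0 / 0.016))]
```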


    71.5.5.2 Types of Trajectories and Limitations of Commercial Robot Controllers

The two basic types of trajectories of interest are motion trajectories and force trajectories. Motion specifications can be either in terms of motion from one prescribed point to another, or in terms of following a prescribed position/velocity/acceleration motion profile (e.g., spray painting).

In robotic assembly tasks point-to-point motion is usually used, without prescribing any required transit time. Such motion can be programmed with commercially available controllers using standard robot programming languages (Section 71.12). Alternatively, via points can usually be taught using a telerobotic teach pendant operated by the user (Section 71.11); the robot memorizes the via points, and effectively plays them back in operational mode. A speed parameter may be set prior to the motion that tells the robot whether to move more slowly or more quickly. Trajectory interpolation is automatically performed by the robot controller, which then executes PD or PID control at the joint servocontrol level to cause the desired motion. This is by far the most common form of robot motion control.

In point-to-point motion control the commercial robot controller performs trajectory interpolation and joint-level PD servocontrol. All of this is transparent to the user. Generally, it is very difficult to modify any stage of this process since the internal controller workings are proprietary, and the controller hardware does not support more exotic trajectory interpolation or servo-level control schemes. Although some robots now support following of prescribed position/velocity/acceleration profiles, it is generally extremely difficult to program them to do so, and especially to modify the paths once programmed. Various tricks must be used, such as specifying the Cartesian via points (y_i, ẏ_i, t_i) in very fine time increments, and computing t_i such that the desired acceleration is produced.
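A sketch of the via-point trick just described is given below (illustrative Python; `speed` is an assumed user-supplied profile, strictly positive along the segment). Finely spaced via points are laid along a straight Cartesian segment, and each transit time t_i is computed by integrating the reciprocal of the desired speed:

```python
import numpy as np

def vias_for_speed_profile(y_start, y_end, speed, n_points=200):
    """Generate finely spaced Cartesian via points (y_i, t_i) along a straight
    line so that the transit times t_i realize a prescribed speed profile.

    `speed(s)` gives the desired path speed as a function of arc length s and
    must be strictly positive along the segment.
    """
    y_start, y_end = np.asarray(y_start, float), np.asarray(y_end, float)
    length = np.linalg.norm(y_end - y_start)
    s_vals = np.linspace(0.0, length, n_points)
    # t_i = integral of ds / v(s), accumulated with the trapezoidal rule
    inv_v = np.array([1.0 / speed(s) for s in s_vals])
    ds = length / (n_points - 1)
    t_vals = np.concatenate(
        ([0.0], np.cumsum((inv_v[1:] + inv_v[:-1]) * ds / 2.0)))
    return [(y_start + (s / length) * (y_end - y_start), t)
            for s, t in zip(s_vals, t_vals)]
```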

The situation is even worse for force control, where additional sensors must be added to sense forces (e.g., a wrist force-torque sensor, see Section 71.7), kinematic computations based on the given surface must be performed to decompose the tangential motion control directions from the normal force control directions, and then very tedious low-level programming must be performed. Changes in the surface or the desired motion or force profiles require time-consuming reprogramming. In most available robot controllers, hardware modifications are required.

    71.6 End Effectors and End-of-Arm Tooling

End effectors and end-of-arm tooling are the devices through which the robot manipulator interacts with the world around it, grasping and manipulating parts, inspecting surfaces, and so on [Wright and Cutkosky 1985]. End effectors should not be considered as accessories, but as a major component in any workcell; proper selection and/or design of end effectors can make the difference between success and failure in many process applications, particularly when one includes reliability, efficiency, and economic factors. End effectors consist of the fingers, the gripper, and the wrist. They can be either standard commercially available mechanisms or specially designed tools, or can be complex systems in themselves (e.g., welding tools or dextrous hands). Sensors can be incorporated in the fingers, the gripper mechanism, or the wrist mechanism. All end effectors, end-of-arm tooling, and supply hoses and cables (electrical, pneumatic, etc.) must be taken into account when checking against the manufacturer's manipulator payload weight limits.

    71.6.1 Part Fixtures and Robot Tooling

In most applications the end effector design problem should not be decoupled from the part fixturing design problem. One should consider the wrist, gripper, fingers, and part fixturing as a single system. Integrated design can often yield innovative solutions to otherwise intractable problems; nonintegrated design can often lead to unforeseen problems and unexpected failure modes. Coordinated design of fixtures and end effectors can often avoid the use of high-level expensive sensors (e.g., vision) and/or complex feedback control systems that require overall coordinated control of the robot arm motion, the gripper action, and the part pose. An ideal example of a device that allows simplified control strategies is the remote-center-of-compliance (RCC) wrist in Figure 71.17(b), if correctly used.



FIGURE 71.15 Angular and parallel motion robot grippers: (a) angular motion gripper and (b) parallel motion gripper, open and closed. (Courtesy of Robo-Tech Systems, Gastonia, NC.)


FIGURE 71.16 Robot grippers: (a) center seeking gripper showing part contact by first finger and final closure by second finger and (b) Versagrip III adjustable three-finger gripper. (Courtesy of Robo-Tech Systems, Gastonia, NC.)

    71.6.2 Grippers and Fingers

Commercial catalogs usually allow one to purchase end effector components separately, including fingers, grippers, and wrists. Grippers can be actuated either pneumatically or using servomotors. Pneumatic actuation is usually either open or closed, corresponding to a binary command to turn the air pressure either off or on. Grippers often lock into place when the fingers are closed to offer failsafe action if air pressure fails. Servomotors often require analog commands and are used when finer gripper control is required. Available gripping forces span a wide range up to several hundred pounds force.

    71.6.2.1 Gripper Mechanisms

Angular motion grippers, see Figure 71.15a, are inexpensive devices allowing grasping of parts either externally or internally (e.g., fingers insert into a tube and the gripper presses them outward). The fingers can often open or close by 90°. These devices are useful for simple pick-and-place operations. In electronic assembly or tasks where precise part location is needed, it is often necessary to use parallel grippers, see Figure 71.15b, where the finger actuation affords exactly parallel closing motion. Parallel grippers generally have a far smaller range of fingertip motion than angular grippers (e.g., less than 1 in). In some cases, such as electronic assembly of parts positioned by wires, one requires center seeking grippers, see Figure 71.16a, where the fingers are closed until one finger contacts the part, then that finger stops and the other finger closes until the part is grasped.


Many grippers with advanced special-purpose mechanisms are available, including Robo-Tech's Versagrip III shown in Figure 71.16b, a 3-fingered gripper whose fingers can be rotated about a longitudinal axis to offer a wide variety of 3-fingered grasps depending on the application and part geometry. Finger rotation is effected using a fine motion servomotor that can be adjusted as the robot arm moves.

The gripper and/or finger tips can have a wide variety of sensors including binary part presence detectors, binary closure detectors, analog finger position sensors, contact force sensors, temperature sensors, and so on (Section 71.7).

    71.6.2.2 The Grasping Problem and Fingers

The study of the multifinger grasping problem is a highly technical area using mathematical and mechanical engineering analysis techniques such as rolling/slipping concepts, friction studies, force balance and center of gravity studies, etc. [Pertin-Trocaz 1989]. These ideas may be used to determine the required gripper mechanisms, number of fingers, and finger shapes for a specific application. Fingers are usually specially designed for particular applications, and may be custom ordered from end-effector supply houses. Improper design and selection of fingers can doom an application of an expensive robotic system to failure. By contrast, innovative finger and contact tip designs can solve difficult manipulation and grasping problems and greatly increase automation reliability, efficiency, and economic return.

Fingers should not be thought of as being restricted to anthropomorphic forms. They can have vacuum contact tips for grasping smooth fragile surfaces (e.g., auto windshields), electromagnetic tips for handling small ferrous parts, compliant bladders or wraparound air bladders for odd-shaped or slippery parts, Bernoulli effect suction for thin fragile silicon wafers, or membranes covering a powder to distribute contact forces for irregular soft fragile parts [Wright and Cutkosky 1985].

Multipurpose grippers are advantageous in that a single end effector can perform multiple tasks. Some multipurpose devices are commercially available; they are generally expensive. The ideal multipurpose end effector is the anthropomorphic dextrous hand. Several dextrous robot hands are now available and afford potential applications in processes requiring active manipulation of parts or handling of many sorts of tooling. Currently, they are generally restricted to research laboratories since the problems associated with their expense, control, and coordination are not yet completely and reliably solved.

    71.6.3 Robot Wrist Mechanisms

Wrist mechanisms couple the gripper to the robot arm, and can perform many functions. Commercial adapter plates allow wrists to be mounted to any commercially available robot arm. As an alternative to expensive multipurpose grippers, quick change wrists allow end effectors to be changed quickly during an application, and include quick disconnect couplings for mechanical, electrical, pneumatic, and other connections. Using a quick change wrist, required tools can be selected from a magazine of available tools/end effectors located at the workcell. If fewer tools are needed, an alternative is provided by inexpensive pivot gripper wrists, such as the 2-gripper-pivot device shown in Figure 71.17a, which allows one of two grippers to be rotated into play. With this device, one gripper can unload a machine while the second gripper subsequently loads a new blank into the machine. Other rotary gripper wrists allow one of several (up to six or more) grippers to be rotated into play. With these wrists, the grippers are mounted in parallel and rotate much like the chamber of an old-fashioned western Colt 45 revolver; they are suitable if the grippers will not physically interfere with each other in such a parallel configuration.

Safety wrists automatically deflect, sending a fault signal to the machine or job coordinator, if the end-of-arm tooling collides with a rigid obstacle. They may be reset automatically when the obstacle is removed.

Part positioning errors frequently occur due to robot end effector positioning errors, part variations, machine location errors, or manipulator repeatability errors. It is unreasonable and expensive to require the robot joint controller to compensate exactly for such errors. Compliant wrists offset positioning errors to a large extent by allowing small passive part motions in response to forces or torques exerted on the



FIGURE 71.17 Robot wrists. (a) Pivot gripper wrist. (Courtesy of Robo-Tech Systems, Gastonia, NC.) (b) Remote-center-of-compliance (RCC) wrist. (Courtesy of Lord Corporation, Erie, PA.)

part. An example is pin insertion, where small positioning errors can result in pin breakage or other failures, and compensation by gross robot arm motions requires sophisticated (e.g., expensive) force-torque sensors and advanced (e.g., expensive) closed-loop feedback force control techniques. The compliant wrist allows the pin to effectively adjust its own position in response to sidewall forces so that it slides into the hole. A particularly effective device is the remote-center-of-compliance (RCC) wrist, Figure 71.17b, where the rotation point of the wrist can be adjusted to correspond, e.g., to the part contact point [Groover et al. 1986]. Compliant wrists allow successful assembly where vision or other expensive sensors would otherwise be needed.

The wrist can contain a wide variety of sensors, with possibly the most important class being the wrist force-torque sensors (Section 71.7), which are quite expensive. A general rule-of-thumb is that, for economic and control complexity reasons, robotic force/torque sensing and control should be performed at the lowest possible level; e.g., fingertip sensors can often provide sufficient force information for most applications, with an RCC wrist compensating for position inaccuracies between the fingers and the parts.

    71.6.4 Robot/Tooling Process Integration and Coordination

Many processes require the design of sophisticated end-of-arm tooling. Examples include spray painting guns, welding tools, multipurpose end effectors, and so on. Indeed, in some processes the complexity of the tooling can rival or exceed the complexity of the robot arm that positions it. Successful coordination and sequencing of the robot manipulator, the end effector, and the end-of-arm tooling calls for a variety of considerations at several levels of abstraction in Figure 71.7.

There are two philosophically distinct points of view that may be used in considering the robot manipulator plus its end-of-arm tooling. In the first, the robot plus tooling is viewed as a single virtual agent to be assigned by an upper-level organizer/manager and commanded by a midlevel job coordinator. In this situation, all machine-level robot/tool coordination may be performed by the internal virtual robot machine coordinator shown in Figure 71.7. This point of view is natural when the robot must perform sophisticated trajectory motion during the task and the tool is unintelligent, such as in pick-and-place operations, surface finishing, and grinding. In such situations, the end effector is often controlled by simple binary on/off or open/close commands through digital input/output signals from the machine coordinator. Many commercially available robot controllers allow such communications and support coordination through their programming languages (Section 71.12).

In the second viewpoint, one considers the manipulator as a dumb platform that positions the tooling or maintains its relative motion to the workpiece while the tooling performs a job. This point of view may be taken in the case of processes requiring sophisticated tooling such as welding. In this situation, the robot manipulator and the tooling may be considered as two separate agents which are coordinated by the higher level job coordinator shown in Figure 71.7.


A variety of processes fall between these two extremes, such as assembly tasks which require some coordinated intelligence by both the manipulator and the tool (insert pin in hole). In such applications both machine-level and task-level coordination may be required. The decomposition of coordination commands into a portion suitable for machine-level coordination and a portion for task-level coordination is not easy. A rule-of-thumb is that any coordination that is invariant from process to process should be apportioned to the lower level (e.g., do not open the gripper while the robot is in motion). This is closely connected to the appropriate definition of robot/tooling behaviors in the fashion of Brooks [1986].

    71.7 Sensors

Sensors and actuators [Tzou and Fukuda 1992] function as transducers, devices through which the workcell planning, coordination, and control system interfaces with the hardware components that make up the workcell. Sensors are a vital element as they convert states of physical devices into signals appropriate for input to the workcell PC&C control system; inappropriate sensors can introduce errors that make proper operation impossible no matter how sophisticated or expensive the PC&C system, whereas innovative selection of sensors can make the control and coordination problem much easier.

    71.7.1 The Philosophy of Robotic Workcell Sensors

Sensors are of many different types and have many distinct uses. Having in mind an analogy with biological systems, proprioceptors are sensors internal to a device that yield information about the internal state of that device (e.g., robot arm joint-angle sensors). Exteroceptors yield information about other hardware external to a device. Sensors yield outputs that are either analog or digital; digital sensors often provide information about the status of a machine or resource (gripper open or closed, machine loaded, job complete). Sensors produce inputs that are required at all levels of the PC&C hierarchy, including uses for:

• Servo-level feedback control (usually analog proprioceptors)
• Process monitoring and coordination (often digital exteroceptors or part inspection sensors such as vision)
• Failure and safety monitoring (often digital, e.g., contact sensor, pneumatic pressure-loss sensor)
• Quality control inspection (often vision or scanning laser)

Sensor output data must often be processed to convert it into a form meaningful for PC&C purposes. The sensor plus required signal processing is shown as a virtual sensor in Figure 71.7; it functions as a data abstraction, that is, a set of data plus operations on that data (e.g., camera, plus framegrabber, plus signal processing algorithms such as image enhancement, edge detection, segmentation, etc.). Some sensors, including the proprioceptors needed for servo-level feedback control, are integral parts of their host devices, and so processing of sensor data and use of the data occurs within that device; then, the sensor data is incorporated at the servocontrol level or machine coordination level. Other sensors, often vision systems, rival the robot manipulator in sophistication and are coordinated by the job coordinator, which treats them as valuable shared resources whose use is assigned to jobs that need them by some priority assignment (e.g., dispatching) scheme. An interesting coordination problem is posed by so-called active sensing, where, e.g., a robot may hold a scanning camera, and the camera effectively takes charge of the coordination problem, directing the robot where to move it to effect the maximum reduction in entropy (increase in information) with subsequent images.
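The data-abstraction view of the virtual sensor can be made concrete with a small sketch (illustrative Python; the class and stage names are assumptions, not part of any commercial vision package):

```python
from typing import Any, Callable, List

class VirtualSensor:
    """A virtual sensor as a data abstraction: a raw data source plus the
    operations on that data (cf. camera + framegrabber + enhancement,
    edge detection, segmentation). All names here are illustrative.
    """
    def __init__(self, acquire: Callable[[], Any],
                 stages: List[Callable[[Any], Any]]):
        self._acquire = acquire   # raw hardware read (e.g., framegrabber)
        self._stages = stages     # signal/image processing pipeline

    def read(self) -> Any:
        """Return processed data in a form meaningful to the PC&C system."""
        data = self._acquire()
        for stage in self._stages:
            data = stage(data)
        return data

# Usage sketch: a camera virtual sensor with a three-stage pipeline.
# camera = VirtualSensor(framegrabber.grab, [enhance, detect_edges, segment])
# regions = camera.read()
```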

    71.7.2 Types of Sensors

This section summarizes sensors from an operational point of view. More information on functional and physical principles can be found in Fraden [1993], Fu et al. [1987], Groover et al. [1986], and Snyder [1985].


    71.7.2.1 Tactile Sensors

Tactile sensors [Nichols and Lee 1989] rely on physical contact with external objects. Digital sensors such as limit switches, microswitches, and vacuum devices give binary information on whether contact occurs or not. Sensors are available to detect the onset of slippage. Analog sensors such as spring-loaded rods give more information. Tactile sensors based on rubberlike carbon- or silicon-based elastomers with embedded electrical or mechanical components can provide very detailed information about part geometry, location, and more. Elastomers can contain resistive or capacitive elements whose electrical properties change as the elastomer compresses. Designs based on LSI technology can produce tactile grid pads with, e.g., 64 × 64 forcel points on a single pad. Such sensors produce tactile images that have properties akin to digital images from a camera and require similar data processing. Additional tactile sensors fall under the classification of force sensors discussed subsequently.
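As a minimal example of such tactile image processing (illustrative Python, assuming the pad is read out as a NumPy array of forcel values), one can threshold the tactile image and report the contact region's centroid and total force, much as one would process a binary camera image:

```python
import numpy as np

def contact_centroid(tactile_image, threshold):
    """Treat a 64 x 64 grid of forcel readings like a digital image:
    threshold to find the contact region, then return its centroid
    (row, col) and total normal force. Returns None if no contact.
    """
    contact = tactile_image > threshold          # binary contact image
    if not contact.any():
        return None
    rows, cols = np.nonzero(contact)
    total_force = tactile_image[contact].sum()
    return (rows.mean(), cols.mean()), total_force
```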

    71.7.2.2 Proximity and Distance Sensors

The noncontact proximity sensors include devices based on the Hall effect or inductive devices based on the electromagnetic effect that can detect ferrous materials within about 5 mm. Such sensors are often digital, yielding binary information about whether or not an object is near. Capacitance-based sensors detect any nearby solid or liquid with ranges of about 5 mm. Optical and ultrasound sensors have longer ranges.

Distance sensors include time-of-flight range finder devices such as sonar and lasers. The commercially available Polaroid sonar offers accuracy of about 1 in up to 5 ft, with angular sector accuracy of about 15°. For 360° coverage in navigation applications for mobile robots, both scanning sonars and ring-mounted multiple sonars are available. Sonar is typically noisy with spurious readings, and requires low-pass filtering and other data processing aimed at reducing the false alarm rate. The more expensive laser range finders are extremely accurate in distance and have very high angular resolution.
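A running median is one simple choice for such data processing (a swapped-in illustration, not the only option); the sketch below (illustrative Python, not tied to any particular sonar driver) rejects isolated spurious ranges before the readings are used for navigation:

```python
from collections import deque

class SonarFilter:
    """Running median over the last k sonar ranges; a simple way to reject
    the isolated spurious readings typical of sonar before the ranges are
    thresholded or used for mapping.
    """
    def __init__(self, k=5):
        self._window = deque(maxlen=k)

    def update(self, range_reading):
        self._window.append(range_reading)
        ordered = sorted(self._window)
        return ordered[len(ordered) // 2]   # median of the current window
```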

    71.7.2.3 Position, Velocity, and Acceleration Sensors

Linear position-measuring devices include linear potentiometers and the sonar and laser range finders just discussed. Linear velocity sensors may be laser- or sonar-based Doppler-effect devices.

Joint-angle position and velocity proprioceptors are an important part of the robot arm servocontrol drive axis. Angular position sensors include potentiometers, which use dc voltage, and resolvers, which use ac voltage and have accuracies of ±15 min. Optical encoders can provide extreme accuracy using digital techniques. Incremental optical encoders use three optical sensors and a single ring of alternating opaque/clear areas, Figure 71.18a, to provide angular position relative to a reference point and angular velocity information; commercial devices may have 1200 slots per turn. More expensive absolute optical encoders, Figure 71.18b, have n concentric rings of alternating opaque/clear areas and require n optical sensors. They offer increased accuracy and minimize errors associated with data reading and transmission, particularly if they employ the Gray code, where only one bit changes between two consecutive sectors. Accuracy is 360°/2^n, with commercial devices having n = 12 or so.
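Decoding is straightforward because of the Gray-code property; the following sketch (illustrative Python) converts an n-bit Gray-code reading from an absolute encoder into an angle with the stated 360°/2^n resolution:

```python
def gray_to_angle(gray_reading, n_bits):
    """Decode an n-bit Gray-code reading from an absolute optical encoder
    into an angle in degrees. Resolution is 360 / 2**n degrees
    (e.g., n = 12 gives about 0.088 degrees per sector).
    """
    binary = gray_reading
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift     # standard Gray-to-binary conversion
        shift <<= 1
    return binary * 360.0 / (1 << n_bits)

# Example: the Gray code for sector 5 of a 4-bit encoder is 0b0111.
assert abs(gray_to_angle(0b0111, 4) - 5 * 22.5) < 1e-9
```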

Gyros have good accuracy if repeatability problems associated with drift are compensated for. Directional gyros have accuracies of about ±1.5°; vertical gyros have accuracies of 0.15° and are available to measure multiaxis motion (e.g., pitch and roll). Rate gyros measure velocities directly with thresholds of 0.05°/s or so.

Various sorts of accelerometers are available based on strain gauges (see the next subsection), gyros, or crystal properties. Commercial devices are available to measure accelerations along three axes.

    71.7.2.4 Force and Torque Sensors

Various torque sensors are available, though they are often not required; for instance, the internal torques at the joints of a robot arm can be computed from the motor armature currents. Torque sensors on a drilling tool, for instance, can indicate when tools are dull. Linear force can be measured using load cells or strain gauges. A strain gauge is an elastic sensor whose resistance is a function of applied strain or deformation. The piezoelectric effect, the generation of a voltage when a force is applied, may also be



FIGURE 71.18 Optical encoders: (a) incremental optical encoder and (b) absolute optical encoder with n = 4 using Gray code. (From Snyder, W. E. 1985. Industrial Robots: Computer Interfacing and Control. Prentice-Hall, Englewood Cliffs, NJ. With permission.)

used for force sensing. Other force sensing techniques are based on vacuum diodes, quartz crystals (whose resonant frequency changes with applied force), etc.

Robot arm force-torque wrist sensors are extremely useful in dextrous manipulation tasks. Commercially available devices can measure both force and torque along three perpendicular axes, providing full information about the Cartesian force vector F. Transformations such as Equation 71.6 allow computation of forces and torques in other coordinates. Six-axis force-torque sensors are quite expensive.

    71.7.2.5 Photoelectric Sensors

A wide variety of photoelectric sensors are available, some based on fiber optic principles. These have speeds of response in the neighborhood of 50 μs with ranges up to about 45 mm, and are useful for detecting parts and labeling, scanning optical bar codes, confirming part passage in sorting tasks, etc.

    71.7.2.6 Other Sensors

Various sensors are available for measuring pressure, temperature, fluid flow, etc. These are useful in closed-loop servocontrol applications for some processes such as welding, and in job coordination and/or safety interrupt routines in others.

    71.7.3 Sensor Data Processing

Before any sensor can be used in a robotic workcell, it must be calibrated. Depending on the sensor, this could involve significant effort in experimentation, computation, and tuning after installation. Manufacturers often provide calibration procedures, though in some cases, including vision, such procedures may not be obvious, requiring reference to the published scientific literature. Time-consuming recalibration may be needed after any modifications to the system.

Particularly for more complex sensors such as optical encoders, significant sensor signal conditioning and processing is required. This might include amplification of signals, noise rejection, conversion of data from analog to digital or from digital to analog, and so on. Hardware is usually provided for such purposes



FIGURE 71.19 Signal processing using FSM for optical encoders: (a) phase relations in incremental optical encoder output and (b) finite state machine to decode encoder output into angular position. (From Snyder, W. E. 1985.)


FIGURE 71.20 Hardware design from FSM: (a) FSM for sonar transducer control on a mobile robot and (b) sonar driver control system from FSM.

by the manufacturer and should be considered as part of the sensor package for robot workcell design. The sensor, along with its signal processing hardware and software algorithms, may be considered as a data abstraction and is called the virtual sensor in Figure 71.7.

If signal processing does need to be addressed, it is often very useful to use finite state machine (FSM) design. A typical signal from an incremental optical encoder is shown in Figure 71.19a; an FSM for decoding this into the angular position is given in Figure 71.19b. FSMs are very easy to convert directly to hardware in terms of logical gates. An FSM for sequencing a sonar is given in Figure 71.20a; the sonar driver hardware derived from this FSM is shown in Figure 71.20b.
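A software analog of the encoder-decoding FSM of Figure 71.19(b) is sketched below (illustrative Python; the forward phase sequence 00 → 01 → 11 → 10 is an assumed convention, and a production implementation would normally live in hardware as described above):

```python
# FSM decoding the quadrature channels (A, B) of an incremental encoder.
# States are the four (A, B) phase pairs; each valid transition moves the
# count by +1 or -1 depending on the direction of rotation.

# Valid transitions in the assumed forward direction: 00 -> 01 -> 11 -> 10 -> 00
_FORWARD = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

class QuadratureDecoder:
    def __init__(self, state=(0, 0)):
        self.state = state
        self.count = 0                 # position in encoder slots

    def step(self, a, b):
        new = (a, b)
        if new == self.state:
            return self.count          # no edge: no state change
        if _FORWARD[self.state] == new:
            self.count += 1            # forward transition
        elif _FORWARD[new] == self.state:
            self.count -= 1            # reverse transition
        # any other jump is an illegal two-edge transition; ignore it
        self.state = new
        return self.count
```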

A particular problem is obtaining angular velocity from angular position measurements. All too often the position measurements are simply differenced using a small sample period to compute velocity. This is guaranteed to lead to problems if there is any noise in the signal. It is almost always necessary to employ a low-pass-filtered derivative where velocity samples v_k are computed from position measurement samples p_k using, e.g.,

$$
v_k = \alpha v_{k-1} + (1 - \alpha)\frac{p_k - p_{k-1}}{T} \tag{71.18}
$$

where T is the sample period and α is a small filtering coefficient. A similar approach is needed to compute acceleration.
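A direct implementation of Equation 71.18 is sketched below (illustrative Python; the default sample period and filter coefficient are assumptions to be tuned per application):

```python
class FilteredDifferentiator:
    """Low-pass-filtered derivative of Eq. 71.18:
        v_k = alpha * v_{k-1} + (1 - alpha) * (p_k - p_{k-1}) / T
    Naive differencing amplifies measurement noise by 1/T; the coefficient
    alpha in [0, 1) trades noise rejection against lag (illustrative default).
    """
    def __init__(self, T=0.016, alpha=0.5):
        self.T, self.alpha = T, alpha
        self._p_prev = None
        self.v = 0.0

    def update(self, p):
        if self._p_prev is not None:
            raw = (p - self._p_prev) / self.T      # differenced velocity
            self.v = self.alpha * self.v + (1.0 - self.alpha) * raw
        self._p_prev = p
        return self.v
```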


    71.7.4 Vision for Robotics

Computer vision is covered in Chapter 43; the purpose of this section is to discuss some aspects of vision that are unique to robotics [Fu et al. 1987, Lee et al. 1991, 1994]. Industrial robotic workcells often require vision systems that are reliable, accurate, low cost, and rugged yet perform sophisticated image processing and decision making functions. Balancing these conflicting demands is not always easy. There are several commercially available vision systems, the most sophisticated of which may be the Adept vision system, which supports multiple cameras. However, it is sometimes necessary to design one’s own system. Vision may be used for three purposes in robotic workcells: inspection and quality control, robotic manipulation, and servo-level feedback control. In quality control inspection systems the cameras are often affixed to stationary mounts while parts pass on a conveyor belt. In active vision inspection systems, cameras may be mounted as end effectors of a robot manipulator, which positions the camera for the required shots.

The operational phase of robot vision has six principal areas. Low-level vision includes sensing and preprocessing such as noise reduction, image digitization if required, and edge detection. Medium-level vision includes segmentation, description, and recognition. High-level vision includes interpretation and decision making. Such topics are discussed in Chapter 32. Prior to placing the vision system in operation, one is faced with several design issues including camera selection and illumination techniques, and the problem of system calibration.

    71.7.4.1 Cameras and Illumination

Typical commercially available vision systems conform to the RS-170 standard of the 1950s, so that frames are acquired through a framegrabber board at a rate of 30 frames/s. Images are scanned; in a popular U.S. standard, each complete scan or frame consists of 525 lines, of which 480 contain image information. This sample rate and image resolutions of this order are adequate for most applications with the exception of vision-based robot arm servoing (discussed subsequently). Robot vision system cameras are usually TV cameras: either the solid-state charge-coupled device (CCD), which is responsive to wavelengths of light from below 350 nm (ultraviolet) to 1100 nm (near infrared) and has peak response at approximately 800 nm, or the charge injection device (CID), which offers a similar spectral response and has a peak response at approximately 650 nm. Both line-scan CCD cameras, having resolutions ranging between 256 and 2048 elements, and area-scan CCD cameras are available. Medium-resolution area-scan cameras yield images of 256 × 256 pixels, though high-resolution devices of 1024 × 1024 are by now available. Line-scan cameras are suitable for applications where parts move past the camera, e.g., on conveyor belts. Framegrabbers often support multiple cameras, with a common number being four, and may support black-and-white or color images.

If left to chance, illumination of the robotic workcell will probably result in severe problems in operations. Common problems include low-contrast images, specular reflections, shadows, and extraneous details. Such problems can be corrected by overly sophisticated image processing, but all of this can be avoided by some proper attention to details at the workcell design stage. Illumination techniques include spectral filtering, selection of suitable spectral characteristics of the illumination source, diffuse-lighting techniques, backlighting (which produces easily processed silhouettes), structured-lighting (which provides additional depth information and simplifies object detection and interpretation), and directional lighting.

    71.7.4.2 Coordinate Frames and Camera Perspective Transformation

A typical robot vision system is depicted in Figure 71.21, which shows a gimbal-mounted camera. There are illustrated the base frame (or world frame) (X, Y, Z), the gimbal platform, the camera frame (x, y, z), and the image plane having coordinates of (μ, υ).

71.7.4.2.1 Image Coordinates of a Point in Base Coordinates
The primary tools for analysis of robot vision systems are the notion of coordinate transforms and the camera perspective transformation. Four-by-four homogeneous transformations (discussed earlier) are used, as they provide information on translations, rotations, scaling, and perspective.


    FIGURE 71.21 Typical robot workcell vision system.

    FIGURE 71.22 Homogeneous transformations associated with the robot vision system.

Four homogeneous transformations may be identified in the vision system, as illustrated in Figure 71.22. The gimbal transformation G represents the base frame in coordinates affixed to the gimbal platform. If the camera is mounted on a robot end effector, G is equal to T⁻¹, with T the robot arm T matrix detailed earlier in Section 71.5; for a stationary-mounted camera G is a constant matrix capturing the camera platform mounting offset r_0 = [X_0 Y_0 Z_0]^T. The pan/tilt transformation R represents the gimbal platform with respect to the mounting point of the camera. This rotation transformation is given by

$$
R =
\begin{bmatrix}
\cos\theta & \sin\theta & 0 & 0 \\
-\sin\theta\cos\alpha & \cos\theta\cos\alpha & \sin\alpha & 0 \\
\sin\theta\sin\alpha & -\cos\theta\sin\alpha & \cos\alpha & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{71.19}
$$

with θ the pan angle and α the tilt angle.
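A sketch of Equation 71.19 in code (illustrative Python, with θ and α in radians):

```python
import numpy as np

def pan_tilt_transform(theta, alpha):
    """Homogeneous pan/tilt rotation R of Eq. 71.19 for pan angle theta
    and tilt angle alpha (radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct,       st,      0.0, 0.0],
        [-st * ca,  ct * ca, sa,  0.0],
        [ st * sa, -ct * sa, ca,  0.0],
        [ 0.0,      0.0,     0.0, 1.0],
    ])

# With zero pan and tilt, R reduces to the identity transformation:
assert np.allclose(pan_tilt_transform(0.0, 0.0), np.eye(4))
```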