A COMPONENT-BASED ARCHITECTURE FOR ROBOT CONTROL
Diego Caberlon Santini∗ — [email protected]
Walter Fetter Lages∗ — [email protected]
∗Federal University of Rio Grande do Sul, Electrical Engineering Department
Av. Osvaldo Aranha, 103, 90035-190 Porto Alegre-RS, Brazil
ABSTRACT
This work deals with the specification of an open architecture for the control of manipulator robots. The architecture defines policies for the use of the OROCOS framework and is specified for a generic manipulator robot with N joints, through the definition of component models to abstract the hardware and each block of the robot controller. To show its generality, the proposed architecture is used to implement two different controllers: an independent PID for each joint and a controller with feedforward compensation. Validation is done through a real-time implementation on the Janus robot.
KEYWORDS: Open architecture, OROCOS framework, Robot control
RESUMO

Uma Arquitetura Baseada em Componentes para Controle de Robôs

This work addresses the specification of an open architecture for the control of manipulator robots. The architecture defines policies for the use of the OROCOS framework and is specified for a generic manipulator robot with N joints, through the definition of component models to abstract the hardware and each block of the robot controller. To show its generality, the proposed architecture is used to implement two different controllers: an independent PID controller for each joint and a controller with feedforward compensation. Validation is done through a real-time implementation on the Janus robot.

Article submitted on 10/09/2010 (Id.: 01194). Revised on 30/10/2010 and 16/02/2011. Accepted under the recommendation of Associate Editor Prof. Geovany Araujo Borges.

KEYWORDS (PALAVRAS-CHAVE): Open architecture, OROCOS, Robot control.
1 INTRODUCTION
The demand for robots with increased performance has prompted academia to develop more complex controllers, such as computed torque control or feedforward dynamic compensation. However, their application on industrial robots is restricted because of limitations associated with the architecture of conventional robotic controllers, as each robot manufacturer uses its own proprietary interfaces and protocols (Kozlowski, 1998).
An open architecture is a proposed solution to such a problem, as all aspects of the design can be changed (Ford, 1994). Therefore, benefits such as reduced cost and shorter development time could be achieved by using off-the-shelf hardware and software.
Nowadays, some robot manufacturers (Neuronics, 2010; Barrett, 2010) are selling what they call "open source software" robots, which give access to the lower-level control of motor torques. Those features can be very useful to build complex controllers. However, those robots cannot be said to be open architecture robots because they depend on the specific hardware of the manufacturer and it is not possible to use the software of one manufacturer with the robot of another one. A truly open architecture should support the hardware of many robots with minor changes in software.

398 Revista Controle & Automação/Vol.22 no.4/Julho e Agosto 2011
In academic research, many projects have been carried out to develop open architecture controllers. In Gaspar (2003), an implementation of a control architecture for manipulator robots is presented. This implementation is based on the Robot Control Interface (RCI) (Lloyd, 1992). However, this approach lacks interoperability and extensibility since the RCI is not component-based software. In component-based software, components can be connected together to achieve the global functionality of the robot. Each component is an executable building block with a defined interface and functionality (Brugali and Scandurra, 2009).
In Liandong and Xinhan (2004), a component-based open architecture is presented. Each device of a robot is modeled by a component that implements its functionality and uses the Common Object Request Broker Architecture (CORBA) (Object Management Group, 2010) to exchange data. However, that work does not make use of a common system architecture, which would allow researchers to focus on the problem of robot control instead of rewriting code to fit the controllers to a communication framework such as CORBA.
In Smith and Christensen (2009) the design of a high-performance robot manipulator built from off-the-shelf components is shown. It was built to fill the lack of standard systems in robotics, on which comparative research could be done. Even though the design procedure and the hardware implementation are fully described, the software does not use an open architecture.
An open architecture is fundamental to enable researchers to exchange results and port implementations from one robot to another. A common framework would encourage researchers to use available implementations instead of developing their own implementation for each robot. Eventually, that would lead to the development of a standard library on which robot software could be developed. A common framework not only defines how components can interact with each other by means of communication and synchronization mechanisms but also provides infrastructure and functionality to build the system.
The ROS (ROS.org, 2010) and the Microsoft Robotics Developer Studio (RDS) (Microsoft, 2008) are popular frameworks in the robotics community. Although both are based on components and offer communication and other mechanisms required for the implementation of robot software, they do not operate in real-time and therefore are not suitable for the implementation of low-level joint control software. Those frameworks are more appropriate for the development of application software for robots.
The OROCOS project (Bruyninckx, 2001) is a component-based framework for robot and machine control. It follows an open source development model, has been successfully used in other projects (Swevers et al., 2007; Tavares et al., 2007) and has had widespread acceptance in the robotics field. However, OROCOS has a steep learning curve. It can be somewhat confusing because it has so many features and therefore a given functionality can be implemented in many ways.
Unfortunately, OROCOS does not define a policy on how to use the available mechanisms, and due to the complexity of the framework, it is not easy for a new user to understand all the details of each mechanism and to decide which one is the best for each case.
Hence, by defining policies for using the framework resources, this paper proposes in the following sections an architecture for robot control based on OROCOS. The proposed architecture is implemented by defining new component models to represent the building blocks of a typical robot control system. Thus, this paper approaches a software engineering problem with a focus on control systems.
It is important to observe that the component models are defined in a way that is independent of specific robot hardware. This way, the user can easily migrate the software from one robot to another. This is achieved by abstracting the interface with the hardware of the robot and defining the remainder of the architecture based on this abstraction.
2 THE OROCOS FRAMEWORK
OROCOS is an acronym for Open Robot Control Software. It is an open source framework for the control of robots and machines (Bruyninckx, 2001). The framework is based on components. A component model is a concept similar to a class in C++, in that component models describe the interface, properties, and behaviors of components; hence every component model is implemented as a C++ class and can be compiled as a dynamically loadable library. Likewise, components are objects that are instantiated from component models with particular characteristics.
Each component model inherits a public interface from the base class (TaskContext), which defines primitives for component interactions: Events, Methods, Commands, Properties and Data Ports, as seen in Figure 1.
An Event can have functions attached to it. Whenever the Event is triggered, the attached functions are called. Those functions may be synchronous or asynchronous. Synchronous functions are executed as part of the Event triggering procedure and hence execute in the same thread that triggered the Event. Asynchronous functions are deferred to be executed as part of the component activity.

Figure 1: OROCOS component interface.
Methods are similar to C++ member functions, but can also be called from scripts. They are executed in a synchronous fashion with respect to the calling component, executing as a usual C++ function.
Commands are similar to Methods, but Commands are sent from a component and executed according to the activity of the receiving component, asynchronously with respect to the sender. Commands are queued for execution in the receiving component. Figure 2 explains the difference between Commands and Methods, supposing the interaction between two components with periodic activities of the same period. When A calls a Method of B, it is executed immediately. When A sends a Command to B, the execution is deferred until the scheduling of B.
Figure 2: Commands × Methods.
Properties are variables that can be read from a configuration file in Extensible Markup Language (XML) format and therefore can be used to store persistent values such as configuration parameters of components or data that should persist across shutdown and power-up of the system.
In order to use the above primitives, a component should peer with another one. By peering, a component becomes able to access the public interface of its peer, as shown in Figure 1. Note that peering is not a symmetric relation. If a component A is a peer of a component B, then A can use the public interface of B, but B cannot use the public interface of A unless B is also a peer of A.
The Data Port is a primitive for data exchange and can be configured to use a FIFO buffer or not. Also, Data Ports can be write-only, read-only or read-write and are accessed in real-time in a thread-safe fashion. Hence, while reading a Data Port, a mutual exclusion procedure ensures that the data will not change until the end of the reading operation. Data Ports can also be configured to trigger the activity of a component or to execute a function upon reception of data. Of course, to exchange data, Data Ports should be connected to each other.
The ExecutionEngine block, shown in Figure 1, is executed according to the activity of the component and implements the processing of the scripts, Commands, Events and State Machines associated with the component.
As shown above, the OROCOS framework provides many possibilities for interaction and communication among components and therefore the user has to choose which communication mechanism to use when implementing a system. Many of those mechanisms are similar and could be used to replace the others if they were not available. However, each of the communication mechanisms is best tailored for some type of communication task. Unfortunately, OROCOS does not define a policy on how to use the available mechanisms, and due to the complexity of the framework, it is not easy for a new user to understand all the details of each mechanism and to decide which one is best for each communication task.
3 PROPOSED ARCHITECTURE
A typical topology for a computer-based control system is shown in Figure 3.
Figure 3: Typical topology for a sampled-data control system.
The dynamic model of a manipulator robot can be represented by (Fu et al., 1987):

τ = D(q)q̈ + H(q, q̇) + G(q)   (1)

where q is the vector of angular joint positions, τ is the vector of torques applied to the joints, D(q) is the generalized inertia matrix, H(q, q̇) is the vector of centrifugal and Coriolis forces and G(q) is the vector of gravity forces.
For an N-joint robot, expression (1) can be rewritten as:

q̈i = fi(q1, . . . , qN, q̇1, . . . , q̇N, τi)   (2)

where qi is the position of joint i.
By defining x = [q^T q̇^T]^T, the model (1) can be represented in a state-space form as:

ẋ = f(x, τ)   (3)
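Spelling out the step from (1) to (3): assuming D(q) is invertible (it is positive definite for manipulators), the state-space map can be written explicitly as

```latex
x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix}, \qquad
\dot{x} = \begin{bmatrix} \dot{q} \\ \ddot{q} \end{bmatrix}
        = \begin{bmatrix} \dot{q} \\ D^{-1}(q)\,\bigl(\tau - H(q,\dot{q}) - G(q)\bigr) \end{bmatrix}
        = f(x,\tau)
```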
The architecture proposed in this work is based on component models for the blocks of the system shown in Figure 3 and on (3). In other words, the architecture can be used for any system with a block diagram such as Figure 3 and a model such as (3), and not only for robots. Each component model can then be instantiated in a component by using the specific parameters of a given robot or system. Figure 4 shows the models which are used as the base of the architecture. In Section 3.2 those models are instantiated for a generic robot.
Figure 4: Component models of the proposed architecture.
The component models Sensor, Controller and Actuator are models for base components and specify interfaces which enforce policies on how to use the services of the OROCOS framework. By using those models as bases for the components representing a specific manipulator, the user does not have to deal with the details of the communication mechanisms of the framework and is free to concentrate on the details of how to specify the parameters of a specific manipulator.
The Sampler is the component that generates the sampling rate of the control loop and synchronizes the other components. It generates an Event which is received by the other components, automatically triggering a control cycle. Generally, the Sampler is configured to generate periodic Events, but it could be configured to generate aperiodic Events as well. Each component of the system is registered as a peer of the Sampler in order to access its Events interface.
The Sensor model abstracts the sensors of the system. It has a Data Port where it writes the values which are read from the sensors. This Data Port is based on the state x of (3) and is represented by a vector whose size depends on the number of joints of the robot, given by:

sensor = [ q1 q2 · · · qN q̇1 q̇2 · · · q̇N ]^T   (4)
The Sensor component model has a virtual member function called sample_now(), which is called each time an Event from the Sampler is received. This function should read the actual sensors of the robot and write the data to the component Data Port. By default, this function just sets a flag indicating that an Event from the Sampler was received. Of course, given an actual robot, the Sensor component model would be derived into a specialized sensor component model which would override the sample_now() function to actually read the sensors.
The Actuator component model abstracts the system actuator. Similar to the Sensor component model, it has a virtual member function called sample_now() which is called in response to an Event from the Sampler. However, its Data Port has a different behavior. It is a read-only port from which the component reads the vector of actuator values. This Data Port is based on the input τ of (2) and is represented by another vector of variable size, given by:

act = [ τ1 · · · τN ]^T   (5)
Those values should be applied to each joint of the robot. Furthermore, the Data Port of this component is configured to trigger a virtual member function whenever data is written to it (by an external component). This behavior is signaled by an ∗ in the port name, act∗. This virtual function is responsible for actuating the robot with the values just written to the Data Port. Again, given an actual robot, the Actuator component model would be derived into a specialized actuator component model which would override the virtual member functions to actually actuate the robot.
The Controller component model abstracts the system controller. It has three Data Ports, similar to the signals that the controller in Figure 3 is connected to. The reference and sensor∗ Data Ports are used for reading the reference and sensor vectors, respectively. Note that a write to the sensor∗ Data Port triggers a call to a virtual function called controller_now(), which should compute the control signal and write it to the act Data Port. Once again, this model would be derived into a specialized controller for the actual robot. Similar to the preceding component models, this component model has a virtual function called sample_now() which is connected to the Event from the Sampler.
In a typical control system, the cycle begins with a read of the sensor; then, this measurement is used to compute the control value, which is applied to the actuator. The architecture follows the same principle. The Sample Event just enables the Controller and Actuator and does not cause any activity; these components will be activated by their respective Data Ports: sensor∗ and act∗. However, the Sample Event enables the Sensor, which should then write the sensor measurements to the sensor Data Port. This write will activate the Controller, which will generate the act and will activate the Actuator.
3.1 The Joint Component Model
In order to represent each joint of a robot there is a Joint component model. A Joint component should be associated with another component representing the hardware of the joint. Hence, the Joint component model does not represent the hardware of the joint, but just the joint itself. This way, the concept of a joint of the manipulator is kept independent of the supporting hardware, thus enabling all joints to have a common interface even if their hardware is not the same. Figure 5 shows the interface of this component model.
Figure 5: The Joint component model.
When a Joint component is created, it should be connected to a component which represents the hardware of the joint. The component which represents the hardware should provide the following Methods:
actuatorOn: turn the joint actuator on
actuatorOff: turn the joint actuator off
actuatorSet: apply value to the joint
sensorRead: read the sensors of the joint
brakeApply: apply the brakes to the joint
brakeRelease: release the brakes.
Each Joint component has the following Properties, whichdescribe the hardware of the joint.
PeerName: Name of the component which implements the hardware access.
Brake: Indicates whether the joint has a brake.
ActuatorSign: Used to compensate for wiring differences on the joint hardware, such that when a positive value is applied, the joint moves in the positive direction.
SensorSign: Similar to ActuatorSign, but used to compensate for sensor wiring, such that a positive value is read when the joint moves in the positive direction.

GearRatio: Ratio between the joint motion and the sensor readings.
InitialPos: Initial position of the joint.
Sampler: Indicates whether the Joint component should try to connect to the Sample and Act Events.
The Joint component model exports Methods which are similar to the ones it imports from a peer specified by the PeerName Property. This way, it is possible to command the joint of the robot without commanding the hardware directly. The motorOn and motorOff Methods are directly mapped onto the imported actuatorOn and actuatorOff Methods. The same occurs with the Methods used to handle the brakes. However, they are only exported if the Brake Property confirms that the specific joint has a brake. The motorSet Method is mapped onto the actuatorSet Method by compensating for the wiring according to the ActuatorSign Property. Finally, the sensorRead Method returns the displacement of the joint of the robot, compensating for the wiring according to the SensorSign Property and for the reduction between the joint axis and the sensor axis by using the GearRatio Property.
There are three Data Ports for the Joint component to connect to the component which is specified by the PeerName Property and to publish the status of the joint. The VoltageActuator Data Port is used to send a voltage to the component specified by the PeerName Property to apply to the actuator, while the Voltage Data Port is used to receive the value the Joint should apply to its actuator.
Note that a write to this Data Port should be mapped to a callto the actuatorSet Method of the hardware component.
The DisplacementSensor∗ Data Port receives the displacement of the motor axis from the component specified by the PeerName Property. A write to this port triggers an event which computes the displacement of the joint, by using the GearRatio and SensorSign Properties, and updates it by writing to the Displacement Data Port. The displacement of the joint is also integrated with the initial condition given by the InitialPos Property and updated by writing to the Position Data Port. In a similar way, the value in the IndexSensor∗ Data Port received from the component specified by the PeerName Property is used to update the value made available at the Index Data Port. The sensorRead Method imported from the component specified by the PeerName Property should cause the update of the IndexSensor∗ and DisplacementSensor∗ Data Ports.
The Sample Event triggers a call to the sensorRead Method, which updates the Displacement, Position and Index Data Ports. The Act Event triggers a read from the Voltage Data Port and a write to the VoltageActuator Data Port. These Events enable the synchronized and parallel reading and actuation of all joints.
3.2 Connecting the Base Components
Since the Joint component model represents a single joint of the robot and the interfaces of the Sensor and Actuator components are based on vectors of variables for the whole robot, there is a need for multiplexing the signals from the N Joints of the robot to form the vector required by the Sensor. In a similar way, there is a need for demultiplexing the vector at the output of the Actuator to form the actuation signal to be applied to each one of the Joints.
The SensorNMux component model (see Figure 6), extended from the Sensor component, multiplexes 2N Data Ports to form a vector as given by (4). Hence, the SensorNMux component has N Data Ports for position (Position1∗, ..., PositionN∗) and N Data Ports for displacement (Displacement1∗, ..., DisplacementN∗). Again, the ∗ is used to indicate that a write to those Data Ports triggers an Event which stores the written value. After all 2N Data Ports have been written to following a Sample Event, the sensor Data Port is updated with the corresponding vector.
Figure 6: SensorNMux component model.

The ActuatorNDemux component model (see Figure 7), extended from the Actuator component, demultiplexes the act vector, given by (5), and writes the values to the N Voltage1, ..., VoltageN Data Ports. Then, if a Sample Event has occurred since the last Act Event, a new Act Event is generated. This Event should force all actuators to drive the hardware at the same time.
Figure 7: ActuatorNDemux component model.
The SensorNMux and ActuatorNDemux are generic component models to multiplex and demultiplex data from sensors and actuators, respectively. In the proposed architecture they are connected to the N Joint components as shown in Figure 8. CardN is the component which implements the actual access to the hardware of the robot. The dashed lines indicate the peering of the components, while the solid lines indicate the data flow through the Data Ports.
Figure 8: Actuator, Sensor and Joint connections.

It is important to note that the component models described above are generic and can be used to implement a system for any robot, independent of the number of joints. Although an electric actuator has been assumed, the architecture is extensible to other types of actuators (hydraulic, pneumatic) as long as they can be interfaced with a computer.
An instantiation of the models for a robot with two joints is shown in Figure 9. The following components are instantiated from the corresponding component models defined above: Card01, Card02, Joint1, Joint2, Sensor2Mux, Actuator2Demux and Sampler. Note that the controller is not included yet.
The sensor read/actuator write cycle can be described as follows:
1. The Sample Event is generated and the Sensor2Mux and Actuator2Demux components start to wait for writes to their Data Ports, DisplacementX∗ and act∗, respectively. At the same time, the Joint1 and Joint2 components receive the same Event and call their own encoderRead Methods.

2. Each Card component executes the encoderRead Method, which writes the appropriate displacement and index status into the Data Ports of each Joint component. This causes the update of the Position, Displacement and Index Data Ports, as explained in Section 3.1, which are connected to the Data Ports of the Sensor2Mux component.

3. After all DisplacementX∗ Data Ports of Sensor2Mux are updated, the sensor Data Port is written to and the sensor read cycle is done.

4. On the other side, when data is written to the act∗ Data Port of Actuator2Demux, the Voltage1 and Voltage2 Data Ports, which are connected to the Voltage Data Ports of the JointX components, are updated and the Act Event is generated.

5. Again, the JointX components catch this Event and call their own motorSet Methods, which actually actuate their joints, concluding the actuator write cycle.
3.3 An Independent PID for Each Joint
Similar to the Sensor and Actuator component models in Figure 4, the Controller component model should be extended to implement the desired control law. In this section, the Controller component model is extended to implement an independent PID controller for each joint of the robot. This scheme treats each joint of the robot as a simple joint servomechanism, thus neglecting the coupling among joints.
To implement this control strategy, at first, a generic PID component model will be implemented. This component model can be instantiated with appropriate gains for each joint of the robot. Then, the Controller will be extended to interact with the N PID components.
3.3.1 Component Model for a PID Controller
A PID controller with saturation has three gains: the proportional gain Kp, the integral gain Ki and the derivative gain Kd. The saturation is represented by the values u_min and u_max, the minimum and maximum values for the controller output. In discrete-time form, the PID can be written as (Astrom and Wittenmark, 1984):

u[k] = u[k−1] + Kp(e[k] − e[k−1]) + Ki e[k] + Kd(e[k] − 2e[k−1] + e[k−2]),   u_min ≤ u[k] ≤ u_max
u[k] = u_max,   u[k] > u_max
u[k] = u_min,   u[k] < u_min   (6)

where e[k] is the difference between the reference and the output value.
Figure 10 shows the interface of the PID component model. Note that, by defining the PID parameters as Properties, different PID components can be instantiated from the same model. The component is activated when a value is written to its Sen∗ Data Port. Then, the error value e[k] is computed as the difference between the Ref and Sen∗ Data Ports and, by using (6), u is computed and written to the Out Data Port. In the end, the values of e[k − 2] and e[k − 1] are updated to e[k − 1] and e[k], and the component waits for another write to its Sen∗ Data Port.
3.3.2 ControllerNPID Component Model
This component model extends the Controller component model, which is a MIMO controller, to use N SISO PID controllers as implemented by the PID component model. Figure 11 shows the interface of the ControllerNPID component model.
Figure 9: Instantiation for a robot with two joints.
Figure 10: PID component model.
Figure 11: ControllerNPID component model.
This component model has Data Ports to communicate with N PID components. It demultiplexes the reference and sensor vectors and writes the values to the 2N Data Ports (Ref1, ..., RefN) and (Sen1, ..., SenN). Also, it acts as a multiplexer, by collecting the outputs of the N PID components written to the Out1, ..., OutN Data Ports and writing them as a vector to the act Data Port after a Sample Event. The block diagram of this extension is shown in Figure 12.
Figure 13 shows the connections for a system with two PID controllers. The control cycle of Figure 13 can be described as follows:
1. The Sample Event is generated and the Controller2PID component starts to wait for something to be written to its sensor∗ Data Port.
Figure 12: Block diagram of the ControllerNPID extension.
Figure 13: Complete system for a two-joint robot with independent PID controllers.
2. When a write to sensor∗ occurs, the Controller2PID reads its reference and sensor∗ Data Ports and demultiplexes them into Ref1, Ref2, Sen1 and Sen2. Then, it waits for all of its OutX∗ Data Ports to be updated.

3. Each PID component reacts to the write to its Sen∗ Data Port and updates its Out Data Port.

4. When Out1∗ and Out2∗ are updated, the Controller2PID multiplexes these two Data Ports and writes the act vector.
3.4 PID with Feedforward Compensation
An independent PID for each joint may work at low speeds, but generally presents poor performance. These problems arise because the coupling among joints and the gravity forces are neglected. Better results can be achieved by explicitly exploiting knowledge about the dynamic model of the robot.
In this section the Controller component model is extended to implement a PID control with feedforward compensation, similar to the PD with feedforward compensation presented in Craig (1989). However, here an integral term is included to reduce steady-state and modeling errors. This control strategy is represented by the block diagram shown in Figure 14. The strategy consists of a PID feedback plus a feedforward computation of the nominal robot dynamics (1) along the desired joint position trajectory.
Figure 14: Controller with feedforward compensation.
It is convenient to rewrite the robot model (1) as:
ẋ(t) = f(x(t), u(t))    (7)
If the desired states xd(t) are known along the desired trajectory, then the torques ud(t) are also known: they can be computed by (1) and satisfy:
ẋd(t) = fa(xd(t), ud(t))    (8)
where fa is the robot model.
The result of (7) minus (8) is
δẋ(t) = h(δx(t), δu(t))    (9)
where δx(t) = x(t) − xd(t), δu(t) = u(t) − ud(t) andh(·, ·) represents the difference between the robot f(·, ·) andthe robot model fa(·, ·).
If the state x(t) is close to the desired state xd(t), then a PID control law, as defined by (6), can be used to make δx(t) converge towards zero.
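The resulting control law, u(t) = ud(t) + δu(t), can be sketched as follows. This is a minimal illustration of the computation only, not the paper's implementation; the function names, the example model and the gains are assumptions:

```python
def pid_correction(e, e_int, e_dot, kp, ki, kd):
    """δu(t): PID feedback on the tracking error, as in (6)."""
    return kp * e + ki * e_int + kd * e_dot


def feedforward_pid(qd, qd_dot, qd_ddot, q, q_dot, e_int, model, gains):
    """u(t) = u_d(t) + δu(t): u_d is the inverse model evaluated along
    the desired trajectory, as in (8); δu is the PID term that drives
    δx(t) toward zero, as in (9).  `model` is any callable returning the
    nominal torque for (q_d, q̇_d, q̈_d)."""
    u_d = model(qd, qd_dot, qd_ddot)        # feedforward torque
    e, e_dot = qd - q, qd_dot - q_dot       # error components (-δx)
    du = pid_correction(e, e_int, e_dot, *gains)
    return u_d + du
```

For example, with a hypothetical model `lambda q, qv, qa: 2.0 * qa` (pure inertia of 2) and gains (kp, ki, kd) = (10, 0, 1), a desired acceleration of 0.5 and a position error of 0.2 give u ≈ 1.0 + 2.0 = 3.0.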
3.4.1 The Robot Model Component
This component implements the inverse model of the robot (1): given the position, velocity and acceleration of the joints, it computes the desired torque of the robot. This component model is shown in Figure 15 and its internal working is similar to the PID component model; however, it computes (1) instead of (6). The In∗ Data Port receives a vector with the desired position, velocity and acceleration of the robot; the component computes the torque by using (1) and writes it to the Out Data Port.
Figure 15: Robot Model component (Data Ports: In∗, Out).
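As a minimal sketch of what such a Model component computes, consider a hypothetical single rotational joint with inertia, viscous friction and gravity terms; the parameters below are illustrative, not the Janus robot's:

```python
import math


def inverse_dynamics(q, q_dot, q_ddot,
                     inertia=0.05, damping=0.01,
                     mass=1.0, length=0.2, g=9.81):
    """Inverse dynamics of one illustrative joint:
    τ = I·q̈ + b·q̇ + m·g·l·cos(q).  Placeholder parameters only."""
    return inertia * q_ddot + damping * q_dot + mass * g * length * math.cos(q)


def model_component(in_vector):
    """Mirrors the Model component's behavior: the In* Data Port carries
    one (position, velocity, acceleration) tuple per joint; the list of
    torques is what would be written to the Out Data Port."""
    return [inverse_dynamics(q, qd, qdd) for (q, qd, qdd) in in_vector]
```

At q = π/2 the gravity term vanishes, so the torque at rest is zero there; at q = 0 with unit desired acceleration the torque is I + m·g·l.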
3.4.2 ControllerNPIDFF Component Model
The ControllerNPIDFF extends the Controller component model to implement a PID with feedforward compensation. Therefore, in order to implement the PID controller with feedforward compensation it is only necessary to change the component model that Controller is instantiated from to ControllerNPIDFF. Figure 16 shows the interface of the ControllerNPIDFF component model.
Figure 16: ControllerNPIDFF component model.
The implementation is similar to the ControllerNPID component model. It writes a vector with the robot position, velocity and acceleration to the In_model Data Port, which triggers the robot Model component to compute the robot torque; the torque is received by the ControllerNPIDFF through the Out_model∗ Data Port. Then it writes the N values of sensor to its SenX Data Ports, which triggers the PID components. In the end, the model torque values are added to the values received from the PID of each joint through the OutX∗ Data Ports, and the result is written to the act Data Port, to be used by the system actuators.
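The data flow just described can be sketched as follows. As before, this is an illustrative Python rendition; the real component is an OROCOS C++ component, and the names `controller_npidff_cycle` and `pid_steps` are ours:

```python
def controller_npidff_cycle(reference, sensor, model, pid_steps):
    """One cycle of the ControllerNPIDFF data flow: the desired
    (q, q̇, q̈) vector goes to the model (In_model / Out_model*), the
    sensor values go to the per-joint PIDs (SenX / OutX*), and the two
    torque contributions are summed into the act vector.
    `reference` is a list of (q_d, qd_d, qdd_d) tuples; `model` maps it
    to one feedforward torque per joint; `pid_steps` is one callable
    (ref, sen) -> correction per joint."""
    feedforward = model(reference)                    # Out_model* values
    act = []
    for ref, sen, pid, u_d in zip(reference, sensor, pid_steps, feedforward):
        act.append(u_d + pid(ref[0], sen))            # OutX* + u_d
    return act                                        # written to act
```

With a dummy model returning a unit torque per joint and proportional feedback with gain 2, references of 1.0 and 2.0 against zero sensor readings give act = [3.0, 5.0].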
4 EXPERIMENTAL RESULTS
The architecture presented in Section 3 was implemented in the vision system of the Janus robot. It consists of two links connected in series by two rotational joints, with two cameras attached to the far end of the chain. Each joint is driven by a DC motor and has an incremental quadrature encoder and a reference-index inductive sensor.
Figure 17: The vision system of Janus.
Each joint also has an actuator card which uses a CAN bus at 1 Mbit/s for real-time data transfer to a host PC. More details about the hardware and software for this card can be found in Santini and Lages (2008). This card, named AIC, is modeled as a component whose interface is compatible with the one defined by the Joint component model, as shown in Section 3.1.
4.1 Independent PID for Each Joint
Figure 18 shows the block diagram for an independent PID controller for each joint. Each component of Figure 18 is instantiated from a component model as shown in Table 1, and all peering and Data Port connections shown in Figures 9 and 13 are performed. The hatched components execute in real-time and the arrows represent the information flow between components.
Table 1: Characteristics of the system.

Component    Model            Activity             Priority  Period
Sampler      Sampler          PeriodicActivity     1         10 ms
aic01        AIC              SequentialActivity   —         —
aic02        AIC              SequentialActivity   —         —
Joint1       Joint            SequentialActivity   —         —
Joint2       Joint            SequentialActivity   —         —
Sensor       Sensor2Mux       NonPeriodicActivity  2         —
BridgeRef    BridgeRef        NonPeriodicActivity  97        —
Generator    nAxisGenPos      PeriodicActivity     98        10 ms
Controller   Controller2PID   NonPeriodicActivity  3         —
pid1         PID              SequentialActivity   —         —
pid2         PID              SequentialActivity   —         —
Actuator     Actuator2Demux   NonPeriodicActivity  3         —
Reporter     FileReporting    PeriodicActivity     100       10 ms
TaskBrowser  TaskBrowser      NonPeriodicActivity  255       —
In order to test the proposed architecture, the sequence of commands given in Table 2 is executed; each command is sent at its respective time. The desired and measured positions for each joint are shown in Figure 19.
Table 2: Sequence of test commands.

Time  Command
5 s   moveTo(array(1.0,1.0),10)
17 s  moveTo(array(2.0,2.0),9)
28 s  moveTo(array(3.0,1.0),8)
38 s  moveTo(array(1.0,3.0),7)
47 s  moveTo(array(4.0,4.0),5)
54 s  moveTo(array(0.0,0.0),5)
To evaluate the computing performance of the proposed architecture, the activity of each component in a full control cycle is measured. Figure 20 shows the timeline of actions started by the Sample Event. As can be seen, the computation of the control loop takes less than 1 ms, which is adequate for most robots.
At t0 = 0, the Sample Event is generated by Sampler and the components which are connected to this Event are called in a synchronous way. Figures 9 and 13 show that Joint1, Joint2, Sensor, Controller and Actuator are connected to the Sample Event.
First, Joint1 executes and calls the SensorRead Method of the aic01 card. This process takes about 259 µs, which is the time needed to sense the system using the AIC cards through the CAN bus. Then, at f1, the Displacement and Position Data Ports of Joint1 are updated.
After that, the process is repeated for the second joint, until f2. Then Sensor, Controller and Actuator enable their respective Data Ports: Position1∗, Position2∗, Displacement1∗, Displacement2∗, sensor∗ and act∗.
Figure 18: Block diagram for an independent PID for each joint.

As Joint1 and Joint2 update their Data Ports, which are connected to the Sensor Data Ports, the Sensor component is activated and joins Position1∗, Position2∗, Displacement1∗ and Displacement2∗ into the sensor Data Port. This process takes about 20 µs and is done at f4, when Controller is activated.
Until f5, the Controller splits the data in the sensor∗ Data Port into the position and displacement of each joint, interacts with the two PID components and writes the act Data Port to the Actuator. This is done in approximately 30 µs.
Then, the Actuator is activated and splits act∗ into the Voltage1 and Voltage2 Data Ports, which are connected to Joint1 and Joint2. In the end, the Actuator emits the Act Event at t5, taking approximately 10 µs.
The Act Event activates Joint1 and Joint2 to effectively actuate each joint of the robot. This takes about 150 µs. Note that the aic01 card takes less time than aic02 due to hardware differences. After f6, all the real-time components have executed and the other components, such as Generator and Reporter, can execute.
Note that, by using the proposed architecture, the control cycle is performed in 724 µs for two joints, which means about 362 µs per joint. In Santini and Lages (2008) a similar controller was implemented using a monolithic structure, with a single loop that reads the sensors, computes the control law and drives the actuators; that control loop was performed in 331 µs for a single joint. Hence, the proposed architecture imposes a small overhead on the system. However, to be adapted to another robot, the monolithic implementation requires a rewrite of the program source code, while the architecture proposed here requires just the setting of the joint parameters in its configuration files. There is no need to change the program source code, with the associated complexity and the possibility of introducing bugs.
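The per-joint comparison quoted above follows directly from the two measurements; a quick sanity check of the arithmetic (the percentage is derived here from those two figures, not stated in the paper):

```python
# Timing figures from the experiments: 724 µs per control cycle for two
# joints under the component architecture, vs. 331 µs per joint for the
# monolithic implementation of Santini and Lages (2008).
cycle_two_joints_us = 724
monolithic_per_joint_us = 331

per_joint_us = cycle_two_joints_us / 2                    # 362 µs per joint
overhead_us = per_joint_us - monolithic_per_joint_us      # 31 µs per joint
overhead_pct = 100.0 * overhead_us / monolithic_per_joint_us

print(per_joint_us, overhead_us, round(overhead_pct, 1))  # ≈9% overhead
```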
4.2 PID with Feedforward Compensation
In order to implement the PID controller with feedforward compensation, only a minor change is needed on the basic independent PID controller for each joint: the instantiation of the Controller component is changed from ControllerNPID to ControllerNPIDFF. Since ControllerNPIDFF uses the model of the robot, as detailed in Section 3.4.2, two more components are added to the system, as shown in Table 3.
Table 3: Characteristics of the new components for the PID controller with feedforward compensation.

Component   Model             Activity             Priority  Period
Controller  ControllerNPIDFF  NonPeriodicActivity  3         —
Model       Model             SequentialActivity   —         —
Figure 19: Position with PID controller — desired and measured position (rad) versus time (s). (a) Joint 1. (b) Joint 2.
Figure 20: Timeline of actions during a control cycle. t1 = 259 µs, t2 = 517 µs, t3 = 535 µs, t4 = 566 µs, t5 = 574 µs, t6 = 724 µs.
Figure 21: Position error (rad) versus time (s) for the PID and the PID with feedforward controllers. (a) Joint 1. (b) Joint 2.
The same sequence of commands shown in Table 2 was executed with this new controller. The curves for the desired and measured position cannot be visually distinguished from those for the PID controller shown in Figure 19. However, the curves for the position error, shown in Figure 21, reveal that the trajectory tracking performance of the PID with feedforward compensation is better than that of an independent PID controller for each joint. Nonetheless, since the point here is the architecture for the implementation of controllers and not their tuning, it is important to note that no effort was made to optimally tune the controllers. Therefore, the relative performance of each controller is valid only for the settings used in this particular experiment and should not be regarded as representative of that type of controller.
Regarding the timing of the control loop computation, there is a 10 µs increase in the computing time, due to the need to compute the robot model. However, a monolithic implementation is subject to the same increase in computing time. In this particular case, that happens because the Model component has a SequentialActivity, which means that it executes in the context of its caller (the ControllerNPIDFF), just like a function call in a monolithic implementation.
5 CONCLUSION
This paper presented an architecture for control of manipulator robots based on components. The approach is modular and enables the reuse of components developed earlier, thus shortening the development time and enabling the replacement of system components without the need to understand the whole system. Furthermore, the proposed architecture defines some policies on the use of the resources available from the OROCOS framework, therefore making it easier for beginners to start using the system.
The flexibility of the architecture was demonstrated by implementing two control strategies with minor changes: independent PID controllers and a controller with feedforward compensation. The architecture is also independent of the robot and can be easily extended to other systems.
The measured times show that the architecture does not introduce significant computing time into the control loop, which allows the use of the proposed architecture in real-time.
REFERENCES
Astrom, K. J. and Wittenmark, B. (1984). Computer Con-trolled Systems: Theory and Design, Prentice Hall Pro-fessional Technical Reference, Indianapolis.
Barrett (2010). Barrett technology, inc. Avail-able: <http://www.barrett.com/robot/products-arm.htm>.
Brugali, D. and Scandurra, P. (2009). Component-basedrobotic engineering (part i) [tutorial], Robotics & Au-tomation Magazine, IEEE 16(4): 84–96.
Bruyninckx, H. (2001). Open robot control software: theorocos project, Proceedings of the IEEE InternationalConference on Robotics and Automation, Vol. 3, Piscat-away: IEEE Press, Seoul, pp. 2523–2528.
Craig, J. J. (1989). Introduction to Robotics Mechanics andControl, second edn, Addison-Wesley.
Ford, W. (1994). What is an open architecture robot controller?, Proceedings of the IEEE International Symposium on Intelligent Control, Piscataway: IEEE Press, Columbus, pp. 27–32.
Fu, K. S., Gonzales, R. C. and Lee, C. S. G. (1987). RoboticsControl, Sensing, Vision and Intelligence, IndustrialEngineering Series, McGraw-Hill, New York.
Gaspar, M. D. (2003). Uma biblioteca configurável paracontrole tempo real de robôs manipuladores, Dis-sertação (mestrado), Universidade Federal de MinasGerais, Belo Horizonte.
Kozlowski, K. (1998). Modelling and Identification inRobotics, Springer-Verlag New York, Inc., Secaucus,NJ, USA.
Liandong, P. and Xinhan, H. (2004). Implementation of a pc-based robot controller with open architecture, Proceed-ings of the IEEE International Conference on Roboticsand Biomimetics, Piscataway: IEEE Press, Shenyang,pp. 790–794.
Lloyd, J. E. (1992). RCI reference manual, Technical re-port, McGill Research Centre for intelligent Machines,McGill University, Montréal.
Microsoft (2008). Robotics developer studio. Available:<http://www.microsoft.com/robotics>.
Neuronics (2010). Neuronics AG - linux robot. Avail-able: <http://www.neuronics.ch/cms_de/web/index.php?id=364>.
Object Management Group (2010). The corba website.Available: <http://www.corba.org/>.
ROS.org (2010). ROS. Available: <http://www.ros.org>.
Santini, D. C. and Lages, W. F. (2008). A distributed robot control architecture using RTAI, Proceedings of the Tenth Real-Time Linux Workshop, Centro Universitario del Norte, Universidad de Guadalajara, Colotlán, Mexico, pp. 1–5.
Smith, C. and Christensen, H. I. (2009). Constructing a highperformance robot from commercially available parts,Robotics and Automation Magazine 16: 75–83.
Swevers, J., Verdonck, W. and De Schutter, J. (2007). Dy-namic model identification for industrial robots, Con-trol Systems Magazine 27(5): 58–71.
Tavares, D. M., Aroca, R. V. and de Paula Caurin, G. A. (2007). Upgrade of a SCARA robot using OROCOS, Proceedings of the 13th IASTED International Conference on Robotics and Applications, Calgary: ACTA Press, Würzburg, pp. 252–257.