
Autonomous Capture of a Tumbling Satellite

Ioannis Rekleitis, Eric Martin, Guy Rouleau, Régent L'Archevêque, Kourosh Parsa, and Eric Dupuis
Canadian Space Agency, Space Technologies
6767 route de l'Aéroport, Longueuil, QC J3Y 8Y9, Canada
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]

Received 5 June 2006; accepted 12 February 2007

In this paper, we describe a framework for the autonomous capture and servicing of satellites. The work is based on laboratory experiments that illustrate the autonomy and remote-operation aspects. The satellite-capture problem is representative of most on-orbit robotic manipulation tasks where the environment is known and structured, but it is dynamic since the satellite to be captured is in free flight. Bandwidth limitations and communication dropouts dominate the quality of the communication link. The satellite-servicing scenario is implemented on a robotic test-bed in laboratory settings. The communication aspects were validated in transatlantic tests. © 2007 Canadian Space Agency

1. INTRODUCTION

Over the past few decades, robots have played an increasingly important role in the success of space missions. The Shuttle Remote Manipulator System, also known as Canadarm, has made the on-orbit maintenance of assets such as the Hubble Space Telescope possible. On the International Space Station (ISS), Canadarm2 has been a crucial element in all construction activities. Its sibling, named Dextre, will be essential to the maintenance of the ISS. JAXA also demonstrated the use of a robotic arm in the ETS-VII space servicing demonstration mission in 1998–1999 (Kasai, Oda & Suzuki, 1999).

In light of the missions currently being planned by space agencies around the world, the coming years will only show an increase in the number and the criticality of robots in space missions. Examples include the Orbital Express mission of the U.S. Defense Advanced Research Projects Agency (DARPA) (Whelan, Adler, Wilson, & Roesler, 2000), and the ConeXpress Orbital Life Extension Vehicle (CX-OLEV™) of Orbital Recovery (Wingo et al., 2004).

Journal of Field Robotics 24(4), 1–XXXX (2007) © 2007 Canadian Space Agency
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20194

One important area for the application of space robotics is autonomous on-orbit servicing (OOS) of failed or failing spacecraft. A common characteristic of most OOS missions is the necessity to approach and capture the spacecraft to be serviced. Because the communication link between the ground operator and the servicer will be subject to latency, bandwidth limitations, and communication dropouts, some amount of on-board autonomy will be required to perform the rendezvous and capture in a safe and efficient manner.

In addition, the commercial viability of such operations will require an efficient process for the planning, verification, and execution of operations. In this paper, we describe the laboratory experiments that verify the feasibility of our approach to perform autonomous missions by demonstrating this aspect of the process. In particular, we report on the use of a manipulator system named CART, which has two 7-degree-of-freedom arms, to demonstrate an autonomous capture of a tumbling satellite. As shown in Figure 1, a mock-up satellite, the target, is mounted on one arm, while the second arm, equipped with a robotic hand, the chaser, approaches and captures the target.

In the next section, we present related work. Section 3 discusses an outline for a typical OOS mission, which provides the motivation for the research reported in this paper. Section 4 provides an overview of the autonomous aspects of the work. Trajectory planning for the two satellites is outlined in Section 5. Section 6 contains the experimental results, and the last section presents our conclusions.

2. RELATED WORK

For many years, robots such as Canadarm and Canadarm2 have been used in space to service expensive space assets (Stieber, Sachdev & Lymer, 2000). Canada has also developed another robot called Dextre for the ISS; Dextre is to be launched in 2007 and will be used to perform maintenance tasks. Other countries are also developing robots for the ISS: The European Space Agency (ESA) has developed the European Robotic Arm (ERA) (Didot, Oort, Kouwen & Verzijden, 2001), and the Japanese Space Agency has developed the JEMRMS (Sato & Doi, 2000).

In order to speed up the acceptance of OOS and to decrease operational costs, a few technology demonstration missions have already been or will soon be conducted. Each mission demonstrates some of the typical operations described in Section 3. As early as 1989, JPL demonstrated in a lab the capture of a rotating satellite (Wilcox, Tso, Litwin, Hayati & Bon, 1989) using a camera system (Gennery, 1992). Japan first conducted the ETS-VII mission in 1998–1999 (Kasai et al., 1999). ETS-VII involved the capture of a target satellite using a chaser satellite equipped with a robotic arm. Both satellites were launched together to minimize risks associated with the rendezvous portion of the mission. The robotic capture was performed while the two satellites were still tied using the latching mechanism, again for reducing the risks (Yoshida, 2003; Yoshida, 2004). The mission goal was successfully accomplished. In the framework of this mission, future work is also discussed for a non-cooperative satellite (Yoshida et al., 2004).

DARPA is currently funding the development of the Orbital Express mission to be launched in 2006 [1] (Potter, 2002). This mission intends to prove the feasibility of OOS and refueling. The Orbital Express's servicer spacecraft ASTRO is equipped with a robotic arm to perform satellite capture and ORU exchange operations. Recently, the US Air Force Research Lab demonstrated key elements of extended-proximity operations with the XSS-11 mission (Grossman & Costa, 2003; Lewis, 2004). A mission by NASA with similar objectives, DART, flew in 2005 (Rumford, 2003). The objective was to perform an autonomous rendezvous; unfortunately, the mission failed.

[1] Orbital Express may have already been launched at the time of publication.

Figure 1. A dual manipulator system that simulates the tracking and capture scenario; the manipulator on the left is equipped with a hand, and the manipulator on the right carries a mock-up satellite.

The first commercial mission could be realized by Orbital Recovery Limited, who are developing the technologies to permit life extension of spacecraft using their CX-OLEV™. This spacecraft, as explained by Wingo et al. (2004), is designed to mate with any three-axis-stabilized spacecraft and would have sufficient supplies to keep a 3000-kg parent spacecraft in a geostationary orbit for up to an additional 10 years of life. The first mission has been planned for 2008.

The TEChnology SAtellites for demonstration and verification of Space systems (TECSAS) is a mission proposed by DLR (Sommer, 2004; Martin, Dupuis, Piedboeuf & Doyon, 2005). The objectives of the mission are to prove the availability and advanced maturity of OOS technologies, and the mastering of the capabilities necessary to perform unmanned on-orbit assembly and servicing tasks. For TECSAS, a servicer satellite carrying a robotic subsystem and a client satellite would be launched together. The mission intends to demonstrate the various phases required for an OOS mission: far rendezvous, close approach, inspection fly-around, formation flight, capture, stabilization and calibration of the coupled system, flight maneuvers with the coupled system, and manipulation on the target satellite. This mission is currently being redefined.

There have also been several spacecraft designed for transporting logistics to the ISS, such as Russia's Progress, Europe's ATV (Boge & Schreutelkamp, 2002), and Japan's HTV (Kawasaki, Imada, Yamanaka & Tanaka, 2000). Many key technologies required for OOS have already been or will be demonstrated with these missions.

Because of the unfortunate Columbia accident, NASA had considered using a robotic mission to rescue the ailing Hubble Space Telescope. A rescue mission using robotic arms derived from the ISS's Dextre was tentatively selected to replace the batteries, gyroscopes, and possibly a scientific instrument of the Hubble. A de-orbiting module was also to be carried by the chaser spacecraft to the orbit. This module was to be attached to the Hubble for the purpose of de-orbiting the Hubble at the end of its life (King, 2004). However, at the time of writing this paper, this mission has been cancelled.

Other missions are being considered to demonstrate spacecraft servicing technologies. One of them is SUMO, which is sponsored by DARPA and executed by the Naval Center for Space Technology at the Naval Research Laboratory. Bosse et al. (2004) state that the purpose of the program is to demonstrate the integration of machine vision, robotics, mechanisms, and autonomous control algorithms to accomplish autonomous rendezvous and also the grapple of a variety of interfaces traceable to future spacecraft servicing operations. However, at the time of writing this paper, this demonstration mission, initially planned for 2008, is still unapproved, although laboratory work is being done to develop the technologies. Another mission is CESSORS, which is currently being planned by the Shenzhen Space Technology Center of China. According to Liang, Li, Xue & Qiang (2006), the mission will include the detection, fly-around, and autonomous rendezvous and capture of a floating target, as well as the tele-operation of the robotic manipulator mounted on the chaser satellite; the authors, however, do not specify any time frame for the mission.

To determine the technology readiness level of space servicing technologies, CSA closely studied the missions mentioned above. These missions have either occurred, are being conducted, or are in the planning phase. The operations involved in these missions fit the typical descriptions given in Section 3. Each operation may be performed in one of three different modes: manual, semi-autonomous, and autonomous. In the manual mode, an operator is responsible for conducting the mission by sending elementary commands or by using hand controllers. In the semi-autonomous mode, an operator is still responsible for performing the operation, but part of the operation is automated using scripts that contain the elementary commands or by using higher-level commands decomposed automatically into elementary commands. Finally, in the autonomous mode, the operation is performed fully autonomously with a minimal number of interventions from the operator. The operator sends only high-level commands, e.g., "Capture." An operator may supervise the mission and be ready to send an abort command if needed.
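The three modes differ mainly in where the decomposition of a goal into elementary commands happens. The distinction can be sketched as follows; the mode names follow the text, but the command names and the decomposition itself are our own illustration, not the paper's flight software:

```python
from enum import Enum

class Mode(Enum):
    MANUAL = 1           # operator sends elementary commands one by one
    SEMI_AUTONOMOUS = 2  # operator triggers a pre-verified script
    AUTONOMOUS = 3       # operator sends only a high-level goal

# Hypothetical decomposition of the high-level "capture" goal into the
# elementary commands an operator would otherwise issue individually.
CAPTURE_SCRIPT = ["unfold_arm", "approach_target", "align_end_effector",
                  "limp_joints", "grasp", "stabilize"]

def commands_for(mode: Mode, goal: str) -> list:
    """Return the commands the ground operator must issue for a goal."""
    if mode is Mode.MANUAL:
        return CAPTURE_SCRIPT           # each command sent by hand
    if mode is Mode.SEMI_AUTONOMOUS:
        return [f"run_script({goal})"]  # one scripted invocation
    return [goal]                       # decomposed on board by the autonomy engine
```

In the autonomous mode, the ground segment's entire workload for this operation reduces to the single "capture" command, which is the situation demonstrated in Section 4.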

Table I lists, to the best of the authors' knowledge, all relevant space-servicing missions. The operations performed in each mission are identified, differentiating whether they were performed manually, semi-autonomously, or autonomously. A "C" is used to indicate an operation performed on a cooperative satellite [2], while "NC" is used for an operation involving a non-cooperative spacecraft. A dash indicates that the operation was not performed during that particular mission. A question mark indicates information that cannot be confirmed because part of the information is classified. Finally, we use a shading scheme to differentiate operations already demonstrated (gray) from those planned to be demonstrated in the future (light gray). Dark gray is used to indicate an operation that was planned to be demonstrated in a particular mission, but that was not successful. This notation is summarized in Table II for easy reference. [3]

Table I. Comparative study of OOS missions.

Over the last several years, several control architectures have been proposed to address different problems associated with space robotics. In particular, several architectures have been developed to address the issues associated with ground control. Since the usual mode of operation for space robotics in the past has been tele-operation (Hirzinger, Brunner, Dietrich & Heindl, 1993) with direct operator control or supervision, most of the approaches have not focused on the implementation of autonomy. In the late 1990s, the Canadian Space Agency (CSA) and their industrial partner, MD Robotics, developed the Intelligent, Interactive Robotic Operations (IIRO) framework (Dupuis, Gillett, Boulanger, Edwards & Lipsett, 1999), which allowed the tele-operation of remote equipment in operational settings. The Remote Operations with Supervised Autonomy (ROSA) architecture was a follow-on to IIRO and addressed the issue of scripted control and basic autonomy requirements (Dupuis and Gillet, 2002; Dupuis, Gillett, L'Archevêque & Richmond, 2001). ROSA has been used as the basis for the development of the ground control station for the robotic elements on the Orbital Express satellite-servicing mission. Similar architectures were developed in Europe at the same time. The Modular Architecture for Robot Control (MARCO) developed by DLR addresses similar issues in the context of tele-operation, ground control, and tele-presence (Brunner, Landzettel, Schreiber, Steinmetz & Hirzinger, 1999). The MARCO architecture and its relatives have been used on several missions, including ROTEX and ETS-7. Two other architectures, FAMOUS (Fontaine, Steinicke & Visentin, 1996) and DREAMS, were developed under the leadership of the European Space Agency; both concentrated on the issues associated with the tele-operation of robots in space. In both cases, special attention was dedicated to the issues surrounding planning, verification, and execution of command sequences. Similarly, in the US, JPL has developed a set of tools for rover ground control from planning to post-flight analysis. This tool, called RSVP (Rover Sequencing and Visualization Program), has been used with the Mars exploration rovers (Maxwell, Cooper, Hartman, Wright & Yen, 2004).

Despite the wealth of research in autonomous robotics and in control architectures for space robots, relatively little has been done to address specifically the needs of autonomous space robots. In the last few years, NASA/JPL have been developing three different architectures for applications with a higher degree of autonomy: FIDO, CAMPOUT, and CLARAty. FIDO (Hoffman, Baumgartner, Huntsberger & Schenker, 1999) is a three-layer software architecture for real-time control of single rover systems equipped with scientific payloads. CAMPOUT (Huntsberger et al., 2003) is a control architecture for the real-time operation of multiple rovers. CLARAty (Nesnas, Wright, Bajracharya, Simmons & Estlin, 2003; Volpe et al., 2001) is a proprietary architecture that is becoming a requirement for many missions. The main goal of CLARAty is to provide a systematic framework for treating all the different vehicles/robots/instruments involved in Mars missions. CLARAty is object oriented and, according to the publications, provides a high degree of reusability. In a similar effort, the Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS) has developed a software architecture for autonomy (Alami, Chatila, Fleury, Ghallab & Ingrand, 1998). This architecture is composed of two main levels: a decision level, whose role is to make plans and supervise their execution, and a functional execution level, whose role is to carry out low-level actions.

[2] In this paper, a cooperative satellite is defined as a satellite designed to be serviced and not tumbling. On the other hand, a non-cooperative satellite is understood as one either not designed to be serviced or one that is tumbling.

[3] It is also important to note that the Hubble robotic repair/de-orbit mission was not included in Table I since, as of the time of writing this paper, it is considered cancelled. The same applies to the SUMO mission since it has yet to be approved for a space demonstration.

Table II. Legend for Table I.

In parallel with these efforts, the Canadian Space Agency has been developing the Autonomous Robotics and Ground Operations (ARGO) software suite (Dupuis, L'Archevêque, Allard, Rekleitis & Martin, 2006). ARGO provides a framework for the integration of the space operations process from planning to post-flight analysis. The objective of ARGO is to reduce operational costs and increase efficiency by providing operator aids and permitting the implementation of a level of autonomy appropriate to the application.

3. PHASES OF A TYPICAL OOS MISSION

A typical on-orbit servicing mission is conducted in a series of operations. Some of these operations, such as rendezvous and docking, are vehicular, namely, they are performed by the chaser satellite, while others, such as capture and berthing, are robotic and are to be performed by the robotic arm of the chaser satellite. Although the current goal of our work is only the autonomous capture of a tumbling satellite, we will also describe some other critical phases of a typical on-orbit servicing mission in order to provide the reader with an insight into the intricacies involved in OOS missions.

3.1. Long-Range Rendezvous

During this phase, the chaser satellite has completed its orbit phasing and has entered the same orbit as the target satellite or a slightly lower orbit, called the drift orbit. At this point, the chaser is within a distance of 5 km to 300 m of the target, as seen in Figure 2(a).

The chaser satellite is mainly guided and navigated using absolute navigation aids, e.g., sun and earth sensors, star trackers, and GPS, with some help from relative navigation equipment such as radar sensors or lidar. The attitude match between the two satellites in this phase is not a very important factor. Actions in this phase include acquiring and updating the orbit knowledge of the target spacecraft; synchronizing the mission time-line; and achieving the necessary position, velocity, and angular velocity of the chaser satellite with respect to the target spacecraft for the subsequent close-range rendezvous and docking [4] or capture.

3.2. Short-Range Rendezvous

When the chaser satellite gets within 300 m of the target satellite, the next phase, i.e., the short-range rendezvous, starts (see Figure 2(b)). This phase is active for distances from 300 m to several meters. The operation has to be controlled by relative-navigation techniques, via sensing the relative position, attitude, and velocities of the target satellite directly. The chaser satellite has to reduce not only the distance but also the relative attitude as well as the relative translational and angular velocities of the two satellites. Depending on the design of the docking or the robotic capture interface, the required accuracies in position, orientation, and translational and angular velocities are on the order of 0.01 m, 1 deg, 0.01 m/s, and 1 deg/s, respectively.
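These tolerances define, in effect, an envelope that the on-board navigation must verify before contact is attempted. A minimal sketch of such a check, assuming the quoted figures and a flat dictionary state representation (both the data structure and the function are our illustration, not the paper's software):

```python
# Capture-envelope tolerances quoted in the text: relative position,
# attitude, translational velocity, and angular rate.
TOLERANCES = {
    "position_m": 0.01,
    "attitude_deg": 1.0,
    "velocity_mps": 0.01,
    "rate_degps": 1.0,
}

def within_capture_envelope(rel_state):
    """True if every relative-state error is within its tolerance.

    rel_state holds signed relative errors, keyed like TOLERANCES.
    """
    return all(abs(rel_state[k]) <= tol for k, tol in TOLERANCES.items())
```

A state such as a 5 mm position error with 0.5 deg misalignment passes, while a 5 cm position error fails, triggering a hold or an error-recovery mode rather than a contact attempt.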

The operation can be autonomously controlled using onboard vision sensors and the control system of the chaser. The ground control system is only responsible for monitoring the operation and providing emergency safety measures. The transition to and from the short-range rendezvous mode under nominal conditions is driven by the relative distance and velocity between the two satellites as provided by the on-board sensors. It is possible to exit the short-range rendezvous mode upon encountering anomalous conditions, such as losing visual contact with the target or incorrect approach rates, which will require a transition to error-recovery modes.

[4] As explained by Fehse (2003), docking is the process whereby the guidance, navigation, and control (GNC) system of one spacecraft controls the state of that spacecraft to bring it into entry to the docking interface of a second spacecraft.

Figure 2. (a) Long-range rendezvous; (b) short-range rendezvous.

3.3. Station Keeping

Before attempting any contact operations, the chaser has to ensure that it is in a safe trajectory with respect to the target satellite. This is achieved during the formation-flight phase. During this phase, the chaser satellite is in very close proximity of the target spacecraft, such that the target satellite is within the reach of the chaser's robotic arm. The relative position and orientation of the chaser satellite with respect to the target satellite must be strictly controlled in order to avoid collisions.

Because the satellites are close to each other during this phase, communication delays and blackouts could result in damage to either or both spacecraft through a collision or contamination by a plume from the propulsion system. It is therefore imperative to close a control loop on board to maintain a safe distance and to deal with any anomalous conditions, such as drift of the satellites or blinding of the vision sensor.
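The reason the loop must be closed on board can be sketched with a generic proportional position hold: every quantity in the loop comes from on-board sensing, so a ground-link delay or blackout never enters the control path. The gain and state representation below are assumptions for this sketch, not the controller used in the mission scenario:

```python
# Illustrative on-board station-keeping tick: a proportional controller
# drives the measured relative position (from on-board sensors) back to
# the commanded hold point. The gain kp is an arbitrary example value.
def station_keeping_step(rel_pos, hold_pos, kp=0.5):
    """One control tick: velocity command driving rel_pos toward hold_pos."""
    return [kp * (h - p) for p, h in zip(rel_pos, hold_pos)]
```

For example, a chaser that has drifted 2 m beyond its hold point along the approach axis receives a corrective velocity command back toward the target, with no ground intervention required.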

3.4. Capture

The capture operation is the phase of the mission with the highest risk because it involves contact between the two spacecraft, and because it requires timely cooperation of the control systems on both satellites. In this phase, as shown in Figure 3, the chaser's robotic arm approaches the free-floating target satellite and grasps it. More specifically, the operation includes the following steps:

(1) Power on and unfold the robotic arm.
(2) The arm maneuvers toward the target satellite and aligns its end-effector with the grapple interface.
(3) Upon completing alignment of the robot end-effector and the grapple interface, the robotic arm goes into either force-control mode or limping mode, namely, the joints become passive.
(4) The chaser may also deactivate its Attitude and Orbit Control System (AOCS), so that it becomes free-floating. The AOCS of the target should also be deactivated before capture.
(5) The robot end-effector then grasps the target spacecraft through the grapple interface.
(6) Upon the completion of firm grappling, the robotic arm returns to its position-control mode to prevent collisions between the chaser and the captured satellite. The chaser should then reactivate its AOCS for the stabilization of the coupled system.

Before capture, the target satellite must be in safe-hold mode (SHM), freely floating in its orbit. Visual servoing of the arm is used for alignment. After being caught, the motion of the target spacecraft should be completely controlled by the robotic arm. Anomalies that can be encountered during this phase include the possibility of the target spacecraft drifting out of the capture envelope of the manipulator; the blinding of the vision sensor; the reduction of the distance between the two spacecraft below an acceptable limit; and a failed capture, which may induce tumbling of the target satellite. This phase is at the core of our experiments.
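The six capture steps can be sketched as a strictly sequential procedure with an abort check before every step, mirroring the supervising operator's ability to halt the sequence at any point (the step names and the structure are our illustration, not the actual capture software):

```python
# The six capture steps, encoded as an ordered sequence. Step names are
# illustrative shorthand for the numbered steps in the text.
CAPTURE_STEPS = ["power_on_unfold", "approach_and_align", "limp_or_force_mode",
                 "deactivate_aocs", "grasp", "position_mode_and_stabilize"]

def run_capture(execute, aborted=lambda: False):
    """Run each step via execute(name); stop early on abort or step failure.

    Returns (completed_steps, success).
    """
    done = []
    for step in CAPTURE_STEPS:
        if aborted() or not execute(step):
            return done, False
        done.append(step)
    return done, True
```

The key property is that a failure or abort leaves the sequence in a known state, which is what makes a transition into an error-recovery mode (e.g., after a failed grasp) well defined.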

3.5. Securing the Target Satellite

After the target satellite is captured, different procedures can be followed to ensure that the two spacecraft fly together without putting any undue stress on the link between the two of them. When servicing is completed, a reversal of the securing procedure is performed to release the target spacecraft.

Figure 3. Capture operation.


If a robotic arm was used to conduct the capture maneuver, then securing the target satellite can be done by berthing it to the chaser spacecraft. The robotic arm maneuvers the target satellite toward the berthing interface and makes the required alignment. Then, the arm pushes the target satellite against the chaser satellite until the two parts of the interface of the two satellites are latched. A locking mechanism is then activated to rigidly fix the two satellite bodies to each other, creating one compound body.

During berthing, the chaser satellite may leave its Attitude and Orbit Control System (AOCS) active so as to compensate for some disturbances due to the contact in the interface or due to the motion of the manipulator. If the chaser satellite is in free-floating condition during berthing, it should reactivate its AOCS immediately afterwards to stabilize the compound system.

The reverse of the berthing operation is called de-berthing, whereby the two satellites are detached and moved away from each other by the robotic arm. The chaser satellite may or may not be under active AOCS control during de-berthing, but the target satellite must be in safe-hold mode with its AOCS deactivated. De-berthing is considered successfully completed only after the two satellites are at a safe distance, but still connected by the robotic arm.

In the absence of a robotic arm, securing the target satellite can also be done by docking. During docking, the chaser satellite makes a final closing and makes physical contact with the free-floating target satellite, using its momentum (or relative velocity), through a docking interface. As a result, the two satellites are rigidly attached together. The target satellite must be in safe-hold mode (SHM), such that the body of the satellite is free-floating in the orbit. The reverse operation, called undocking, is considered successfully completed when the spacecraft have been separated and are at a safe distance.

3.6. Service Operations

There are two major operations that are considered for on-orbit servicing of satellites: inspections and orbital replacement unit (ORU) operations.

ORU replacement operations are typically carried out using a robotic arm after the target satellite is captured and securely berthed to the chaser satellite. Potential operations include replacing a battery or a fuel tank, installing a refuelling interface, and opening and closing a hinged door, among others.

Inspection can also be done robotically after the target satellite is secured, or during the station-keeping phase by flying the chaser around the target satellite. Such maneuvers are quite risky because of the possibility of collision between the two spacecraft or damage through impingement of the chaser's thrust plume.

3.7. Release

This is the reverse of the capture operation. The operation includes the following steps: releasing the grapple interface from the robot end-effector, and moving the arm away from the target satellite to avoid collision with the released satellite. Once the release is completed, the chaser is ready to perform an orbital maneuver to move away from the target satellite.

4. AUTONOMY

In our experiments, we demonstrated the fully autonomous capture of a free-flyer object using the robotic system equipped with two 7-degree-of-freedom arms shown in Figure 1. One arm emulated the free-flyer dynamics, while the other acted as the chaser robot equipped with a SARAH hand, developed by Laval University (Laliberté, Birglen & Gosselin, 2002). The Laser Camera System (LCS) from Neptec was used to determine the pose of the target; this information is employed by the trajectory planner of the chaser robot. Moreover, two surveillance cameras were used to provide video output to the remote operator. A hierarchical finite state machine engine was used to coordinate the autonomous capture task and to provide feedback to the operator. The only role of the remote operator was to initiate the capture by submitting the high-level command and to monitor the chaser robot while performing the task. The sequencing of the events, i.e., approach, fly together, and capture, was fully autonomous. In case of an emergency, the operator could send a halt or abort command to the autonomy engine.

4.1. Encoding Autonomy

The central tool for the autonomous operation is the Canadian Space Agency's Cortex Toolbox, which is used to implement command scripts and sets of reactive behaviors. Cortex has been developed to ease the development of such behaviors, which rapidly becomes labor intensive even for simple systems when using low-level programming languages. Cortex is based on the Finite State Machine (FSM) formalism, which provides a high-level way of creating, modifying, debugging, and monitoring reactive autonomy engines. Two advantages of this representation are its intuitiveness and the ease with which it can be graphically constructed and monitored by human operators.

In general, FSM's are used to represent a system using a finite number of configurations, called states, defined by the system parameters or its current actions. The system can transition from one state to another based on its current state, conditions, and outside events. Conditions on transitions are often referred to as guards and are implemented as statements that can be evaluated as being either true or false. Outside events, called triggers, make the FSM evaluate its transitions' guards and enable a transition to occur.

The concept of hierarchical FSM's allows a high-level FSM to invoke a lower-level FSM. This provides the capability to implement hierarchical decomposition of a high-level task into a sequence of lower-level tasks. If the FSM is implemented in a modular fashion, it allows for the implementation of libraries that provide the operator with the possibility of reusing an FSM in other applications.

The use of an intuitive graphical representation of FSM's, shown in Figures 4(a) and 4(b), allows the developer and the operator alike to concentrate on the problem to be solved instead of on the programming skills needed to implement the solution. Moreover, the use of hierarchical FSM's can address a wide spectrum of autonomy levels, from sense-and-react behaviors relying on sensor inputs to high-level mission scripts.

FSM's are assembled graphically using states, sub-FSM's, junctions (constructs used to combine transitions), and transitions (Figure 4(a)). The user can add states (blue ellipses) or sub-FSM's (yellow rectangles) from the panel on the left-hand side to the current FSM, shown in the right-hand-side panel, by drag-and-drop. Transitions between states are added by dragging the mouse between two states/FSM's and are represented by arrows. The operator can provide JAVA code snippets for state actions and for the transitions' guard expressions. In addition, he can define the inputs, local variables, and output parameters of each sub-FSM. The user can also assign a priority to each transition to enforce the order in which the guards are tested during execution. It is worth noting that Cortex provides the user with the capability of a periodic trigger that is activated at user-set intervals. Components from other FSM's can also be graphically "cut-and-pasted" into the current FSM to allow for the reuse of existing FSM's.

The simple FSM presented in Figure 4 illustrates a generic behavior. After some initializations, the behavior is executed in the main sub-FSM; in the event of an error condition, "Error," the flow of execution moves to a recovery state. When the error is fixed, Cortex returns to the "Execute" sub-FSM; on the event of successful completion, Cortex moves to the "CleanUp" sub-FSM to tidy up the termination of the behavior and then terminates. Note that, by clearly naming the states, FSM's, and transitions, a new user can easily follow the flow of execution and understand the logic of an existing Cortex engine without knowledge of the low-level details.
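The generic behavior described above can be sketched as a small guarded state machine. The following Python sketch is purely illustrative (Cortex itself is a JAVA-based graphical tool); the state names mirror Figure 4, and the guard callables stand in for Cortex's guard expressions.

```python
# A minimal guarded finite state machine mirroring the generic behavior
# of Figure 4: Init -> Execute -> CleanUp, with an "Error" transition
# into Recovery and back. This is a sketch, not the Cortex API.

class FSM:
    def __init__(self, start, transitions):
        # transitions: {state: [(guard, next_state), ...]} in priority order
        self.state = start
        self.transitions = transitions

    def trigger(self, event):
        """A trigger makes the FSM evaluate its guards in priority order."""
        for guard, next_state in self.transitions.get(self.state, []):
            if guard(event):            # guard: statement that is True/False
                self.state = next_state
                break
        return self.state

fsm = FSM("Init", {
    "Init":     [(lambda e: e == "initialized", "Execute")],
    "Execute":  [(lambda e: e == "Error", "Recovery"),   # higher priority
                 (lambda e: e == "done", "CleanUp")],
    "Recovery": [(lambda e: e == "fixed", "Execute")],
    "CleanUp":  [(lambda e: e == "terminate", "Terminated")],
})

fsm.trigger("initialized")   # Init -> Execute
fsm.trigger("Error")         # Execute -> Recovery
fsm.trigger("fixed")         # Recovery -> Execute
```

A hierarchical FSM would simply replace a state body with another `FSM` instance, which is the decomposition mechanism the text describes.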

Figure 5 illustrates the implementation of the operations sequence described in Section 3 using the Finite State Machine formalism. Each discrete phase of the operation sequence is represented by a state or sub-FSM, as mentioned before. Transitions between phases are represented by arrows linking the states or sub-FSM's. As can be seen in Figure 5, the Cortex engine reverts to an earlier state in the event of a failure at the current state. For example, if during the "CloseRangeApproach" the target satellite drifts out of range, Cortex transitions back to the "ShortRangeRendezvous" state and attempts a second approach. For simplicity, we have omitted all operator-triggered transitions from Figure 5. Such triggers would be present in order to allow the ground operator to take over the process in case of an unforeseen emergency.

In our experimental setup, Cortex was used throughout the process, as the chaser manipulator and the target satellite were only a few meters apart. Transitions between phases of the operation are triggered by sensory events. The Cortex engine considers anomalies such as the possibility of the target spacecraft drifting out of the capture envelope of the manipulator (through translation or rotation); blinding of the vision sensor or loss of sight; reduction of the distance between the manipulator and the target satellite below an acceptable, safe limit; or failed capture.

Figure 4. The Cortex GUI: (a) FSM editing environment. The user can drag and drop states and sub-FSM's from the left panel to the current FSM and then add transitions. Ellipses (blue) represent states, rectangles (yellow) represent sub-FSM's, and arrows represent transitions from one state/FSM to another. (b) FSM monitoring environment. The user can activate manual or periodic triggers in the left panel and monitor the flow of the autonomy engine in the panel on the right.

Figure 6 shows a Cortex implementation of the later phases of a typical OOS scenario after an autonomous far rendezvous has been performed. The operator has overriding control: pausing the process using a soft-stop and then restarting it, or completely terminating the process by using a hard-stop. The actual capture sequence is itself a state machine of three stages: approach, align, and capture. We have used the same sub-FSM "SoftStop" to handle possible error conditions, such as vision failures or satellite misalignment, and operator interruptions.

Figure 5. A high-level plan for an OOS mission, implemented using the Cortex framework.

Figure 6. The top-level Cortex diagram of the autonomous-capture scenario.


In all cases, the chaser arm moves back to a safe position and resets itself with respect to the target satellite; then the chaser waits for the LCS to reestablish contact and to start tracking the target satellite. This procedure is independent of the cause of the interruption. In contrast, the "HardStop" case indicates a more drastic failure; thus, there will be no further attempt to capture the satellite.

The Cortex toolbox has been applied successfully to encode autonomy behaviors for planetary exploration tasks at CSA. It is also part of a larger suite of applications named ARGO, which includes remote operation-monitoring software and Ground Control Station capabilities; see (Dupuis et al., 2006) for more information. Next, we briefly discuss the Ground Control Station features used in our experiments.

4.2. Operations and Monitoring

One very important aspect of space robotics is the capability to remotely monitor and operate a device. In our architecture, the Ground Control Station (GCS) is used first to develop, as well as monitor, the command scripts and autonomous behaviors using the Cortex Toolbox. In this context, the GCS is connected to a simulator of the remote system, which includes the dynamics of the two satellites, the robot arm, and the vision system. The simulator is then used to validate the command scripts and the autonomous behaviors associated with each phase of the mission. Depending on the predictability of the parameters triggering transitions in the Cortex engine, some phases can be simulated in a very deterministic manner. For example, the manipulation of ORU's is performed in a very controlled and static environment, as the target satellite itself is subject to very little uncertainty. On the other hand, other phases of the mission, such as the capture of the target satellite, will be subject to unpredictable factors such as the initial conditions on the satellite attitude. The formal validation of the portions of the command scripts and autonomous behaviors associated with these events is still an open question; in practice, however, validation is achieved by using a broader range of parameters to systematically verify the plausible outcomes of the mission.

Once the verification is completed, the GCS is reconfigured to connect to the real system. The validated Cortex behaviors and scripts are then uploaded to the system for execution. The synergy of the Cortex autonomy engine and the GCS allows for operator intervention at different phases of an operation without hindering the autonomous nature of the operation.

In our experiments, we used the GCS, shown in Figure 7, to remotely monitor and command the two-arm system from various remote cities, including one on a different continent (trans-Atlantic), over the Internet. Due to the autonomous nature of the experiment, the operator gave only high-level commands. More importantly, the operator was able to monitor the safe execution of the experiments and, for demonstration purposes, to halt and restart the experiment as if he had detected a fault.

5. TRAJECTORY PLANNING

This section presents the algorithms developed to generate the motion for both the target and the chaser satellites. In our scenario, we assume that the target satellite moves at a speed slower than the chaser arm's velocity limits. With this assumption, a strategy involving a progressive reduction of the distance between the target and the chaser is acceptable. This implementation is presented in Section 5.2, after the generation of the target satellite motion is presented in Section 5.1. Finally, in Section 5.3, the redundancy-resolution scheme implemented to realize a desired motion of the end-effector in an optimal manner is outlined.

5.1. Target Satellite

Many satellites use momentum wheels to stabilize and control their attitude. When the attitude-control system fails, the angular momentum stored in the wheels is transferred, through friction, to the satellite body over time. This momentum transfer causes the satellite to tumble. At the same time, the satellite is under the action of small external, nonconservative moments, which make the tumbling motion occur mostly about the satellite's major principal axis (Kaplan, 1976), i.e., the axis about which the satellite's moment of inertia is maximum.

Therefore, to mimic the tumbling motion of a satellite, we assume that the satellite is initially rotating about an axis very close to its major principal axis. Then, to create the trajectory, we solve the Euler equations assuming no external moments. It should be noted that, even though there are dissipative moments acting on the satellite, they would not have any significant effect over the short period of capture, and the angular momentum of the satellite would be conserved. The ensuing motion of the target satellite can be characterized as a major rotation about a spin axis with small precession and nutation (Goldstein, 1980). This is similar to the motion described in (Nagamatsu, Kubota & Nakatani, 1996).
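Under these assumptions, generating the angular-rate history amounts to integrating the torque-free Euler equations in the principal body frame. The sketch below is a minimal illustration, not the authors' trajectory generator; the inertia values, initial rates, and integration step are assumed for the example.

```python
import numpy as np

# Torque-free Euler equations in the principal body frame:
#   I1*w1' = (I2 - I3)*w2*w3, plus cyclic permutations.
# Inertia values and initial rates below are illustrative only.
I = np.array([12.0, 10.0, 6.0])    # kg m^2; I[0] is the major axis

def euler_rates(w):
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

# Spin mostly about the major principal axis, with small transverse
# components that produce the precession and nutation of the tumble.
w = np.array([0.10, 0.01, 0.005])  # rad/s
dt, T = 0.01, 120.0                # 120 s of motion, as in the experiments
H0 = np.linalg.norm(I * w)         # angular-momentum magnitude

for _ in range(int(T / dt)):       # midpoint (RK2) integration
    k1 = euler_rates(w)
    w = w + dt * euler_rates(w + 0.5 * dt * k1)

# With no external moments the angular momentum is conserved, so the
# relative drift of |I*w| measures only the integration error.
drift = abs(np.linalg.norm(I * w) - H0) / H0
```

Because the initial spin is nearly aligned with the major axis, the motion stays a dominant rotation about that axis with small transverse oscillations, as the text describes.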

In order to make the problem of tracking the moving satellite's handle more interesting, and to be able to simulate capture for larger satellites, we imposed an elliptic translational motion on the center of the satellite mock-up. Our mock-up is a two-thirds-scale model of a Canadian micro-satellite called QuickSat.

By varying the initial conditions of the Euler equations, we have generated different sets of possible trajectories for the target satellite. These scenarios start from simple slow motions but gradually grow to faster motions with relatively larger nutation angles and precession rates. A sample trajectory is shown in Figure 8.

Since physical restrictions prevent the robot joints from rotating indefinitely, we could only generate a continuous momentum-conserving motion for 120 s. Therefore, to continue the motion, the motion of the satellite mock-up is slowed down toward the end and reversed. By doing so, we can move the satellite as long as we want without running into the joint limits or the robot singularities.

Figure 7. A snapshot of the Ground Control Station.


5.2. Chaser Manipulator

This section describes how the trajectory of the chaser manipulator is generated. The Laser Camera System (LCS), shown in Figure 9, and the CAPE software (Collision Avoidance and Pose Estimation), both from Neptec (Ruel, English, Anctil & Church, 2005), are used to generate the pose (position and orientation) of the target satellite with respect to the LCS frame.

The LCS sensor is particularly suited for space applications, as it is immune to harsh and/or changing lighting conditions (Ruel et al., 2005). The LCS is capable of handling solar interference; it was successfully flown on the Space Shuttle Discovery (STS-105) in 2001 and has been part of all Space Shuttle missions since the return to flight, where it is used to inspect the tiles on the shuttle's underside. The range data from the LCS sensor are processed using Neptec's proprietary CAPE software, and the pose of the satellite is obtained. The pose-estimation method is model based, using a CAD model of the target satellite and a modified version of the Iterative Closest Point algorithm. For more information, see (Ruel et al., 2005).

The pose is calculated at about 2 Hz, with a delay of 0.5 s on the pose of the object at a given instant. The location of the LCS sensor with respect to the manipulator inertial frame was calculated using the kinematic model of the robot arm holding the target satellite. Consequently, the actual pose of the target satellite in the inertial frame could readily be calculated.

An extended Kalman filter is used to filter the LCS raw measurements and to provide a smoothed pose of the target satellite every millisecond. The Kalman filter also includes a predictor to estimate the pose of the target satellite 0.5 s forward in time. This predictor is used to compensate for the 0.5 s delay in obtaining the pose using the LCS sensor.
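A minimal sketch of this filter-and-predict idea, for a single coordinate with a constant-velocity model, is shown below. The actual system runs an extended Kalman filter on the full 6-DOF pose; the model, loop length, and noise covariances here are illustrative assumptions only.

```python
import numpy as np

# One-axis constant-velocity Kalman filter with a 0.5 s forward
# predictor, sketching how the delayed 2 Hz LCS poses can be turned
# into a delay-compensated tracking reference.
dt_meas = 0.5                      # LCS update period (2 Hz)
F = np.array([[1.0, dt_meas],      # constant-velocity model,
              [0.0, 1.0]])         # state: [position, velocity]
H = np.array([[1.0, 0.0]])         # only position is measured
Q = 1e-4 * np.eye(2)               # process-noise covariance (assumed)
R = np.array([[1e-3]])             # measurement-noise covariance (assumed)

x = np.zeros((2, 1))               # state estimate
P = np.eye(2)                      # estimate covariance

def kf_update(z):
    """Predict over one LCS interval, then correct with measurement z."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

def predict_ahead(t_ahead=0.5):
    """Extrapolate the filtered state forward to cancel the LCS delay."""
    return float(x[0, 0] + t_ahead * x[1, 0])

# Feed noiseless measurements of a target drifting at 0.02 m/s.
for k in range(40):
    kf_update(0.02 * dt_meas * k)
```

After the filter converges, `predict_ahead()` returns the position extrapolated 0.5 s forward, which is what the controller tracks instead of the stale LCS measurement.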

Based on the filtered poses of the capture frame F_C, attached to the capture handle, and of the tool frame F_T, attached to the grasping device (see Figure 10), a Cartesian velocity command is generated in the tool frame F_T of the manipulator to progressively reduce the distance between them. Generating the command in the tool frame provided the opportunity to independently activate and assign each degree of freedom to a different task. In the experiments, the activation of the tasks is based on the distance between the grasping device and the target satellite. Each of these tasks is described in the following subsections and generates a part of the desired end-effector twist:

    t_desired = [ṙ_desired; ω_desired] = [v_x, v_y, v_z, ω_x, ω_y, ω_z]ᵀ.  (1)

At the completion of these tasks, with all the corresponding controllers active, the chaser manipulator is positioned very close to the target, and the algorithm switches to realize the capture.

Figure 8. The Cartesian trajectory of the target satellite.

Figure 9. The Laser Camera System from Neptec.

Figure 10. CAD models of QuickSat and SARAH showing the reference frames.

5.2.1. Tracking

The first task is tracking the target. If a camera is mounted on the end-effector of the chaser manipulator, the goal of this task is to position the visual features in the center of the camera field of view. The pan-tilt motion of the manipulator uses two rotational degrees of freedom and is generated using

    ω_y = -k_Ry tan⁻¹(r_z/r_x),  (2)

    ω_z = -k_Rz tan⁻¹(r_y/r_x),  (3)

where k_Ry and k_Rz are the control gains for their respective axes, and r_x, r_y, and r_z are the components of (r_T/C)_T, namely, the position of the capture frame F_C with respect to the tool frame F_T. This vector can be calculated as

    (r_T/C)_T = R_Tᵀ ((r_C)_w - (r_T)_w),  (4)

where the position of the capture frame r_C is the output of the Kalman filter discussed above, and r_T and R_T are calculated from the kinematic model of the chaser manipulator. These quantities are all expressed in the base frame F_w.
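Equations (2)-(4) combine into only a few lines of code. The sketch below assumes an identity tool orientation and illustrative gain values; `tracking_rates` is a hypothetical helper, not part of the authors' software.

```python
import numpy as np
from math import atan

k_Ry, k_Rz = 0.5, 0.5              # control gains (assumed values)

def tracking_rates(r_C_w, r_T_w, R_T):
    """Eq. (4), then Eqs. (2)-(3): pan-tilt rates from relative position."""
    rx, ry, rz = R_T.T @ (r_C_w - r_T_w)   # (r_T/C)_T in the tool frame
    omega_y = -k_Ry * atan(rz / rx)        # Eq. (2)
    omega_z = -k_Rz * atan(ry / rx)        # Eq. (3)
    return omega_y, omega_z

# Target 2 m ahead of the tool along x, offset 0.1 m in y and z;
# the tool frame is taken aligned with the base frame for simplicity.
wy, wz = tracking_rates(np.array([2.0, 0.1, 0.1]), np.zeros(3), np.eye(3))
# Both rates are negative, rotating the tool toward the offset target.
```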

5.2.2. Initial Approach

When the task of tracking is initiated, the manipulator end-effector orients itself toward the target. Based on that assumption, the desired distance between the grasping device and the target is controlled by the translational degrees of freedom, which move the end-effector forward and backward. The command is computed from

    ṙ = K R_Tᵀ R_C ((r_T)_C - (r_des)_C),  (5)

where R_T and R_C are, respectively, the rotation matrices representing the orientation of the tool frame F_T and the capture frame F_C with respect to the base frame F_w; K is the control-gain matrix defined as K = diag(k_x, k_y, k_z); (r_T)_C is the position of the tool frame expressed in the capture frame; and, finally, (r_des)_C is the desired tool-frame position relative to the capture frame.

5.2.3. Translation Alignment

In order to avoid undesirably large end-effector velocities, the grasping device is aligned perpendicular to the target only when the two are close to each other. This task uses the two remaining translational degrees of freedom. Equation (5) shows how the translational velocity command is generated.

In Eq. (5), the gains k_x, k_y, and k_z are not used simultaneously. At first, the approach gain (k_x) is activated; then, when the distance becomes small, the alignment gains (k_y and k_z) are activated. When the three tasks are active, the grasping device is positioned directly in front of the target. Depending on the grasping device used for the capture, a final roll alignment may be required.
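This staged activation can be sketched as a distance-gated gain matrix applied to Eq. (5). The gains and the 0.5 m switching threshold below are assumed for illustration; the paper does not specify the actual values.

```python
import numpy as np

def gain_matrix(distance, kx=0.4, ky=0.4, kz=0.4, align_range=0.5):
    """Approach gain only at long range; lateral gains activate when close.
    All gains and the 0.5 m threshold are assumed values."""
    if distance > align_range:
        return np.diag([kx, 0.0, 0.0])
    return np.diag([kx, ky, kz])

def translational_command(r_T_in_C, r_des_in_C, R_T, R_C, distance):
    """Eq. (5): r_dot = K R_T^T R_C ((r_T)_C - (r_des)_C)."""
    K = gain_matrix(distance)
    return K @ R_T.T @ R_C @ (r_T_in_C - r_des_in_C)

far = translational_command(np.array([2.0, 0.2, 0.2]), np.zeros(3),
                            np.eye(3), np.eye(3), distance=2.0)
near = translational_command(np.array([0.3, 0.2, 0.2]), np.zeros(3),
                             np.eye(3), np.eye(3), distance=0.3)
# far commands motion along x only; near also commands lateral alignment.
```

Gating the lateral gains this way is what keeps the commanded end-effector velocities small until the grasping device is already close to the target.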

5.2.4. Roll Alignment

In our experiment, the SARAH hand is used to grasp a cylindrical handle. The last task is to align the hand so that the handle fits between the fingers of the hand. To that end, the remaining rotational degree of freedom is used:

    ω_x = -k_Rx θ_x,T/C,  (6)

where k_Rx is the control gain and θ_x,T/C is the orientation of the capture frame F_C about the x axis of the tool frame F_T.


5.2.5. Capture

With all tasks activated, one can achieve the capture by adjusting the desired relative pose, (r_des)_C, to place the handle in the middle of the hand and then closing the SARAH hand fingers.

5.3. Redundancy Resolution

Both arms of the CART test-bed used in our experiments are kinematically redundant serial manipulators with 7 degrees of freedom. If the redundancy is not resolved in an optimal manner, the manipulators will not use their workspace efficiently, i.e., the joints will often reach their limits. In the present section, we present the algorithms implemented for the control of the arms to ensure that they always generate the desired end-effector motion in an optimal manner. This is of particular importance for the creation of a realistic motion emulating a tumbling satellite.

For this work, the kinematic redundancy-resolution scheme used is based on the Gradient Projection Method (GPM) described in (Nakamura, 1991). This method relies on the evaluation of a cost function representing a performance criterion as a function of the joint positions. The gradient of this cost function, projected onto the null space of the main-task Jacobian, is used to produce the motion necessary to minimize the specified cost function.

The velocity command is computed using the following equation:

    q̇ = J# ẋ + (1 - J# J)(-k_SM ∂p(q)/∂q),  (7)

where ẋ is the m-dimensional end-effector task-velocity vector, q̇ is the n-dimensional joint-velocity vector, J is the Jacobian matrix, k_SM is a control gain that must be tuned, J# is the Moore-Penrose inverse of the Jacobian matrix, and 1 is the n×n identity matrix. The first term in Eq. (7) is the minimum-norm solution, and it only affects the end-effector task, while the second term is the gradient projection, which controls the self-motion of the manipulator without affecting the end-effector task ẋ.
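Eq. (7) can be implemented in a few lines with a pseudoinverse and a null-space projector. The sketch below uses a random full-rank Jacobian to illustrate the key property: the projected gradient term does not disturb the end-effector task.

```python
import numpy as np

def gpm_velocity(J, x_dot, grad_p, k_sm=1.0):
    """Eq. (7): q_dot = J# x_dot + (1 - J# J)(-k_sm * grad_p)."""
    J_pinv = np.linalg.pinv(J)             # Moore-Penrose inverse J#
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + null_proj @ (-k_sm * grad_p)

# A 6x7 Jacobian stands in for one of the redundant 7-DOF CART arms.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
x_dot = rng.standard_normal(6)             # commanded end-effector twist
grad_p = rng.standard_normal(7)            # gradient of p(q) = m(q) w(q)
q_dot = gpm_velocity(J, x_dot, grad_p)
# The null-space term moves the joints (self-motion) without disturbing
# the task: J @ q_dot still equals x_dot.
```

In the real controller, `grad_p` would be the gradient of the cost function p(q) defined below, evaluated at the current joint positions.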

The cost function p(q), defined as

    p(q) = m(q) w(q),  (8)

is the product of two independent cost functions used to avoid singularities and joint limits, as defined next in Eqs. (9) and (10).

The cost function used to avoid joint limits is based on the work of (Nelson & Khosla, 1993) and is computed as

    w(q) = 1 - exp(-k_jt Π_{i=1}^{n} (q_i - q_i,min)(q_i,max - q_i) / (q_i,max - q_i,min)²),  (9)

where q_i,min and q_i,max are the limits for joint i, and k_jt is a parameter controlling the shape of the cost function. With this equation, the value of w(q) is close to one when far from the joint limits, and it equals zero at the joint limits.

To measure the distance from a singularity, the most commonly used measure is the manipulability index defined by (Yoshikawa, 1985), which can be computed as the product of the singular values σ_i of the Jacobian J:

    m(q) = Π_{i=1}^{n} σ_i.  (10)
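The two factors of p(q) can be sketched directly from Eqs. (9) and (10). The joint limits and the shaping parameter k_jt below are assumed values; note that k_jt must be large when the product in Eq. (9) spans many joints, since the product of seven mid-range terms is itself already small.

```python
import numpy as np

def joint_limit_cost(q, q_min, q_max, k_jt=1e6):
    """Eq. (9): close to 1 far from the limits, exactly 0 at a limit.
    k_jt is an assumed shaping value."""
    terms = (q - q_min) * (q_max - q) / (q_max - q_min) ** 2
    return 1.0 - np.exp(-k_jt * np.prod(terms))

def manipulability(J):
    """Eq. (10): product of the singular values of the Jacobian."""
    return float(np.prod(np.linalg.svd(J, compute_uv=False)))

q_min = np.full(7, -2.5)                   # illustrative joint limits (rad)
q_max = np.full(7, 2.5)
w_mid = joint_limit_cost(np.zeros(7), q_min, q_max)          # mid-range
w_lim = joint_limit_cost(np.array([2.5, 0, 0, 0, 0, 0, 0]),
                         q_min, q_max)                       # at a limit
# w_mid is close to 1, while w_lim is exactly 0; maximizing
# p(q) = m(q) * w(q) therefore pushes the joints away from both
# their limits and singular configurations.
```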

Using this redundancy-resolution algorithm, the performance of both CART manipulators was greatly improved. This improvement is illustrated in Section 6 with experimental results.

6. EXPERIMENTAL RESULTS

A variety of experiments were performed in our laboratory. Figure 11 shows a sequence of images from one of the autonomous capture scenarios performed. In Figures 11(a)-11(c), the chaser manipulator approaches the target while maintaining a constant orientation; the thumb of the hand is at the bottom. When the target is close enough, the chaser manipulator adjusts the hand orientation to match that of the handle on the target. This can be seen in the next three pictures, Figures 11(d)-11(f), where the SARAH hand is aligned with the handle and follows it. Finally, Figure 12(a) shows the capture of the mock-up satellite by the chaser. Figures 12(b) and 12(c) display the views of the satellite from the LCS point of view (POV) as well as the attempted match between the LCS data and the CAD model. As can be seen in Figure 12(c), the LCS scan pattern (a rosette) provides very sparse data, which is nonetheless enough to provide reliable tracking, even when the view of the target satellite is occluded by the robotic arm performing the capture.

Moreover, as mentioned in Section 4, the experiments were initiated and monitored remotely over the Internet from a variety of locations; some experiments were performed from Munich, Germany, and from Denver, CO, in the USA. The "Capture" command was sent from these remote locations; the actual low-level commands to the robot were generated locally in Montreal, Canada, by the Cortex Autonomy Engine described in Section 4, using the scenario of Figure 6. The remote operator had access to a Cortex and a GCS window, as well as to visual feeds from two cameras. The bandwidth requirements, though not particularly high, proved to be an obstacle in one instance, especially due to the presence of tight security and firewalls. In all the experiments, the chaser manipulator first approached the target and then followed the target satellite while maintaining a safe, constant distance. In the end, the command was issued for the arm to perform the final approach and to grasp the handle mounted on the mock-up satellite using the SARAH hand.

Figure 11. (a)-(e) The chaser satellite approaching the target satellite and tracking its motion. (f) The hand in place to perform the capture.

The use of an active vision system, such as the LCS, eliminates the errors due to illumination variations. In particular, the LCS was robust with respect to different lighting conditions. Different trajectories were tested for the target satellite. The combination of the target satellite, the version of the LCS, and the software used imposed some limitations on the speed of the target satellite for robust tracking. In particular, there were some configurations where the tracking failed at higher speeds.

In order to make the experiment more challenging, we sometimes placed an object between the LCS and the target satellite. The LCS reported the loss of the target to the Cortex Autonomy Engine, and Cortex stopped the chaser arm and then guided it away to a safe pose. This "artificial" error condition was introduced both during the tracking phase and during the final capture phase.

Some of the tracking experimental data are plotted in Figure 13, where the trajectories of the x, y, and z coordinates of a point 15 cm in front of the center of the capture handle are shown. The dotted (black) lines show the tracking position calculated from the raw data provided by the LCS. The solid (blue) lines are the outputs of the Kalman filter, namely, the desired tracking position. Lastly, the dash-dotted (red) lines show the actual coordinates of the end-effector of the chaser, which was commanded to track the desired tracking position.

The effect of the Gradient Projection Method is illustrated in Figure 14. This figure shows the joint angles q1 and q7, respectively the base and the wrist-roll joint angles, computed when the tumbling-satellite trajectory described in Section 5.1 is commanded to the end-effector. The solid line represents the joint-space trajectory of the robot when the GPM is activated, and the dashed line is the trajectory with the GPM deactivated. The resulting joint motion is clearly different depending on whether or not the GPM solution is added to the minimum-norm solution. In the case where the GPM is not used, the minimum-norm solution results in joint angles approaching their limits. In particular, joint angles q2 and q6 reach their limit values in approximately 40 s, thus triggering the manipulator's stop.

Figure 12. (a) The SARAH hand just before grasping the target satellite; (b) the target satellite (LCS POV) at a stable position; and (c) the match between the LCS data and the CAD model for the satellite shown in (b).


7. CONCLUSIONS

In this paper, we describe a framework for the autonomous capture and servicing of satellites. A typical on-orbit servicing scenario was tested in the laboratory to validate some portions of the autonomy and vision-based servoing aspects. The task selected for testing was the autonomous capture of a tumbling satellite, which is one of the most challenging tasks to be performed in any OOS mission.

The experiments were run on the CSA's CART test-bed, which has two 7-DOF manipulators. The target tracking was performed using the Laser Camera System developed by Neptec, Inc. The autonomy was implemented using the CSA's Cortex autonomy engine. The results of the experiments showed successful capture of the satellite mock-up with minimal operator intervention. Trans-Atlantic tests were conducted over the Internet with limited bandwidth and occasional communication drop-outs.

Future extensions of this work include the development of a secondary vision system to increase the operational robustness, together with ongoing improvements in the visual tracking software. The development of different trajectories for the target satellite, including random motions, would help us study the capture scenario for an erratically moving satellite. Testing the capture scenario with different amounts of noise inserted in the tracking data is also part of our future work.

The successful use of the Cortex autonomy engine in different space robotic scenarios, such as planetary exploration and OOS, has motivated the development of a second version of Cortex. The new version, currently under testing, will be fully integrated with the Eclipse Java development environment, following the object-oriented paradigm. One of the new features is the concept of FSM inheritance, which provides the ability to inherit from an FSM that implements a behavior and encode only some variation. Furthermore, new features such as parallel FSM's operating concurrently and greater modularity will also be implemented.

Figure 13. Experimental results: tracking of the capture handle.

During our experiments, we successfully demonstrated the development of key features necessary for an OOS mission. Features such as visual tracking robust to lighting variations, long-distance monitoring and managing of missions, and autonomous capture and release of tumbling satellites will be at the core of any on-orbit robotic mission of the future.

REFERENCES

Alami, R., Chatila, R., Fleury, S., Ghallab, M., & Ingrand, F. (1998). An architecture for autonomy. International Journal of Robotics Research, 17(4), 315-337.

Boge, T., & Schreutelkamp, E. (2002). A new commanding and control environment for rendezvous and docking simulations at the EPOS facility. In Proc. 7th Int. Workshop on Simulation for European Space Programmes (SESP), Noordwijk, The Netherlands.

Bosse, A.B., Barnds, W.J., Brown, M.A., Creamer, N.G.,Feerst, A., Henshaw, C.G., Hope, A.S., Kelm, B.E.,Klein, P.A., Pipitone, F., Plourde, B.E., & Whalen, B.P.�2004�. SUMO: Spacecraft for the universal modifica-tion of orbits. In Proc. SPlE, Spacecraft Platforms andInfrastructure, Vol. 5419, pp. 36–46.

Brunner, B., Landzettel, K., Schreiber, G., Steinmetz, B., &Hirzinger, G. �1999�. A universal task-level groundcontrol and programming system for space robotapplications—the marco concept and its applicationsto the ets-vii project. In Proc. Fifth International Sym-posium on Artificial Intelligence, Robotics and Auto-mation in Space �iSAIRAS 99�, pp. 217–224, ESTEC,Noordwijk, The Netherlands.

Didot, F., Oort, M., Kouwen, J., & Verzijden, P. �2001�. TheERA system: Control architecture and performance re-sults. In Proc. 6th Int. Symposium on Artificial lntelli-gence Robotics and Automation in Space �i-SAIRAS�,Montreal, Canada.

Dupuis, E., & Gillet, R. �2002�. Remote operation with su-pervised autonomy. In 7th ESA Workshop onAvanced Space Technologies in Robotics and Automa-tion �ASTRA 2002�, Noordwijk, The Netherlands.

Dupuis, E., Gillett, G.R., Boulanger, P., Edwards, E., & Lipsett, M.G. (1999). Interactive, intelligent remote operations: Application to space robotics. In Proceedings of the SPIE Telemanipulator and Telepresence Technologies VI, Boston, MA.

Figure 14. Joint angles during the tumbling satellite trajectory, with and without the GPM.

Dupuis, E., Gillett, R., L’Archevêque, R., & Richmond, J. (2001). Ground control of International Space Station robots. Machine Intelligence and Robotic Control, 3(3), 91–98.

Dupuis, E., L’Archevêque, R., Allard, P., Rekleitis, I., & Martin, E. (2006). Intelligent space robotics, chapter: A framework for autonomous space robotics operations (pp. 217–234). San Antonio, TX: TSI Press.

Fehse, W. (2003). Automated rendezvous and docking of spacecraft. Cambridge Aerospace Series (No. 16). Cambridge: Cambridge University Press.

Fontaine, B., Steinicke, L., & Visentin, G. (1996). A reusable and flexible control station for preparing and executing robotics missions in space. In International Symposium on Space Mission Operations & Ground Data Systems (SpaceOps 96), Munich, Germany.

Gennery, D.B. (1992). Visual tracking of known three-dimensional objects. International Journal of Computer Vision, 7(3), 243–270.

Goldstein, H. (1980). Classical mechanics, 2nd ed. Reading, MA: Addison-Wesley.

Grossman, E., & Costa, K. (2003). Small, experimental satellite may offer more than meets the eye. Inside The Pentagon.

Hirzinger, G., Brunner, B., Dietrich, J., & Heindl, J. (1993). Sensor-based space robotics-ROTEX and its telerobotic features. IEEE Transactions on Robotics and Automation, 9(5), 649–663.

Hoffman, B.H., Baumgartner, E.T., Huntsberger, T.A., & Schenker, P.S. (1999). Improved state estimation in challenging terrain. Autonomous Robots, 6(2), 113–130.

Huntsberger, T., Pirjanian, P., Trebi-Ollennu, A., Nayar, H.D., Aghazarian, H., Ganino, A., Garrett, M., Joshi, S., & Schenker, P. (2003). CAMPOUT: A control architecture for tightly coupled coordination of multirobot systems for planetary surface exploration. IEEE Transactions on Systems, Man and Cybernetics, Part A, 33(5), 550–559.

Inaba, N., & Oda, M. (2000). Autonomous satellite capture by a space robot: World first on-orbit experiment on Japanese robot satellite ETS-VII. In Proc. IEEE International Conference on Robotics and Automation (ICRA '00), Vol. 2, pp. 1169–1174, San Francisco, CA.

Kaplan, M.H. (1976). Modern spacecraft dynamics and control. New York: Wiley.

Kasai, T., Oda, M., & Suzuki, T. (1999). Results of the ETS-7 mission, rendezvous docking and space robotics experiment. In Proc. 5th Int. Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Noordwijk, The Netherlands.

Kawasaki, O., Imada, T., Yamanaka, K., & Tanaka, T. (2000). On-orbit demonstration and operations plan of the H-II Transfer Vehicle (HTV). In The 51st Int. Astronautical Congress, Rio de Janeiro, Brazil.

King, D. (2004). Saving Hubble. In The 55th Int. Astronautical Congress, Vancouver, Canada.

Laliberté, T., Birglen, L., & Gosselin, C. (2002). Underactuation in robotic grasping hands. Japanese Journal of Machine Intelligence and Robotic Control, Special Issue on Underactuated Robots, 4(3), 77–87.

Lewis, J. (2004). Space weapons in the 2005 US defense budget request. In Workshop on Outer Space and Security, Geneva, Switzerland.

Liang, B., Li, C., Xue, L., & Qiang, W. (2006). A Chinese small intelligent space robotic system for on-orbit servicing. In Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp. 4602–4607, Beijing, China.

Martin, E., Dupuis, E., Piedboeuf, J.-C., & Doyon, M. (2005). The TECSAS mission from a Canadian perspective. In Proc. 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Munich, Germany.

Maxwell, S., Cooper, B., Hartman, F., Wright, J., & Yen, J. (2004). The design and architecture of the Rover Sequencing and Visualization Program (RSVP). In Proceedings of the 8th International Conference on Space Operations, Montreal, Canada.

Nagamatsu, H., Kubota, T., & Nakatani, I. (1996). Capture strategy for retrieval of a tumbling satellite by a space robotic manipulator. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), pp. 70–75, Minneapolis, MN.

Nakamura, Y. (1991). Advanced robotics: Redundancy and optimization. Reading, MA: Addison-Wesley.

Nelson, B., & Khosla, P. (1993). Increasing the tracking region of an eye-in-hand system by singularity and joint limit avoidance. In Proc. IEEE Int. Conf. Robotics and Automation, Vol. 3, pp. 418–423.

Nesnas, I., Wright, A., Bajracharya, M., Simmons, R., & Estlin, T. (2003). CLARAty and challenges of developing interoperable robotic software. In International Conference on Intelligent Robots and Systems (IROS), pp. 2428–2435, Las Vegas, NV.

Potter, S. (2002). Orbital Express: Leading the way to a new space architecture. In Proc. Space Core Tech Conference, Colorado Springs, CO.

Ruel, S., English, C., Anctil, M., & Church, P. (2005). 3DLASSO: Real-time pose estimation from 3D data for autonomous satellite servicing. In Proceedings of the 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Munich, Germany.

Rumford, T.E. (2003). Demonstration of autonomous rendezvous technology (DART) project summary. Proc. SPIE, 5088, 10–19.

Sato, N., & Doi, S. (2000). JEM Remote Manipulator System (JEMRMS) human-in-the-loop test. In Proc. 22nd Int. Symp. Space Technology and Science, Morioka, Japan.

Sommer, B. (2004). Automation and robotics in the German space program: Unmanned on-orbit servicing (OOS) & the TECSAS mission. In Proc. 55th Int. Astronautical Congress, Vancouver, Canada.

Stieber, M., Sachdev, S., & Lymer, J. (2000). Robotics architecture of the Mobile Servicing System for the International Space Station. In Proc. 31st Int. Symp. on Robotics (ISR), Montreal, Quebec.

Volpe, R., Nesnas, I., Estlin, T., Mutz, D., Petras, R., & Das, H. (2001). The CLARAty architecture for robotic autonomy. In Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, MT.

Whelan, D., Adler, E., Wilson, S., & Roesler, G. (2000). DARPA Orbital Express program: Effecting a revolution in space-based systems. In Proc. SPIE, Small Payloads in Space, Vol. 4136, pp. 48–56.

Wilcox, B., Tso, K., Litwin, T., Hayati, S., & Bon, B. (1989). Autonomous sensor-based dual-arm satellite grappling. In NASA Conference on Space Telerobotics, Pasadena, CA.

Wingo, D.R. (2004). Orbital Recovery's responsive commercial space tug for life extension missions. In Proc. AIAA 2nd Responsive Space Conference, Los Angeles, CA.

Yoshida, K. (2003). Engineering Test Satellite VII flight experiments for space robot dynamics and control: Theories on laboratory test beds ten years ago, now in orbit. The International Journal of Robotics Research, 22(5), 321–335.

Yoshida, K. (2004). Contact dynamics and control strategy based on impedance matching for robotic capture of a non-cooperative satellite. In Proc. 15th CISM-IFToMM Symp. on Robot Design, Dynamics and Control (RoManSy), St-Hubert, Canada.

Yoshida, K., Nakanishi, H., Ueno, H., Inaba, N., Nishimaki, T., & Oda, M. (2004). Dynamics, control, and impedance matching for robotic capture of a non-cooperative satellite. RSJ Advanced Robotics, 18(2), 175–198.

Yoshikawa, T. (1985). Manipulability of robotic mechanisms. Int. J. of Robotics Research, 4(5), 3–9.
