

6th Responsive Space Conference April 28–May 1, 2008

Los Angeles, CA

ORS Mission Utility and Measures of Effectiveness James R. Wertz Microcosm, Inc.

6th Responsive Space Conference RS6-2008-1003


ORS Mission Utility and Measures of Effectiveness*

James R. Wertz, Microcosm, Inc.†

Abstract

This paper briefly summarizes the traditional space system mission utility analysis process in order to establish a framework for utility analysis for Operationally Responsive Space (ORS). We then define both general and specific ORS Measures of Effectiveness (MoEs) in the following broad categories:

• Cost

• Responsiveness

• Coverage

• Data Quality or Quantity

• Risk

• Flexibility

• Goal-Oriented

These are adjusted somewhat with respect to the traditional utility categories of performance, cost, risk, and schedule to reflect the fact that ORS missions are not duplicative of traditional missions, but complementary to them.

ORS mission utility is particularly challenging in part because traditional missions typically have a constant, long-term purpose (e.g., provide 0.25 meter resolution images of any point on the Earth’s surface within 48 hours), whereas ORS missions, by their very nature, are intended to respond to dynamic world events (e.g., provide appropriate coverage of hurricane Katrina or the recent flare-up in Kenya). This makes utility measures inherently more challenging. Nonetheless, demonstrating and quantifying mission utility is key to funding ORS missions in an environment of severely constrained budgets and is critical to the future success of ORS.

1. Introduction to Mission Utility Analysis

This section summarizes the background and need for Mission Utility analysis. For a detailed discussion of this process for traditional space missions see Wertz [1999].

In looking at the ability of a space mission to achieve its requirements and objectives, there are two broad types of measures:

A Performance Assessment measures how well the system meets its quantitative goals and requirements. This is engineering oriented — i.e., what is the maximum resolution or what is the area search rate that can be achieved.

In contrast, Mission Utility is a quantitative expression of how well the system meets its overall mission objectives. It is user oriented — i.e., how many lives are saved, how will it affect the outcome of the battle or war, or what is the average annual savings if this system were to be implemented?

There are two distinct objectives of mission utility analysis:

• Provide feedback for the mission design

• Provide quantitative information for the decision-making community.

Utility analysis addresses the basic questions for decision making, such as whether it is better to develop a particular ORS-Sat or to simply put the money into buying more UAVs.

Although it is equally true of all types of missions, the Warfighter Panel at RS5 taught us an important lesson for ORS: The space system doesn’t matter. What matters is what it can do for the warfighter or the victim. They are not really interested in the “how” of space missions, just as I don’t really care about how far apart cell

___________________

*Copyright, 2008, Microcosm, Inc.

† Microcosm, Inc., 4940 West 147th Street, Hawthorne, CA 90250-6708. Email: [email protected]


phone towers have to be. If I have cell phone coverage when I need it, I’m happy with the service. Otherwise I’m not. The warfighter doesn’t really need another piece of equipment that will serve mostly as something to put under the wheels of the truck when it gets stuck.

Ultimately, ORS is not about launch systems or spacecraft. It’s about what we can do to help the warfighter, or the cop in New Orleans, at low cost, by tomorrow morning or, better, this afternoon. That’s what Mission Utility is intended to measure.

2. Mission Utility Analysis for ORS

One of the current elements of the ongoing Operationally Responsive Space (ORS) debate is whether and to what extent Responsive Space systems have sufficient utility to warrant the funding required to implement them. Traditionally, Measures of Effectiveness (MoEs) or Figures of Merit (FoMs) have been used to quantify the performance, which can then be compared to the cost of implementation. There have been some initial attempts to do this for ORS with both confusing and misleading results, as discussed in the next section.

ORS Mission Utility is particularly challenging. Traditional missions typically have a constant, long-term purpose. For example, we may want to provide 0.25 meter resolution imagery anywhere on the Earth within 48 hours. We then set about designing a space system to achieve this.

In contrast, ORS missions are intended to respond to changing world events. For example, we want to provide appropriate coverage of the war in Iraq or to re-establish communications that have been wiped out by a storm in Africa. This makes it difficult to assess Mission Utility. How do we measure the potential usefulness of a system in meeting warfighter or victim needs that result from unknown future events?

While this ambiguity in goals is challenging, it isn’t unique. Ultimately, what is the objective of the Hubble Space Telescope? The real goal is to discover things about the Universe that have never been discovered. This makes it even more challenging than ORS to measure mission utility.

While we may not be able to specify what is needed for ORS missions with precision, we can certainly estimate what types of systems will likely have high utility and what impact they can have on potential outcomes. We do know that unanticipated events will occur. So the real question for ORS is, Will we be as prepared as possible for the next hurricane, tsunami, or biological, chemical, nuclear, or non-nuclear attack on a major world city? The process of demonstrating and quantifying mission utility in these circumstances is key both to funding ORS missions in a severely constrained budget environment and to funding the right ORS missions.

3. Lessons Learned: MoEs that Don’t Work

I was involved some time ago with designing a small constellation of satellites that needed frequent, but not continuous, coverage. We decided to use the Average Gap Duration (AGD) as our primary MoE and created a small program to calculate various gap statistics.

We initially created a 4-satellite constellation that had an AGD of approximately 60 minutes, which we believed was a bit longer than would work well for this application.

We next tried a 6-satellite constellation, which we re-optimized to provide the best coverage possible. Having done this, we ran our statistics program and found that the AGD was now about 90 minutes. It seemed to us unlikely that our customer would be delighted with paying for two more satellites to go from an average gap of 60 minutes to an average gap of 90 minutes!

Our first assumption was that we had made a mistake in our calculations, but we reviewed them with some care and found that they were indeed correct. Here is what had happened:

• With 4 satellites, we had many gaps, some long and some short

• With 6 satellites, the coverage was much better and filled in nearly all of the short gaps — what was left was a small number of long gaps that resulted in the increase in the average gap duration.

What had occurred was not an error in the computations, but a more fundamental error in


our choice of MoE. Consider, for example, a system that has a single gap of 4 hours duration and a second system that has a gap of 4 hours plus 10 gaps of 1 hour each. The second system will have an average gap of only 1.3 hours, but is clearly not 3 times better than the first system.
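The arithmetic behind this example is easy to check. Below is a minimal sketch of the AGD computation, using the two hypothetical gap patterns from the text:

```python
# Illustrative check of the Average Gap Duration (AGD) pitfall described
# above, using the two hypothetical gap patterns from the text.

def average_gap_duration(gaps_hours):
    """Mean duration of the coverage gaps, in hours."""
    return sum(gaps_hours) / len(gaps_hours)

system_a = [4.0]                   # one 4-hour gap
system_b = [4.0] + [1.0] * 10      # a 4-hour gap plus ten 1-hour gaps

print(average_gap_duration(system_a))                 # 4.0 hours
print(round(average_gap_duration(system_b), 1))       # 1.3 hours -- "better" AGD, worse coverage
```

The constellation study above was the same phenomenon: filling in the short gaps raised the average of the gaps that remained.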

MoEs should be, and often are, the basis for making fundamental system decisions. The wrong MoEs can be very misleading and can lead to fundamentally incorrect decisions.

A less blatant, but similar result has occurred for ORS. At RS5, a paper was presented for which the major MoE was chosen as the cost per hour of coverage [Tomme, 2006, 2007]. The basic numerical results were:

• SIGINT $43,000/hour

• Communications $61,000/hour

• Imagery $429,000/hour

These large values led the author to conclude that ORS was too expensive to be worthwhile.

The basic problem with this analysis is that this isn’t the right MoE. Consider the example of a 35 mm camera bought for $100 and used to take one critical photograph of the terrain or enemy positions. At an exposure time of 1/125 sec, the cost of taking that photograph is $45 million/observation hour. Clearly, this is an extravagant waste of taxpayers’ money! The military should never be allowed to buy cameras.
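The camera arithmetic can be reproduced directly:

```python
# Reproducing the cost-per-observation-hour arithmetic from the text:
# a $100 camera, one photograph at a 1/125 second exposure.

camera_cost = 100.0               # dollars
exposure_s = 1.0 / 125.0          # seconds of actual "observation"
observation_hours = exposure_s / 3600.0

cost_per_hour = camera_cost / observation_hours
print(f"${cost_per_hour:,.0f}/observation hour")   # $45,000,000/observation hour
```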

Of course, one might argue that the camera will be used for many more photographs than the one critical photo. We could take thousands of photos and, therefore, reduce the cost per observation hour to some more reasonable number. But, in the end, our goal isn’t to take as many photos as possible, but to get the one photo that we really need. In many respects, the real purpose of ORS is to avoid having to take thousands of photos, and to try to put as few resources as possible into getting the one, or a few, photos that provide the information that the end user really needs.

What matters for ORS isn’t the cost/hour, but the cost of the system and what it lets us achieve that we couldn’t otherwise achieve. Our real question is whether it’s worth $20 million (the cost/mission used in the reference paper) to

• Find Osama bin Laden and pinpoint his location now

• Find victims of a hurricane, tsunami, or enemy ambush in time to rescue them

• Fill in for a destroyed satellite that was watching Chinese military maneuvers on the Tibetan border

• Provide secure supplementary communications for a Special Forces team in central Africa

The goal of ORS is specifically not to blanket the Earth with coverage. It is to get the right data to the right people, when they need it.

4. Selecting MoEs for ORS?

There are four basic requirements for choosing the right MoEs [Wertz, 1999]:

• Clearly related to mission objectives

• Quantifiable (but not necessarily easily)

• Understandable by and meaningful to decision-makers

• Sensitive to the system design

Note that the most important measures of what we want to do — save lives, win battles, reduce losses — cannot be quantified with precision. Nonetheless, we want to be able at least to estimate these values, just as we estimate the lives and property saved in justifying firefighting budgets in any large city.

Traditionally, MoEs fall into 4 broad categories:

• Performance

• Cost

• Risk

• Schedule

In order to find MoEs appropriate for ORS we want to adjust these categories to reflect the goals of ORS that are intended to be complementary, not duplicative, of traditional missions. While there may be many ways to do this, I believe a reasonable breakdown of ORS utility categories is as follows:

• Cost

• Responsiveness

• Coverage

• Data Quality or Quantity


• Risk

• Flexibility

• Goal-Oriented MoEs

As we will see below, the last category is the one that is both the most important and the most difficult to quantify.

5. The ORS MoEs

Clearly, the choice of specific MoEs will depend on the mission and the definitions of some may also be mission-dependent. Nonetheless, I believe the list below includes the most likely MoEs for most ORS missions.

5.1 Cost

ORS missions are typically intended to fulfill a specific function, such as provide imagery, communications, or weather data for a well-defined area or activity. Therefore, mission cost is likely to be more important than the cost per unit. There are two mission costs that are important:

Non-Recurring Engineering (NRE) = Development Cost. This is the full cost of developing the initial system, excluding the actual manufacturing of the first unit. In cost modeling, the cost of building the first unit is called the Theoretical First Unit (TFU) cost.

Recurring Cost per Mission. This is the ongoing cost of building and launching subsequent flights of identical units. It includes:

• Launch

• Spacecraft bus

• Payload

• Operations (typically normalized to 1 year)

When multiple units are built, the cost per unit is normally represented by a learning curve, which is a decreasing exponential representing the idea that building two of anything is less than twice as expensive as building one. (See, for example, Wertz [1999] for formulas for modeling NRE, recurring costs, and learning curves for multiple units.)
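As a sketch of how such a learning curve behaves: the functional form below is the standard cost-modeling convention (the cost of the Nth unit is TFU × N^B, with B = ln(S)/ln(2) for slope S), and the TFU and 90% slope values are hypothetical illustrations, not figures from the paper.

```python
import math

# Standard learning-curve model: cost of the Nth unit = TFU * N**B, where
# B = ln(S)/ln(2) and S is the slope (S = 0.9 means each doubling of
# production cuts unit cost to 90%). Values below are hypothetical.

def unit_cost(tfu, n, slope=0.9):
    b = math.log(slope) / math.log(2)
    return tfu * n ** b

tfu = 10.0  # hypothetical cost of the first unit, in $M
print(round(unit_cost(tfu, 2), 2))   # 9.0 -- second unit costs 90% of the first
print(round(unit_cost(tfu, 4), 2))   # 8.1 -- each doubling applies the slope again
```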

For ORS, we assume that most operational systems will include a non-recurring development phase, then a recurring phase in which multiple units are built and launched, or built and stored for later launch-on-demand. For ORS systems both costs and manufacturing procedures for communications constellations such as ORBCOMM, Iridium, or Globalstar are more relevant than the cost of building one-of-a-kind spacecraft. (See, for example, Molnau, et al. [1999], Burgess [1996], Brunschwyler, et al. [1996], or Champlin [1993].) For a discussion of the cost of maintaining systems for later launch-on-demand, see Wertz [2004].

Two other cost MoEs are potentially relevant, but typically will not be as meaningful:

Cost per Year. This MoE will not be meaningful if the goal is to achieve a particular objective, such as support of a particular military or rescue mission.

Cost per Unit of Performance. In this MoE, “performance” should be one of the goal-oriented MoEs, rather than one of the technical measures, for the reasons discussed in Section 3 above. The cost per data bit is a bad idea in most cases — it’s results we want, not data volume.

5.2 Responsiveness

This is, of course, the fundamental objective of ORS and, therefore, should be a major MoE. The principal distinction here will be between Tier 1 and 2 responsiveness, which are concerned with how quickly we can provide data to the end user, and Tier 3, which is concerned with how rapidly systems can be developed. (For definitions of Tiers 1, 2, and 3, see the DoD ORS Report to Congress [2007].)

Tier 1 and 2 Responsiveness MoEs are discussed in detail with formulas and examples in Wertz [2005]. The most relevant of these MoEs for the mission as a whole are:

Total Response Time (TRT). This is the time from when a new request for data is made until the data is given to the requestor or end user. (For example, an unforecast tsunami hits Southern CA and the LAX Westin. The TRT is the time from when a request for the data needed for rescue is made by someone with the appropriate authority until it is delivered to those who can use it.)


For Tier 2, the TRT is given by:

Total Response Time (Tier 2) = Preparation Time + Weather Delay + Launch Window Delay + Insertion Time + Orbit Response Time + Data Return Time,

where the various terms are defined by Wertz [2005]. For Tier 1, TRT will be given by:

Total Response Time (Tier 1) = Preparation and Command Time + Orbit Maneuver Time (if applicable) + Orbit Response Time + Data Return Time.

The assumption is often made that the TRT for Tier 1 will necessarily be less than that for Tier 2, but that is not necessarily the case. Tier 1 systems are typically deployed in Sun synchronous orbits so as to cover nearly the whole world, since there is no way of knowing where future events of interest will occur. Therefore, a single satellite may well have a revisit to a particular location of once every 1–3 days (depending primarily on the altitude, swath width, and whether we are interested only in daylight observations). A Tier 2 system will most likely be launched into either a Fast Access Orbit or Repeat Coverage Orbit in order to provide both more rapid and more frequent observations of the specific location of interest [Wertz, 2005]. Thus, in the example mission in Section 7, the TRT for a Tier 2 system will be approximately 11 hours, which may or may not be longer than what would be needed for a Tier 1 asset.
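The Tier 2 TRT is simply the sum of the terms defined above. The sketch below uses hypothetical durations chosen to total roughly the 11 hours quoted for the example mission; the individual values are illustrative assumptions, not figures from the paper.

```python
# Sketch of the Tier 2 Total Response Time as the sum of its terms.
# All durations are hypothetical placeholders in hours; the real values
# depend on the launch system, orbit, and mission (see Wertz [2005]).

tier2_terms = {
    "preparation_time":    4.0,
    "weather_delay":       0.0,
    "launch_window_delay": 2.0,
    "insertion_time":      0.5,
    "orbit_response_time": 3.0,
    "data_return_time":    1.5,
}

total_response_time = sum(tier2_terms.values())
print(f"Tier 2 TRT = {total_response_time} hours")   # Tier 2 TRT = 11.0 hours
```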

Other potentially important measures of Tier 1 and Tier 2 responsiveness are:

Mean Revisit Time. This is the mean time between revisits to the target. Depending on the application, we may be interested in the mean revisit over an entire day, or during daylight, or during hours of defined coverage. The Mean Response Time is similar, but includes both request and response delay times. The Mean Response Time is more useful in that it is more meaningful to the end user. As discussed by Wertz [1999], the Mean Response Time would not only be more meaningful to the end user (“Mr. President, with this system, the average time from when you request the data until it is handed to you will be x.x hours.”), but it avoids the major pitfall described in the first example in Section 3. This is the major reason that the Mean Response Time was initially created as an MoE.
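One way to see why a response-time measure avoids the Section 3 pitfall: a request arriving at a uniformly random time waits, on average, sum(g²)/(2T) for the next coverage opportunity, so long gaps are penalized quadratically. The formula is the standard waiting-time result and the 24-hour span is an assumption for illustration, not taken from the paper.

```python
# Expected wait for the next coverage opportunity, for a request arriving
# at a uniformly random time in a span of T hours containing gaps g_i:
# E[wait] = sum(g_i**2) / (2*T). Long gaps count quadratically, unlike AGD.

def mean_wait_hours(gaps_hours, span_hours=24.0):
    return sum(g * g for g in gaps_hours) / (2.0 * span_hours)

system_a = [4.0]                  # one 4-hour gap (AGD = 4.0 h)
system_b = [4.0] + [1.0] * 10     # 4 h plus ten 1-h gaps (AGD ~ 1.3 h)

print(round(mean_wait_hours(system_a), 3))   # 0.333 h -- better, despite the worse AGD
print(round(mean_wait_hours(system_b), 3))   # 0.542 h
```

Note that this ranking matches intuition (the second system has 14 hours of gaps versus 4), exactly the reversal that the AGD got wrong.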

Time Late (TL). The TL measures how “stale” the data is when it is delivered to the end user. Since the actual communications time from low Earth orbit to a local user will typically be much less than 50 msec, the TL will most often be dominated by communications delays associated with having to store, process, or interpret the data for the end user. (The policeman rowing through the streets of New Orleans is most likely not interested in the raw data from a satellite. He wants to know the street address where people are trapped on a roof.)

Tier 3 Responsiveness MoEs will be the following:

Development or Build Time. There are two times of interest here. The Development Time is the time required to develop and build a first unit of a particular type of spacecraft such that it is ready to be put into storage for launch-on-demand. In addition, there is the idea of what is often called the “6-day spacecraft” in which all of the various parts and subassemblies exist in storage and a spacecraft is built-on-demand and then launched. In this case, the Build Time would be the time required to define, assemble, and test a spacecraft to meet a particular demand.

Technology Insertion Time. If a new sensor or capability is developed, it may not be necessary to build an entire space system from scratch in order to implement it. For example, assume that an observation spacecraft has been built and is in storage awaiting launch-on-demand. It may well be feasible to upgrade the sensor or focal plane array on these existing assets in storage in order to insert the new technology. This becomes much more likely if the spacecraft are built according to Plug-and-Play standards. (See, for example, Graven et al. [2006, 2007, 2008] and Hansen et al. [2007].) Thus, the time required to insert a new technology may be extremely short, relative to the traditional approach of having to fund, define, develop, build, and test an entire new space system. The capacity to rapidly take advantage of new technology is one of the major potential advantages of Responsive Space and can serve as an important mechanism for rapidly leveraging newly created technology to benefit the warfighter.


5.3 Coverage

Coverage is strongly related to responsiveness in that the more often an area is seen, the shorter the response time will be in getting the desired data to the end user. Therefore, coverage MoEs are, to a degree, interchangeable with responsiveness MoEs.

Orbit Response Time (ORT). This is the time from arrival in orbit until the first observation of the target. For some systems this could include time required for check-out, calibration, and outgassing of the spacecraft. For a well-designed responsive spacecraft that is ready to go when it reaches orbit, the ORT is the time from launch until the first pass over the target. For the Fast Access Orbit, this will typically be between 0.5 and 2 orbits (i.e., 45 to 180 minutes). For most other orbits, the mean ORT will typically be 6 to 12 hours, depending on whether a daylight pass is required. (See Wertz [2005].)

The ORT depends only on the orbit parameters and the geometrical relationship between the launch site and the target, and not on the time of day. (Once the spacecraft has been launched, the orbit follows a well-defined path with respect to the surface of the Earth.) Thus, the only impact of the time of day at which the spacecraft is launched is to change the time of day of the observation of the target. For responsive Earth observations, the launch window delay discussed above serves only to adjust the time of day of the target observations. For subsequent days, assuming a prograde responsive orbit, the observation time will move earlier by about 30 minutes per day due to both the motion of the Earth in its orbit and, more importantly, the rotation of the orbit plane due to the Earth’s oblateness. (See, for example, Wertz [2001].)

Observations per Day. Depending on the mission, this may be per satellite or for the system as a whole, if there are multiple satellites. Similarly, we may wish to count the entire number of observations per day or only those that occur during daylight.

As shown in Figure 1, the number of observations per day per satellite will depend primarily on the orbit inclination, altitude, and minimum working elevation angle. For detailed computations, see Wertz [2008]. The broad conclusion from this data is that a prograde Repeat Coverage Orbit provides 3 to 10 times as many observations per day as does the traditional Sun synchronous orbit.

Figure 1. Orbits/Day of Coverage vs. Latitude for Repeat Coverage Orbit (RCO) and Sun Synchronous Orbit (SSO). (From Wertz [2008].)

Number of Spacecraft. This is the number of spacecraft required to achieve a given level of coverage or frequency of observations. In a well-designed responsive system, this will be a number that can change as the needs evolve. Thus, a single satellite would normally provide the initial observations and assessment. As circumstances evolve, the decision can be made later either to increase the frequency of observations (for example, from once every 90 minutes to once every 45 minutes by launching another satellite 180 deg out of phase in the same orbit) or extend the span of observations to include a larger fraction of the day.

5.4 Data Quality or Quantity

Resolution at Nadir. For imaging missions, the basic measure of the image quality is the resolution at nadir [Wertz, 1999]. An alternative measure often used by image analysts and scientists is the National Imagery Interpretability Ratings Scale (NIIRS), which goes from 0 (worst) to 9 (best) and is based on the ability to interpret the resulting images. (See Fiete [1999].)

A major advantage of Responsive Space systems in terms of data quality is the ability of the system to fly lower because it is not trying to cover the entire world and does not need a 10 or 15 year life. Ultimately, as shown in Figure 2, altitude is a remarkably low cost substitute for aperture. The 2.4 meter diameter Hubble Space Telescope looking down from 800 km will have


essentially the same resolution as a 0.7 meter aperture instrument flying at 250 km. However, Hubble initially cost about $2.5 billion in today’s dollars, whereas a 0.7 meter diameter instrument will cost several million dollars.

Figure 2. Resolution at Nadir as a Function of Aperture and Altitude. (From Wertz [2008].)
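The altitude-for-aperture trade can be sanity-checked with the usual diffraction-limited approximation. The 1.22 Rayleigh factor and the 0.5 µm wavelength below are assumptions for illustration, not values from Figure 2.

```python
# Rough diffraction-limited check of the altitude-vs-aperture trade:
# resolution at nadir ~ 1.22 * wavelength * altitude / aperture.

def resolution_m(aperture_m, altitude_km, wavelength_m=0.5e-6):
    return 1.22 * wavelength_m * (altitude_km * 1000.0) / aperture_m

print(f"{resolution_m(2.4, 800):.2f} m")   # 0.20 m -- Hubble-class 2.4 m aperture at 800 km
print(f"{resolution_m(0.7, 250):.2f} m")   # 0.22 m -- 0.7 m aperture at 250 km
```

The two apertures come out essentially equal in resolution, consistent with the comparison in the text.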

Bandwidth or Number of Simultaneous Users. For a communications mission such as Blue Force Tracking or Supplementary Communications, the basic measure of performance is either the bandwidth or the number of simultaneous users. In today’s space systems, there is an ever increasing demand for more bandwidth as information transfer becomes more and more important. Initially, we are likely to need only latitude and longitude and simple commanding in the field. As systems and processes become more sophisticated, the amount of data needed grows, such as a new requirement for secure cellular communications. Of course, modern cell phones nearly all have cameras, such that there will be enormous value in transmitting images and, in due course, color images with high resolution. Thus, it will be important for Responsive Space Communications to provide continuously increasing data rates.

5.5 Risk

Because Responsive Space systems are much lower cost than traditional missions, we expect that they should be able to tolerate a higher level of risk. In addition, launch-on-demand implies that there is more than one system available for immediate or nearly immediate launch, thus further mitigating the consequences of a launch failure. Nonetheless, today’s space systems are dramatically risk averse and that mindset will not

easily change. Thus, risk will continue to be an important MoE, even if the amount of acceptable risk is larger than for traditional missions.

Probability of Successful Deployment. This takes into account both the probability of a successful launch and the probability that the space system will work successfully on orbit. Historically, both launch systems and spacecraft have worked better after any problems have been resolved with the first few to be launched. Launch vehicles have a long-term probability of success of about 90% and are likely to be the dominant factor in the probability of successful deployment.

Probability of a Successful Mission. This MoE takes into account the potential for executing a back-up launch if either the launch vehicle or spacecraft fails on the first launch. Given the potential for a back-up launch, the probability of a successful mission should be very high. However, when a failure occurs, launches have historically been stopped for anywhere from several months to a year or more in order to investigate the failure. There has not yet been enough experience with ORS missions to determine whether this paradigm will continue or whether, like missiles or airplane attacks, several opportunities will be permitted in the hope of ensuring mission success.

Ability to Reconstitute Assets. A key purpose of ORS systems is to allow the potential to reconstitute assets that have been destroyed or otherwise failed on orbit. The Chinese ASAT test has made it clear that American space assets are vulnerable to enemy attack. Note that “reconstituting assets” does not necessarily, or even usually, mean a replacement of identical capability on-orbit. Generally, the assumption is that ORS assets used to replace traditional National assets will be significantly less capable and shorter lived. However, this is made up in part by allowing them to be focused on a particular region or event of interest, and having some capability on orbit is better than no capability.

5.6 Flexibility

Orbit Agility. Because of the very high delta V requirements to change orbits, most spacecraft will do orbit maintenance from time to time, but otherwise remain in a fixed orbit for most of


their operational life. (It takes more delta V to change a spacecraft from a polar orbit to an equatorial orbit than it does to launch it into orbit in the first place. Under ordinary circumstances it is effectively impossible to turn a spacecraft around and fly it the other way, as can be easily done with aircraft.) However, Col. Robert Newberry has introduced the concept of Agile Spacecraft to refer to a spacecraft that can make modest changes in orbit parameters to meet specific mission needs.

An example of the mission utility of orbit agility comes from the discussion above of the dependence of resolution on altitude. At 200 km, a spacecraft would normally have an operational lifetime of only a few days. However, consider a spacecraft in a 400 km circular orbit with a nominal lifetime of 1–2 years and a resolution at nadir of 0.8 meters. For a delta V expenditure of 60 m/s, the spacecraft can be put into a 200 km x 400 km elliptical orbit with a resolution of 0.4 meters at perigee, thus providing twice the nominal resolution over a particular area of interest. When finished, the spacecraft can be returned to a higher perigee or, if near the end of life, brought down in a positive deorbit.
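The 60 m/s figure can be checked with the vis-viva equation, assuming a single retrograde burn at apogee. This is a back-of-the-envelope sketch using standard Earth constants, not the author's computation.

```python
import math

# Vis-viva check of the ~60 m/s figure: lowering the perigee of a 400 km
# circular orbit to 200 km with one retrograde burn at apogee.

MU = 398600.4418   # km^3/s^2, Earth gravitational parameter
RE = 6378.137      # km, Earth equatorial radius

def vis_viva(r_km, a_km):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r_km - 1.0 / a_km))

r_apogee = RE + 400.0
r_perigee = RE + 200.0
a_ellipse = 0.5 * (r_apogee + r_perigee)

v_circ = vis_viva(r_apogee, r_apogee)    # circular speed at 400 km
v_apo = vis_viva(r_apogee, a_ellipse)    # apogee speed on the 200 x 400 km ellipse

delta_v = (v_circ - v_apo) * 1000.0      # m/s
print(round(delta_v))    # ~58 m/s, consistent with the ~60 m/s in the text
```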

Observation Time Flexibility. A subset of spacecraft agility is the ability to adjust the timing of observations over an area of interest. This is most easily done by making a small change in altitude, which changes the period and, therefore, the arrival time and azimuth over the target. This can be used for several purposes:

• To keep the enemy off balance such that the satellite does not arrive where or when it is expected

• To adjust the arrival conditions to provide a better view, such as bringing the target nearer to nadir as the satellite flies over

• To adjust the arrival time to correspond to a planned event, such as a planned attack or the arrival of another satellite for simultaneous or adjacent viewing

The delta V required to adjust the arrival conditions will be inversely proportional to the time allotted to make the adjustment. In low-Earth orbit, every 13.7 m/s of delta V will change the mean altitude by 24.4 km and the period by 30 sec. Integrated over a day, this change can be substantial. For example, assume we see a particular location once per day. A 13.7 m/s delta V will shift the next day’s pass (15 orbits later) by 7.5 minutes either sooner or later.
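These numbers follow from the standard small-maneuver relations da = 2a·dv/v and dT/T = (3/2)(da/a). The 400 km altitude below is an assumed representative value, since the text does not specify one.

```python
import math

# Checking the arrival-time arithmetic for a representative 400 km
# low-Earth orbit. A small tangential delta-V changes the semi-major axis
# by da = 2*a*dv/v, and the orbital period by dT/T = 1.5*(da/a).

MU = 398600.4418                              # km^3/s^2
a = 6378.137 + 400.0                          # km, semi-major axis
v = math.sqrt(MU / a)                         # km/s, circular speed
T = 2.0 * math.pi * math.sqrt(a**3 / MU)      # s, orbital period

dv = 13.7 / 1000.0                            # km/s
da = 2.0 * a * dv / v                         # km  -- ~24.2 km, vs 24.4 km in the text
dT = 1.5 * (da / a) * T                       # s   -- ~30 s per orbit

shift_min = 15 * dT / 60.0                    # next-day pass shift over 15 orbits
print(round(da, 1), round(dT, 1), round(shift_min, 1))   # 24.2 29.8 7.4
```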

Multi-Mission Utility. This refers to the potential to use the same asset for more than one mission. A typical responsive space mission will cover a specific location on Earth which the satellite may see only a few times per day. This means that the same satellite could potentially work in some other part of the world without interfering with its primary mission.

Similarly, if the primary mission lasts a few weeks or months, there may be the potential for using the space asset for the next major world event, or simply to supplement other space assets. If the spacecraft is sufficiently agile, then it may be able to change the orbit to a degree to allow optimal coverage of a latitude other than that for which it was launched. (An inclination change requires a delta V of approximately 130 m/s per deg of change. This implies that inclination changes will be both modest and infrequent.)
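The 130 m/s-per-degree figure matches the standard plane-change relation dv = 2·v·sin(di/2) for a circular low-Earth orbit; the 400 km altitude below is an assumption for illustration.

```python
import math

# Plane-change check of the ~130 m/s-per-degree figure:
# delta-V = 2 * v * sin(di/2) for a circular orbit at an assumed 400 km.

MU = 398600.4418
a = 6378.137 + 400.0
v = math.sqrt(MU / a) * 1000.0                        # m/s, circular speed

dv_per_deg = 2.0 * v * math.sin(math.radians(0.5))    # one degree of inclination
print(round(dv_per_deg))    # ~134 m/s per degree of inclination change
```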

5.7 Goal-Oriented MoEs (GO-MoEs)

Most of the MoEs above are measures of technical performance. They are engineering parameters that describe how well the system works. But our real objective in any operational space mission, irrespective of whether it is an ORS mission, is to do something for the end user or for the people who have put up the money to fund the system. These broad objectives are represented by the Goal-Oriented MoEs (GO-MoEs). Quantifying these is typically challenging and most likely cannot be done with precision. Nonetheless, these are the MoEs that are the most important in terms of providing feedback to the system designer or deciding whether to build the system in the first place.

For example, suppose we are considering whether to use a newer, lighter-weight, more efficient, but more expensive solar array for an ORS surveillance satellite. If a careful analysis shows no impact on the GO-MoEs, then the right answer is to stay with the option that is lower cost. We only want to spend extra money on those features that will have a positive impact on the end objectives of the mission.


How can the choice of solar array affect the number of lives saved or the outcome of the battle? There are several possible ways:

• The lighter solar array can reduce the spacecraft mass, allowing more propellant, increasing the agility, and, therefore, allowing better coverage.

• The more efficient solar array will be smaller, therefore reducing the drag, allowing the spacecraft to fly lower, and improving the resolution.

• The more efficient solar array can also provide more power to either improve the communications link or allow the system to operate either longer or in a different theater on the same orbit.

These are all technical performance measures, but it is relatively easy to see how these can potentially affect the GO-MoEs and, therefore, have a positive impact on the overall mission. Irrespective of the merits or demerits of better solar arrays, the point is that what we really want to measure is the impact of potential system changes on the end user, the victim, or the organization that has funded the system.

There are 4 basic sub-categories of GO-MoEs:

• Measures of Success

• Impact on Outcome

• Measures of Utility

• Level of Preparedness

Each of these is discussed below.

Measures of Success. For a great many ORS missions, the Measures of Success are the Number of Lives Saved and the Money or Property Saved. While these are exceptionally difficult to quantify, they are really what many of the ORS missions are all about. For the warfighter, the real measure of how much the system helps will be its ability to reduce both combatant and civilian casualties and to save money and property that would otherwise be destroyed or consumed. Much the same applies to using ORS for natural disasters. Could we have saved lives or reduced the amount of damage by having better surveillance or communications after Hurricane Katrina or the SE Asia tsunami?

Impact on Outcome. If an ORS mission was launched in response to a particular event or a newly identified need, did it have an impact on the outcome of that event? Could it have a major impact on the outcome of future events? For example, if the Chinese choose to destroy one or more American satellites in advance of a major military activity, could the presence of ORS-developed gapfiller satellites prevent an invasion of Tibet, Taiwan, or Korea? Perhaps even more important, could the known presence of the ORS gapfiller capability have prevented the attack on the American surveillance satellites in the first place?

For natural disasters, could we change the fundamental outcome by having ORS systems available? Hurricane Katrina is a particularly good example here. It's difficult to envision a location in which space surveillance or supplementary communications would be less needed than in CONUS, where the communications infrastructure and aircraft availability are as good as or better than anywhere in the world. But somehow Katrina managed to immobilize a large portion of that ground and air infrastructure. Planes that were flying were needed for other tasks and not available for damage assessment. Much of the communications infrastructure was destroyed. The first photos published in Aviation Week of the aftermath of the hurricane were taken by NigeriaSat, part of the Surrey small satellite Disaster Monitoring Constellation.

It is, of course, possible that some American satellites had, or could have had, rapid high quality images of New Orleans and workable communications channels. But, if that was the case, they weren't available to the policeman in the street or those who were trying to coordinate disaster relief. This re-emphasizes the basic character of ORS Mission Utility: it isn't the data or the processes that exist that matter; it's what gets to the warfighter, the end user, or the victim, or what is used for their benefit, that matters. To have utility, the data has to get to the people who need it in time for them to make use of it.

Measures of Utility. Strongly related to the Measures of Success and Impact on Outcome are the Measures of Utility. Specifically, how will the mission impact our ability to:


• Predict

• Protect

• Respond

• Retaliate

• Restore

• Rescue

• Contain

• Limit Collateral Damage

Of these, the one that is least related to the prior discussion is the ability to predict, since we typically think of ORS missions as being launched in response to world events. A predictive mission that has substantial utility is a Wind Lidar that can directly measure winds at various altitudes [Wertz, et al., 2004]. This is critical to all of the military services, as well as in many natural or man-made disasters. For example, where will the winds take a volcano plume or airborne chemicals from a civil disaster?

In the same season as Katrina, Hurricane Rita entered the Gulf of Mexico, and it was uncertain for some time what path Rita would follow. This made preparation both difficult and expensive. Figure 3 shows the coverage of an ORS Wind Lidar system in a single day over the region of the Gulf. This type of wind data would significantly improve the capacity to predict the track of the storm and, therefore, make better, safer, and more economical preparations.

Figure 3. Wind Lidar Data Over the Gulf of Mexico in One Day. 6 successive orbits provide data every 90 minutes for 9 hours, allowing much better prediction of the path of storms or toxic clouds. (From Wertz [2008].)

Level of Preparedness. How ready are we to respond to unforeseen world events or changing conditions on the battlefield or in the war on terror? In one sense, we are, by definition, unprepared for events that are unforeseen. However, just because we don't know the nature of the event does not mean that we can't be prepared for it. Irrespective of the nature of the problem or event, the types of services that we need are largely the same: surveillance to understand what has happened, communications to work with those who have been impacted, and data such as winds or successive images to predict how the situation will evolve. In a sense, ORS provides a new tool for responding to changing world events, specifically when we don't know the nature of those events.

6. Evaluating MoEs

Some of the MoEs are straightforward to evaluate, while others, typically those that are more useful, are far more challenging. Specifically, the technical performance measures, such as response time, resolution, data rates, or coverage, can be evaluated analytically or with simple numerical methods. Cost and risk parameters are harder to know with certainty, but can be modeled with well-known methods and models, although the fact that they are well-known does not necessarily make them correct, particularly when the paradigm under which the ORS systems are built differs dramatically from the traditional space mission methods and processes. Both Wertz [1999] and the references cited in each of the sections above provide more extended discussions of the evaluation of technical performance MoEs, cost, and risk. Coverage evaluation methods are covered in detail by Wertz [2001].

By far the most challenging to evaluate, and the most important, are the GO-MoEs. In most cases, the only realistic way to estimate these is either via simulations or scenario evaluations.

For simulations, there is an important distinction for the evaluation of MoEs. System Simulations tend to concentrate primarily on technical parameters of the system itself, i.e., power, communications bandwidth, coverage, resolution, and similar parameters. In contrast, Mission Simulations model these parameters at more of a top level and provide more detailed models of "outside parameters" that may dramatically affect the outcome but have little to do with the system itself, such as weather or the evolution of ground events. Thus, we may spend a great deal of time optimizing coverage patterns to minimize gaps, when in reality, gaps in coverage may well be dominated by weather patterns over the target area.

One of the major characteristics of a good mission simulation is that it should provide good animation or other visual output that creates a more intuitive feel for what is causing the results. Recall the problems discussed in Section 3 resulting from an excessive reliance on statistical results. Statistics can, at times, be very valuable, but they rely on the idea that the underlying data has a Gaussian or random distribution. Neither orbit patterns nor weather are Gaussian. Therefore, in addition to statistical results, we want analytic values, animations, simple approximations, and scenario evaluations to gain a "feel" for what is occurring, so as not to be misled by purely numerical simulations.
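The hazard of summarizing non-Gaussian coverage data statistically can be seen with a toy example. The gap values below are hypothetical, chosen only to illustrate a bimodal pattern of the kind orbit geometry routinely produces:

```python
import statistics

# Hypothetical coverage gaps (minutes): many short gaps plus two long
# outages -- a bimodal, distinctly non-Gaussian distribution.
gaps = [10, 12, 11, 9, 13, 10, 11, 12, 600, 620]

mean_gap = statistics.mean(gaps)      # ~131 min: describes no actual gap
median_gap = statistics.median(gaps)  # 11.5 min: the typical gap
# Reporting only the mean (or mean +/- sigma) would badly mislead here;
# the median plus an explicit list of the outliers conveys what is
# actually happening.
```

This is exactly the situation where an animation or a simple scenario walk-through reveals what a single summary statistic hides.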

Finally, we also want scenario evaluations in which we look at the impact of the ORS system on specific real or projected events. These can be done by a combination of simulations of the event, analysis techniques, and discussion with experts in that particular field. The evaluation of what might have been achieved in past real events is one of the most effective methods of evaluating mission utility.

7. Example: The Hawaii Disaster of 2010

In late October 2010, an earthquake and resulting tsunami struck Hawaii, with damage to all of the major airports and most homes, businesses, and infrastructure. Communications were badly disrupted, and it was difficult to assess the damage or to find people that needed help, needed to be moved, had been washed out to sea, or were in imminent danger due to potential aftershocks.

By order of the President, HawaiiSat was launched from KSC 10 hours after the earthquake hit to provide both visible and IR images. The satellite carried a 0.5 meter diameter imager. Following a 4 deg inclination change at the equator, the satellite was placed in a 250 km altitude circular orbit at an inclination of 24.5 deg. The ground swaths of HawaiiSat's first 5 orbits are shown in Figure 4. The first images of the damage, at a low elevation angle, were returned on the first orbit, about 80 minutes after launch. The second pass, 90 minutes later, flew almost directly over the islands and gave very good images of the damage and allowed the first estimate of the drift rate and direction of debris floating at sea by comparison with images from the first orbit. The next two orbits provided images at shallow angles for the big island and more nearly overhead for the rest of the islands. The fifth orbit again passed nearly overhead, 6 hours after the first orbit. 5 or 6 orbits of imaging occurred on subsequent days, moving about 35 minutes earlier per day.

Figure 4. Swath of the First 5 Orbits for the Hypothetical HawaiiSat Mission. Hawaii will be visible on 5 or 6 orbits/day, with the observation times moving approximately 35 minutes earlier per day.

The spacecraft parameter and cost estimates for the above system are based on the Frequent Revisit Surveillance mission discussed by Wertz [2006] and Cox, et al. [2005, 2006]. Based on these parameters, the HawaiiSat MoEs are:

• Cost
  – NRE: $40 M
  – Recurring cost/mission: $17.3 M

• Responsiveness
  – Total Response Time: 11.3 hr
  – Mean Revisit Time: 85 min for 6 to 7.5 hrs
  – Time Late: 5 min

• Coverage
  – Orbit Response Time: 80 min
  – Observations/Day: 5 to 6
  – Number of Spacecraft: 1

• Data Quality
  – Nadir Resolution: 0.3 m

• Risk
  – Prob. of Successful Deployment: 90%
  – Prob. of Successful Mission: 99%

• Flexibility: TBD

• GO-MoEs: Need a mission simulation and discussion with emergency response personnel to evaluate these critical measures

While we don’t yet have all of the numbers weneed, it seems clear that ORS provides a newcapability to Respond, Restore, Rescue, Contain,and Limit Collateral Damage. With ORS, wewill be prepared to save lives and property.

8. Conclusions

Just as they are for traditional missions, Mission Utility and MoEs are critical to the success of ORS missions, both in the process of making design decisions and in demonstrating utility vs. cost in what will remain a very constrained budget environment. While the technical performance measures are relatively easy to quantify, it is the Goal-Oriented MoEs (GO-MoEs) that are important in evaluating how well ORS missions meet their broad objectives. These will typically need to be evaluated by a simulation, a mission assessment, or both.

The choice of an ORS mission is likely to depend not only on the nature of the world event to which it is responding, but also on outside circumstances that may dictate how best to respond. For example, for either disaster relief or battlefield monitoring, visible light surveillance will provide high resolution, broad coverage, and rapid, relatively easy interpretation. However, if nighttime coverage is critical, we may choose to change to IR or visible/IR combined. If there are heavy clouds forecast for the next several days, then a synthetic aperture radar (SAR) mission is likely to be the most effective.
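The selection logic above can be summarized as a simple decision rule. This is a hypothetical sketch; the function name and inputs are illustrative, not from the paper:

```python
def choose_payload(night_coverage_critical: bool,
                   heavy_clouds_forecast: bool) -> str:
    """Sketch of the sensor-selection reasoning described in the text."""
    if heavy_clouds_forecast:
        return "SAR"  # radar sees through clouds
    if night_coverage_critical:
        return "IR or visible/IR combined"
    # Default: high resolution, broad coverage, easy interpretation
    return "visible"
```

In practice such a choice would weigh many more factors (cost, available buses, launch readiness), but the ordering of the checks reflects the priorities stated in the text.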

Finally, the key issue for ORS mission utility is that it isn't the existence of data or processes that matters; it's what gets to the warfighter, the end user, or the victim, or what is used for their benefit, that matters. To have utility, the data has to get to the people who need it in time for them to make use of it, typically under circumstances that are far less than ideal.

If we can do this, then ORS will have a clear role to play in future space missions.

9. References

Brunschwyler, John, Kyle Kelly, and Dim Dubota, 1996. "A Non-Standard Production Approach to Satellite Production," presented at the 10th Annual AIAA/USU Conference on Small Satellites, Logan, Utah, September 1996.

Burgess, G. E., 1996. "ORBCOMM," Sec. 13.2 in Reducing Space Mission Cost, ed. by J. R. Wertz and W. J. Larson. Dordrecht, The Netherlands and El Segundo, CA: Kluwer Academic and Microcosm Press, pp. 487–505.

Champlin, Cary, 1993. "Iridium™ Satellite: A Large System Application of Design for Testability," International Test Conference 1993, Paper No. 18.2, pp. 392–398.

Cox, C., and S. Kishner, 2005. "Reconnaissance Payloads for Responsive Space." Paper No. RS3-2005-2002, presented at the 3rd Responsive Space Conference, Los Angeles, CA, April 25–28, 2005.

Cox, C., D. Flynn, and S. Kishner, 2006. "Reconnaissance Payloads for Responsive Space." Paper No. RS4-2006-5003, presented at the 4th Responsive Space Conference, Los Angeles, CA, April 24–27, 2006.

Department of Defense, 2007. "Plan for Operationally Responsive Space, A Report to Congressional Defense Committees," April 17, 2007. (Available at www.ResponsiveSpace.com)

Fiete, Robert D., 1999. "Image Quality and λFN/p for Remote Sensing Systems," Opt. Eng. 38(7), July 1999, pp. 1229–1240.

Graven, Paul, Richard Van Allen, L. Jane Hansen, and Jon Pollack, 2006. "Achieving Responsive Space: The Viability of Plug-and-Play in Spacecraft Development," presented at the 29th Annual AAS Guidance and Control Conference, Breckenridge, CO, Feb. 4–8, 2006.

Graven, Paul, and L. Jane Hansen, 2007. "A Guidance, Navigation and Control (GN&C) Implementation of Plug-and-Play for Responsive Spacecraft," presented at the AIAA Infotech at Aerospace 2007 Conference and Exhibit, Rohnert Park, CA, May 7–10, 2007.


Graven, Paul, Yegor Plam, L. Jane Hansen, and Seth Harvey, 2008. "Implementing Plug-and-Play Concepts in the Development of an Attitude Determination and Control System to Support Operationally Responsive Space," presented at the IEEE Aerospace Conference, Big Sky, MT, March 1–8, 2008.

Hansen, L. Jane, Jon Pollack, Paul Graven, and Yegor Plam, 2007. "Plug-and-Play for Creating 'Instant' GN&C Solutions," presented at the 21st Annual AIAA/USU Conference on Small Satellites, Logan, Utah, Aug. 13–16, 2007.

Molnau, W., J. Olivieri, and C. Spalt, 1999. "Designing Space Systems for Manufacturability," Sec. 19.1 in Space Mission Analysis and Design, 3rd ed., ed. by J. R. Wertz and W. J. Larson. Dordrecht, The Netherlands and El Segundo, CA: Kluwer Academic and Microcosm Press, pp. 745–764.

Riehl, Ken. "A Historical Review of Reconnaissance Image Evaluation Techniques," SPIE Vol. 2829, pp. 322–334.

Tomme, E., 2006. The Strategic Nature of the Tactical Satellite. Maxwell AFB, AL: Airpower Research Institute.

Tomme, E., 2007. "Tactical Satellites and the Perceived Competition With Non-Orbital C4ISR Capabilities." Paper No. RS5-2007-7007, presented at the 5th Responsive Space Conference, Los Angeles, CA, April 23–26, 2007.

Wertz, J.R., and W.J. Larson, 1999. Space Mission Analysis and Design, 3rd ed. Dordrecht, The Netherlands and El Segundo, CA: Kluwer Academic and Microcosm Press.

Wertz, J.R., 2001. Mission Geometry: Orbit and Constellation Design and Management. Dordrecht, The Netherlands and El Segundo, CA: Kluwer Academic and Microcosm Press.

Wertz, J.R., 2004. "Responsive Launch Vehicle Cost Model." Paper No. RS2-2004-2004, presented at the 2nd Responsive Space Conference, Los Angeles, CA, April 19–22, 2004.

Wertz, J.R., C. Nardell, and M. Dehring, 2004. "Low-Cost, Low-Risk Global Tropospheric Wind Measurements from Space," presented at the Working Group on Space-Based Lidar Winds, Sedona, AZ, Jan. 27–29, 2004.

Wertz, J.R., 2005. "Coverage, Responsiveness, and Accessibility for Various 'Responsive Orbits'." Paper No. RS3-2005-2002, presented at the 3rd Responsive Space Conference, Los Angeles, CA, April 25–28, 2005.

Wertz, J.R., R.E. Van Allen, and C.J. Shelner, 2006. "Aggressive Surveillance as a Key Application Area for Responsive Space." Paper No. RS4-2006-1004, presented at the 4th Responsive Space Conference, Los Angeles, CA, April 24–27, 2006.

Wertz, J.R., 2008. "Responsive Space Mission Analysis and Design," short course notes. Hawthorne, CA: Microcosm, Inc., Feb. 2008.