
The Generalized Information Network Analysis Methodology

for Distributed Satellite Systems

by

Graeme B. Shaw

Submitted to the Department of Aeronautics and Astronautics on October 16, 1998, in partial fulfillment of the requirements for the degree of Doctor of Science

Abstract

A systematic analysis methodology for distributed satellite systems is developed that is generalizable and can be applied to any satellite mission in communications, sensing or navigation. The primary enabler is that almost all satellite applications involve the collection and dissemination of information and can thus be treated as modular information processing networks. This generalization allows the adoption of the mathematics for information network flow, leading to a logical classification scheme for satellite systems. The benefits and issues that are characteristic of each system class are identified, in terms of their capability, performance and cost. The quantitative analysis methodology specifies measurable, unambiguous metrics for cost, capability, performance and adaptability. The Capabilities are characterized by four quality of service parameters that relate to the isolation, rate, integrity and availability of the information transferred between origin-destination pairs within a market. Performance is the probability of satisfying the user's requirements for these parameters. The Cost per Function metric is the average cost incurred to provide satisfactory service to a single user, and the Adaptability metrics are sensitivity indicators. Validation of the methodology is provided by a comprehensive quantitative analysis of the NAVSTAR Global Positioning System, in which the calculated capabilities agree with measured data to within 3%. The utility of the methodology for comparative analysis is highlighted in a rigorous competitive assessment of three proposed broadband communication satellite systems. Finally, detailed architectural trades for a distributed space based radar are presented to demonstrate the effectiveness of the methodology for conceptual design. The generalized information network analysis methodology is thus identified as a valuable tool for space systems engineering, allowing qualitative and quantitative assessment of the impacts of system architecture, deployment strategy, schedule slip, market demographics and technical risk.

Thesis Supervisor: David W. Miller, Assistant Professor of Aeronautics and Astronautics


Acknowledgments

Two or so days ago, I defended my thesis. I passed. And now, I sit here all alone in my office, at just gone 1am (in the morning), and I have to try to voice my thanks to all the people who assisted me over the past five years. It is appropriate, I think, that I write these acknowledgments during these early hours, since practically the whole thesis was conceived and written during the quiet hours between midnight and five. Before discussing specific people, I should say straight away that the actual concepts introduced in this thesis, and the motivation to pursue them, were the products of continuous collaboration with my colleagues and friends. I guess this is very fitting, given the subject matter.

It is conventional to begin the acknowledgments with a paragraph thanking the thesis advisor, and afterwards go on to credit personal friends for their help. This unwritten rule is being bent a little here, for very good reasons. First of all, I have actually had two advisors, and secondly, and much more importantly, they are also my friends. I will introduce them as they were to me, chronologically.

Professor Daniel Hastings became my advisor the day I arrived in the United States, and he has been looking out for me ever since. His genuine care for his students is absolutely remarkable. Even after becoming the Chief Scientist of the Air Force, Dan would always be willing to find the time to talk or meet, answer my questions, and quite unintentionally, cheer me up. I believe he taught me everything I know about the "big picture". The trust he showed in my abilities was as important to me as anything else I experienced over my five years at MIT. It was an honor to be Dan's student.

In August 1997, with at least a whole year left before I could finish my ScD, Dan left MIT to take the post of Chief Scientist of the Air Force. At that time I was adopted by Professor David Miller, someone who would quickly become one of my closest friends. Dave is just a great guy and an exceptional advisor. His empathy for his graduate students is surpassed only by his boundless enthusiasm and admirable capacity to contribute to their work. The different perspective that Dave brought to this research was sometimes frustrating, almost always enlightening. Dave respected me as both a friend and a peer, and in doing so, gained my respect as my boss. I have so much enjoyed the last year as Dave's student, and my thesis is better for it.

My office mate and closest friend, Raymond Sedwick, deserves a medal for tolerating me for the last three or four years. Everyone knows my quirkiness, but only Ray has had to deal with it day in and day out. His qualities as an engineer, a scientist, a teacher, a fix-it man, and a friend are immeasurable. He is Abbott to my Costello, Lennon to my McCartney. He is my brother, my buddy, and I love him.

Just thinking about Joe Sinfield brings a smile to my face. He has become a very good friend over the past few years, sharing many common interests, namely, breakfast, brunch, lunch, afternoon tea, dinner and supper. We supplement the meals with long philosophical discussions about work, women, and the joys of all-you-can-eat seafood, and sometimes we even do stuff (lifting, softball, golf). Oh yeah . . . he's a pretty good engineer as well.

Jen Rochlis' selfless friendliness, good looks and uncanny penchant for remembering movie lines mean she is a lot of fun to be around. Without even trying too much, she makes everyone around her feel good about themselves. I also have to thank Jen for proofreading this thesis. She thinks I need more commas and longer sentences. Maybe.

Nicole Casey is a little ray of sunshine. Her bubbly personality and happy smile are addictive, and I am forever touched by her thoughtfulness. She apparently knows some Chemistry, but the impressive thing is her swing, and she plays a mean second base!

Karen Willcox is like my sister. We have been close from the day she arrived at MIT and I can talk to her about anything. Karen and I bonded very well because of our nationalities; New Zealand and England are surprisingly similar in culture. Unfortunately, Karen is the wrong size. She's too small to avoid getting hurt when she plays rugby, and apparently too tall to date short guys like me!

Jim Soldi and Guy Benson, two members of the "old guard", are also very special people to me. The memories of the many laughs we have had define my MIT experience. I always knew that hanging with General Zod and the Lion ("stick 'em up, stick 'em up") would hurt tomorrow, but, boy, would it be fun tonight. I was so pleased, and very grateful, that each of them could be at my defense.

I shared an apartment with Carlin Vieri throughout my whole time at MIT. Between the work, the parties, the wine and the women, we somehow managed to forge a solid friendship. Carlin tolerated my bad habits and messy lifestyle, and I tolerated his excellent wine selection, good cooking, and impeccable tidiness. What a guy!

Salma Qarnain brightened my days up as soon as she walked into the lab. She also helped me out with the GPS section of this thesis. I owe Salma big-time, but since I am going to be working (and living) with her, I am sure she will have plenty of opportunities to recover payment. Douglas Wickert is the smartest guy I ever met, and one of the nicest. Doug's Masters thesis laid down all the groundwork for the studies of distributed space based radar in this thesis. Similarly, Greg Yashko, Edmund Kong, Cyrus Jilla and John Enright all contributed significantly to the thesis through collaboration or discussion. The coolest thing about these guys is that they were defining the state of the art, were just down the hall, and were always good for a laugh. They made for a very exciting and fun place to work.

Sharon-Leah Brown is officially the fiscal manager of the lab, but unofficially she is everyone's mother. I hope she realizes how important she is to all the students (and staff) around here. I will certainly miss Sharon when I leave. Other people I should mention are Mike Fife, Greg Giffin, Greg Dare, Marika, Ed Peikos, Angie Kelic, Lihini, Kirsten, Lee, Jake, Kurt and the members of the softball teams. All you people helped make the time I spent at MIT the best years of my life.

I want to thank Dana for supporting me emotionally through so many years of hard and self-absorbed work; I doubt that I would have been able to do this ScD without her. Dana is my best friend in the whole world, and I can never repay her for the unconditional love she has given me.

My uncle John and Aunty Beryl have helped me cope with the financial burdens of almost nine years of university, and for that I am eternally grateful. My brother Christopher has also supported me throughout, giving me money, encouragement, and most importantly, love.

Finally, I thank my mother. Her selfless devotion to my well-being is unbelievable. I have always been able to rely on my Mum, and the love and help she has given me is the largest factor behind my success. I hope that when I have children, I am able to give them even a fraction of what she gave me. I love you Mummy.

I dedicate this thesis to my grandfather, who I am sure is watching all of this with a great deal of pride. I did it, Grandad, I did it.


Contents

1 Introduction 27

1.1 The Bottom Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.2 Satellite Systems in the New Millennium . . . . . . . . . . . . . . . . . . . . 27

1.3 Background: Analyses in Systems Engineering . . . . . . . . . . . . . . . . 29

1.3.1 The Systems Engineering Process . . . . . . . . . . . . . . . . . . . . 30

1.3.2 Modeling and Simulation . . . . . . . . . . . . . . . . . . . . . . . . 35

1.4 Previous Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

1.5 Content of the Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

I Development of the Generalized Analysis Methodology 39

2 Generalized Characteristics of Satellite Systems 41

2.1 Distributed satellite systems . . . . . . . . . . . . . . . . . . . . . . . . . . 41

2.2 Abstraction to Information Networks . . . . . . . . . . . . . . . . . . . . . . 42

2.3 Satellite System Classifications . . . . . . . . . . . . . . . . . . . . . 46

2.3.1 Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2.3.2 Architectural Homogeneity . . . . . . . . . . . . . . . . . . . . . . . 48

2.3.3 Operational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3 Distributed Satellite Systems 51

3.1 To Distribute or not to Distribute? . . . . . . . . . . . . . . . . . . . . . . . 51

3.1.1 Signal Isolation Improvements . . . . . . . . . . . . . . . . . . . . . 53

3.1.2 Rate and Integrity Improvements . . . . . . . . . . . . . . . . . . . 54

3.1.3 Availability Improvements . . . . . . . . . . . . . . . . . . . . . . . . 59

3.1.4 Reducing the Baseline Cost . . . . . . . . . . . . . . . . . . . . . . . 65

3.1.5 Reducing the Failure Compensation Cost . . . . . . . . . . . . . . . 73

3.2 Issues and Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

3.2.1 Modularity Versus Complexity . . . . . . . . . . . . . . . . . . . . . 77


3.2.2 Clusters and Constellation Management . . . . . . . . . . . . . . . . 81

3.2.3 Spacecraft Arrays and Coherence . . . . . . . . . . . . . . . . . . . 84

3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4 Development of the Quantitative Generalized Information Network Analysis (GINA) Methodology 89

4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

4.2 Satellite Systems as Information Transfer Networks . . . . . . . . . . . . . . 90

4.2.1 Definition of the Market . . . . . . . . . . . . . . . . . . . . . . . 90

4.2.2 Functional Decomposition and Hierarchical Modeling . . . . . . . . 91

4.3 The Capability Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.3.1 Signal Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.3.2 Generalized Signal Isolation and Interference . . . . . . . . . . . . . 94

4.3.3 Information Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.3.4 Information Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.3.5 Information Availability . . . . . . . . . . . . . . . . . . . . . . . . . 103

4.4 Calculating the Capability Characteristics . . . . . . . . . . . . . . . . . . . 104

4.4.1 Example Capability Calculation for a Ka-Band Communication Satellite . . . 106

4.5 Generalized Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

4.5.1 Time Variability of Performance . . . . . . . . . . . . . . . . . . . . 111

4.6 Calculation of the Generalized Performance . . . . . . . . . . . . . . . . . . 111

4.6.1 Example Performance Calculation for a Ka-Band Communication Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

4.7 The Cost per Function Metric . . . . . . . . . . . . . . . . . . . . . . . . . 116

4.8 Calculating the Cost per Function Metric . . . . . . . . . . . . . . . . . . . 117

4.8.1 The System Lifetime Cost . . . . . . . . . . . . . . . . . . . . . . . . 117

4.8.2 The Failure Compensation Cost . . . . . . . . . . . . . . . . . . . . 118

4.8.3 The System Capture . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

4.8.4 Example CPF Calculation for a Ka-Band Communication Satellite . 120

4.9 Utility of the Cost per Function Metric . . . . . . . . . . . . . . . . . . . . . 123

4.10 The Adaptability Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

4.10.1 Type 1 Adaptability: Elasticities . . . . . . . . . . . . . . . . . . . . 124

4.10.2 Type 2 Adaptability: Flexibility . . . . . . . . . . . . . . . . . . . . 127

4.11 Truncated GINA for Qualitative Analysis . . . . . . . . . . . . . . . . . . . 127

4.12 The GINA Procedure – Step-by-Step . . . . . . . . . . . . . . . . . . . 129

4.13 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130


II Case Studies and Results 131

5 The NAVSTAR Global Positioning System 135

5.1 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

5.1.1 The GPS Space Segment . . . . . . . . . . . . . . . . . . . . . . . . 137

5.1.2 The GPS Ranging Signal . . . . . . . . . . . . . . . . . . . . . . . . 138

5.1.3 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 140

5.1.4 Measured navigation performance . . . . . . . . . . . . . . . . . . . 141

5.2 Fundamental Error Analysis for GPS . . . . . . . . . . . . . . . . . . . . . . 141

5.3 GINA Modeling of GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

5.3.1 GPS Network Architecture . . . . . . . . . . . . . . . . . . . . . . . 147

5.3.2 The Constellation Module – Visibility and PDOP . . . . . . . . . . . 148

5.3.3 Signal structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

5.3.4 Ephemeris and Satellite clock errors . . . . . . . . . . . . . . . . . . 150

5.3.5 Space loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

5.3.6 Ionospheric and Tropospheric errors . . . . . . . . . . . . . . . . . . 153

5.3.7 Interferers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

5.3.8 GPS receiver model . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

5.4 GINA Capabilities of GPS for the Navigation Mission . . . . . . . . . . . . 158

5.5 GINA Performance of GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

5.6 The CPF Metric for GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

5.7 Improvements by Augmenting GPS . . . . . . . . . . . . . . . . . . . . . . . 164

5.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

6 Comparative Analysis of Proposed Ka-Band Satellite Systems 167

6.1 The Modeled Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

6.1.1 Proposed System Specifications: Cyberstar . . . . . . . . . . . . 168

6.1.2 Proposed System Specifications: Spaceway . . . . . . . . . . . . 169

6.1.3 Proposed System Specifications: Celestri . . . . . . . . . . . . . 171

6.1.4 Information network representations: Cyberstar and Spaceway . . . 173

6.1.5 Information network representations: Celestri . . . . . . . . . . . . . 173

6.1.6 The Capability Characteristics . . . . . . . . . . . . . . . . . . . . . 174

6.2 Generalized Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

6.3 The CPF Metric: The Cost per Billable T1-Minute . . . . . . . . . . . . . . 192

6.3.1 Modeling the Broadband Market . . . . . . . . . . . . . . . . . . . . 192

6.3.2 Calculating the market capture . . . . . . . . . . . . . . . . . . . . . 194

6.3.3 System cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200


6.3.4 Cost per Billable T1-Minute Results . . . . . . . . . . . . . . . . . . 201

6.4 Type 1 Adaptability Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . 204

6.4.1 The Requirement Elasticities . . . . . . . . . . . . . . . . . . . . . . 204

6.4.2 The Technology Elasticities . . . . . . . . . . . . . . . . . . . . . . . 207

6.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

7 Techsat21; A Distributed Space Based Radar for Ground Moving Target Indication 213

7.1 Space Based Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214

7.2 Detecting Moving Targets in Strong Clutter Backgrounds . . . . . . . . . . 214

7.2.1 Locating the Target . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

7.2.2 The Radar Range Equation . . . . . . . . . . . . . . . . . . . . . . . 218

7.2.3 Detecting the Target . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

7.2.4 Noise-Limited Detection . . . . . . . . . . . . . . . . . . . . . . . . . 219

7.2.5 Clutter-Limited Detection . . . . . . . . . . . . . . . . . . . . . . . 224

7.2.6 Pulse-Doppler Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

7.2.7 The Potential of a Symbiotic Distributed Architecture . . . . . . . . 231

7.3 The Techsat21 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

7.3.1 Signal Processing Arrays . . . . . . . . . . . . . . . . . . . . . . . . . 233

7.3.2 Overall System Architecture . . . . . . . . . . . . . . . . . . . . . . . 238

7.4 Using GINA in Design Trades for Techsat21 . . . . . . . . . . . . . . . . . 242

7.4.1 Goals of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

7.4.2 Transformation of the GMTI mission into the GINA framework . . . 243

7.4.3 Modeling Techsat21 . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

7.4.4 Capability Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

7.5 The Performance, CPF and Adaptability for Techsat21 Candidate Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

7.5.1 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

7.5.2 The CPF Metric and the System Lifetime Cost . . . . . . . . . . . . 264

7.5.3 Lifetime Cost Results . . . . . . . . . . . . . . . . . . . . . . . . . . 267

7.5.4 Adaptability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

7.5.5 Conclusions of Design Trades . . . . . . . . . . . . . . . . . . . . . . 272

7.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273

8 Conclusions and Recommendations 277

8.0.1 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

A Capability Characteristics for Techsat Candidate Architectures 293


List of Figures

1-1 The System Engineering Process Overview . . . . . . . . . . . . . . . . . 31

1-2 Quality Function Deployment (QFD) . . . . . . . . . . . . . . . . . . . . . 33

2-1 Network representation of a simple communication system . . . . . . . . 44

2-2 Classes of distribution for satellite systems . . . . . . . . . . . . . . . . . 47

3-1 The coverage improvements offered by distribution leading to increased availability . . . . . . . . . . 60

3-2 System mass trades for a separated spacecraft interferometer . . . . . . 63

3-3 The propellant mass fraction for the satellites of a separated spacecraft interferometer . . . . . . . . . . 64

3-4 The USCM Cost Estimating Relationship for IR Payloads . . . . . . . . 66

3-5 Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 25 minute revisit time . . . . . . . . . . 72

3-6 Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 1 hour revisit time . . . . . . . . . . 73

3-7 Satellite and Sensor Configurations . . . . . . . . . . . . . . . . . . . . 75

3-8 Total system costs over the 10 year mission life of a polar orbiting weather satellite system . . . . . . . . . . 77

3-9 Data storage and communication data rates for a distributed imager with 25 minute revisit time, 5 minute interval between downloads . . . . . . 81

4-1 Top-level network representation of a single communication satellite . . 92

4-2 Detailed network representation of a communication satellite . . . . . . 93

4-3 Simple linear-time-invariant system . . . . . . . . . . . . . . . . . . . . . . 94

4-4 A square low-pass filter and its time-domain response . . . . . . . . . . 95

4-5 Basic antenna model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

4-6 A rectangular aperture distribution and its radiation pattern . . . . . . 96

4-7 The basic channel model for a simple system . . . . . . . . . . . . . . . . 98


4-8 The probability of error is the integral under the noise probability density function from [d/2, ∞] . . . . . . . . . . 100

4-9 The signal space representation of QPSK. The four information symbols differ in phase, while their amplitude is constant. . . . . . . . . . . 103

4-10 A simple system with input signals X and Y, and an output signal Z . . 105

4-11 Capability characteristics for a modeled Ka-band communication satellite 109

4-12 Failure state probabilities for a modeled Ka-band communication satellite payload: R = 1.544 Mbits/s, BER = 10⁻⁹, Av = 98%. . . . . . . . . . 113

4-13 Failure state probabilities for a modeled Ka-band communication satellite 115

4-14 Market capture profile for a modeled Ka-band communication satellite . 121

5-1 The NAVSTAR GPS architecture . . . . . . . . . . . . . . . . . . . . . . . 136

5-2 A typical Block II/IIA GPS satellite . . . . . . . . . . . . . . . . . . . . . 138

5-3 Characteristics of the L1 and L2 . . . . . . . . . . . . . . . . . . . . . . . . 139

5-4 PPS and SPS specified accuracies . . . . . . . . . . . . . . . . . . . . . 140

5-5 The network representation of GPS used in GINA . . . . . . . . . . . . . 147

5-6 A snapshot of the visibility of the GPS-24 constellation . . . . . . . . . . 149

5-7 A snapshot of the PDOP for the GPS-24 constellation . . . . . . . . . . 150

5-8 The probability distribution function for the visibility of the GPS constellation between ±60° latitude . . . . . . . . . . 151

5-9 The probability distribution function for the PDOP of the GPS constellation between ±60° latitude . . . . . . . . . . 152

5-10 The probability distribution function for the average elevation angle of GPS satellites in view of ground locations between ±60° latitude . . . . 153

5-11 Comparison of the GPS broadcast ephemeris with the precise orbital solution . . . . . . . . . . 154

5-12 The probability distribution function for range errors attributable to ephemeris errors and unmodeled satellite clock errors . . . . . . . . . . 155

5-13 The Capability Characteristics of GPS-24; PPS . . . . . . . . . . . . . 159

5-14 The Capability Characteristics of GPS-24; SPS; SA off . . . . . . . . . 159

5-15 The Capability Characteristics of the PPS with 2, 4 or 6 satellite failures 161

5-16 The Capability Characteristics of the SPS with 2, 4 or 6 satellite failures 161

5-17 The Performance of GPS-24 SPS in satisfying 2drms (90%) navigation accuracy; Satellite failure rate = 0.0035 per year . . . . . . . . . . 162

5-18 The Capability Characteristics of the PPS service for GPS augmented with 3 GEO satellites, after zero, two, four, or six satellite failures . . . 164


6-1 Information network for Ka-band communications through Cyberstar or Spaceway satellites . . . . . . . . . . 173

6-2 Information network for Ka-band communications through the Celestri system . . . . . . . . . . 175

6-3 The probability distribution function for the elevation angle to a Cyberstar satellite from the ground locations served by the system . . . . . . 176

6-4 The probability distribution function for the elevation angle to a Spaceway satellite from the ground locations served by the system . . . . . . 176

6-5 The probability distribution function for the elevation angle of the highest Celestri satellite in view of each ground location between ±60° latitude . 177

6-6 The Capability Characteristics of Cyberstar1 in addressing the broadband communications market in Western Europe . . . . . . . . . . 179

6-7 The Capability Characteristics of Cyberstar2 in addressing the broadband communications market in North America . . . . . . . . . . 180

6-8 The Capability Characteristics of Cyberstar3 in addressing the broadband communications market in the Pacific Rim . . . . . . . . . . 181

6-9 The Capability Characteristics of Spaceway1 in addressing the broadband communications market in North America . . . . . . . . . . 183

6-10 The Capability Characteristics of Spaceway2 in addressing the broadband communications market in Western Europe . . . . . . . . . . 184

6-11 The Capability Characteristics of Spaceway3 in addressing the broadband communications market in South America . . . . . . . . . . 185

6-12 The Capability Characteristics of Spaceway4 in addressing the broadband communications market in the Pacific Rim . . . . . . . . . . 186

6-13 The Capability Characteristics of the Celestri network in addressing the global broadband communications market . . . . . . . . . . 187

6-14 Failure state probabilities for a typical (modeled) Ka-band GEO communication satellite . . . . . . . . . . 190

6-15 The Capability Characteristics of the degraded Celestri network after losing all seven spares and any other satellite . . . . . . . . . . 191

6-16 Failure probability for the Celestri constellation, relative to a 95% availability requirement for T1 connections, 10⁻⁹ BER. . . . . . . . . . 192

6-17 Broadband market growth models . . . . . . . . . . . . . . . . . . . . 193

6-18 The last-mile market in 2005, GDP distribution . . . . . . . . . . . . . 194

6-19 Cyberstar's market capture map; exponential market model in 2005, GDP distribution; 2400 GMT . . . . . . . . . . 195


6-20 Celestri's market capture map; exponential market model in 2005, GDP distribution; 1200 GMT . . . . . . . . . . 196

6-21 The market capture profile for the Cyberstar system . . . . . . . . . . 197

6-22 The market capture profile for the Spaceway system . . . . . . . . . . 198

6-23 The market capture profile for the Celestri system; both market models; GDP distribution . . . . . . . . . . 198

6-24 The market capture profiles of the Cyberstar satellites; exponential market; GDP distribution . . . . . . . . . . 199

6-25 The market capture profile for the Spaceway satellites; exponential market; GDP distribution . . . . . . . . . . 200

6-26 The market capture profile for a typical Celestri satellite; exponential market; GDP distribution . . . . . . . . . . 201

6-27 The Cost per billable T1-minute metric for Cyberstar, Spaceway and Celestri . . . . . . . . . . 204

6-28 The rate elasticity of the CPF for Cyberstar, Spaceway and Celestri . . 206

6-29 The manufacture cost elasticity of the CPF for Cyberstar, Spaceway and Celestri . . . . . . . . . . 208

6-30 The launch cost elasticity of the CPF for Cyberstar, Spaceway and Celestri . . 209

6-31 The failure rate elasticity of the CPF for Cyberstar, Spaceway and Celestri . . 210

7-1 Space-based radar geometry . . . . . . . . . . . . . . . . . . . . . . . . . . 217

7-2 The Neuvy approximation for the probability of detection of a Swerling 2 target . . . . . . . . . . 222

7-3 Frequency spectrum of a sequence of square radar pulses; PRF=3000Hz, pulse length τ = 1/12000 seconds, and dwell time Td = 1/300 seconds (10 pulses) . . . . . . . . . . 227

7-4 Simplified block-diagram for pulse-doppler radar processing . . . . . . 228

7-5 Artist's impression of the operational Techsat21 system . . . . . . . . . 232

7-6 The relationship between the aperture distribution, the far-field amplitude response, the spatial frequency and the power response. . . . . . . 234

7-7 Simplified Techsat21 Radar Architecture . . . . . . . . . . . . . . . . . 239

7-8 Network diagram for Techsat21 with ns = 4 satellites . . . . . . . . . . 245

7-9 Network diagram for Techsat21 with ns = 8 satellites . . . . . . . . . . 245

7-10 Network diagram for Techsat21 with ns = 11 satellites . . . . . . . . . . 246

7-11 Grazing angle probability distribution function . . . . . . . . . . . . . 247

7-12 Clutter reflectivity, σ°, as a function of grazing angle, for several terrain environments . . . . . . . . . . 248


7-13 Far field power response for an unrestricted minimum redundancy array; ns = 4; Dc = 100m; Ds = 2m . . . . . . . . . . 249

7-14 Far field power response for an unrestricted minimum redundancy array; ns = 8; Dc = 100m; Ds = 2m . . . . . . . . . . 250

7-15 Far field power response for an unrestricted minimum redundancy array; ns = 11; Dc = 100m; Ds = 2m . . . . . . . . . . 251

7-16 Capability Characteristics for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF=1500Hz . . . . . 255

7-17 Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF=1500Hz . . . . 256

7-18 Far field power response for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF=1500Hz . . . . . 257

7-19 Far field power response for a candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF=1500Hz . . . . 258

7-20 Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 400W; Ds = 2m; PRF=3000Hz . . . . . 260

7-21 Capability Characteristics for candidate Techsat21 architectures at a 1 minute update of a 10⁵ km² theater; requirements are PD = 0.75, Availability = 0.9 . . . . . . . . . . 262

7-22 The state probabilities for different numbers of satellite failures in the 8 satellite cluster; λs = 0.026 . . . . . . . . . . 263

7-23 The generalized performance of the different architectures subject to requirements for a 1 minute update of a 10⁵ km² theater with PD = 0.75 and Availability = 0.9 . . . . . . . . . . 264

7-24 The system lifetime cost of different Techsat21 architectures subject to requirements for a 1 minute update of a 10⁵ km² theater with PD = 0.75 and Availability = 0.9 . . . . . . . . . . 269

7-25 Elasticities of the lifetime cost for the 11 satellite cluster; PD: 0.75 → 0.9; Availability: 0.9 → 0.95; Pt: 200W → 100W . . . . . . . . . . 271

A-1 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294

A-2 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295

A-3 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296


A-4 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

A-5 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

A-6 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

A-7 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300

A-8 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

A-9 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302

A-10 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303

A-11 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

A-12 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305

A-13 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306

A-14 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307

A-15 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

A-16 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

A-17 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310

A-18 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311

A-19 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312

A-20 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313

A-21 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314


A-22 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315

A-23 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316

A-24 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317

A-25 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318

A-26 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319

A-27 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

A-28 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

A-29 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

A-30 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

A-31 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324

A-32 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

A-33 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326

A-34 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

A-35 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328

A-36 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

A-37 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

A-38 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

A-39 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332


A-40 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333

A-41 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

A-42 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335

A-43 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336

A-44 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

A-45 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338

A-46 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339

A-47 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340

A-48 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341

A-49 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342

A-50 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

A-51 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344

A-52 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

A-53 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346

A-54 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347

A-55 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348

A-56 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349

A-57 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350


A-58 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351

A-59 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352

A-60 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

A-61 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354

A-62 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355

A-63 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

A-64 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

A-65 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358

A-66 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359

A-67 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360

A-68 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

A-69 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

A-70 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363

A-71 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

A-72 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

A-73 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366

A-74 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367

A-75 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368


A-76 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369

A-77 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370

A-78 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371

A-79 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

A-80 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

A-81 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

A-82 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

A-83 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376

A-84 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

A-85 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378

A-86 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

A-87 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380

A-88 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

A-89 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382

A-90 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 100W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383

A-91 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384

A-92 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385

A-93 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 200W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386


A-94 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 200W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387

A-95 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388

A-96 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389

A-97 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 400W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390

A-98 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 400W ,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

A-99 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 100W ,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392

A-100 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 100W,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393

A-101 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 100W,

PRF=1500Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394

A-102 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 100W,

PRF=3000Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395


List of Tables

1.1 Example system attributes and measurables for the requirements definition of satellite systems . . . . . . . . . . 32

2.1 Satellite system classifications . . . . . . . . . . . . . . . . . . . . . . . 50

3.1 Factor of improvement in the energy per symbol to noise density ratio for distributed clusters compared to singular deployments . . . . . . . . 58

3.2 Distributed infrared imaging system parameters . . . . . . . . . . . . . . 72

4.1 System parameters for a modeled Ka-band communication satellite . . . 107

4.2 Cost per Function metrics for example applications . . . . . . . . . . . . 116

4.3 System cost profile for a single Ka-band communication satellite . . . . 122

4.4 Qualitative comparison between Techsat21 and Discoverer-II space based radar concepts using truncated GINA . . . . . . . . . . 128

5.1 PPS measured accuracies in terms of SEP and CEP navigation errors . 141

5.2 Calculated PPS and SPS accuracies in terms of SEP (50%) and 2drms (90%) navigation errors . . . . . . . . . . 158

5.3 System cost profile for GPS . . . . . . . . . . . . . . . . . . . . . . . . 163

6.1 System parameters for Cyberstar . . . . . . . . . . . . . . . . . . . . . . . 170

6.2 System parameters for Spaceway . . . . . . . . . . . . . . . . . . . . . . . 171

6.3 System parameters for Celestri . . . . . . . . . . . . . . . . . . . . . . . . 172

6.4 System cost profile for Cyberstar; constant year FY96$ . . . . . . . . . 202

6.5 System cost profile for Spaceway; constant year FY96$ . . . . . . . . . 203

6.6 System cost profile for Celestri; constant year FY96$ . . . . . . . . . . 203

6.7 Lifetime costs CL for the modeled systems (net present value in FY96$) 203

7.1 Minimum redundancy arrays, up to N = 11 elements; the number sequence indicates relative spacings . . . . . . . . . . 237

7.2 Test Matrix for Analysis of Techsat21 . . . . . . . . . . . . . . . . . . . . 243


7.3 Modeled Techsat21 system parameters held constant across all cases . . 244

7.4 System lifetime costs for Architecture 1 (8 sats) . . . . . . . . . . . . . . 267

7.5 System lifetime costs for Architecture 2 (11 sats) . . . . . . . . . . . . . 267

7.6 System lifetime costs for Architecture 3 (8 sats, Centralized Processor) 268

7.7 System lifetime costs for Architecture 4 (11 sats, Centralized Processor) 268


Chapter 1

Introduction

1.1 The Bottom Line

It might seem strange to begin a document with a statement of the final conclusions, right up front in the first few paragraphs. This document is, however, an engineering thesis and not prose, so there is no need for suspense. In fact, knowing the eventual destination adds meaning and context to each page. Thus it is stated immediately:

Almost all envisioned satellite systems are information disseminators that can be represented as information transfer networks. These systems are characterized by a set of standardized and measurable parameters for the quality of service they provide. Using these parameters to define quantifiable cost-effectiveness and sensitivity metrics, a generalized system analysis methodology for satellite systems can be formulated. This is useful for Systems Engineering (SE) of satellite systems as well as competitive analysis and investment decision making.

The next two hundred pages or so go on to explain and qualify these statements, using fundamental science and the principles of satellite systems engineering. The development of this formal methodology, which is fully compatible with conventional SE practices, is the main contribution of this work to the state of the art. The interested engineer who wants to implement this methodology for real analyses should continue through the whole document. The higher-level decision-maker who needs only a working appreciation of the concepts can finish this chapter, then flip straight to the conclusions.
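As a preview of where this leads, the cost-effectiveness and sensitivity metrics can be sketched schematically. The following is only a minimal illustration, not the formal GINA definitions developed in Chapter 4; the symbols C_life (lifetime cost), N_satisfied (number of satisfactorily served users) and x (any requirement or technology parameter) are assumed placeholders:

```latex
% Schematic sketch only -- assumed notation, not the formal GINA definitions.
% CPF : average cost of providing satisfactory service to a single user.
% E_x : a Type 1 Adaptability metric, the elasticity of the CPF with
%       respect to a requirement or technology parameter x.
\[
  \mathrm{CPF} \;\approx\; \frac{C_{\mathrm{life}}}{N_{\mathrm{satisfied}}},
  \qquad
  E_x \;=\; \frac{\partial \mathrm{CPF}/\mathrm{CPF}}{\partial x / x}
      \;=\; \frac{x}{\mathrm{CPF}}\,\frac{\partial \mathrm{CPF}}{\partial x}.
\]
```

In this sketch, performance enters implicitly through N_satisfied: a user is counted as served only when the delivered isolation, rate, integrity and availability meet that user's requirements.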

1.2 Satellite Systems in the New Millennium

We are entering a new era in the utilization of satellite systems. In the next ten to twenty years, the commercial world will see the development of four types of space-based systems that will be available to both friendly and unfriendly nations, corporations, and individuals on a worldwide basis:

� Global positioning and navigation services While the DoD already has GPS,

other countries are developing equivalent systems or augmenting the existing one; sim-

ilar capabilities will be available through the development of personal communication

systems. They will enable navigation with an accuracy of less than one meter.

• Global communication services. Several systems are already in production or on-

orbit, such as Iridium, Globalstar and ICO. These systems will provide universal

communications services between mobile individuals almost anywhere on the surface

of the Earth. These systems will work transparently with local cellular systems and

will enable rapid telecommunications development in underdeveloped parts of the

world.

• Information transfer services. These services will enable data transfer between any

two points on the surface of the Earth at rates ranging from a few bits per second

for paging, to mega- and gigabits per second for multimedia applications. Proposed

systems include Orbcomm, Spaceway, Cyberstar, Astrolink and Celestri/Teledesic.

Individual users will be able to access large amounts of data on demand.

• Global reconnaissance services. These services will provide commercial users with

multispectral data from almost any point on the surface of the Earth with meter-

scale resolution. This data will span the range from the radio frequencies (RF) to the

infrared (IR) through the visible into the ultraviolet (UV). This information will be

available within hours of a viewing opportunity and on the order of a day from the

time of a request. Proposed and existing systems include SPOT, Orbimage, World

View, Earthwatch and RadarSat.

It will therefore be possible for persons of means to locate themselves on any point on the

Earth, communicate both by voice and computer to other points on the Earth, and have a

good picture of the local environment. Both the services and the technologies that enable

them will be commercially available all over the world. The commercial potential of these

services will fuel their continued development, and companies will be forced toward even

more advanced and ambitious concepts to gain competitive advantage.

Within the military, increased political pressure to move American troops out of harm's

way, while still being able to project global superiority, is leading to an increased reliance

on space assets. Indeed, the Air Force's recent doctrine of Global Engagement has started

the transition from an "air and space force" to a "space and air force". This too will drive

the development of increasingly sophisticated satellite systems for communications, remote


sensing, navigation and even weapons delivery. Unfortunately, with the Cold War over

and no identifiable single adversary to spur funding, DoD budget controls have become

more restrictive. Any new military satellite systems must therefore not only improve the

capabilities to wage a modern war, but also provide utility during peacetime operations,

and do so at a lower cost.

These factors will drive a move towards a new way of doing business in space. No longer

will it be acceptable to rely on proven technologies, processes and practices to minimize risk.

Ventures with higher levels of risk in performance, schedule and cost will be undertaken in

all sectors. To support this shift, higher levels of technology in spacecraft components, and

improved manufacturing processes will be needed. However, the largest and most immediate

bene�t will likely come from improved systems engineering practices. The existing paradigm

for satellite system architectures is based on years of experience, but re ects outdated

technology and budget climates that were very di�erent from today's. By removing the

preconceived notions about how to design e�ective space systems, and instead, starting

from a clean sheet, enormous bene�ts may be possible in capabilities, performance and

cost. The potential o�ered by improved systems engineering is made clear by the following

excerpt:

The need for a well-integrated approach to system design and development can be

better appreciated when it is realized that approximately eighty to ninety percent

of the development cost of a large system is predetermined by the time only five

to ten percent of the development effort has been completed [1].

As a result, there is a definite need for sophisticated analysis techniques that reflect the

newer architectures and can reduce risk by accurate predictions of capabilities and cost. To

be useful, any new analysis methodology must be compatible with the formal SE process.

1.3 Background: Analyses in Systems Engineering

The International Council on Systems Engineering (INCOSE) Handbook [1] gives the fol-

lowing definitions:

• System: An integrated set of elements to accomplish a defined objective. These

include hardware, software, firmware, people, information, techniques, facilities, ser-

vices, and other support elements.

• Systems Engineering: An interdisciplinary approach and means to enable the

realization of successful systems.


• Systems Engineering Process: A logical, systematic process devised to accom-

plish system engineering tasks.

The basic tenet behind SE is to consider the system and its functionality as a whole, rather

than as a collection of independent components. The individual parts of a system do not have

to be optimal for the system to perform optimally, or at least satisfactorily.

The SE process is an iterative procedure for deriving or defining requirements at each

hierarchical level of the system, beginning at the top, and flowing down these requirements

in a series of steps that eventually leads to a preferred system concept. Further iteration

and design refinement leads successively to preliminary design, detailed design and finally,

approved design. In fact, SE activities are carried out in almost all phases of a project's

lifecycle, from the system analysis, requirements definition and conceptual design at the pro-

gram's inception, through production, operations, maintenance, replacement, and eventual

disposal at the end of life (EOL). For this thesis, it is the role played by systems analyses

that is primarily of interest, and this is usually most important during the design phase.

To clarify how system analysis is used within the SE process, the actual process must be

explained.

1.3.1 The Systems Engineering Process

The SE process is good engineering practice and should be applied by all engineers, just as

the scientific method is applied by scientists. The basic steps in the systems engineering

process, as defined by INCOSE [1], are: (1) Define the system objectives (User's Needs);

(2) Establish performance requirements (Requirements Analysis); (3) Establish the func-

tionality (Functional Analysis); (4) Evolve design and operations concepts (Architecture

Synthesis); (5) Select a baseline (through Cost/Benefit Trades); (6) Verify the baseline

meets requirements (User's Needs); and, (7) Iterate the process through lower level trades

(Decomposition).

This process is shown in Figure 1-1. The requirements loop establishes the quantifi-

able performance requirements that represent the user's needs, and from them derives the

functional requirements for each functional component in the architecture. The design loop

translates those requirements into design and operations concepts. The verification loop

checks the capability of the solutions to determine if they match the original requirements.

A control loop ensures that only the most cost-effective solutions are selected.

It should be emphasized that there is a big difference between defining what needs to be

done versus how well it must be done. If there is to be no ambiguity about knowing when

a job is completed or when a product is acceptable, requirements must be expressed in

measurable terms. Also, a requirement is not a requirement unless it can be verified [1].


Figure 1-1: The System Engineering Process Overview [1]
(Figure content: Requirements Analysis, Functional Analysis, Synthesis, Verification and Design blocks connected by the requirements, design and control loops under System Analysis & Control, with system inputs and outputs.)

Requirements Analysis

The objective of requirements analysis is to translate the users' needs into a quantifiable set of performance requirements that can be used to derive design requirements. The users' needs can often be characterized in measurable categories such as Quantity, Quality, Coverage, Timeliness, and Availability. INCOSE [1] give examples of these categories for two different types of satellite systems, reproduced in Table 1.1. At this point the reader must make a mental note to revisit this table after completing the rest of the thesis. The Generalized Analysis methodology is based on quality of service attributes for information transfer systems, and although developed independently of (and concurrently with) INCOSE's formalized documentation of SE practices, arrives at almost identical categories for the measurable requirements of these satellite systems. One of the contributions of this work has been to standardize this categorization such that there is no subjectivity in its definition.

Modeling and analysis are used to convert these performance requirements (availability, etc.) into suitable requirements that the hardware and software designers can relate to more easily (power, aperture, etc.). Functional decomposition tools such as functional block diagrams, functional flow diagrams and timelines are useful in developing requirements. Quality Function Deployment (QFD) [2] is a tool for requirements flowdown that is rapidly gaining popularity in America, after being developed in Japan during the 1970's.


Table 1.1: Example system attributes and measurables for the requirements definition of satellite systems, from INCOSE [1]

Measurable Attribute   Surveillance Satellite                      Communication Satellite
Quantity               Frames/day, Sq. Miles/day                   Throughput (bits/s)
Quality                Resolution (ft)                             Bit error rate (BER)
Coverage               Latitude, Longitude                         Latitude, Longitude
Timeliness             Revisit time (hrs), Delivery time (sec)     Channel availability on demand (min)
Availability           Launch preparation time (days)              Bandwidth under stressed conditions (Hz)

The essential characteristics of this method, sometimes called the "House of Quality", are

shown in Figure 1-2. QFD systematically translates customer requirements ("voice of the

customer") into design requirements ("voice of the engineer") using a Relationship Matrix

that correlates design features (power, aperture, etc.) with customer requirements. The

strength of the correlation between each pair is estimated subjectively, and used to deter-

mine the most important design drivers. The chosen values of the design parameters are

entered along the bottom of the Relationship Matrix, and then benchmarked against the

corresponding values for competing systems in the next row. The Requirements Correlation

Matrix on the top of the diagram is used to compare the design features against each other

to indicate if they are supportive or competing. For example, a surveillance satellite may

need a wide field-of-view sensor to achieve high search rates, but this opposes the need for

a high signal to noise ratio (SNR) derived from target detection requirements.
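
As a concrete illustration of the Relationship Matrix arithmetic, the short sketch below computes the technical importance of each design feature from subjectively assigned requirement weights and correlation strengths. The requirements, features and numerical scores are invented for illustration only and are not drawn from any system analyzed in this thesis.

    # Illustrative QFD Relationship Matrix calculation (hypothetical numbers).
    # Correlation strengths are assigned subjectively, often on a 1/3/9 scale.

    requirements = {            # customer requirement : importance weight
        "search rate":   5,
        "detection SNR": 4,
        "revisit time":  3,
    }

    features = ["aperture", "power", "field of view"]

    relationship = {            # correlation of each requirement with each feature
        "search rate":   {"aperture": 1, "power": 3, "field of view": 9},
        "detection SNR": {"aperture": 9, "power": 9, "field of view": 1},
        "revisit time":  {"aperture": 1, "power": 1, "field of view": 3},
    }

    # Technical importance of a feature = sum over requirements of
    # (requirement weight x correlation strength); the largest scores flag
    # the most important design drivers.
    importance = {
        f: sum(w * relationship[req][f] for req, w in requirements.items())
        for f in features
    }

    for feature, score in sorted(importance.items(), key=lambda kv: -kv[1]):
        print(f"{feature:15s} {score}")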

QFD flowdown of requirements to the next system level is achieved by copying the values

of each of the design variables into a new QFD diagram, this time in the requirements

column, as shown in Figure 1-2. Note that the actual numerical values that are used

for these design variables are the result of separate system analyses and trade studies to

determine how best to meet the customer requirements. In this way, QFD is not a system

analysis tool as such, but rather a way of organizing requirements flowdown. Furthermore,

the subjectivity of the method means that the confidence in the results depends on the

experience and skills of the individuals involved.

For large, complicated systems, more sophisticated modeling and simulation techniques

are required that can predict the system's capabilities in satisfying user needs, given a set

of design variables for the system components. This modeling is important for sizing and

design of the functional components, for the requirement veri�cation process, and for the

requirement sensitivity analysis.


Figure 1-2: Quality Function Deployment (QFD) [1]
(Figure content: the Relationship Matrix linking Requirements to Features, the Requirements Correlation Matrix across the top, chosen values benchmarked from best to worst, and the flowdown of requirements to successive levels.)

Functional Analysis

Again, drawing from INCOSE's definitions, ". . . the objective of the Functional Analysis

task is to create a functional architecture that can provide the foundation for defining the

system architecture through allocation of functions and subfunctions to hardware/software

and operations. It should be clearly understood that the term 'functional architecture'

only describes the hierarchy of decomposed functions and the allocation of performance re-

quirements to the functions within that hierarchy. It does not describe either the hardware

architecture or the software architecture of the system. Those architectures are developed

during the System Synthesis phase of the systems engineering process. . . [Functional Anal-

ysis] describes what the system will do, not how it will do it" [1].

Among the best tools for defining the functional architecture are functional flow dia-

grams (FFDs). These are multi-tier, step-by-step decompositions of the system functional

flow, with blocks representing each separate function. FFDs are useful to define the de-

tailed operational sequences for the system, and might include functional representations

of hardware, software, personnel, facilities and procedural actions.

Modeling and simulation can be used to verify the interpretation, definition or viability

of the functional decomposition. The modeling and simulation allow the capabilities of the

functional architecture to be compared with the system requirements derived from the user

needs, so that the architecture can be made to satisfy the mission objectives. The output

of the Functional Analysis phase is therefore an FFD hierarchy with each function at the

lowest possible level uniquely described, and verified by detailed modeling and simulation.


System Analysis: Trade Studies

Trade studies provide an objective basis for deciding between alternative approaches to the

solution of an engineering problem. Clearly, the mechanism for performing trade studies

should be based on objectively quantifying the impact of the decision on the system's ability

to carry out the mission objectives that represent the user needs. Unfortunately, trade

studies will often be carried out using selection criteria that are not directly related to the

mission objectives, but rather to the immediate engineering problem at hand. Unless the

engineer properly understands the interaction between all the functions in the architecture,

this approach may not capture the real impact of the trade on the overall mission. Selection

criteria must be chosen very carefully to properly represent the impact of any decision made.

Furthermore, most published methods for trade studies involve biasing the analysis with

subjective weighting factors for each selection criterion [1]. This may be acceptable if the

engineer is experienced and the functional architecture is well understood, but can lead

to incorrect analysis in other cases. System analysis that quantifies the impact of each

decision on the overall system operations is more accurate, but usually takes more time and

resources [3].

System Architecture Synthesis

The main objective of the System Architecture Synthesis phase is to create a system archi-

tecture that: (1) satisfies the overall mission requirements; (2) implements the functional

architecture; and, (3) is acceptably close to the true optimum within the constraints of

time, budget, available knowledge and skills, and other resources [1].

The process of Architecture Synthesis is essentially a giant trade-study. The best alter-

native is chosen from a set of candidate system architectures for which there is reasonable

certainty that at least one of them is acceptably close to the true optimum. Defining the

set of alternatives involves flowing requirements down from the functional architecture, to

define a set of element options for each component of the system. System elements are the

physical entities that make up the system. By selecting a range of elements for each compo-

nent, a set of system architectures can be de�ned. Of course, modeling and analysis must

be used to verify that all the considered system architectures satisfy the system require-

ments. Measuring the "best" then involves functional flow analysis or other such modeling

techniques, using selection criteria that represent the ability of the system to fulfill mission

requirements at the lowest costs, within resource constraints, and with acceptable risk. In

the interest of efficient analysis, a minimal set of criteria should be used, including only

the most significant ones that are sufficient to distinguish the optimum from the other

contenders, and no more [1].


1.3.2 Modeling and Simulation

Summarizing then, the most important uses of modeling and system analysis in the design

phases of the SE process are:

• Requirements Analysis: to determine and measure impacts of candidate requirements

• Functional Analysis: to assess capabilities of functionally decomposed architectures

• Trade studies: to accurately determine the impacts of design decisions

• System Synthesis: to evaluate candidate options against selection criteria

Note that a single modeling and analysis methodology based on the hierarchical func-

tional architecture of the system could be used in all four phases of the SE process. By

simply adding or reducing the level of detail, or by moving up or down the hierarchy, a

single consistent model could be used and refined throughout the process. Further, if the

modeled parameters are direct representations of the measurable categories of the mission

requirements, then the models have a clear and meaningful interpretation.

Now, although there are many commercially available software tools that could in the-

ory perform this kind of analysis1, there is no well-publicized generalization of the proce-

dural logic that should be followed in order to obtain objective, relevant, and quantitative

results. Rather, the tools mostly provide the computing environment for engineers to de-

velop custom, application-specific analyses. Essentially, system engineering requires a lot

of book-keeping, and this is where the commercial tools are most useful. They do not, how-

ever, instruct or guide the engineer as to what variables, parameters or requirements are

important. The reason for this is to provide maximum flexibility across the enormous range

of potential engineering applications for which these tools can be used.

However, for satellite systems, there are some basic similarities across nearly all cases

that suggest that such a generalization is possible. Specifically, the functional goal of most

satellite systems is to transfer information-bearing signals between remote locations. This

common link permits the adoption of generalized measures for capability, requirements, per-

formance, cost and sensitivity. These generalized metrics would impose additional structure

on satellite systems analyses, removing a lot of subjectivity, standardizing the procedure

and allowing analyses to be performed quickly and efficiently. The methodology would not

replace the existing tools, but instead guide the engineer on how best to use them. In short,

a generalized methodology would simply organize the thought-process needed to perform

system analysis.

1It is inappropriate to discuss the specific commercial tools in an archival document, especially since they change every few months. However, INCOSE maintains an updated database of the available software tools for SE on their World Wide Web site at URL=http://www.incose.org/.


There is thus both the motivation and the opportunity to develop a generalized, quan-

titative modeling and analysis methodology for satellite systems.

1.4 Previous Work

The generalized analysis methodology described in this thesis builds upon some standard

texts on space systems design, and also on recent application-specific studies that adopted

a similar approach to the analysis.

First of all, the classic reference Space Mission Analysis and Design, edited by Wertz

and Larson, and henceforth referred to as SMAD [3], is perhaps the most comprehensive

presentation of the concepts, the science and engineering principles, and design techniques

associated with unmanned space missions. Each chapter of this book is written by leading

specialists, and discusses a di�erent aspect of either the design process or the satellite

system itself. The usefulness of this text is unquestioned, and the fundamental engineering

concepts it presents are the underpinnings of a great deal of this research. In particular,

the concept of a lifetime system cost that includes expected compensation for failures, as

described by Hecht in Chapter 19 of SMAD [3], is a key feature of the generalized analysis.

The only shortcomings of SMAD are that it lacks a detailed treatment of systematic analysis

methodologies and, at present, does not address the features specific to distributed satellite

systems. The research presented in this thesis attempts to fill these needs.

The definition of a measurable metric for the lifetime cost, normalized by the func-

tional performance, is a principal contribution of this research. This metric is equivalent

to amortizing the lifetime system cost over all the satisfied users of the system, a concept

used very effectively by Gumbert et al [4] in a comparative study of the proposed mobile

communication satellite systems. In that work, several proposed satellite communication

systems (Iridium, Globalstar, ICO, etc.) were compared on the basis of the smallest cost

per billable voice-circuit minute that the company could support, while still achieving an

acceptable rate of return on their investment. The calculation of this metric involved de-

tailed simulation of the different systems in realistic market scenarios, to determine the

maximum number of users (voice-circuits) that could be addressed, and estimation of the

lifetime system cost, accounting for development, construction, launch and operations. The

results suggested that market penetration and not system architecture was the dominant

factor in achieving low values for the cost per billable voice-circuit minute. In a follow-up

study, Kelic et al [5] applied a similar technique to the broadband data communications

market, evaluating the cost per billable T1-minute for various proposed satellite systems2.

2T1 is a 1.544 Mbits/sec data rate


The conclusions of that study were that market uncertainty had a larger impact than sys-

tem architecture. A major goal of this research was to generalize and extend the concept of

the cost per billable minute metric, such that it could be applied to more general satellite

applications than just communications. Also, the broadband case has been revisited, with

additional consideration for the effects of reliability, a feature that was missing from both

the previous studies.
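
The essential arithmetic behind such a metric is simply the amortization of the lifetime cost over every billable unit of service, as in the hedged sketch below. The numbers are invented, and the cited studies additionally fold in a required rate of return on investment and detailed market simulations, both of which are omitted here.

    # Illustrative cost-per-billable-minute calculation (invented numbers).

    lifetime_cost = 4.0e9            # $, net present value of building, launching
                                     # and operating the system
    system_life_years = 10

    subscribers = 2_000_000          # average subscribers over the system life
    minutes_per_sub_per_month = 100  # billable voice-circuit minutes per subscriber

    billable_minutes = (subscribers * minutes_per_sub_per_month
                        * 12 * system_life_years)

    cost_per_minute = lifetime_cost / billable_minutes
    print(f"Cost per billable voice-circuit minute: ${cost_per_minute:.2f}")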

The notion of designing a system to optimize a cost-effectiveness metric was used re-

cently in two studies by Jilla et al [6] and Wickert et al [7]. Jilla et al [6] applied Markov

techniques to modular decompositions of separated spacecraft interferometers in order to

predict system reliability and degraded operability. The reliability predictions were used to

determine the cost-effectiveness of several alternative architectures. The key result was that

modular, multifunctional designs improved the reliability, and supported graceful degra-

dation, thus realizing higher cost-effectiveness than dedicated single-function designs. The

work is significant not only for its conclusions, but also for the systematic application of

functional flow models, Markov models and quantifiable cost-effectiveness metrics. The

techniques presented in this thesis are complementary generalizations of the methods used

by Jilla.

In a feasibility assessment of performing the next generation Airborne Warning and

Control System (AWACS) mission from a space based radar platform, Wickert et al [7]

showed that a distributed architecture offered significant cost savings, improved capabili-

ties and increased overall reliability compared to monolithic designs. The cost metric used

was the cost to initial operating capability (IOC), and included contributions from devel-

opment, construction, launch, and reliability expenditures. The design process minimized

the IOC cost with respect to system architecture variables, while ensuring compliance with

established functional requirements. The overlaps between Wickert's work and this research

are: (1) the form of the requirements definition, which could be restated in the same terms

as used consistently in this thesis; (2) the adoption of a quantifiable metric for the cost to

provide a constant level of performance; and, (3) the investigation of distributed concepts

of operations. Indeed, the results of Wickert's work were motivating in finding the general

characteristics of distributed systems that lead to improved capabilities and lower costs for

a wide variety of missions.

1.5 Content of the Document

The thesis is divided into two parts; Part 1, comprising Chapters 2-4, contains the de-

velopment of the generalized analysis methodology. Chapter 2 classifies the generalizable

characteristics of satellite systems, and qualitatively introduces the concepts that will be


needed in later chapters. The classi�cations and generalizations developed in Chapter 2

are used qualitatively in Chapter 3 for a detailed discussion of distributed satellite systems.

Chapter 4 is the crux of the work, and succinctly describes the quantitative Generalized

Information Network Analysis (GINA) for satellite systems. Part 2 begins at Chapter 5

and includes the case studies, representing detailed, quantitative applications of the GINA

methodology. The Global Positioning System is analyzed in Chapter 5, and is used primarily

as a validation of the technique for an existing system. A comparative analysis of proposed

broadband communication satellite systems is presented in Chapter 6. The last case study

is in Chapter 7, where GINA is applied for the (real) design of a proposed military dis-

tributed space-based radar. Finally, Chapter 8 states the conclusions and recommendations

for future work.

Please note that each chapter is intended to be somewhat stand-alone, to obviate endless

page flipping. Unfortunately, this means that there is a little repetition of content across

chapters. This is a small price to pay for clarity.


Part I

Development of the Generalized

Analysis Methodology


Chapter 2

Generalized Characteristics of

Satellite Systems

The primary goal of this research is to develop a consistent methodology for quanti�able

analysis of all satellite systems, spanning all likely applications. The emphasis of this

chapter is to introduce the concepts that are needed to construct this generalized analysis.

This involves the identi�cation of the characteristics that are general to all satellite systems,

regardless of application, and also the de�nition of a framework for classifying space system

architectures.

2.1 Distributed satellite systems

Recently, increases in the available processing power, improvements in navigation, and

advances in the manufacturing process have all made the concept of a distributed satellite

system feasible. The term \distributed satellite system" is used to refer to a system of many

satellites designed to operate in a coordinated way in order to perform some speci�c function.

This de�nition encompasses a wide range of possible applications in the commercial, civilian

and military sectors. The advantages o�ered by such systems can mean improvements in

performance, cost and survivability compared to the traditional single-satellite deployments.

This makes their implementation attractive and inevitable. The term \distributed satellite

system" can have two di�erent meanings:

1. A system of many satellites that are distributed in space to satisfy a global (non-

local) demand. Broad coverage requirements necessitate a separation of the satellite

resources. At any time, the system supports only single-fold coverage of a target

region. The local demand of each region is served by the single satellite in view. Here,

the term "distribution" refers to the fact that the system is made up of many satellites


that work together to satisfy a global demand.

2. A system of satellites that gives multifold coverage of target regions. The system

therefore has more satellites than the minimum necessary to satisfy coverage require-

ments. A subset of satellites that are instantaneously in view of a common target

can be grouped as a cluster. The satellites in the cluster operate together to satisfy

the local demand within their field of view. Note that the cluster may be formed by

a group of formation-flying satellites, or from any subset of satellites that instanta-

neously share a common field of regard. The cluster size and orientation may change

in time, as a result of orbital dynamics or commanded actions. In any case, the

number of satellites in the cluster is equal to the level of multifold coverage. In this

context, "distribution" refers to the fact that several satellites work together to satisfy

a local demand. The entire system satisfies the global demand.

The most important characteristic of all distributed systems, common to both of the

above concepts, is that more than one satellite is used to satisfy the overall (global) de-

mand. This is the basic distinction between a distributed and a singularly-deployed system.

Within the classification of a distributed system, the main difference between the two con-

cepts described above lies in the way that the local demand is served. Specifically, the

distinction is the number of satellites used to satisfy this local demand: the cluster size ns

thus characterizes the level of distribution, with larger cluster sizes corresponding to higher

levels of distribution. The lowest level of distribution, with a cluster size of one, corresponds

to the first meaning of distribution described above.

2.2 Abstraction to Information Networks

All current satellite applications provide some kind of service in communications, sensing,

or navigation. The common thread linking these applications is that the satellite system

must essentially perform the task of collection and dissemination of information. Data that

contains pertinent information is gathered by the satellite, either from other components of

the system (on the ground, in the air or in space) or from the environment (local or remote).

Some interpretation of the data may be performed, and then the satellite disseminates the

information to other system components. The generalization made is that all satellite sys-

tems are basically information transfer systems, and that ensuring information flow through

the system is the overall mission objective. This is easily understood for communication

and remote sensing systems. Perhaps more surprising is that navigation systems such as

GPS are also information disseminators. The GPS satellites use the information uploaded

from the control segment to construct a signal which is transmitted to the ground. GPS


receivers can use the information in the signal, including not only the navigation message

contained therein, but also the phase of the signal itself, to determine a navigation solution.

As with communications and remote sensing, the performance of the system relies on the

flow of information through the satellite network.

While the format and routing of the information being transferred may be different

for different applications, the physics characteristic of information transfer systems is, of

course, invariant. This common thread linking all systems (navigation, surveillance, commu-

nications, and imaging) establishes a context for a generalized analysis, and is particularly

useful in the study of distributed systems.

To generalize, satellite systems can be represented as information processing networks,

with nodes for each satellite, subsystem or ground station. The satellite network connects

a set of source nodes to a remote set of sink nodes, and in doing so, addresses a demand

for the transfer of information between them. Figure 2-1 graphically represents a simplified

version of such a network for a communication system consisting of three satellites and two

gateway ground stations. The system transfers data between users distributed throughout

its coverage area, using several spot beams, which are the input and output interfaces for

the satellite nodes. The satellites can also route information through ground stations. Even

in this simple example, there are many possible routes for information to travel through the

network. Some paths involve only satellite nodes, while others involve both satellites and

ground stations.
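
The point is easily made computationally: a sketch of the kind below, with an invented connectivity that only loosely resembles Figure 2-1, enumerates every loop-free route from a source to a sink, some purely satellite-to-satellite and some passing through a gateway.

    # Minimal network sketch (hypothetical connectivity): enumerate the
    # loop-free routes from a source spot beam to a sink spot beam.

    network = {
        "source":   ["sat1"],
        "sat1":     ["sat2", "gateway1"],
        "gateway1": ["sat2"],
        "sat2":     ["sat3", "gateway2"],
        "gateway2": ["sat3"],
        "sat3":     ["sink"],
        "sink":     [],
    }

    def simple_paths(graph, node, goal, path=()):
        """Yield every route from node to goal that visits no node twice."""
        path = path + (node,)
        if node == goal:
            yield path
            return
        for nxt in graph[node]:
            if nxt not in path:
                yield from simple_paths(graph, nxt, goal, path)

    for route in simple_paths(network, "source", "sink"):
        print(" -> ".join(route))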

Accepting the abstraction of satellite systems to information networks allows satellite

system analysis to be treated with the well-developed mathematics of network flow and

information theory. The principles of network flow apply to the overall routing and flow of

information, while the transmission of information over each individual link is governed by

the rules of information theory. The relevant concepts are discussed in detail in Chapter

4 within the development of the quantitative generalized analysis framework. For now, it

suffices to be aware of only the most important consequences of the abstraction.

The information symbol is the atomic piece of information demanded by the end-users.

For communication systems the symbol is either a single bit or a collection of bits. For

imaging systems, the symbol is an image of a scene. For a navigation system, the symbol is

a user navigation solution. The NAVSTAR GPS system is an interesting example because it

addresses this demand without transferring user navigation solutions through its satellites;

they only relay their position and time to the users. With this information from at least four

satellites, the user terminal can calculate the navigation solution, assembling the information

symbol from several constituent parts.

To be a contributing element to the system, each satellite node must receive information

from some other node, be it a source, a ground station or another satellite. Once this


Figure 2-1: Network representation of a simple communication system
(Figure content: three satellite nodes and two gateway ground stations connecting sets of sources to sets of sinks.)

information has been received, the satellite may perform some processing and reduction

before relaying the information to the next node in the network. This destination node may

likewise be an end-user, a ground station or another satellite in the system. Although some

data reduction may be done, information must flow through the satellites. Because of this

continuity constraint, every satellite must be able to communicate with at least one other

node in the network. For all satellites, the energy conversion system (e.g. solar arrays)

must provide the energy for the transmission of this information. For "active" systems,

the satellite must also provide the energy needed to receive the information in the first

place. These are systems such as radar and lidar that illuminate a target and detect the

return. The satellites must transmit a signal with enough energy to make the round trip

journey to the source and back. The source adds the information to the signal, but returns

only a fraction of the incident energy, depending on its cross-section. Note that under this

de�nition, and contrary to intuition, communications satellites are \passive" since they only

relay received information to a destination node.


The volume of demand served by the system is limited by the market (demographics,

capture and exhaustion), and by the system capabilities. For information networks, the

quantity, quality and availability of the information arriving at the sinks are fair measures

of the system's capabilities, and represent the quality of service delivered to the users. Four

quality-of-service parameters can be defined to measure system capabilities1:

• Isolation characterizes the system's ability to isolate and identify the information

signals from different sources within the field of regard. The isolation capabilities of a

system determine the level of cross-source interference that is admitted. Multiple ac-

cess schemes for communication systems are methods of signal isolation. Analogously,

the resolution of an imaging system allows spatially separated sources to be isolated.

• Information Rate measures the rate at which the system transfers information

between the sources and the sinks. This is most familiarly associated with the data

rate for communication systems. The revisit rate is the corresponding parameter for

imaging systems. Information must be sampled at a rate that matches the dynamics

of the source or end-user. For example, a high speed cruise missile must be tracked

with a high sampling rate. Similarly, a GPS receiver on a high-dynamic aircraft must

receive information from the satellites at a rate that is sufficient to allow navigation

solutions to be updated very quickly.

• Integrity characterizes the probability of making an error in the interpretation of an

information symbol based on noisy observations. For communications, the integrity

is measured by the bit error rate. The integrity of a surveillance radar system is a

combination of the probability of a missed detection and the probability of a false

alarm, since each constitutes an error.

• Availability is the instantaneous probability that information symbols are being

transferred through the network between known and identified origin-destination (O-D) pairs at a given

rate and integrity. It is a measure of the mean and variance of the other capability

parameters. It is not a statement about component reliabilities. At any instant, the

network is defined only by its operational components, and so all networks are assumed

to be instantaneously failure-free. Should a component fail, the network changes by

the removal of that component. Generally, the capabilities of the new network will be

different from those of the previous network.

Basically, the rate and integrity correspond to the quantity and quality of the infor-

mation exchanged between a single O-D pair, the isolation measures the ability to serve

1Chapter 4 formally defines these parameters, generalizes the concepts, and describes how to quantify the values.


multiple O-D pairs without interference, and the availability measures how well the system

does all this, at any particular instant. These quality-of-service parameters measure the

capabilities of satellite systems over all likely operating conditions. The actual operating

point is set to match the demands of the market that the system is to serve. This demand is

represented by a set of functional requirements, specific to an individual information trans-

fer. The requirements specify minimum acceptable values for each of the quality of service

variables.

Since the availability implicitly includes a reference to the other characteristics, the

requirements simply enforce that, for a specified level of isolation, rate and integrity, the

availability of service exceeds some minimum acceptable value. Architectures that support

capabilities exceeding the requirements of the market are viable candidates for the mission.

The degree to which a system is able to satisfy the demands of a market is a critical

consideration for system analysis. In fact, the probability of satisfying the system require-

ments that correspond to the market is the correct measure of system performance. This

is sensitive to component reliabilities, since failures can degrade the system such that the

new capabilities violate requirements. Architectures that can tolerate component failures

without significant degradations in the capabilities are good candidates for the mission.
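
The bookkeeping implied by these definitions is straightforward, and the sketch below (with invented capability states and probabilities) shows the form it takes: a capability state of the network either satisfies the minimum acceptable values of the quality-of-service parameters or it does not, and the performance is the probability-weighted fraction of states that do.

    # Minimal sketch of a requirements check over possible network states
    # (all numbers are invented for illustration).

    from dataclasses import dataclass

    @dataclass
    class QoS:
        isolation: float      # e.g. number of sources resolvable without interference
        rate: float           # information rate delivered to an O-D pair
        integrity: float      # probability of correct symbol interpretation
        availability: float   # instantaneous probability of service

    def satisfies(capability: QoS, requirement: QoS) -> bool:
        """True if every capability parameter meets or exceeds its requirement."""
        return (capability.isolation    >= requirement.isolation and
                capability.rate         >= requirement.rate and
                capability.integrity    >= requirement.integrity and
                capability.availability >= requirement.availability)

    requirement = QoS(isolation=1, rate=1.544e6, integrity=0.999, availability=0.95)

    # (probability of the state, capabilities of the network in that state)
    states = [
        (0.90, QoS(1, 2.0e6, 0.9999, 0.98)),   # all satellites operating
        (0.08, QoS(1, 1.6e6, 0.999,  0.96)),   # one satellite failed
        (0.02, QoS(1, 0.8e6, 0.99,   0.90)),   # two satellites failed
    ]

    performance = sum(p for p, cap in states if satisfies(cap, requirement))
    print(f"Performance (probability of meeting requirements): {performance:.2f}")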

2.3 Satellite System Classifications

The information network representation of satellite systems and the definition of the four ca-

pability parameters support a generalized classification of all satellite systems, distributed

or singularly deployed. Categorizing the different architectures and identifying those is-

sues and problems characteristic of each class allows immediate architectural decisions to

be made for any given mission. Classifications are therefore necessary that allow system

identification and highlight the most important system characteristics.

2.3.1 Distribution

In Section 2.1, it was pointed out that the level of distribution exhibited by a system

is defined by the cluster size. Although the cluster size is the primary form of system

categorization for distributed systems, additional classifications beyond this are necessary.

Specifying a cluster size of 10, for example, says nothing about the way that the satellites

coordinate to satisfy the local demand. The first type of classification is therefore based on

the level of coordination exhibited by the system elements, and is related to the network

architecture. Referring to Figure 2-2:


Figure 2-2: Classes of distribution for satellite systems
(Figure content: collaborative systems carry signals over parallel, uncoupled paths from sources to sinks, as in comsats such as Iridium, Orbcomm and Cyberstar, and remote sensing systems such as KH-11 and SPOT; symbiotic systems, with cluster size ns of two or more within a common field of view, join paths at a junction where the information symbol is assembled, as in remote sensing with SSI or TechSat21 and navigation with GPS or Glonass.)

• Collaborative

Each separate satellite operates independently and is able to isolate signals satis-

factorily. Although an individual satellite addresses a given source (or sink), other

satellites (or sensors) may be needed for connectivity across the network. The cluster

size can be as low as unity, but may be more if multiple satellites are needed to satisfy

rate, integrity or availability requirements for the size of the market that is addressed.

The defining characteristic is that the network architecture consists of uncoupled par-

allel paths from the set of sources to the set of sinks. Most communication satellite

systems are collaborative, because each satellite can support local point-to-point com-

munications, although in some cases rely on the constellation for connectivity across

the network. Examples of collaborative remote sensing systems are the commercial

distributed imaging systems such as SPOT, OrbImage and Resource 21 [8]. These

systems feature constellations of several satellites, each capable of recording images


with about 10m resolution. The size of the constellation determines the coverage

and revisit-time of the system. Traditional singular deployments are by definition

collaborative.

• Symbiotic

The separate satellites cannot operate alone, exhibiting a symbiotic relationship with

the others in the system. No single satellite can sufficiently isolate the signals or

transfer information symbols from the sources to sinks. Only by the coordinated

operation of several elements can the system perform the design function. The cluster

size of symbiotic systems must be greater than unity. The defining characteristic is

that the network architecture features junctions of paths from separate satellites where

the information symbols are assembled before delivery to the sinks. An example of a

symbiotic system is the proposed separated spacecraft interferometer (SSI) [6]. Here,

the signals from two small apertures are combined and interfered to obtain very high

resolution images. GPS is also symbiotic since the signals from several satellites are

used to assemble a navigation solution within the user receiver.

2.3.2 Architectural Homogeneity

The second level of classification specifies the level of homogeneity exhibited by the system

architecture:

• Local Cluster

Some proposed systems involve a local grouping of satellites that are in close proximity.

The clusters can be made up of formation-flying satellites or can even involve the

physical tethering of satellites. If there is only a single cluster in the system, such as

with the SSI, the architecture is simply termed a local cluster.

• Constellations

These are systems that feature a large number of similar satellites in inertial orbits,

each with their own unique set of orbital parameters. Walker Delta patterns or Mol-

niya orbits support these types of constellations. Systems such as GPS and Iridium

are characterized as being constellations. Cluster sizes greater than unity can be

formed if the constellation supports multiple coverage of target regions.

• Clustellations

A system may involve more than one local cluster. Each cluster orbits as a group, and

several clusters can be placed in separate orbits. An architecture that utilizes several

local clusters is classified as a clustellation, since it features a constellation of clusters.


Essentially, the cluster is used to satisfy the isolation requirement, while the constel-

lation provides availability by improving coverage. An example of a clustellation is

the proposed TechSat21 space based radar (see Chapter 7).

• Augmentations

An augmented system has a hybrid architecture featuring primary and adjunct dissim-

ilar components that perform different subsets of the mission. The system is designed

such that the combined capabilities of the different components satisfy the overall

mission objective. An example of an augmented system would be the combined use

of different platforms or sensors to perform active and passive surveillance. Within

this analysis framework, the Space Based Infrared Systems, SBIRS Low and SBIRS

High, are collectively classified as augmented. Another example of an augmented

system is the proposed concept of using both unmanned aerial vehicles (UAV's) and

space assets for tactical reconnaissance of a battlefield.

2.3.3 Operational

A third level of classification groups systems according to their operational characteris-

tics. This type of classification is the most abstract. The list shown here is by no means

exhaustive and covers only some examples of the operational classifications.

• Active or Passive

Remote sensing may be active or passive, with marked differences in capability and

cost. This is primarily due to the additional power requirements needed to overcome

the two-way attenuation losses associated with active systems.

• Track, Search, or Imaging

Tracking targets using staring sensors involves different scaling parameters than search-

ing for targets with scanning sensors. The detailed imaging of a static scene differs

from either tracking or searching. These differences are all related to the extent over

which the ground must be illuminated or viewed.

• Distributed or Concentrated Market

The market addressed may involve multiple sources or sinks, distributed over a wide

area, or could involve small numbers of sources or sinks concentrated in specific lo-

cations. Conventional communication satellites (Intelsat) serve concentrated sources

and sinks. Weather satellites are examples of systems that address distributed sources,

and concentrated sinks, while DirecTV broadcast satellites serve concentrated sources

and distributed sinks. The proposed mobile communication systems (Iridium, Glob-

alstar, ICO) are characterized by a distributed market of both sources and sinks.


As with track versus search, the difference between a concentrated and distributed

market lies in the amount of ground that must be illuminated or viewed.

Table 2.1 gives some examples of existing or proposed systems for each class that has

been introduced.

Table 2.1: Satellite system classifications

                Local Cluster   Constellation              Clustellation   Augmentation
Collaborative                   Comsats (Iridium, etc.),                   SBIRS High & Low
                                SPOT, OrbImage, etc.
Symbiotic       SSI             GPS                        TechSat21       Sat+UAV bistatic radar

All satellite systems for missions in communications, sensing or navigation can be simi-

larly classified using these different categories. If any trends in the capabilities, performance

and cost can be found within and between classes, quick decisions can be made in choosing

an architecture for a particular mission. This is the subject of the next chapter, which

identifies the characteristics of distributed satellite systems.


Chapter 3

Distributed Satellite Systems

The development of small, low cost satellites offers new horizons for space applications when

several satellites operate cooperatively. The vision of what can be achieved from space is

no longer bound by what an individual satellite can accomplish. Rather, the functional-

ity can be spread over a number of cooperating satellites. Further, the modular nature

of these distributed systems allow the possibility of selective upgrading as new capabilities

become available in satellite technology. The goal of this chapter is to highlight the impor-

tant concepts and issues specific to distributed satellite systems. This is achieved through

a systematic discussion of the benefits offered by distribution, illustrated with extensive

examples of real and proposed systems. This is followed by a description of the problems

that are most pertinent to distributed satellite systems, together with suggestions for their

resolution.

Most of the arguments presented in this chapter are qualitative, based on the generalized

characteristics and classifications introduced in Chapter 2. They are presented to show

clearly and fundamentally why distributed satellite systems are worthy of further attention.

Chapter 4 will take this process one step further by developing the quantitative tools needed

to perform useful system analysis, based on measurable capability, performance and cost.

This allows comparative analysis between distributed systems and traditional deployments.

Only in this way, with quantitative analysis, can the benefits of distribution be properly

appreciated.

3.1 To Distribute or not to Distribute?

There are many reasons why a distributed architecture is well suited to some space applica-

tions. Unfortunately, the arguments for or against distribution are fraught with subjectivity

and firmly entrenched opinions. It currently seems that most of the satellite design houses in

the country are internally split between the proponents and opponents of distribution. Each


camp supports one side of the debate vehemently and can find a seemingly endless stream

of supporting arguments to back their claims. The "radicals" claim that the development

of large constellations of small satellites leads to economies of scale in manufacture and

launch, reducing the initial operating costs. They also argue that the system becomes

inherently more survivable due to the in-built redundancy. Conversely, the "traditionalists"

debunk these arguments, reminding everyone that you cannot escape the need for power

and aperture on orbit, and that building even 100 satellites does not imply significant bulk-

manufacturing savings. They assert that the lifetime operating costs for large constellations

will far outweigh the savings incurred during construction and launch.

In fact, most of the statements made by both sides are true, but only when taken in

context. Clearly a distributed architecture is not the panacea for all space applications. It

is tempting to get carried away with the wave of support that the proponents of distributed

systems currently enjoy. Care must be taken to curb this blind faith. Also best avoided

is the naive, but commonplace application of largely irrelevant metaphors supporting the

adoption of distributed systems; the unerring truth that ants achieve remarkable success as

a collective is really not an issue in satellite system engineering!

This section summarizes the real reasons supporting the use of distributed satellite

systems, and should hint toward the type of applications for which they are best suited.

The shortlist given here is probably not complete; there are likely many other reasons

one could think of that support or oppose the use of a distributed architecture for some

particular application. Rather, this highlights only the most important and fundamental

factors that are both relevant to this debate and play a common role in system architecture

studies.

Stated very simply, in order for a distributed architecture to make sense, it must of-

fer either reduced lifetime cost or improved capabilities compared to traditional singular-

deployments. As discussed in Chapter 2, the four parameters of isolation, information rate,

integrity, and availability are good measures of the capabilities of a satellite system. A

system architecture that o�ers improvements in any of these parameters should be given

serious consideration during the system design.

The system lifetime cost accounts for the total resource expenditure required to build,

launch and operate the satellite system over the expected lifetime. This includes the baseline

cost of developing, constructing, launching and operating the components of the system, and

also the expected costs of failure. These additional costs arise from the finite probability of

failures occurring that could compromise the mission. Should such failures occur, economic

resources must be expended to compensate for the failure. One example would be the

cost to build and launch a replacement, while another is the lost revenue associated with

a reduced capability. The options to lower the expected cost of failure are to reduce the


impact of any failures that do occur, or to lower the component failure rates such that these

failures are less likely.
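
Written out, this bookkeeping takes the simple form sketched below; the cost elements and failure scenarios are invented, and the quantitative treatment developed in later chapters is considerably more detailed.

    # Illustrative lifetime-cost calculation (invented numbers, $M).

    baseline = {
        "development":  250.0,
        "construction": 400.0,
        "launch":       180.0,
        "operations":   300.0,   # over the design life
    }

    # (probability over the design life, compensation cost if it occurs)
    failure_scenarios = [
        (0.15, 120.0),   # lose a satellite: build and launch a replacement
        (0.05,  60.0),   # payload degradation: revenue lost while capability is reduced
    ]

    expected_failure_cost = sum(p * c for p, c in failure_scenarios)
    lifetime_cost = sum(baseline.values()) + expected_failure_cost

    print(f"Baseline cost:         {sum(baseline.values()):7.1f}")
    print(f"Expected failure cost: {expected_failure_cost:7.1f}")
    print(f"Lifetime cost:         {lifetime_cost:7.1f}")

Either lowering the scenario probabilities (better component reliability) or lowering the compensation costs (cheaper replacements, graceful degradation) reduces the expected failure term.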

As a result, all of the reasons supporting the use of distribution relate in some way

to improving the capability characteristics or to reducing the baseline or failure compen-

sation costs. The following sections detail these relationships, and highlight the general

trends observed within and between the different classes of systems. Chapter 4 takes this

process one step further by introducing quantitative metrics based on measurable capabil-

ity, performance and cost, allowing comparative analysis between many different system

architectures.

3.1.1 Signal Isolation Improvements

The system's ability to isolate and identify signals from different sources within the field of

view is a critical mission driver for many applications. In general, different signals can be

isolated by exploiting differences in their frequency content, by sampling at times that match

the source characteristics, or by isolating spatially separated sources using a high resolution

detector. By definition, each satellite in a collaborative system independently satisfies the

isolation requirements of the mission. Distribution therefore makes no difference to the

isolation capabilities of a collaborative system.

Isolation capabilities can be improved with a symbiotic architecture. The reason is

straightforward; by separating resources spatially over a large area, the geometry of the

signal collection is different for each detector. Combining the received signals can assist isolation of the different sources due to field of view changes, different times-of-flight, or different frequencies or phases of the received signals. Larger spatial separation of the apertures means that the phase difference between signals arriving at different detectors is

increased, further separating the sources. This is best demonstrated with an example:

Example 3.1 Isolation Improvements and Spacecraft Arrays

The advent of economical, fast integrated-circuit technology has recently overcome the previously prohibitive data processing requirements of forming large sparse and synthetic apertures in space. Many people have now started to claim that their use offers potential benefits by reducing the mass and cost of remote sensing systems for high resolution imaging.

The angular resolution $\theta$ of any aperture scales inversely with the overall aperture dimension, expressed in wavelengths ($\lambda$). That is,

$$ \theta \approx \frac{\lambda}{D} $$

where $D$ is the size of the aperture.

An array is an aperture excited only at discrete points or localized areas. The array

consists of small radiators or collectors called elements. An array with regularly spaced


elements is called a periodic array. To avoid grating lobes in the far-field radiation pattern,

the elemental spacing of a periodic array should be less than one-half of the wavelength.

A random array is a thinned array with random positions of the array elements. The

spacing of the elements is usually much larger than one-half of the wavelength, leading to

fewer elements for a given overall aperture dimension. Grating lobes are avoided because

there are no periodicities in the elemental locations.

The concept of the spacecraft array involves forming a large thinned aperture from a set

of satellites, each acting as a single radiator element. Since the spacing between satellites

is very much greater than characteristic wavelengths, grating lobes can be avoided only by

positioning the satellites to avoid periodicities. This can be done by a random placement

of satellites, or by arranging them such that their relative separations are prime [9].

The resolution of a sparse array can be very much finer than that of an equivalent filled aperture. This arises from the enlarged overall array dimension resulting from splitting and

separating the aperture into elements. Consider the case of an imaging system capable of

1 m resolution at a wavelength of 0.5 μm (green-visible). A geostationary satellite would require diffraction-limited optics 18 m across. Similar resolution for lower frequencies (X-band etc.) requires even greater aperture sizes. This is clearly impractical for filled aperture systems. A filled aperture must be supported over its entire extent, leading to heavy structures.

Even if mass can be kept low through the use of advanced materials, impressive deploy-

ment techniques would be required to stow such an antenna within the launch shroud. The

question arises as to how big a filled aperture can be built and launched.

A sparse aperture can be made very large indeed, the only requirement being that the

signal at each aperture be known, with measured and preserved phase. Widely separated

elements connected through light tethers or booms could easily extend over length scales

of 10-100m [10]. For even larger baselines, a sparse array of separated spacecraft allows

resolutions in the sub-milliarcsec range.
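To put numbers on these scalings, the short sketch below (my own illustration, not part of the thesis) simply evaluates θ ≈ λ/D for the geostationary imaging case quoted above and for a kilometre-class sparse-array baseline; the altitude and wavelength are the values assumed in the example.

```python
import math

GEO_ALTITUDE_M = 35.786e6   # geostationary altitude (m)
WAVELENGTH_M = 0.5e-6       # green-visible light (m)

def aperture_for_resolution(wavelength, altitude, ground_resolution):
    """Diffraction-limited aperture: D ~ lambda * range / resolution."""
    return wavelength * altitude / ground_resolution

def angular_resolution(wavelength, baseline):
    """Angular resolution (rad) of an aperture or array of overall extent D."""
    return wavelength / baseline

# Filled aperture needed for 1 m ground resolution from GEO (~18 m, as in the text)
D = aperture_for_resolution(WAVELENGTH_M, GEO_ALTITUDE_M, 1.0)
print(f"Filled aperture for 1 m resolution from GEO: {D:.1f} m")

# Angular resolution of a 1 km separated-spacecraft baseline, in milliarcseconds
theta = angular_resolution(WAVELENGTH_M, 1000.0)
print(f"1 km baseline at 0.5 um: {math.degrees(theta) * 3600e3:.2f} milliarcsec")
```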

3.1.2 Rate and Integrity Improvements

For many applications, the requirements for a high information rate or high integrity drive

the designer toward very large apertures and high power payloads. The probability of

correctly detecting information symbols in the presence of noise is a function of the energy

in each information symbol. Collecting or transmitting symbols at a high rate with a low

probability of error therefore requires high power signals. This in turn leads to high power

transmitters or large apertures to collect more power or to concentrate the power radiated.

Distributed systems can offer large improvements in both rate and integrity compared

to singular deployments by reducing the impacts of noise and interference. The interfering


noise can arise from several sources:

• Thermal noise from resistive heating of electrical components in the receiver
• Noisy radiation sources in the field of view (FOV) of the instrument
• Jamming from unfriendly systems
• Interaction with the transmission medium (rain, bulk scatterers)
• Background clutter

The ergodic property of thermal noise means that integrating over multiple detectors gives

the same processing gain as integrating the signal-plus-noise from a single detector over

time. The advantage is that there is no penalty in rate. Also, the interference from noisy

radiating bodies in the FOV of one satellite may not be an issue for a second satellite due to

the differing viewing angle of the scene. Jamming interference may also be satellite specific;

an enemy can easily disrupt a single satellite but would struggle to jam an entire group

of satellites that may be spatially separated. In general, the level of improvement in rate

and integrity that is offered by distributed architectures varies across the different system

classes.

Rate and Integrity Improvements for Collaborative Systems

For a given level of integrity, a collaborative system can achieve higher rates by summing the

capabilities of several satellites that individually operate at modest rates. This is equivalent

to division of the top-level task into smaller, more manageable tasks that can be allocated

among the elemental components of the architecture. The responsibilities of each satellite

in the collaborative system reduce linearly with the number of satellites in the cluster. Each

satellite can allocate more of its resources to each source, satisfying higher rate requirements.

Increasing the number of satellites in the cluster yields linear increases in the achievable rate

of information flow from each source. The limit is reached when each satellite is dedicated

to a single user. The maximum rate for that user would then be the maximum supportable

rate of the satellite.

Equivalently, at a given rate, the level of integrity increases with the number of satellites

in a collaborative cluster. The energy per symbol Es increases with the number of satellites,

as a result of the increased dwell time allowed by the task division. If each satellite coherently

integrates the received signal, linear increases in the dwell time result in linear increases

in the energy per symbol to noise density ratio ($E_s/N_0$). The integrity will then improve almost exponentially, since the error probability can be approximated as an exponential function of $E_s/N_0$.
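As a rough illustration (my own sketch, not a result from the text), the error probability for coherent binary detection can be approximated by the Gaussian Q-function of the square root of 2Es/N0; scaling Es/N0 linearly with the number of collaborating satellites then drives the error rate down almost exponentially. The single-satellite Es/N0 value below is an assumption chosen only to show the trend.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_probability(es_n0_linear):
    """Approximate symbol error probability for coherent binary detection."""
    return q_function(math.sqrt(2.0 * es_n0_linear))

single_es_n0 = 4.0   # assumed Es/N0 (about 6 dB) achieved by one satellite
for n_satellites in (1, 2, 4, 8):
    pe = error_probability(n_satellites * single_es_n0)
    print(f"n_s = {n_satellites}:  Es/N0 = {n_satellites * single_es_n0:4.0f}   Pe ~ {pe:.2e}")
```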


In actuality, the integrity improvements will not be quite this good, since this result

assumes the detection is limited only by stationary white noise. Linear integration gains

are achieved only with coherent integration of a signal plus random noise. This is the case

if the dominant interference source is thermal receiver noise. Unfortunately, any correlated

interference experiences the same linear gains during the integration process. This is a

critical consideration for active systems where clutter returns from the ground are not at all

suppressed by time integration. For this reason, collaborative distribution cannot improve

the clutter-limited performance of radar or lidar systems over that achievable with singular

deployments.

Rate and Integrity Improvements for Symbiotic Systems

Unlike collaborative systems, a symbiotic architecture does not, in general, give a simple

linear improvement in rate capabilities with increases in the numbers of satellites. In fact,

the relationships between the number of satellites and the achievable rate and integrity are

different for passive systems and active systems.

Consider first passive symbiotic clusters that form sparse receiver arrays. The SNR behavior of sparse arrays is identical to a filled aperture of the same physical collecting

area. To show this, consider a cluster of $n_s$ satellites, each with aperture $A$. Assume the array is illuminated by a distant source. Each satellite measures the radiation field and the signals from the different satellites are then combined to deliver a single waveform to a

detector. In the most general case the input signal power varies across the array. Assuming

unit impedance throughout, the average signal power is,

$$ E\left[S_i\right] = E\left[\frac{1}{n_s}\sum_{j}^{n_s} s_{ij}^2\right] \qquad (3.1) $$

where $E[\,]$ is the expected value, the subscript $i$ refers to the input side of the array, and $s_{ij}$ is proportional to the envelope of the RF signal voltage for the $j$th satellite. Unless the array is so large that the signal strength varies across the array due to path length differences, the signal strength across the array will be constant. In this case, $s_{ij} = s_{ik} = s_i$, and the signal power per satellite is $S_i = s_i^2$. Assuming that all signals are cophased by a bank of phase shifters, the output signal voltage after integration is $n_s s_i$ and the output signal power of the array is $S_0 = n_s^2 S_i$, since all the signals add in phase.

If the dominant noise source is thermal noise, then the noise input at each of the satellite

apertures will be independent, with zero-mean. The average input noise power will be,

$$ E\left[N_i\right] = E\left[\frac{1}{n_s}\sum_{j}^{n_s} n_{ij}^2\right] \qquad (3.2) $$


The noise does not add in phase since it is uncorrelated and so the output noise voltage

after combining is given by,

$$ n_0 = \sum_{j}^{n_s} n_{ij} \qquad (3.3) $$

and its square is the output noise power:

$$ N_0 = \left(\sum_{j}^{n_s} n_{ij}\right)^2 \qquad (3.4) $$

$$ \phantom{N_0} = \sum_{j}^{n_s} n_{ij}^2 + \sum_{j}^{n_s}\sum_{k \neq j}^{n_s} n_{ij}\, n_{ik} \qquad (3.5) $$

Since the noise sources are independent with zero mean, the second term is zero, leaving

the average noise power to be $E[N_0] = n_s E[N_i]$. The output SNR is therefore equal to:

$$ \mathrm{SNR} = \frac{S_0}{E[N_0]} = \frac{n_s S_i}{E[N_i]} = n_s\,(\mathrm{SNR})_1 \qquad (3.6) $$

where $(\mathrm{SNR})_1$ is the signal to noise power ratio for a single satellite of the cluster. The improvement in SNR compared to a single satellite is therefore $n_s$, the number of satellites in the array. The same SNR is achievable with a filled aperture of area $n_s A$ receiving the

same signal and with the same average thermal noise temperature. This of course makes

sense, since the same amount of energy is collected over the same collection area in both

cases.
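The $n_s$ scaling of Equation 3.6 can be checked numerically. The following sketch (assumed signal and noise levels, not from the thesis) cophases an identical signal across $n_s$ receivers, adds independent zero-mean Gaussian noise at each, and compares the combined SNR with that of a single receiver.

```python
import random

def combined_snr(n_satellites, signal=1.0, noise_sigma=1.0, trials=100_000):
    """Estimate output SNR of a cophased array with independent receiver noise."""
    total_noise_power = 0.0
    for _ in range(trials):
        # Signal voltages add coherently; noise voltages add incoherently.
        noise_sum = sum(random.gauss(0.0, noise_sigma) for _ in range(n_satellites))
        total_noise_power += noise_sum ** 2
    signal_power_out = (n_satellites * signal) ** 2        # coherent sum squared
    noise_power_out = total_noise_power / trials           # ~ n_s * sigma^2
    return signal_power_out / noise_power_out

snr_single = combined_snr(1)
for n_s in (2, 4, 8):
    gain = combined_snr(n_s) / snr_single
    print(f"n_s = {n_s}: measured SNR gain ~ {gain:.2f} (theory: {n_s})")
```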

Even larger benefits in SNR can be obtained with active symbiotic systems. Recall that active systems are defined to be systems that have to provide the power for the signal to

make the round-trip journey to the information source. The active system may have several

transmitting satellites that illuminate the source. If the transmitters radiate coherently,

the power incident upon the information source is increased quadratically, since the signal

amplitudes add. Alternatively, if the transmitters radiate independently, the power at the

source sums linearly. The incident power is then reflected back to be collected by the

cluster of satellites, with the receive characteristics described above. The resulting SNR

improvement for the symbiotic system compared to a single satellite is given by,

(SNR)sym

= n2tnr (SNR)1 (3.7)

where $n_t$ is the number of coherent transmitters and $n_r$ is the number of receive channels. Note that $n_r$ can be greater than $n_s$, the number of satellites. If each of the $n_s$ satellites transmits an incoherent but uniquely identifiable signal, and each satellite receives all $n_s$ transmissions, a total of $n_r = n_s^2$ different signals can be coherently integrated. This is the operating prin-

ciple behind Techsat21, the Air Force's most recently proposed space based radar concept.


Chapter 7 features a complete quantitative analysis of this system.
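For a feel for these numbers, the short sketch below (my own illustration, with an assumed cluster size) evaluates Equation 3.7 for the two active modes just described, reading the incoherent-transmit mode as one coherent transmitter per integrated channel with $n_s^2$ channels in total.

```python
def symbiotic_snr_gain(n_coherent_tx, n_receive_channels):
    """SNR improvement over a single satellite per Eq. 3.7: n_t^2 * n_r."""
    return n_coherent_tx ** 2 * n_receive_channels

n_s = 8  # assumed cluster size (illustrative only)

# Incoherent transmit, coherent receive of every Tx-Rx pair (Techsat21-like mode).
gain_incoherent_tx = symbiotic_snr_gain(1, n_s ** 2)

# Fully coherent transmit and receive: n_t = n_r = n_s.
gain_coherent_txrx = symbiotic_snr_gain(n_s, n_s)

print(f"cluster of {n_s}: incoherent Tx / coherent Rx gain = {gain_incoherent_tx}")  # n_s^2
print(f"cluster of {n_s}: coherent Tx and Rx gain         = {gain_coherent_txrx}")   # n_s^3
```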

The integrity is a function of $E_s/N_0$, given by multiplying the SNR by a dwell time corresponding to the duration over which the signal is integrated. For a tracking mission, the symbiotic cluster must cycle through all of the targets one at a time, so there is no difference in the dwell time compared to the single satellite case. For a search mission

however, there is a penalty paid for coherence. The beamwidth scales with the overall

synthetic aperture dimension as opposed to the physical aperture size of each satellite. For

a given area coverage rate, the symbiotic cluster must scan its smaller beam more quickly,

$$ t_{\mathrm{sym}} = \frac{D_s}{D_c}\, t_1 \qquad (3.8) $$

Here $t_{\mathrm{sym}}$ is the dwell time for a linear cluster of extent $D_c$, composed of satellite apertures of size $D_s$, and $t_1$ is the maximum dwell time for a singularly deployed satellite. For coherence

only on receive, multiple receive beams can be formed simultaneously to fill the satellite

FOV. The dwell time then scales the same as that of a single satellite.

The resulting $E_s/N_0$ relationships for both the search and track missions are summarized in Table 3.1. To simplify the results it has been assumed that the symbiotic cluster has $n_s$ satellites, and can operate in three different modes: a passive mode in which all $n_s$ satellites are used to form a coherent receive array; an active mode in which each satellite independently transmits (incoherent) and all the satellites coherently receive all the signals ($n_r = n_s^2$); and a coherent transmit and receive mode ($n_t = n_r = n_s$).

Table 3.1: Factor of improvement in the energy per symbol to noise density ratio for distributed clusters compared to singular deployments

              Collaborative              Symbiotic
              Passive     Active         Passive     Active (Coherent Rx)     Active (Coherent Tx/Rx)
   Search     $n_s$       $n_s$          $n_s$       $n_s^2$                  $(D_s/D_c)\,n_s^3$
   Track      $n_s$       $n_s$          $n_s$       $n_s^2$                  $n_s^3$

Interestingly, collaborative and symbiotic clusters both achieve linear improvements for

passive missions, but for quite different reasons. The collaborative system gains benefit

from task division, increasing the allowable dwell time, while the symbiotic cluster achieves

the same linear improvement by increasing the SNR.

Notice that the symbiotic system with coherence on transmit and receive is not well

suited to the search mission unless many satellites can be deployed over a reasonably short

extent, such that $n_s^3 > (D_c/D_s)$. The largest and most realizable benefit from distribution

for the search mission can be gained with several independent transmitters and coherent


integration of all the received signals. This is the reason supporting the development of the

Techsat21 space based radar program.
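A short sketch (assumed aperture and cluster dimensions, not from the thesis) shows how quickly the $D_s/D_c$ penalty erodes the $n_s^3$ gain for the search mission, and where the break-even point $n_s^3 > D_c/D_s$ against a single satellite falls, using the factors of Table 3.1.

```python
import math

def search_gain_coherent_txrx(n_s, d_s, d_c):
    """Es/N0 improvement for coherent Tx/Rx search (Table 3.1)."""
    return (d_s / d_c) * n_s ** 3

def search_gain_incoherent_tx(n_s):
    """Es/N0 improvement for incoherent Tx, coherent Rx of all pairs (search)."""
    return n_s ** 2

# Assume 10 m satellite apertures spread over a 1 km cluster (Ds/Dc = 0.01).
d_s, d_c = 10.0, 1000.0
for n_s in (4, 8, 16):
    coh = search_gain_coherent_txrx(n_s, d_s, d_c)
    inc = search_gain_incoherent_tx(n_s)
    better = "coherent Tx/Rx" if coh > inc else "incoherent Tx"
    print(f"n_s={n_s:2d}: coherent Tx/Rx gain={coh:7.2f}, incoherent Tx gain={inc:4d} -> {better}")

n_break_even = math.ceil((d_c / d_s) ** (1.0 / 3.0))
print(f"coherent Tx/Rx beats a single satellite once n_s^3 > {d_c/d_s:.0f}, i.e. n_s >= {n_break_even}")
```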

Of course, in the presence of a heavy clutter background, the detection is not noise-

limited and the results change somewhat. However, this is where symbiotic clusters can

really help. As a direct result of the smaller beamwidths that are characteristic of symbiotic

systems, the clutter rejection of the system is greatly improved compared to single satellites

or collaborative systems. Consequently, the improvements in the $E_s/N_0$ seen in Table 3.1 are conservative estimates of the benefits offered by symbiotic clusters.

The tempting conclusion to draw from this is that symbiotic clusters are beneficial for

missions requiring high rates and integrity. Unfortunately, there is a crucial factor that

has been omitted so far. The data processing requirements placed on symbiotic systems

are extremely restrictive, and are on the frontier of what can be achieved with today's

technology. This issue will be discussed in a later section.

3.1.3 Availability Improvements

If carefully designed, a distributed architecture can often lead to improved capabilities by

increasing the availability of system operations. Losses of availability can arise due to

increased numbers of users accessing the limited resources of the system, signal attenuation

from blockage or weather, or from statistical fluctuations due to noise and clutter. More

commonly, a loss of availability can also be attributed to poor viewing geometries or poor

coverage statistics. For example, reconnaissance satellites may have to image scenes over

two or more continents, relaying the data to multiple downlink stations across the world.

There will be times during the orbits of these satellites when they are not passing over

important targets. The system is unavailable at these times since images of the targets

cannot be recorded. The revisit time of the satellites effectively specifies the maximum availability that is built into the system. Of course, very high availabilities can only be

achieved by constellations giving continuous coverage over the target regions.

Note that the availability of a system is related to the variance of the supportable

isolation, rate and integrity, and as such is sensitive to worst-case scenarios. Since a loss of

availability represents an inoperative state, any measures that can be taken to improve the

availability of a system are desirable. As illustrated in Figure 3-1, distributed architectures

can lead to increased availability through:

• better coverage of the demand, or
• reducing the variability of the capabilities.


The methods by which distribution can lead to these improvements are described in detail

in the following sections.

[Figure 3-1: The coverage improvements offered by distribution leading to increased availability. The figure plots coverage against time relative to a coverage requirement derived from the minimum acceptable capabilities; availability is the percentage of time that the requirement is exceeded, and can be raised either by improving the mean coverage or by reducing its variability.]

Matching a Distributed Demand

Some applications require the reception of signals at many different locations. Such appli-

cations are characterized as having a distributed demand. A worldwide communications

consumer base, or sampling locations for a global mapping of the geomagnetic field are

examples of a distributed demand. The architectural options for these applications are

to place sensors everywhere there is a demand, to have a single sensor that maneuvers to

the demand locations, or to adopt some strategy somewhere in between these extremes.

The trade is between the cost of additional hardware resources and the cost of additional

expendables, such as fuel and time. A system with a few satellites that can maneuver to

different sampling locations (either by thrusting or by utilizing orbital mechanics) requires

less dry mass on orbit, possibly leading to lower costs. However, the additional cost of fuel,

or the opportunity cost associated with the loss of availability due to sequential sampling

may sway the balance in the other direction. A question presents itself: how should space-

craft resources be distributed to best match a distributed demand? The answer to this

question is, unfortunately, neither simple nor general. The best option for one application

may be unsuitable for another. There are, however, some general trends.

Clearly, a distributed collaborative architecture is the only option for applications re-

quiring simultaneous sampling at all demand locations. This is equivalent to a continuous

coverage requirement. Consider, for example, a global mobile communication system. A


single satellite cannot serve the entire globe, forcing the designer toward a constellation of

satellites that can guarantee continuous coverage.

Some applications involve a coupling between measurements at different sample loca-

tions, especially during processing of the information. An example of this coupling is the

combining of signals collected by the apertures of an interferometer, an essential operation

in the construction of an image. This sharing of information necessitates interfaces between

the satellites of the system that are expensive and add complexity. Furthermore, the trans-

mission of information between satellites requires energy expenditure. Electrical energy, like

propellant, is a valuable expendable resource. This is especially true for satellite systems

relying on non-renewable energy sources (batteries, fuel cells, etc). For these applications,

if sequential sampling can be tolerated, the savings in hardware and the reductions in com-

plexity associated with fewer satellites can offset the opportunity costs associated with losses

of availability.

Some tasks involve no coupling between the different sampling locations. In this case, the processing of signals at the different locations can be performed independently. Separate,

independent sensors can satisfy the demand without having to interface among themselves.

Without any of the energy costs or complexity of intersatellite links, a very distributed

architecture may be favorable for these applications, the improvements in availability out-

weighing the costs of extra on-orbit hardware. This can be seen in the following example.

Example 3.2 Matching a Distributed Demand: The Separated Spacecraft Interferometer [11]

Optical interferometers collect light at widely separated apertures and direct this light to

a central combining location where the two light beams are interfered. Fringes produced

by the interference provide magnitude and phase information from which a synthesized

image can be generated. Space-based optical interferometers can be implemented as sin-

gle spacecraft, featuring collecting apertures separated by tens of meters, or as separated

spacecraft where baselines of hundreds or thousands of meters enable measurement with

sub-milliarcsec angular resolution.

The collector spacecraft sample the distant starlight at several different baselines (sep-

aration and orientation) in order to construct the image. The locations of the sampling

points define a distributed demand. Clearly then, a possible modification to the basic configuration that could offer improved availability is a system with an increased number of collectors. By distributing the collectors at the desired sampling locations, many different baselines can be made from the numerous combinations of collector pairs. In this way, many

baselines can be measured simultaneously (or at least without additional maneuvers) and

the image can be filled out more quickly.


The m-point Cornwell distribution [12] describes m sampling locations that are well-

known to give high quality snap-shot interferometric images of interstellar objects. If $n_s$ apertures are used to sample these locations, such that $n_s < m$, a compound image can be constructed from different snap-shot images formed by moving the $n_s$ apertures to all

combinations of the various Cornwell imaging locations.

Obviously, sampling from a distribution with a larger number of Cornwell points results

in a higher quality image. Unfortunately, sampling at more locations requires either more

collectors or more maneuvers. A system with more collectors requires fewer maneuvers to

sample at all pairs of Cornwell points. In order to sample all pairs of points from an $m$-point Cornwell distribution, a system consisting of $n_s$ collectors must make $\binom{m}{n_s} - 1$

separate maneuvers. In choosing the system size, a trade is therefore made between the

cost of additional collectors and the cost of propellant for maneuvers. For this calculation,

the system cost can be represented as total system mass; this is a reasonable approximation

to first order, and allows the important trends to be seen. The total mass is the sum of the

total dry mass and the total propellant mass.
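The maneuver count grows quickly with the number of imaging points. The minimal sketch below (hypothetical cluster sizes, not the thesis's simulation) simply tabulates the (m choose n_s) − 1 relationship quoted above for a few combinations.

```python
from math import comb

def maneuvers_required(m_points, n_collectors):
    """Separate maneuvers needed to visit every n_s-subset of an m-point
    Cornwell distribution: C(m, n_s) - 1 (the first placement is free)."""
    return comb(m_points, n_collectors) - 1

for m in (8, 10, 12):
    for n_s in (2, 3, 4):
        print(f"Cornwell-{m}, {n_s} collectors: {maneuvers_required(m, n_s):4d} maneuvers")
```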

Kong [11] used a Monte Carlo simulation to determine the optimal maneuver sequence

for $n_s$ satellites sampling an $m$-point Cornwell configuration to minimize the propellant required to construct 500 images within a 15 year lifetime. The resulting system masses for different cluster sizes and different numbers of imaging locations are shown in Figure 3-2.

For low quality images with a small number of Cornwell points, it is more efficient to

have only two collectors that maneuver frequently. However, for greater than 10 imaging

locations, increasing the number of satellites reduces the propellant enough to outweigh

the dry-mass penalty. The Cornwell-11 has a minimum system mass with three collector

satellites, while the Cornwell-12 is best implemented with four collectors.

Unfortunately, the results of Figure 3-2 do not reflect realistically achievable propellant

mass-fractions. Figure 3-3 shows the average propellant mass-fraction that would be re-

quired of satellites designed to implement the systems considered in Figure 3-2. Restricting

the maximum propellant mass-fraction to 25%, it can be seen that the two-satellite cluster

cannot image Cornwell configurations with more than nine points. The realistically achievable optimum number of spacecraft for the 10 point Cornwell configuration is therefore

three satellites.

Similar trends, with constrained optimums between increased hardware or more expend-

ables, are seen in a wide range of applications involving a distributed demand.


[Figure 3-2: System mass of a separated spacecraft interferometer required to form 500 images over a 15 year lifetime using different Cornwell configurations and different cluster sizes [11]. The figure plots mass (kg) against the number of collector spacecraft and the number of Cornwell points.]

Improved Visibility and Coverage Geometry

There are some instances when distribution and multifold coverage improve the availability

by reducing the variability of the system capabilities. By making the behavior of the system

more predictable, the probability of operating within acceptable bounds is increased. The

capabilities of the system are particularly sensitive to coverage variations, and it is here

that distribution can lead to improvements. The multi-fold coverage that is characteristic

of distributed architectures supports consistent capability in two ways:

• Reducing the variance of the visibility, defined as the number of satellites in view from
a ground station. Generally, the visibility is a function of both space and time. The
number of satellites in view from a location changes in time, and is usually different
at other locations. The capabilities of a satellite system are frequently dependent on
the visibility. Large variations in visibility can therefore cause large fluctuations in
the isolation, rate or integrity. The designer faces the choice of sizing the system for
the worst-case coverage, or accepting losses of availability at times when the visibility

is below average. Increasing the number of satellites in the constellation not only

increases the visibility, but also reduces the variance. According to the Central Limit


[Figure 3-3: The propellant mass fraction for the satellites of a separated spacecraft interferometer required to form 500 images over a 15 year lifetime [11]. The figure plots the propellant mass fraction per spacecraft (%) against the number of Cornwell points and the number of collector spacecraft, with the 25% limit marked.]

Theorem, as the number of satellites is increased, the minimum visibility converges

toward the average value. This assists the designer, improving the availability of

systems based on average coverage characteristics.

• Reducing the impact of the variability in the capabilities of individual satellites in col-

laborative systems. The geometry of the coverage over target regions can have a large

impact on the sensitivity of the system. Frequently, the isolation, rate or integrity

that can be supported by a single satellite can be spatially and time varying, de-

pending on the viewing angle, the transmission path, and the detector characteristics.

Favorable coverage geometries minimize the impact of these variations, ensuring that

the combined operation of the collaborative configuration achieves consistent levels of

capability.

These two concepts are most easily understood with the help of a simple example.

Example 3.3 Visibility and Geometry: Distributed Space Based Radar [7]

Consider a collaborative space based radar system consisting of a cluster of satellites in

common view of a theater of interest. The system satisfies requirements on the rate (target

update) and integrity (probability of detection or false alarm) by summing the capabilities


of several small radars that independently search the same target area. The cumulative rate

is therefore directly proportional to the number of satellites in view of the target area, as

indicated in Table 3.1. Variations in the visibility translate directly into variations in the

achievable rate or integrity. This can result in a loss of availability if the visibility drops

below that necessary to support the requirements. The availability can be improved if the

system is designed to use an even greater number of smaller satellites to satisfy the detection

requirement. As the number of satellites increases, the spatial and temporal variations in

the visibility are reduced. The minimum visibility approaches the average value, and the

achievable detection rate changes over a much smaller range.

Furthermore, larger configurations of satellites result in more favorable coverage geome-

tries. The multi-fold coverage leads to a wide distribution of viewing angles surrounding the

target. This is particularly important for slow moving targets. The radar return from slow

moving targets is difficult to distinguish from the ground clutter. Normally the different velocities of the target and the ground relative to the radar give rise to different Doppler

shifts that separate the target and clutter in frequency, allowing detection. The return

from slow moving targets is often buried in the clutter because of the low relative velocities.

A viewing angle parallel to the target's velocity maximizes the Doppler shift between the

target and the ground in the frequency spectrum, increasing the signal isolation and im-

proving the probability of detection. Since the target's velocity vector is unknown a priori,

receivers must be placed at all possible viewing angles to ensure detection. With receivers

located at all angles around the target, the distributed space based radar concept increases

the probability of detecting slow moving targets. This makes the system less sensitive to

the target velocity vector, effectively increasing the availability by reducing the probability

of failing the detection requirement.
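The variance-reduction argument in this example can be illustrated with a toy Monte Carlo (my own sketch, not part of the thesis analysis): if each of N satellites is independently in view of the theater with some probability p, the fraction in view concentrates around p as N grows, so the worst observed visibility approaches the average value. The probability p and the number of trials are assumptions chosen only to show the trend.

```python
import random
import statistics

def visibility_fraction_samples(n_satellites, p_in_view=0.3, trials=20_000):
    """Fraction of the constellation in view of a theater, sampled over many epochs.
    Each satellite's visibility is treated as an independent Bernoulli(p) draw."""
    samples = []
    for _ in range(trials):
        in_view = sum(1 for _ in range(n_satellites) if random.random() < p_in_view)
        samples.append(in_view / n_satellites)
    return samples

for n in (8, 32, 128):
    s = visibility_fraction_samples(n)
    print(f"N = {n:3d}: mean fraction in view = {statistics.mean(s):.3f}, "
          f"std = {statistics.pstdev(s):.3f}, worst case = {min(s):.3f}")
```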

3.1.4 Reducing the Baseline Cost

Initial deployment costs for a given satellite constellation include costs associated with

development, production, and launch of the system's original complement of satellites. Ad-

ditional expenditures beyond the initial deployment costs are necessary to maintain the

constellation over a given time period. These costs include the production, storage, and

launch costs associated with the on-orbit or on-ground spares, and also all of the operations

costs. The baseline system cost is the sum of the initial deployment costs and the main-

tenance costs. Baseline costs are typically very high. For distributed satellite systems to

be considered viable, they must be at least competitive in cost, as compared to traditional

systems.

Conventionally, system cost estimates can be made using basic parametric models such


as the USAF Unmanned Spacecraft Cost Model (USCM), or the Small Satellite Cost Model

(SSCM) [13]. These models consist of a set of cost estimating relationships (CERs) for each

subsystem. The total cost of the system is the sum of the subsystem costs. The CERs

allow cost to be estimated as a function of the important characteristics, such as power

and aperture. Frequently they are expressed as a power law, regressed from historical data.

For example, the USCM estimate for the theoretical first unit (TFU) cost of an infrared imaging payload is based on aperture and is shown in Figure 3-4.

[Figure 3-4: The USCM Cost Estimating Relationship for IR Payloads. The figure plots payload cost (FY92 $K) against aperture (m²).]

Care must be taken in applying the SSCM to distributed systems. Although each

satellite in a distributed system may be small, the SSCM was derived assuming single-string

designs and modest program budgets. This is clearly unsuitable for a distributed system of

perhaps 1000 satellites, with a total system cost of several billion dollars. Unfortunately,

the use of USCM generally leads to high costs for distributed systems. This is due to two

factors:

• In partitioning the mission and allocating tasks among separate components, the total

hardware resources required on-orbit are often increased. Among other things, this is

a result of having to add redundancy to overcome serial reliability problems. Consider

the case of a single satellite satisfying a demand with reliability of 0.9 over the mission
lifetime. To achieve the same overall reliability with two collaborative satellites of
half the size, an additional redundant satellite is also necessary. In this example, the
total resources on-orbit for the distributed system are 50% more than for the singular
deployment (a sketch of this calculation follows the list below). Since the CERs base
cost on characteristic resource, the result of this


increase of total hardware is an increase in cost.

• Typically, the USCM power laws in the CERs are nonlinear, with an exponent less
than unity. There is a higher marginal price per kg of mass, or per m² of aperture for

smaller systems. Figure 3-4 demonstrates this trend. As a result, it is more expensive

to divide a large system (especially aperture or power) into smaller components.
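Both effects in the list above can be quantified with a small sketch (illustrative numbers only; the component reliability of 0.9 comes from the example in the first bullet, while the CER exponent of 0.7 is my assumption for a typical sub-unity power law): a k-of-n calculation shows why splitting a 0.9-reliable satellite into two half-size satellites forces a third, redundant one, and the sub-linear CER then makes the divided hardware more expensive per unit of resource.

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical units (reliability r) survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.9   # assumed reliability of each satellite (large or half-size)
print(f"single satellite:          {r:.3f}")
print(f"2 half-size, both needed:  {k_of_n_reliability(2, 2, r):.3f}  (serial, worse)")
print(f"3 half-size, 2 needed:     {k_of_n_reliability(2, 3, r):.3f}  (redundant, exceeds 0.9)")

# Sub-linear CER: cost ~ resource^b with b < 1, so dividing the resource costs more.
def cer_cost(resource, coefficient=1.0, exponent=0.7):
    return coefficient * resource**exponent

ratio = 3 * cer_cost(0.5) / cer_cost(1.0)
print(f"relative hardware cost, 3 half-size vs 1 full-size satellite: {ratio:.2f}x")
```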

It would appear then that distributed satellite systems are characteristically more ex-

pensive than singular-deployments. However, there are additional factors that can sway the

balance in favor of distribution.

First of all, there is a question as to the validity of using the USCM for estimating the

cost of modern distributed satellite systems. The basic problem here is that the model is

based on regression from historical data of past military satellite programs. As such, the

CERs of the USCM may not reflect modern trends or practices. The programs from which

the model was derived were not subject to the same budget constraints as modern systems.

Stated simply, past military satellite programs were expensive because they were allowed

to be. An additional point is that conventional cost models, being based on historical data,

reflect an industry that was crippled by conservatism and a reliance on risk avoidance.

The high baseline cost of space systems was perhaps the largest reason for the conservatism.

The enormous initial expenditure, added to the characteristically high risk, led to a reliance

on tried and tested practices and established technologies. Unfortunately this doctrine was

self-supporting, being usually more costly than modern alternatives, and thus serving only

to refuel the conservatism.

There are, however, some indications that things are changing. The advent of small satel-

lite technology has heralded a new era of satellite engineering that minimizes costs by risk

management rather than risk avoidance [14][15]. A willingness to accept some risk can lower

the cost of satellite programs, enabling more missions to be flown and allowing new technology and innovative techniques to be implemented [15][13][16]. The use of commercial-off-the-shelf (COTS) technology can lead to substantial cost savings in development and

operations (legacy systems often require specially trained operators). By accepting high

risk and implementing strategies to manage failures, small satellites have been successfully

designed, built and operated at a fraction of the cost of traditional systems [17]. Should

distributed satellite systems really proliferate in the market, they will achieve low costs by low-

ering the requirements on individual satellite reliability, taking advantage of the redundancy

built into the architecture.

The changes in the space industry have not been restricted to the small satellite arena.

The commercial satellite industry is just now beginning to realize the benefits of modernized design practices. Moving away from the concept of the "hand-crafted" satellite,


Hughes Space and Communications and Lockheed Martin are enjoying enormous savings

from adopting the "production-line" approach to satellite design and construction. Standardized bus designs with modular interfaces to many different payloads reduce the development time and simplify assembly and test. Recent developments in commercial distributed satellite systems (Iridium, Ico, Orbcomm, etc.) reflect this production-line approach to

satellite manufacture, and are reporting cost reductions that were previously unheard of

in the satellite industry. Whereas the CERs of the USCM assume a single-string design,

favorable economies of scale can result from bulk manufacture. The production of a larger

number of small units allows quicker movement down the learning curve. Lockheed Martin

was apparently observing a 15% discount rate in the production of the 66 satellite Iridium

system. This is made possible by economies of scale in manufacture and by modifying the

way that satellites are built and assembled. An example here is that Lockheed requires that

the subcontractor (Raytheon Inc.) for the main mission antennas of the Iridium satellites

perform full subsystem testing prior to delivery of each unit. No further testing is done

until full integration. Components that fail the integration test are returned to the manu-

facturer, and a new antenna is taken from the storeroom. This has greatly reduced costs

and assembly time. Such practices are poorly represented by existing cost models.
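One common way to represent the quoted production discount is a learning curve in which the unit cost falls by a fixed percentage each time cumulative production doubles. The sketch below (an assumed theoretical-first-unit cost and an 85% curve, corresponding to a 15% discount per doubling) is only meant to give the flavor of the savings over a 66-unit run; it is not the Iridium cost data.

```python
import math

def unit_cost(tfu_cost, unit_number, learning_rate=0.85):
    """Cost of the n-th unit on a learning curve: TFU * n^b, with b = log2(learning_rate)."""
    b = math.log2(learning_rate)
    return tfu_cost * unit_number**b

def production_run_cost(tfu_cost, n_units, learning_rate=0.85):
    return sum(unit_cost(tfu_cost, n, learning_rate) for n in range(1, n_units + 1))

tfu = 20.0   # assumed theoretical-first-unit cost, $M (illustrative only)
n = 66
total = production_run_cost(tfu, n)
print(f"average unit cost over {n} units: ${total/n:.1f}M  (TFU = ${tfu:.0f}M)")
print(f"saving vs. {n} units at the TFU price: {100 * (1 - total/(n*tfu)):.0f}%")
```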

The cost of launching a satellite system can make up a significant portion of the baseline

costs. This is especially true of distributed satellite systems featuring many small satellites.

Typically launch costs do not scale linearly with mass. The price per kg is higher for

lower mass payloads. Unless bulk-rate contractual agreements can be made with launch

providers, learning curve discounts do not apply to launch costs. This would suggest that

the launch costs of distributed systems are greatly increased compared to traditional singular

deployments. However, although each satellite in a distributed system may be small, when

considered as a whole, the entire system can be huge. Economies of scale support the larger

launch vehicles, and so, subject to volume and orbit constraints, it is cheaper to deploy

the initial constellation using large launch vehicles. An entire orbital plane of satellites

could be deployed on a single launch, giving the added benefit of distinct performance plateaus. The initial launch costs of distributed systems therefore scale more like those of

large satellites, and should be priced based on the total constellation mass rather than on

the individual satellite mass. Note that replacement satellites (for system augmentation or

for compensation of failures) can be launched on dedicated small vehicles, such as Pegasus

or Scout, or as secondary payloads, utilizing the spare launch capacity on larger boosters.

The cost associated with this replacement scales more like that of small satellite launches.

Collaborative distributed systems also offer the possibility of being able to ramp up the

investment gradually, in order to match the development of the market. Only those satellites

needed to satisfy the early market are initially deployed. If and when additional demand


develops, the constellation can be augmented. The cost of constructing and launching these

additional satellites is incurred later in the system lifetime. Due to the time value of money,

the delayed expenditure can result in significant cost savings.
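The saving from deferring part of the constellation follows directly from discounting future expenditures to present value. The minimal sketch below uses assumed satellite costs, tranche sizes and a 10% discount rate purely for illustration.

```python
def present_value(cost, year, discount_rate=0.10):
    """Discount a cost incurred `year` years from now back to today's dollars."""
    return cost / (1.0 + discount_rate)**year

unit_cost = 30.0       # assumed cost to build and launch one satellite, $M
full_constellation = 40
initial_tranche = 24   # satellites needed to serve the early market (assumed)

# Option A: deploy the full constellation up front.
upfront_pv = full_constellation * unit_cost

# Option B: deploy the remainder in year 4, if and when the market develops.
staged_pv = initial_tranche * unit_cost + \
            present_value((full_constellation - initial_tranche) * unit_cost, year=4)

print(f"deploy all now:    ${upfront_pv:7.1f}M (present value)")
print(f"staged deployment: ${staged_pv:7.1f}M (present value)")
print(f"saving from deferring expenditure: ${upfront_pv - staged_pv:.1f}M")
```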

Each of these factors helps to offset the apparently high costs suggested by conventional

parametric cost modeling. Consequently, the baseline cost associated with a distributed

satellite system may actually be smaller than for a comparable large-satellite design. This

is not always true, being extremely sensitive to the application. Some missions are more

suited to distribution than others. An example of a mission that is well suited to distribution

is passive infrared imaging of the Earth, as shown in the following example.

Example 3.4 Baseline Costs for a Distributed Infrared Imaging System

For mid-wavelength infrared (IR) payloads on low altitude satellites, the payload costs

scale with the resolution and the swath width of the instrument. Small swaths require less

expensive satellites, but require more of them. The effect of these scalings can be quantified.

The payload cost for a single mid-wavelength infrared satellite is the sum of the costs of

the optics, the focal plane array of electro-optic detectors, and the computational resources

needed to process the image.

Canavan [18] suggests that the cost of the optics for instruments of this type scales with volume rather than area. The volume of an optic scales as $D^2 f$, where $D$ is the aperture diameter and $f$ is the focal length. To achieve a resolution of $\Delta x$ meters, the aperture size

for di�raction limited optics is,

D =�rmax

�x

where $r_{\max}$ is the maximum range to the target. For a satellite at a low orbital altitude $h$, covering a swath of half-width $W$, the slant range is given by,

$$ r_{\max} = \sqrt{W^2 + h^2} $$

and may be dominated by the cross range component. For a constellation of satellites, the

swath width of the instrument is dependent on the number of satellites in the constellation

and the revisit time required of the system. Small revisit times require more satellites and

larger swaths. The revisit time T for a constellation of N satellites is given by,

T =4�R2

e

2zVWN

where $R_e$ is the Earth's radius, $V$ is the along-track velocity of the satellite, and $z$ is a constant ($\approx 3$) that depends on the extent and uniformity of coverage in latitude [18]. Inverting

this relationship gives the swath half-width in terms of revisit time and the constellation


size.

The focal length of the optics is related to the resolution requirements and to the size $\Delta d$ of the IR detectors that are available,

$$ f = h\left(\frac{\Delta d}{\Delta x}\right) $$

Smaller detectors lead to smaller focal lengths, and a great deal of effort has been

expended in trying to shrink IR detectors. Currently, several commercial detectors are

available in the mid-wavelength band with sizes ranging from 17–100 μm. This gives the

cost of the primary optics as,

$$ \mathrm{Optics\ Cost} = \frac{a\, h^3 \lambda^2\, \Delta d}{\Delta x^3}\left(1 + \frac{W^2}{h^2}\right) $$

where $a$ is the cost density ($/m³). Canavan [18] suggests that $10M/m³ is a reasonable cost density for modern optics. To be conservative, let us assume the optics cost an order of magnitude more than this, $a$ = $100M/m³.

The cost of the focal plane array (FPA) scales directly with the number of detectors in

the focal plane. This is dependent on the swath width, and on the dwell time requirements of

the detectors. Long dwell times mean that the detectors cannot be scanned as quickly, and

more detector elements are needed. The dwell time $t_d$ can be calculated from the required sensitivity of the IR device. This is measured by the noise equivalent temperature difference, $NE\Delta T$, which quantifies the minimum detectable change in apparent scene temperature

from one pixel to the next during a scan [19],

$$ NE\Delta T = \frac{C(\lambda)}{\lambda^2 \sqrt{t_d}} $$

where $C(\lambda)$ is a function of wavelength for each detector. For HgCdTe detectors with $\Delta d = 40\,\mu\mathrm{m}$, $C(3\,\mu\mathrm{m}) = 1.3 \times 10^{-12}$. An $NE\Delta T$ of approximately 0.5 K is considered a

good IR system. By inverting this relationship, the dwell time can be calculated. This then

allows an estimation of the number of detector pixels in the instantaneous field of view,

$$ N_{FPA} = \frac{N_x N_y\, t_d}{P} $$

where $N_y$ is the number of pixels scanned in the along-track direction over an orbit, $N_y = 2\pi R_e/\Delta x$; $N_x$ is the number of pixels scanned across-track, $N_x = 2W/\Delta x$; and $P$ is the

period of the orbit, in seconds. The cost of the FPA is then calculated assuming a cost of

$1 per pixel, based on current levels of technology [18].

The computation costs scale with the number of instructions that must be carried out


each second. This is equal to the product of the number of pixels across the swath width of

the instrument ($N_x$), and the rate at which they are crossed ($V/\Delta x$). A 100 MIPS computer can now be flown for about $100K, and so the computation cost density is approximately

$0.001 per instruction per second [18].

The total payload cost is the sum of the costs of the optics, the FPA and the computation

costs. The bus costs can be estimated by assuming a 20% payload mass fraction and a

constant $77K per kg of mass. This payload mass fraction represents a compromise between

that of a typical large satellite (30%) and of a small satellite (10%) [20]. The payload mass

is needed for this calculation and is estimated by assuming an average mass density of the

optics of one gram per cm³ with an additional multiplicative factor of 2 to account for some

extra margin [19].

The total constellation cost can then be estimated by summing the costs for optics,

FPAs, computers and dry mass for each satellite, and multiplying by the number of satellites

in the constellation. A discount factor to account for an expected learning curve must be

applied, depending on the number of satellites produced. The discount factor is assumed

to be 5% for less than 10 satellites, 10% for between 10 and 50 satellites and 15% for more

than 50 satellites [3].

Launch costs do not have to be calculated because, as discussed earlier, they should

scale only with total mass on orbit. Since we already account for total dry mass, adding

launch costs only alters the total system cost by a constant amount, without altering the

trends.

Incorporating these equations in a spreadsheet allows us to examine the effect of constellation size on just the recurring hardware cost, for various different orbital altitudes; a sketch of this calculation is given after the parameter list below. Figure 3-5 shows this relationship for a system with the following parameters:

• Revisit time, T = 25 minutes
• Ground resolution Δx = 30 m
• NEΔT = 2 K
• HgCdTe detectors, tuned to 4 μm, Δd = 40 μm
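The following is a minimal sketch of that spreadsheet model, assembled from the equations above. The gravitational constants, the way the $77K/kg bus cost is applied to the dry mass, and the average-unit form of the learning-curve discount are my assumptions rather than the thesis's exact implementation; the parameter values are those listed above.

```python
import math

MU = 3.986e14            # Earth gravitational parameter, m^3/s^2 (assumed constant)
RE = 6.378e6             # Earth radius, m
WAVELENGTH = 4e-6        # mid-wave IR, m
DX = 30.0                # ground resolution, m
DD = 40e-6               # detector size, m
C_DETECTOR = 1.3e-12     # NE-delta-T coefficient for HgCdTe
NEDT = 2.0               # required NE-delta-T, K
REVISIT = 25 * 60.0      # revisit time, s
Z = 3.0                  # latitude-coverage constant
A_OPTICS = 100e6         # optics cost density, $/m^3 (conservative value in the text)
COST_PER_PIXEL = 1.0     # $ per detector pixel
COST_PER_IPS = 0.001     # $ per (instruction per second)
COST_PER_KG = 77e3       # $ per kg of dry mass (assumed application of the $77K/kg figure)
PAYLOAD_FRACTION = 0.2

def satellite_cost(n_sats, altitude):
    """Recurring hardware cost of one satellite in an n_sats constellation, $."""
    v = math.sqrt(MU / (RE + altitude))                          # along-track velocity
    period = 2 * math.pi * math.sqrt((RE + altitude)**3 / MU)    # orbital period
    w = 4 * math.pi * RE**2 / (2 * Z * v * REVISIT * n_sats)     # swath half-width
    r_max = math.hypot(w, altitude)                              # slant range
    d = WAVELENGTH * r_max / DX                                  # aperture diameter
    f = altitude * DD / DX                                       # focal length
    optics_volume = d**2 * f
    optics = A_OPTICS * optics_volume
    t_dwell = (C_DETECTOR / (NEDT * WAVELENGTH**2))**2           # from the NE-delta-T relation
    n_x, n_y = 2 * w / DX, 2 * math.pi * RE / DX
    fpa = COST_PER_PIXEL * n_x * n_y * t_dwell / period
    computation = COST_PER_IPS * n_x * v / DX
    dry_mass = optics_volume * 1000.0 * 2.0 / PAYLOAD_FRACTION   # kg, with 100% margin
    return optics + fpa + computation + COST_PER_KG * dry_mass

def learning_factor(n):
    """Average-unit learning factor; discount tiers as quoted in the text."""
    discount = 0.05 if n < 10 else 0.10 if n <= 50 else 0.15
    b = math.log2(1.0 - discount)
    return sum(k**b for k in range(1, n + 1)) / n

for altitude in (200e3, 400e3, 800e3):
    for n in (10, 25, 50, 100):
        total = n * satellite_cost(n, altitude) * learning_factor(n)
        print(f"h = {altitude/1e3:4.0f} km, N = {n:3d}: ~${total/1e6:6.1f}M")
```

For the 50-satellite, 400 km case this sketch reproduces roughly the aperture, focal length and mass figures listed in Table 3.2.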

The hardware cost curves exhibit a minimum at a given amount of distribution. In-

creased constellation sizes reflect a separation of the overall task among more components,

reducing the swath that each satellite is responsible for imaging. There appears to be an op-

timum swath corresponding to the level of distribution at the minimum point in the curves

for each altitude. The existence of the optimum swath is a direct result of the quadratic

nature of the optics cost with swath, and the hyperbolic relationship between swath and the

number of satellites in the constellation. Neglecting learning curve effects, the total optics


[Figure 3-5: Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 25 minute revisit time. The figure plots total constellation cost ($M) against the number of satellites for orbital altitudes of 200, 400, 600 and 800 km.]

cost over the system therefore scales as $(N + 1/N)$. Constellations with fewer satellites than optimum feature larger swaths, and consequently larger costs for optics, FPA and computation. Systems with more satellites than the optimum have increased costs because the swath does not decrease fast enough to offset the increasing costs of more satellites. This is a good example of when distribution can lower the baseline costs. However, if the revisit time is increased to 60 minutes, the benefits of distribution begin to diminish. This is shown in Figure 3-6. For revisits of longer than an hour, distribution incurs a cost penalty. This is because the swath for long revisit times does not need to be very big for a constellation of any size, and a large distributed system has too much wasted resource.

A candidate architecture can be chosen from these curves. The system parameters for a viable, low cost architecture are shown in Table 3.2.

Table 3.2: Distributed infrared imaging system parameters

  Parameter                 Value      Notes
  Satellites                50
  Orbital altitude          400 km     200 km too low due to drag
  Revisit time              25 mins    Requirement
  Resolution                30 m       Requirement
  Aperture diameter         6 cm       $D = \lambda r_{\max}/\Delta x$
  Focal length              55 cm      $f = h(\Delta d/\Delta x)$
  Payload mass (per sat)    4 kg       1 gram/cm³ with 100% margin
  Dry mass (per sat)        20 kg      20% payload mass fraction

[Figure 3-6: Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 1 hour revisit time. The figure plots total constellation cost ($M) against the number of satellites for orbital altitudes of 200, 400, 600 and 800 km.]

If the proposed microsatellite systems become a reality, the current costing paradigm will change completely. Cost models that scale with unit cost, modified only by a learning curve, are not really applicable to microfabrication or batch processing techniques. The microfabrication of solid-state components involves huge production runs, and so the cost is reasonably insensitive to the actual number produced, being dominated by start-up costs.

An interesting caveat to be considered here is the increased component reliability resulting from mass-manufacture. As a result of the manufacturing process, mass-manufactured products have a very low variability in production standards and therefore have a characteristically high reliability.

3.1.5 Reducing the Failure Compensation Cost

In addition to the baseline costs, expenditure is necessary to compensate for any failures that cause a violation of requirements during the lifetime of the system. Expected failure compensation costs can be minimized by lowering the probability of failures, by reducing the impact of failures so that compensatory action is not needed, or by reducing the cost of that action. Clearly the expected failure compensation costs are closely related to the overall reliability of the system. System reliability can be improved by deploying redundancy, or by improving the quality of the components. Both of these options add to the cost of the system. Generally, a larger initial expenditure in improving the system reliability leads to smaller compensation costs.

Note that distribution can improve reliability only if there is redundancy in the design. A distributed architecture with total resources that can only just satisfy the demand is a


serial system and is subject to serial failure modes. Under these conditions a failure in any

component will lead to a failure of the system. The system reliability would be the product

of the reliabilities of the components, decreasing geometrically with the number of serial

components. Only by adding redundancy can a distributed architecture take advantage

of parallel reliability. System failure of a redundant architecture occurs only if all parallel

paths fail.

In general, most architectures will require some redundancy to satisfy reliability require-

ments throughout the expected lifetime. Frequently the cost associated with this redun-

dancy is less for a distributed architecture than it is for traditional systems. This reliability

cost accounts for the production, storage and launch of on-orbit or on-the-ground spares

necessary to maintain operations. For a distributed system these spares often represent

only small fractions of the initial deployment.

For collaborative systems, the system degradation is linear with the number of satellite

failures. When the number of satellites drops below that needed to satisfy the size of

the market, either replacements must be launched, or the system will incur opportunity

costs corresponding to the part of market that is not served. Deployed redundancy simply

provides initial capability over and above that necessary to satisfy the market. In the

absence of any other compensatory action, the system capabilities will continuously degrade

toward the minimum acceptable levels. If enough redundancy is deployed, this point will

not be reached within the system's designed lifetime.

Redundancy in symbiotic systems has a different role. Individual satellite failures do

not have a linear relationship with the degradation of the overall system. In fact, a small

number of satellite failures may have no noticeable impact on the system capabilities. If

however, the number of satellites in the cluster falls below some safe limit, the cluster will

simply not operate at all. For example, the users of GPS can obtain navigation solutions

provided at least four satellites are in view. Usually, many more than this minimum number

are visible from any ground location, but should failures occur such that this is not the case,

a navigation solution cannot be obtained at all. This can happen in some ground locations if

as few as two satellites fail from the existing constellation of 24 satellites. The consequences

of failure, in terms of the opportunity cost, are therefore very much greater for symbiotic

systems.

For certain satellite missions, a distributed architecture may also lower lifetime costs by

reducing the cost of any failure compensation that is necessary. A recent design study at

MIT [21] showed that distributed systems appear to yield the greatest cost savings under

two conditions:

• When the components being distributed make up a large fraction of the system cost.


It is prudent to distribute the highest cost components among many satellites. Do

not carry all your eggs in one basket!

• When the component being distributed drives the replacement schedule of the space-

craft.

These savings manifest themselves in a number of ways. First of all, for the distributed

architecture, the replacements represent only a small fraction of the initial deployment

whereas in a traditional design, the entire space segment must be replaced after a failure.

Also, the replacements, on average, occur later, thus realizing larger savings from discount-

ing to constant year dollars. The potential savings over traditional singular deployments

are demonstrated very well in the following example, taken from the MIT design study.

Example 3.5 Replacement Costs: Polar Orbiting Weather Satellites [21]

Instruments aboard polar orbiting weather satellites, such as the proposed NPOESS system,

are classified as either primary or secondary. Because the primary instruments provide crit-

ical environmental data records, failure of a primary instrument necessitates replacement.

A secondary instrument is one whose failure may be tolerated without replacement. If an

orbital plane's complement of sensors are all located on a single satellite, failure of any pri-

mary sensor will require redeployment of all the plane's sensors. By distributing the primary

instruments intelligently across a cluster of several smaller spacecraft, it may be possible

to reduce the cost of the system over its lifetime because the plane's entire complement of

sensors are not redeployed after every failure.

Consider the following three configurations illustrated in Figure 3-7. The blocks labeled as A, B, and C represent three primary instruments required in a given orbital plane.

Figure 3-7: Satellite and sensor configurations: ❶ one satellite per plane carrying one set of critical instruments (1 sat/1 set); ❷ one satellite per plane carrying two sets of critical instruments (1 sat/2 set); ❸ three satellites per plane sharing one set of critical instruments (3 sats/1 set). Blocks A, B and C denote the three primary instruments.


The total costs over a 10 year mission life were calculated for each of the three cluster configurations. As shown in Figure 3-8, the costs over the 10 year period are broken down into three categories: initial deployment, required spares, and expected replacements. Initial deployment includes the development, production, and launch costs for each orbital plane's original complement of spacecraft. The numbers of required bus, payload, and launch vehicle spares were derived from a Monte Carlo simulation of the mission, assuming reasonable component reliabilities.

Figure 3-8 shows that the initial deployment cost is least expensive for the {1 sat/1 set} configuration. Adding a redundant sensor to the single satellite configuration greatly increases the initial deployment cost in terms of larger bus size, additional instruments, and more expensive launch vehicles. The {3 sats/1 set} configuration, although launched on a less expensive vehicle, is slightly more expensive than the {1 sat/1 set} configuration due to the duplication of bus subsystems and some sensors on each of the three smaller satellites. The figure also shows that adding a redundant sensor increases the cost compared to configurations with a single primary instrument. The slight decrease in the failure densities as a result of redundancy does not make up for the expense of the additional sensors.

Distributing the primary instruments among three satellites significantly increases the reliability of each individual satellite. Higher satellite reliability and lower replacement launch costs result in the {3 sats/1 set} configuration having the lowest expected replacement cost. Once again, the slight increase in reliability gained from adding redundant primary instruments in the {1 sat/2 set} configuration is outweighed by the higher bus, payload, and launch costs.

To summarize, distribution within a satellite mission may reduce the replacement costs

over the lifetime of a mission. A modular system benefits not only because a smaller

replacement component has to be constructed, but also because there are huge savings in

the deployment of the replacement. These savings are the greatest when the component(s)

being distributed make up a large fraction of the system cost and drive the replacement

schedule.

3.2 Issues and Problems

There are some factors that are critical to the design of a distributed architecture that were

irrelevant to the design of traditional systems. Depending on the application, these issues

may be minor hurdles, or could be so prohibitive that the adoption of a distributed architec-

ture is unsuitable or impossible. Some of the important considerations, characteristic of all

distributed architectures, and particular to small- and microsatellite designs are presented

here.


Figure 3-8: Total system costs over the 10 year mission life of a polar orbiting weather satellite system. The 10 yr non-operating costs (FY97$M) are broken into RDT&E, initial deployment, required spares, expected replacement, and totals for the {1 sat/1 set}, {1 sat/2 set} and {3 sats/1 set} configurations, assuming 0.86 sensor reliability over 7 years and a 10 yr bus design life.

3.2.1 Modularity Versus Complexity

The potential benefits from distribution of satellite resources were stressed in the prior sections. It was shown that improvements in cost and performance can result if architectures are carefully designed to utilize a segregation of resources into smaller, more modular components. By allocating individual system functions to separate satellites, and adding redundancy, it was suggested that significant cost savings arise from an increased availability and a reduced failure compensation cost. Furthermore, the enhanced capabilities offered by distributed architectures greatly expand the useful applicability of small- and micro-satellites. For these reasons, an important issue to be addressed is the level to which a system should be distributed. How much can the system be divided into smaller components and still offer the benefits discussed earlier? The central issue here is the trade between the advantages of modularization and the cost of complexity.

Modularization

In distributing the functionality of a system among separate satellites, the system is essentially being transformed into a modular information processing network. The satellites, subsystems and ground stations make up individual modules of the system, each with well-defined interfaces (inputs and outputs) and a finite set of actions. Such systems are analogous to the distributed inter- and intranet computing networks, and as such, are subject to similar mathematics. Distributed computing is a rapidly developing field and a great deal


of work has been done to formalize the analyses [22] [23]. Much insight can be gained by adopting this groundwork.

One beneficial aspect of modularization comes from an improved fault-tolerance. Sys-

tem reliability is by nature hierarchical in that the correct functioning of the total system

depends on the reliability of each of the subsystems and modules of which the system is

composed. Early reliability studies [23] showed that the overall system reliability was in-

creased both by applying protective redundancy at as low a level in the system hierarchy

as was economically and technically feasible, and by the functional separation of subsystems

into modules with well-defined interfaces at which malfunctions can be readily detected and

contained. Clearly, subdividing the system into low-level redundant modules leads to a mul-

tiplication of hardware resources and associated costs. However, the impact of improved

reliability over the lifetime of the system can outweigh these extra initial costs.

There are additional factors supporting modularization that are specific to distributed

satellite systems. As discussed earlier, the baseline costs associated with a system of small

satellites may be lower than for a monolithic satellite design. Of even greater impact are the lower replacement costs required to compensate for failure. A modular system benefits not

only because a smaller replacement component has to be constructed, but also because of

the huge savings in its deployment.

All of these factors suggest that a system should be separated into modules that are as

small as possible. However, there are some distinct disadvantages of low-level modulariza-

tion that must be considered. The most important of these are the costs and low reliability

associated with very complex systems.

Complexity

The complexity of a system is well understood to drive the development costs and can significantly impact system reliability. In many cases, complexity leads to poor reliability as a direct result of the increased difficulty of system analyses; failure modes are missed or unappreciated during the design process. For a system with a high degree of modularity, these problems can offset all of the benefits discussed above.

Although each satellite in a distributed system might be less complex, being smaller and

having lower functionality, the overall complexity of the system is greatly increased. The

actual level of complexity exhibited by a system is difficult to quantify. Generally, however,

it is accepted that the complexity is directly related to the number of interfaces between

the components of the system. Although the actual number of interfaces in any system is

architecture specific, it is certainly true that a distributed system of many satellites has more

interfaces than a single satellite design. Network connectivity constraints mean that the


number of interfaces can grow quadratically with the number of satellites in a symbiotic architecture, approaching one interface per satellite pair. This is an upper bound; collaborative systems exhibit only linear growth in the number of interfaces with the number of satellites. The complexity of a distributed system is therefore very sensitive

to the number and connectivity of the separate modules.

The impact of this additional complexity is difficult to evaluate, especially without a formal definition of how complexity is measured. Recent studies at MIT [24], [25] would, however, suggest that complexity can cause significant increases in development and qualification time, increases in cost, and losses of system availability. For these reasons, the level of modularization must be carefully chosen. Only with thorough system analysis and efficient program management can the impacts of complexity be minimized.

A Lower Bound on the Size of Component Satellites

From very basic analyses, a lower bound on the size of the satellites of a distributed system

can be estimated.

Recall that there is a continuity constraint on the information flow through satellites.

The delivery of the information to the next node in the network need not be immediate.

For some applications, it is preferable to use a store-and-forward method of delivery. Here,

the information is stored on-board the satellite until such a time that it can be transmitted

to the destination node. The continuity constraint therefore enforces that at all times, the

information owing into a satellite must be either stored or transmitted. This leads to two

simple statements, and two associated bounds.

Firstly, in order to maintain extended operations, over the course of a single duty period

the net information transmitted by a satellite must be equal to that received. The energy

conversion system of the satellite must be able to support this net transmission of infor-

mation over the same duty period. If the satellite cannot provide enough energy to allow

transmission of this quantity of information, requirements cannot be satisfied. The amount

of energy required depends on the integrity requirements (driving the energy per bit), the

distance to the destination node (free space loss), and the transmitter/receiver character-

istics. The satellite must also provide the energy needed to receive the information in the

first place. This is a small factor for passive signal detection, but can dominate in active

systems.

Secondly, the rate at which new information accumulates on the satellite at any instant is the difference between the rate of information collection and the rate of transmission. This gives a bound on the minimum data storage requirements placed on satellites. The net storage at some time t is the integration of this difference, from an initial time at the start of the duty

cycle to the current time t. For store-and-forward systems, the value of this integral initially


increases with time as more data is stored, and then decreases to zero at the end of the duty

cycle, when all the data has been downloaded. The maximum value of this integral defines

the data storage capacity requirement. Note that this can be a very costly requirement to

satisfy, especially for remote sensing systems. Consider, for example, a single panchromatic

image of a 25km square scene at 1m resolution with 256 shades of gray. This single modest

image requires 625 megabytes of storage capacity [26]. Compression techniques can help,

reducing this figure by as much as an order of magnitude. However, in order to store any

reasonable number of images, it is clear that a great deal of storage capacity is required

on the satellite. Data storage devices are typically heavy and power hungry, and can

consume a substantial portion of the satellite's resources. This is true for large satellites such as SPOT and Landsat, which weigh over 1000 kg, and so it would seem unlikely that small

satellites with modest resources could satisfy similar storage requirements. Solid state

storage devices are now available that relieve this problem somewhat. In 1993, the state

of the art in solid state storage could buffer 1 Gbit of data in static RAM, consuming only

5W of power [17]. Nevertheless, data storage requirements place a severe constraint on the

smallest size for a remote sensing satellite. Of course, in distributing the task of collecting

data among the many elements of a distributed system, the storage requirements for each

satellite are reduced. This may actually enable large constellations for use in remote sensing

applications.
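
As a quick arithmetic check on the storage figure quoted above, the short Python sketch below recomputes the raw image volume from the stated scene size, resolution and bit depth; the factor-of-ten reduction is simply the compression assumed in the text.

    # Raw storage for one panchromatic image: 25 km x 25 km scene, 1 m resolution,
    # 256 gray levels (8 bits per pixel); values as quoted in the text.
    scene_size_m = 25_000
    resolution_m = 1.0
    bits_per_pixel = 8

    pixels = (scene_size_m / resolution_m) ** 2          # 6.25e8 pixels
    raw_megabytes = pixels * bits_per_pixel / 8 / 1e6    # 625 MB, as quoted
    compressed_megabytes = raw_megabytes / 10            # assumed ~10x compression
    print(raw_megabytes, compressed_megabytes)           # 625.0 62.5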

Example 3.6 Data Storage: Distributed Infrared Imaging System

Return to the distributed imaging system described in Example 3.4 to estimate the storage requirements on each satellite. Assume each pixel has a 4-bit value, corresponding to 16 shades of gray. The data rate of the IR detector on each satellite is then given by multiplying these 4 bits by the product of the number of pixels across the swath width of the instrument (Nx) and the rate at which they are crossed (V/Δx). The data must be stored until an

opportunity for downlink arises. The maximum downlink interval is set by the requirement

on responsiveness of the system. For near real-time applications, downlink opportunities

must come frequently. This is helpful for distributed systems since storage capacities are

limiting, and the interval between downloads must be as short as possible. Assuming a 25

minute revisit time for imaging, a 5 minute interval between downlinks, and a minimum

elevation from the ground station to the satellites of 20 degrees, Figure 3-9 shows the data

storage and downlink communication requirements on the satellites.

The figure shows that a 1 Gbit storage device is sufficient provided the system has more

than approximately 10 satellites. Communication data rates are high, but manageable for

constellations with greater than 50 satellites at altitudes above 400km.
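
The sizing logic of this example can be sketched in a few lines of Python. The detector parameters of Example 3.4 (Nx, V, Δx) are not repeated in this section, so the values below are placeholders chosen only to illustrate the calculation, and the even sharing of the collection duty across the constellation is an assumption of this sketch, not a statement from the text.

    bits_per_pixel = 4                  # 16 gray levels (from the text)
    Nx = 30_000                         # pixels across the swath (placeholder)
    V, dx = 7000.0, 25.0                # ground velocity (m/s) and pixel size (m), placeholders
    n_sats = 50                         # satellites sharing the imaging task (placeholder)
    downlink_interval_s = 5 * 60        # downlink opportunity every 5 minutes (from the text)

    detector_rate = bits_per_pixel * Nx * (V / dx)        # bits/s while imaging
    duty_fraction = 1.0 / n_sats                          # assumed even sharing of collection
    storage_bits = detector_rate * duty_fraction * downlink_interval_s   # buffered between downlinks
    downlink_rate = storage_bits / downlink_interval_s                   # rate needed to empty the buffer
    print(f"{storage_bits:.1e} bits stored, {downlink_rate:.1e} bits/s downlink")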


Figure 3-9: Data storage per satellite (bits) and downlink rate (bits/s) versus number of satellites, for a distributed imager with 25 minute revisit time and 5 minute interval between downloads, at altitudes of 200 km, 400 km, 600 km and 800 km.

3.2.2 Clusters and Constellation Management [27, 28]

In Section 3.1.2 it was argued that by combining the capabilities of many individual elements, systems of small- or micro-satellites can be used for high rate or resolution applications. For these applications, the relative positions and dynamics of the satellites in the cluster are a critical factor in the design. There are two options:

• Local clusters, in which a group of satellites fly in formation. The relative positions of the satellites are controlled within specified tolerances.

• Virtual clusters, in which a subset of satellites from a large constellation make up the cluster. The actual constituent satellites and their positions constantly change subject to the orbital dynamics of the constellation.

The most suitable choice depends on the application. Consider, for example, using a cluster to form a sparse aperture for high resolution imaging of terrestrial or astronomical targets. By coherently adding the signals received by several satellites, the cluster would create a sparse aperture many times the size of a real aperture. The phasing and the optical paths of the electromagnetic waves must be carefully controlled so that the signals combine coherently. The tolerance is typically λ/20. Station keeping is therefore a large problem for a local cluster. Provided that the satellite positions are controlled within this tolerance, the processing requirements on the satellites are reduced. Conversely, the element positions of a


virtual cluster continuously vary, and so to correctly phase the signals these locations must

be known with a high degree of accuracy at all times. As a result, the virtual cluster has

slack station keeping requirements, but needs a great deal of intersatellite communication

and processing to ensure coherence. This coherence issue is discussed later in Section 3.2.3.

The remainder of this section details the propulsion requirements necessary to maintain the

relative positions of satellites orbiting in a local cluster.

Equations of Motion

The relative motion of satellites in a local cluster can be predicted by linearized pertur-

bations of the equations of motion about a reference orbit. The linearization is valid if

the cluster diameter is small compared to the radius of the reference orbit. For circular

reference orbits, the linearized set of equations is known as Hill's equations [29],

    \ddot{x} - 2n\dot{y} - 3n^2 x = a_x
    \ddot{y} + 2n\dot{x} = a_y
    \ddot{z} + n^2 z = a_z                                              (3.9)

where n is the frequency of the reference orbit in radians per second, and the acceleration

terms on the right-hand side represent all non-central force effects (drag, thrust, gravity perturbations, etc.). The right-handed local coordinate frame has x pointing up and y

aligned with the velocity direction of the reference orbit. These equations can be used to

estimate the propulsive requirements placed on satellites constrained to orbit in clusters.

The different cluster configurations are defined by different degrees of freedom in (x, y, z) in Eqn. 3.9.

There are, in fact, many ways to create local clusters. Two will be considered here.

The first is to fly the satellites in rigid formation, maintaining their relative positions and orientation. This involves constant values of (x, y, z) for each satellite, since the position is fixed relative to the reference orbit. The second option, which usually proves to be more realizable, is to allow the cluster configuration to rotate, maintaining only the relative intersatellite separations. This option has (x, y, z) that are constrained to follow circular

trajectories around the reference orbit. The propulsion requirements for each of these

options are described below.

Rigid clusters

Since the reference orbit is assumed to be a circular Keplerian orbit, rigid clusters must feature some satellites in non-Keplerian orbits. These non-Keplerian orbits are characterized


by either a focus which is not located at the Earth's center of mass, or orbital velocities

which do not provide the proper centrifugal acceleration to offset gravity at that altitude.

The Earth's gravity will act to move these satellites into Keplerian orbits, giving rise to

"tidal" accelerations exerted on the satellites that are a function of the cluster baseline and

orbit altitude. To maintain relative position within the cluster, these accelerations must be

counteracted by thrusting.

The required amount of thrusting to maintain the cluster for a single orbit can be estimated from Eqn. 3.9, by setting all time varying terms to zero and integrating over a single orbital period (= 2π/n). The result is that,

    ΔV (per orbit) = 2πn √(9x² + z²)                                    (3.10)

If the cluster diameter is R0 = √(x² + y² + z²), the above result suggests that the ΔV requirements for rigid clusters scale to first order as 10nR0 m/s per orbit. At LEO altitudes, n ≈ 0.001 rad/s, and so ΔV ≈ 0.01R0 m/s per orbit.

For a particular propulsion specific impulse¹ (Isp) and propellant mass fraction fp, the lifetime as a function of the ΔV per orbit is [28],

    Lifetime (years) = − (years/orbit) (orbit/ΔV) Isp g ln(1 − fp)      (3.11)

where g is the gravitational field strength. For reasonable propellant mass fractions of around 10%, even propulsion systems with high specific impulse (2600 sec) cannot maintain a 100 m cluster in LEO for more than six months. This makes the implementation of rigid clusters extremely unlikely.
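
The six-month figure can be reproduced with the short sketch below, which chains the first-order ΔV scaling to Eqn. 3.11. The 400 km reference altitude is an assumption made here, and the second case simply reuses the 3nR0 circular-cluster scaling quoted in the next subsection.

    import math

    mu, Re = 3.986e14, 6.378e6             # Earth's gravitational parameter and radius (SI)
    n = math.sqrt(mu / (Re + 400e3) ** 3)  # orbital rate for an assumed 400 km altitude, ~1.1e-3 rad/s

    R0, Isp, fp, g = 100.0, 2600.0, 0.10, 9.81     # cluster diameter and propulsion values from the text
    dv_budget = -Isp * g * math.log(1.0 - fp)      # rocket equation: total Delta-V capacity, ~2690 m/s
    orbits_per_year = 365.25 * 86400.0 * n / (2.0 * math.pi)

    for label, dv_per_orbit in [("rigid", 10 * n * R0), ("circular", 3 * n * R0)]:
        years = dv_budget / (dv_per_orbit * orbits_per_year)
        print(f"{label} cluster lifetime: {12 * years:.0f} months")    # ~5 and ~17 months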

Circular (dynamic) clusters

An alternative to holding the clusters rigidly is to allow the satellites to rotate in circles around each other, in a plane whose normal is aligned with the viewing direction, such that their relative separations are preserved. In this case, (x, y, z) are only constrained

to lie on a circle. The period of rotation of the circle is the same as the orbital period, so

that the satellites have the same natural frequency as the reference orbit. In the plane of

the cluster, the general motions of the satellites are described by,

    x' = 0
    y' = R0 cos(nt + φ)
    z' = R0 sin(nt + φ)                                                 (3.12)

¹ The specific impulse of a thruster is a measure of the thrust per unit mass flow rate of propellant.

where t is time and φ is a phase angle. These equations can be transformed into the Hill frame by a standard rotational transformation through the azimuth and elevation angles of the line of sight to the nadir (negative x) direction. This gives the constraints on (x, y, z) for satellites in the Hill frame. Sedwick [28] integrates Eqn. 3.9 with these constraints over all values of these angles for a single orbit, and concludes that at worst, the ΔV per orbit scales as 3nR0. From the same arguments as were made for rigid clusters, this leads to lifetimes of at most 18 months for LEO clusters of 100 m diameter.

However, there are some angles (side-looking at 30° off-nadir) at which there is no ΔV required to maintain the circular configuration. These represent free-orbit solutions to the problem. If this off-axis viewing angle can be tolerated, the propulsion requirements are reduced to only that needed to overcome perturbations. Over reasonable cluster sizes, these perturbations exert negligible differential forces to distort the cluster, and only act to

perturb the cluster as a rigid body.

The results presented in this section suggest that maintaining clusters is prohibitively difficult if the cluster is required to move only as a rigid body. This is unfortunately the requirement placed on optical interferometers, which need differential paths to be preserved very accurately such that the same wavefront is measured at the different apertures in real-time. However, if a sparse array is to be formed at radio frequencies, there is a possibility for the signals from different satellites to be combined during post-processing after digitization. Time delay can easily be introduced during the interfering process, and the distance of any given satellite from the source is no longer an issue. This relaxes a degree of freedom in Eqn. 3.9, since the satellites are no longer bound to move in a plane. No results have yet been presented, but it is suggested that great propellant savings will be realized from allowing this behavior [28].

3.2.3 Spacecraft Arrays and Coherence

The use of large symbiotic clustellations of satellites, each with a small antenna, has been proposed for forming extremely large sparse apertures in space. This spacecraft array concept has received a great deal of attention from many sectors of the space community, due mostly to the potential it offers for high resolution sensing and communications.

The symbiotic spacecraft array idea was introduced in Example 3.1 and discussed in Section 3.1.2. To recap, a large thinned aperture is formed from a set of satellites, each acting as a single radiator element. The angular resolution of any aperture is set by the overall aperture dimension, expressed in wavelengths. The SNR achievable by the array is

directly proportional to the number of constituent elements. The spacing of the elements is


much larger than one-half of the wavelength, and so grating lobes are avoided only if there

are no periodicities in the elemental locations.

Unfortunately, there are many technical difficulties involved with the design and con-

struction of such a system, mainly due to the requirement for signal coherence between

large numbers of widely separated apertures. This is especially true for systems intended

for Earth observation. Interferometric techniques are not well suited to Earth observation

from orbit since the Earth forms an extended source, unlike the astronomical sources which

lie embedded in a cold cosmic background. This forces a need for very high SNRs and high

sampling densities [30], leading to designs featuring a large number of satellites. For high

resolution imaging applications, requiring either long integration times or high SNRs, the situation is made worse by the forward motion of LEO satellites limiting the time over the target. This forces more simultaneous measurements to be made in order to reach the required

SNR and therefore requires even greater numbers of elements. Furthermore, although there

may be no grating lobes to consider, the thinned and random array will exhibit large average

sidelobe levels. For randomly distributed arrays, the ratio of the power in the main lobe to

the average sidelobe power is approximately equal to the number of elements in the array

[31]. For most detection applications, the maximum sidelobe power should be much lower

(more than 10dB lower) than the main beam power. Using this measure gives bounds on

the order of 10 for the minimum number of satellites that must be used to form the sparse

array. As will be shown later in Chapter 7, this is a reasonable approximation.
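
A one-line version of that bound follows, using only the stated property that the main-lobe to average-sidelobe power ratio of a randomly thinned array is approximately the element count.

    # Main lobe / average sidelobe power ~ N for a randomly thinned array [31];
    # demanding sidelobes at least 10 dB below the main beam then bounds N from below.
    margin_db = 10.0
    n_min = 10 ** (margin_db / 10)
    print(n_min)    # 10 elements: the order-of-magnitude bound quoted above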

The formation of sparse apertures using a large number of satellites is complicated

by data processing, presenting a barrier to the adoption of sparse array technology. The

most generic problems, common to both active and passive clusters, involve using spacecraft

arrays as receivers. The signals from each element of a receiver must be combined coherently.

The data processing requirement scales quadratically with the number of elements, and the

equipment becomes very costly as the aperture size grows. The actual exchange of signals

between receivers and combiners also poses a difficult challenge. For an interferometer the

exchange is done simply by routing the analogue signals from the pair of collectors to a

common combiner, constraining the optical paths to be equal in each case. It is difficult

to adopt the same strategy for arrays with many elements. Since the satellites are remote

from each other, there is no easy way of simultaneously combining the signals from all of

the different elements in an analogue form. For these arrays, the combining is more easily done

during post-processing after digitizing the signals while preserving the phase.

This effectively limits the applicability of spacecraft arrays for passive sensing. A passive

receiving spacecraft array must record information over a reasonably long period of time

to integrate the SNR. This then necessitates enormous storage capacity on board each

satellite, since all the phase information must be preserved. Sampling the carrier wave at


the Nyquist limit with 8 bit quantization would result in storage rates of 96 Gb/s for an

X-band detector. Even with high-speed buffers, the required storage capacity after only a

few seconds of integration time is prohibitive. Of course, the receiver can filter and mix the

input signal down to a lower intermediate frequency (IF) before the A-D conversion, greatly

reducing the load on the data processing. This results in no loss of information provided the

information bandwidth is known to be small compared to the carrier frequency. In general,

the bandwidth of the information may be as high as the receiver bandwidth. Sometimes,

however, the nature of the target is such that the information content is known to be

bandlimited over a reasonable range (kHz-MHz). In these cases digitization and storage

can still be problematic, but at least manageable. An active symbiotic system may benefit

here, since the characteristics of the transmitted signal are known, and the information

content is limited only to the changes observed in the received signal.
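
The arithmetic behind the quoted storage rate is sketched below. The 6 GHz of sampled bandwidth is an assumption chosen here only so that the simple formula reproduces the 96 Gb/s figure in the text; an actual X-band front end may sample more or less than this.

    # Direct digitization: rate = 2 x (sampled bandwidth) x (bits per sample)
    bandwidth_hz = 6.0e9      # assumed sampled bandwidth, chosen to match the quoted figure
    bits = 8                  # quantization, from the text
    rate_bps = 2 * bandwidth_hz * bits
    print(rate_bps / 1e9, "Gb/s")                  # 96 Gb/s
    print(rate_bps * 10 / 8 / 1e9, "GB in 10 s")   # ~120 GB after ten seconds of integration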

Spacecraft arrays are also problematic as active transmitters for tracking applications. These systems track targets with a narrow beam, optimizing the signal-to-noise ratio from the

target while nulling the clutter and noise. Correct phasing of the array at the desired

angle and range to illuminate the target must be performed in real-time. Returns from

the target are used by a feedback controller to vary the phase at each element in order to

steer the beam. To do this, each array element must have accurate information about the

relative position of all other array elements. Continuous communication between satellites

is needed. Furthermore, the time constant for the detection (including signal reception,

combining, processing, and phasing of the transmissions) must match the dynamics of the

target. For small local clusters, the slow dynamics of the array may allow this to be carried

out if the processing capability exists. For virtual clusters this would be a very tricky task

given the dynamic nature of the system.

3.3 Summary

Distributed architectures are enabling for small satellite designs because they expand their

useful range of applications to include high rate and resolution sensing and communications.

The capabilities of many small satellites are combined to satisfy mission requirements.

A distributed architecture makes sense if it can offer reduced cost or improved capabilities. Distribution can offer improvements in isolation, rate, integrity and availability. The improvements are not all-encompassing, and in many cases are application specific. Nevertheless, it appears that adopting a distributed architecture can result in substantial gains compared to traditional deployments. Some of the more important advantages that distribution may offer are:

• Improved isolation corresponding to the large baselines that are possible with widely


separated antennas on separate spacecraft within a cluster.

• Higher net rate of information transfer, achieved by combining the capacities of several satellites in order to satisfy the local and global demand.

• Improved availability through a reduced variance in the coverage of target regions. This reduces the need to "overdesign" and provides more opportunities for a favorable viewing geometry.

• Staged deployment on an as-needed or as-afforded basis.

• Progressive technology insertion and modular upgradeability, reducing the impact of technology-freeze dates.

• Improved reliability through redundancy and path diversity.

• Lower failure compensation costs due to the separation of important system components among many satellites; only those components that break need replacement.

There are some problems, specific to distributed systems of small satellites, that must

be solved before the potential of distributed architectures can be fully exploited. The most

notable of these problems are:

• An increase in system complexity, leading to long development times and high costs

• Inadequacy of the data storage capacity that can be supported by the modest bus resources on micro-satellites

• Difficulty of maintaining signal coherence among the apertures of separated spacecraft arrays, especially when the target is highly dynamic

• Need for autonomous operations; if autonomy is not implemented, operations costs will dominate, and for symbiotic systems human intervention may not be sufficiently timely.

The resolution of these issues, and the proliferation of microtechnology, could lead to-

ward a drastic change in the satellite industry. It seems clear that distribution offers a viable

and attractive alternative for some missions. Large constellations of hundreds or thousands

of small- and micro-satellites could feasibly perform many of the missions currently being

carried out by traditional satellites today. For some of those missions, the utility and suit-

ability of distributed systems looks very promising. More analysis is warranted in order

to completely answer the question of where and when distribution is best applied, but the

potential prospects of huge cost savings and improvements in performance are impossible


to ignore. It therefore seems inevitable that massively distributed satellite systems will be

developed in both the commercial and military sectors. We are living in a time of great

changes, and the space industry has not escaped. Over the last few years, "faster, cheaper, better" has been the battle cry of those engineers and administrators trying to instigate changes to improve the industry. "Smaller, modular, distributed" may be their next verse.


Chapter 4

Development of the Quantitative

Generalized Information Network

Analysis (GINA) Methodology

If you know a thing only qualitatively, you know it no more than vaguely. If you know it quantitatively – grasping some numerical measure that distinguishes it from an infinite number of other possibilities – you are beginning to know it deeply. You comprehend some of its beauty and you gain access to its power and the understanding it provides. Being afraid of quantification is tantamount to disenfranchising yourself, giving up on one of the most potent prospects for understanding and changing the world.

Carl Sagan, "Billions & Billions", 1997

4.1 Motivation

There are many different ways to design satellite systems to perform essentially the same task. In order to compare alternate designs, metrics are required that fairly judge the capabilities and performance of the different systems in carrying out the required task. In today's economic climate, there is also a requirement to consider the monetary cost associated with different levels of performance. Due to the extremely large capital investment required for any space venture, it is especially important for satellite designers to provide the customer with the best value. The point here is that for a distributed system to make sense compared to another way of achieving the function, it must offer reduced cost for similar levels of performance. This hints at the possible benefits of a definable cost per (functional) performance metric. Capability, performance and cost metrics can be used as


design tools by addressing the sensitivity of performance and cost to changes in the system components, or by identifying the key technology drivers. This leads to the definition of the adaptability metric, which quantifiably measures the sensitivity to changes in the design or role. Any metric used for comparative analysis should be quantifiable and unambiguous. A measurable metric therefore requires a formal definition that leads to a calculable expression. Unfortunately, satellite engineering analysis has traditionally been treated on a case-by-case basis. Each new satellite system is designed and judged by its own set of rules for a specific, narrowly defined task. This has meant that any formal definition of a metric has been specific and relevant only to systems of the same architecture. It is therefore necessary to develop a generalized and formal framework for defining quantifiable metrics for performance and cost, capability, and adaptability. A major goal of this research has been to formally define the three metrics of Capability, Cost per Function, and Adaptability, such that the analysis techniques are common to all systems, regardless of application or architecture.

4.2 Satellite Systems as Information Transfer Networks

The primary enabler for a generalized analysis framework is that for all current applications,

satellite systems essentially perform the task of collection and dissemination of information.

4.2.1 Definition of the Market

Information transfer systems exist only to serve a market: a demand that specific information symbols be transferred from some set of sources to a different set of presumably remote sinks. This origin-destination (O-D) market is distinct from the systems built to satisfy it, and is defined by the requirements of the end-users (at the sinks). In most cases the information is transferred in the form of a digital data stream¹. The information symbol is the atomic piece of information demanded by these end-users. The symbol for communication systems is either a single bit or a collection of bits. For imaging systems, the symbol is an image of a particular scene. This compound symbol has many component pixels, each of which has some value, defined by a sequence of data bits. The symbol is the image and not the individual pixel because pixels on their own carry little or no information and are of no use to the end-user, who demands images.

As described in Chapter 2, the symbol for a navigation system is a user navigation solu-

tion. This is an interesting example since it demonstrates the distinction between the market

¹ Even those systems featuring analogue detection, such as optical imaging, almost always feature analogue-digital conversion before transmission to the end user.


and the system implemented to satisfy it. The NAVSTAR GPS system does not transfer user navigation solutions through its satellites, but simply relays each satellite's position and time to the end-users to enable a range measurement between the user and the satellite. If the user terminals are able to calculate the pseudoranges to at least four satellites, the required information symbol can be constructed. Note that the necessary information is embedded in the signals from several different satellites, and is only "assembled" into the required form inside the user terminal. This dichotomy between supply and demand is common in network flow problems and is essential for the analysis of augmented or hybrid architectures. These are architectures composed of several dissimilar systems that together perform the overall mission. An example is the combined use of space assets and unmanned aerial vehicles (UAVs) for battlefield reconnaissance.
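
As a concrete illustration of how the symbol is "assembled" in the user terminal, the sketch below solves for position and clock bias from four pseudoranges with a plain Gauss-Newton iteration. The satellite positions, user position and clock bias are invented for illustration, and the solver is not the algorithm of any particular receiver.

    import numpy as np

    C = 299_792_458.0                      # speed of light, m/s

    # Hypothetical ECEF satellite positions, metres
    sats = np.array([[15_600e3,  7_540e3, 20_140e3],
                     [18_760e3,  2_750e3, 18_610e3],
                     [17_610e3, 14_630e3, 13_480e3],
                     [19_170e3,    610e3, 18_390e3]])

    # Simulate measurements from an assumed true user state (position + clock bias)
    true_pos, true_bias = np.array([6_370e3, 0.0, 0.0]), 3.0e-3
    pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

    est = np.zeros(4)                      # unknowns: x, y, z (m) and clock bias (s)
    for _ in range(10):                    # Gauss-Newton iteration
        rho = np.linalg.norm(sats - est[:3], axis=1)
        residual = pseudoranges - (rho + C * est[3])
        H = np.hstack([-(sats - est[:3]) / rho[:, None], C * np.ones((4, 1))])
        est += np.linalg.solve(H, residual)

    print("position error (m):", np.linalg.norm(est[:3] - true_pos))   # ~0
    print("clock bias (ms):   ", est[3] * 1e3)                         # ~3.0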

4.2.2 Functional Decomposition and Hierarchical Modeling

A satellite system can be represented as a modular information processing network. The satellites, subsystems and ground stations make up individual modules of the system, each with well-defined interfaces (inputs and outputs) and a finite set of actions. This abstraction allows satellite system analysis to be treated as a network flow problem. System analysis is then reduced to characterizing how to:

"move some entity [information] from one point to another in an underlying network . . . as efficiently as possible, both to provide good service to the users . . . and to use the underlying transmission facilities effectively" [32].

The network representation of the satellite system provides the framework for quantitative system analysis, based on the mathematics of information transmission and network flow. If the interaction between each module and the information signal can be estimated, the characteristics of the information arriving at the sinks can be calculated.

Correct representation of the satellite system as an information network requires a func-

tional decomposition of the system into its most important functional modules. The func-

tional modules are those elements of the system that impact the transfer of information

from source to sink. Note that functional modules do not necessarily represent system

hardware; a rain cloud can assuredly affect radio communication to a satellite, but it is

not conventionally considered a system component. In fact, other than for component re-

liability estimation, the actual hardware configuration of a subsystem is of little interest

to the network modeler. Of much greater importance is correctly modeling the functional

interaction between a module and the information signals being transferred.

Figure 4-1 shows a simplified network for a system consisting of a single communication

satellite. The system transfers data between a set of users utilizing eight spot beams, which


are the input and output interfaces for the satellite.

Figure 4-1: Top-level network representation of a single communication satellite, linking sources to sinks through receive (Rx) and transmit (Tx) spot beams.

At this most basic level of abstraction, the network is modeled to comprise only the

source and sink nodes², the satellite node and the interfaces between them. This level of

detail is probably too simplistic for any useful system analysis. Figure 4-2 shows the net-

work for the same system modeled with a finer level of functional decomposition. In this more detailed model, the signal from a source node passes through modules representing the effects of atmospheric rain attenuation, space loss, and cross-channel interference, before being collected at a receiver module on the satellite. For diagrammatic simplicity, only one

spot beam is drawn on the uplink. The signal from this receiver is passed (along with the

signals from the other seven receivers not shown in the diagram) through a multi-channel

module representing the satellite digital signal processor (DSP). This module interprets the

information symbols and re-routes them to the correct satellite transmit modules. Again,

only one channel is shown for simplicity. The downlink has similar attenuation and inter-

ference modules, a user receiver and a DSP, and terminates at a sink node. Clearly, this

lower-level model is a more accurate representation of the real system.

The network model can be further augmented by including additional support modules

that are not part of the primary information pathway. For instance, modules represent-

ing the power generation system, the propulsion system or attitude control system of the

satellite could be added. These support modules provide the other primary functional

modules with enabling signals (power, propulsion, control, etc.). The functional modules

must receive these enabling signals in order to transfer the information symbols correctly.

The inclusion of these support modules in the network adds a further level of detail to the

analyses.

² Provided their interface with the network is similar, the users within each spot beam can be grouped as a single node.


Figure 4-2: Detailed network representation of a communication satellite, with modules for rain attenuation, space loss and interferers on both the uplink and the downlink, together with the satellite and user receivers (RX), transmitters (TX) and digital signal processors (DSP), linking a source to a sink.

This hierarchical nature of the network modeling allows the detail and accuracy of the analyses to be customized depending on the application. For example, at the conceptual design stage, the analyses may only have to predict the feasibility of the architecture. For this level of analysis, only the essential functional modules must be included. Later in the design process, more detail can be added to obtain accurate predictions of the capabilities of the entire system.
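
As an illustration of this modular, hierarchical style of model (the sketch is not a construct defined by the text), the uplink of Figure 4-2 can be coded as a chain of functional modules acting on a simple signal record; adding or removing modules changes the fidelity of the analysis. All gains and losses are placeholders.

    from dataclasses import dataclass, replace

    @dataclass
    class Signal:
        power_dbw: float          # carrier power
        interference_dbw: float   # accumulated interference power

    def rain(s):        return replace(s, power_dbw=s.power_dbw - 3.0)     # assumed 3 dB fade
    def space_loss(s):  return replace(s, power_dbw=s.power_dbw - 180.0)   # assumed path loss
    def interferers(s): return replace(s, interference_dbw=-175.0)         # assumed interference level
    def receiver(s):    return replace(s, power_dbw=s.power_dbw + 40.0,
                                          interference_dbw=s.interference_dbw + 40.0)

    uplink = [rain, space_loss, interferers, receiver]    # module chain, in order

    sig = Signal(power_dbw=10.0, interference_dbw=-300.0)
    for module in uplink:
        sig = module(sig)
    print(sig)   # characteristics of the signal delivered to the satellite DSP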

4.3 The Capability Characteristics

Information collection and dissemination always requires the detection of information-bearing signals in the presence of noise and interference. The capabilities of digital data transfer systems can be characterized by four important quality-of-service parameters relating to the detection process and to the quantity, quality and availability of the information: signal isolation, information rate, information integrity, and information availability.

4.3.1 Signal Isolation

The system's ability to isolate and identify signals from different sources within the field of view is a critical mission driver for many applications. Obviously, a system cannot satisfactorily transfer information between specific O-D pairs unless the individual sources and sinks can be identified and isolated. Various methods are used to isolate the different signals.

For communication systems, common isolation schemes separate the signals in frequency (Frequency Division Multiple Access, FDMA) or time (Time Division Multiple Access, TDMA). Also, individual spot beams can be used to access multiple sources that are spatially separated. The same techniques can be applied to radar systems. Doppler frequency shifts are used for identification of the target velocity and clutter rejection, and time gating is used for target ranging. Scanning a small radar beam over a large area allows separate targets to be isolated in space to within a beamwidth. For imaging and


remote sensing systems, the same principles apply. Different sources can be identified by detecting in different frequency bands. Spatially separated sources can be isolated using a high resolution detector. An aperture can distinguish between sources that are separated by a distance no less than the resolution of the aperture. Note the one-to-one correspondence between:

• The resolution of an optic and the beamwidth of an antenna or a radar

• The frequency of radiation from a remote sensing pixel, the carrier frequency of a communication signal, and the Doppler shifts of a radar signal.

The generality exhibited in the mathematics of signal analysis allows these isolation relationships to be formalized.

4.3.2 Generalized Signal Isolation and Interference

Information transfer systems must be able to isolate a given signal from any others that may be present. If the different signals cannot be distinguished, the cross-source interference will introduce noise that could cause an erroneous interpretation of the information. In general, a signal can be expressed in either of two domains: the physical domain x or the Fourier domain s. These two domains are related by the Fourier transform.

For electrical signals, such as in communications, the physical domain is time t, while the Fourier domain is frequency f. Either domain can be used for analysis, although it is often easier to perform the calculations in the frequency domain. For example, consider the simple linear system shown in Figure 4-3. The time-domain output r(t) is given by the convolution of the input signal i(t) and the impulse response of the system p(t). Equivalently, in the Fourier domain the output R(f) is given by multiplying the spectrum of the input signal I(f) by the frequency response of the system P(f). This duality relationship is shown in Eqn. 4.1,

    r(t) = i(t) ∗ p(t)  ↔  R(f) = I(f) P(f)                             (4.1)

Figure 4-3: Simple linear time-invariant system, with input i(t) (spectrum I(f)), system response p(t) (P(f)), and output r(t) (R(f)).
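
A short numerical check of this duality is given below: filtering by convolution in the time domain and multiplying the spectra in the frequency domain give the same output. The signal and impulse response are arbitrary illustrative choices.

    import numpy as np

    n = 256
    t = np.arange(n)
    i = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.20 * t)   # input i(t)
    p = np.exp(-t / 8.0) / 8.0                                              # impulse response p(t)

    # Multiply in the Fourier domain (zero-padded so the product corresponds to
    # linear rather than circular convolution), then transform back.
    r_freq = np.fft.ifft(np.fft.fft(i, 2 * n) * np.fft.fft(p, 2 * n)).real[:n]
    r_time = np.convolve(i, p)[:n]                                          # r(t) = i(t) * p(t)

    print(np.max(np.abs(r_freq - r_time)))    # ~1e-15: the two routes agree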

Note that a square low-pass filter of bandwidth W Hz has a time-domain impulse response equal to a sinc function with a half-width of 1/W seconds, as shown in Figure 4-4. This means that two time-domain impulse signals passing through the filter can


be isolated only if their time separation is greater than this minimum value. The cut-off frequency W effectively limits the filter's ability to transfer time-domain information.

Figure 4-4: A square low-pass filter P(f) of bandwidth W and its time-domain response p(t), a sinc function with nulls separated by 2/W.

There is an exact analogy to these relationships in optics and antenna theory [33] [34]. The corresponding Fourier-transform pair is the angle between the propagation direction of the radiation and the normal of the antenna, measured as sin θ, and a spatial coordinate along the antenna (measured in wavelengths) referred to as the spatial frequency u. It is convenient to consider sin θ as the physical variable, and u as the Fourier variable, although the choice is arbitrary due to the symmetry of the Fourier transform. The analogy with electrical signal theory allows most of the properties relating to filtering and processing of time-domain electrical signals to be extended to antennas and optics. For example, consider the one-dimensional antenna shown in Figure 4-5. The antenna images an unknown "object" distribution i(sin θ) by filtering the object signal with a low-pass filter. An aperture or optic is a spatial filter since it samples only those parts of the signal within its spatial extent. The output image (in the angular domain) is equal to the convolution of the input signal i(sin θ) and the impulse response of the aperture, defined as the radiation pattern p(sin θ). Equivalently, the Fourier-domain output is given by the product of the input signal I(u) and the aperture (illumination) distribution P(u).

    r(sin θ) = i(sin θ) ∗ p(sin θ)  ↔  R(u) = I(u) P(u)                 (4.2)

Figure 4-5: Basic antenna model, with object distribution i(sin θ) (Fourier domain I(u)), radiation pattern p(sin θ), and output image r(sin θ) (R(u)).

Note that the angular radiation pattern of an aperture is equal to the Fourier transform of the aperture distribution. That is, it is the response of the antenna to uniform illumination over its extent. For a rectangular aperture of size D/λ, this response is a sinc function of half-width sin θ = λ/D, as shown in Figure 4-6. The position of this first null in the radiation pattern corresponds to the angular resolution of the aperture, since it determines the minimum angular separation of two point sources that can be successfully isolated. The cut-off frequency u0 = D/λ limits the ability of the antenna to transfer angular information. This property corresponds precisely to the earlier-stated isolation capabilities of electrical filters.³

Figure 4-6: A rectangular aperture distribution P(u) of extent D/λ and its radiation pattern p(sin θ), a sinc function with nulls separated by 2λ/D.
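
The same relationship can be checked numerically: the pattern below is the Fourier transform of a uniformly illuminated aperture, and it vanishes at sin θ = λ/D, the angular resolution. The 3 m aperture and 3 cm wavelength are illustrative values only.

    import numpy as np

    D, lam = 3.0, 0.03         # aperture size and wavelength, m (assumed)

    def pattern(sin_theta, n=4096):
        # |Fourier transform| of a uniform aperture of extent D/lam in spatial frequency
        u = np.linspace(-0.5 * D / lam, 0.5 * D / lam, n)
        return abs(np.sum(np.exp(2j * np.pi * u * sin_theta)) * (u[1] - u[0]))

    print("first null at sin(theta) = lambda/D =", lam / D)
    print("pattern(null)/pattern(peak) =", pattern(lam / D) / pattern(0.0))  # <1e-3, i.e. a null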

The similarities between the isolation characteristics of electrical systems and antenna

systems are pervasive. The same principles of signal theory apply for both applications,

and generalizations can be made about the isolation capabilities of a general system.

Signals can be isolated only if they are distinct and separable. Clearly, two signals that

are separated in either the physical or Fourier domain satisfy this condition. For example,

two electrical signals with non-overlapping frequency bands can be isolated using a pair

of bandpass filters. Similarly, two time-bounded signals transmitted sequentially can be isolated using a simple time gate. However, the condition that the signals be "distinct and separable" does not restrict them to exclusive occupation of part of one of the two

domains. It is possible for a set of signals to occupy the same parts of the physical and

Fourier domains, and still be distinguished, albeit with some amount of interference.

To better understand what is meant by "distinct and separable", it is helpful to adopt the signal space interpretation of signal analysis. Here, a geometrical linear space is defined by a set of real or complex vectors that represent a set of real or complex signals [35] [36].

This approach is useful in signal analysis because it allows a number of mathematically

equivalent problems to be treated with a common notation and a common solution.

Signal spaces are Hilbert spaces [37]. A Hilbert space is a vector space on which an inner product is defined such that all norms are finite. The set of all square-integrable (L2) real signals is a real Hilbert vector space under pointwise addition and multiplication by scalars in ℝ, since any finite linear combination of L2 signals is an L2 signal. The equivalent

³ This one-to-one correspondence justifies our definition of u as the Fourier domain variable.


statement is true about complex signals.

The dimensionality of the signal space is defined by the number of orthonormal basis signals. The set W[0,x] of all real or complex L2 signals with support in a finite physical interval [0, x] is a signal space of countably infinite dimension [37]. This is a statement of the fact that a signal bounded in the physical domain has an infinite number of Fourier components. Similarly, the set WB of all real or complex signals whose Fourier domain is strictly bandlimited to a band B is a signal space of countably infinite dimension. The orthonormal basis functions here are an infinite set of (sin x)/x signals in the physical domain.

No signal can be limited in both domains. However, it is possible for functions to be

bandlimited in the Fourier domain and approximately limited in the physical domain, or

limited in the physical domain and approximately bandlimited in the Fourier domain. A

signal is defined to be approximately limited to an interval within a domain if less than a specific fraction ε of its energy is outside that interval, where ε → 0 as the interval is lengthened. The Landau-Pollak Theorem [38] shows that the dimensionality of the subspace of all signals approximately limited in the physical domain to x0 and bandlimited in the Fourier domain to B is finite, in the sense that a small fraction of the signal's energy is outside a signal subspace of dimension (2Bx0 + 1).
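
A quick numeric illustration of this dimensionality bound, for an arbitrarily chosen duration and bandwidth (not values from the text):

    # A signal confined to roughly 1 ms and bandlimited to 10 kHz needs only about
    # 2*B*x0 + 1 = 21 degrees of freedom (e.g. Nyquist-rate samples).
    B, x0 = 10_000.0, 1.0e-3
    print(2 * B * x0 + 1)    # 21.0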

The detection and isolation process can now be stated in similar geometrical terms. If a particular signal is to be isolated from a set of interfering signals, the detector need only search in the subspace defined by the desired signal. A matched filter optimally designed to isolate a given signal simply projects the input signal space onto a space defined by the desired signal by performing an inner product. According to the Theorem of Irrelevance [37], this operation results in no loss of information about the desired signal and optimally reduces the interference from other signals. This means that only signals that are orthogonal to each other in signal space can be isolated with zero interference. Signals that are almost orthogonal to each other have a small inner product, and can be isolated with a small amount of interference. This concept of orthogonality is the correct interpretation of "distinct and separable".

Additionally, the signal space interpretation of the detection process leads to an important statement about the isolation capabilities of a system. In order to perfectly isolate a given set of signals, a system must support a dimensionality at least as great as the dimension of the signal set. The space defined by the response functions of the matched filters must be equal to that of the signals that they have been designed to isolate. What this means is that it is impossible to absolutely isolate (with zero interference) signals that are limited in either domain using any realizable sensor. Of course, unless the amount of interference introduced is significant, the signals may still be distinguishable, and the


information can be interpreted correctly.

The amount of interference introduced in the detection process can be quantified. The interference noise power at the output of a matched filter is the integrated squared magnitude of the interfering signals after being projected into the signal space of the matched filter.

To understand how this relates to conventional signal analysis, consider the system shown

in Figure 4-7.

Figure 4-7: The basic channel model for a simple system: information signal I(s), interferer G(s) and noise N(s) pass through the system response F(s) to give the output R(s).

An information signal I(s) and an interfering signal G(s) are the inputs to a system

designed to isolate I(s). The system's Fourier response is F (s), and so the output R(s) is

given by,

R(s) = (I(s) + G(s))F (s) +N(s)F (s) (4.3)

= I(s) + I(s) (F (s) � 1) +G(s)F (s) +N(s)F (s) (4.4)

where N(s) is the (thermal) noise spectrum in the Fourier domain. All of the terms except the desired I(s) add noise and distortion to the output of the system. The last term is the noise admittance of the system, but the second and third terms represent the interference outputs.

The isolation capabilities of the system determine the size of these interference outputs. The term G(s)F(s) is the cross-channel interference and I(s)(F(s) − 1) is the inter-symbol interference (ISI) within a signal. To eliminate ISI, the system channel response F(s) must be unity within the bands where I > 0, and zero elsewhere. This ISI term is significant if the system involves a sampling of the signal into discrete (digital) components. In this case, F(s) is a periodic, aliased spectrum and (F(s) − 1) can have positive values. Digital communication systems can be designed to give zero interference at the sampler output by enforcing that the signals satisfy the Generalized Nyquist criteria [36]. This basically requires each signal to be orthogonal to its translates by multiples of the sampling interval and also to all translates of the other signals. Of course, it is extremely unlikely that this condition will be satisfied for remote sensing systems, since the signals are externally generated.


The interference power at the output of a system is the squared-magnitude of the filtered interfering signals, integrated in the domain in which the desired signal is bounded, and over the same limits. For instance, the interference power at the output of a matched filter designed to isolate a signal bounded in the Fourier domain is equal to the power spectrum of all interfering signals, integrated over the bandwidth of the matched filter. Similarly, the interference power at the output of a system designed to isolate a signal bounded to [0, x] in the physical domain is the total power of the filtered interfering signals within the physical limits [0, x].

4.3.3 Information Rate

This is a measure of the rate at which the system transfers information symbols between each origin-destination pair. It is most familiarly associated with the data rate of communication systems; the revisit rate is the corresponding parameter for imaging systems. The system must deliver information symbols at a rate that matches the characteristic bandwidth of the source or the end-user. For instance, a high-speed cruise missile must be tracked with a high sampling rate. Similarly, a GPS receiver on a high-dynamic aircraft must receive information from the satellites at a rate that is sufficient to allow navigation solutions to be updated very quickly.

While most information markets require a source to be sampled repeatedly, there are some that involve the transfer of only a single symbol from each source. These are "trigger" markets, in which there is a demand for potential sources to be interrogated until a particular event occurs, triggering a response. The system must be able to notify the end-users of this occurrence within acceptable time bounds that are, once again, related to the dynamics of the problem. For these trigger markets the corresponding quality-of-service parameter is time rather than rate. For example, a missile warning system must be able to detect launches originating from particular ground locations within a time period that allows an effective defensive response. Note that in many cases, the triggering of sources in these trigger markets creates a whole new market, corresponding to a new set of sources with different demands. The early warning mission (in which the sources are ground cells that have the potential for launch) triggers a missile tracking mission that requires rapid sampling of a target. These two missions may or may not be addressed by the same system.

4.3.4 Information Integrity

This measures the error performance of the system. The integrity is most commonly represented by the probability of making an error in the interpretation of a signal based on noisy observations. For communications, the integrity is measured by the bit error rate.


The integrity of a search radar system is characterized by both the probability of a missed detection and the probability of a false alarm, since each constitutes an error in interpretation. Equivalently, the integrity of an imaging system could be measured by the pixel error density within the image.

The error performance of data collection and transfer systems is a critical issue in their design and operation [36]. A detector uses an observation of the signal plus noise to make a decision about each information symbol. Generally, the probability of erroneously interpreting an information symbol depends on the energy in the symbol. An error can occur if noise or interference degrades the signal in such a way that an incorrect decision is made about the observation. These errors can be as benign as a single bit error in a communication message, or as consequential as a false alarm for an early warning radar system.

The probability of error for a single measurement is the likelihood that the interfering and thermal noise power exceeds some threshold, equal to the difference between information data values. Consider, for example, the simplest case of an amplitude modulated binary communication channel (binary PAM). The two data values {0, 1} are represented by two different power levels of the passband carrier wave. The separation between these power levels is d watts. If the noise component of the signal has a power level greater than d/2 watts, a data symbol {0} can appear in the observation as a {1}, or vice-versa. The probability of an error in a single bit is then the probability that the noise power is greater than the separation between data symbols. This is equal to the area under the noise probability density function from [d/2, ∞], as shown in Figure 4-8.

Figure 4-8: The probability of error is the integral under the noise probability density function g(x) from d/2 to ∞.

For generality, this can be placed in the context of the signal space representation of signals introduced in Section 4.3.2. Consider an information transfer system that makes an observation, known to be equal to one of two potential symbols, but distorted by noise. The two possible information symbols have signal space representations s1 and s2, such that the vector between them is (s1 − s2). Define the length of this vector, equal to the separation between the symbols, to be d. The task of the detector is to determine which of the two possible symbols is the correct interpretation of the noisy observation. The decision rule used is based on the position of the observed signal projected into the same signal space. In general, the observation will not be coincident with either of the two possible symbols, due to the presence of additive noise. The actual position in the signal space of the observation will be equal to the position of the underlying information symbol, plus the geometrically correct vector representation of the noise, according to standard rules of vector addition. Usually the symbol closest to the observation, among all those that are possible, is chosen by the detector. For Maximum Likelihood (ML) detection with hard decisions, this corresponds to a decision threshold along the bisector between the two possible signals, at a perpendicular distance of d/2 from each. A decision error will therefore be made if the projection of the noise in the direction of (s1 − s2) is greater than d/2. For additive noise with a probability density function g(x), the probability of this error occurring is,

    Pr(error) = ∫_{d/2}^{∞} g(x) dx                                  (4.5)

If there are more than two possible information symbols from which to choose, the net error probability for a given symbol is the sum of the probabilities calculated from Eqn 4.5 for each value of d/2 corresponding to the different pairs of symbols. This can be approximated with the Union Bound estimate [36], in which the assumption is made that the closest pairs of symbols dominate the sum. If a given symbol has Kmin nearest neighbors at a common distance d, then an estimate for the error probability is,

    Pr(error) ≈ Kmin ∫_{d/2}^{∞} g(x) dx                             (4.6)

When g(x) is stationary white Gaussian noise with zero mean and variance σ², Eqn. 4.6 becomes

    Pr(error) ≈ Kmin · (1/(σ√(2π))) ∫_{d/2}^{∞} exp(−x²/2σ²) dx      (4.7)
              ≈ Kmin · (1/2) erfc( d/(2σ√2) )                        (4.8)
              ≈ Kmin · Q0( d²/4σ² )                                  (4.9)
              ≈ Kmin · Q0( d²/2N0 )                                  (4.10)

where Q0() is the Gaussian complementary distribution function, often simply called the "Q-function", and N0 = 2σ² is the average noise power per Hertz. Note that the above equations represent the symbol error probability. In all but the simplest communication schemes, each symbol represents more than a single bit of information. For example, in the Quadrature Phase Shift Keying (QPSK) modulation scheme used in most satellite communication applications, the phase of the carrier wave is varied to transmit information, such that each of four possible equal-power symbols represents a pair of data values, as shown in Figure 4-9. In most well designed signal sets, adjacent symbols differ by only a single information bit. In these cases, an error in the interpretation of a multi-bit symbol results in only a single bit error. If each symbol represents m bits, then the probability of bit error, in terms of Eb/N0, is,

    Pr(bit error) ≈ (Kmin/m) · Q0( (d²/4Eb) · (2Eb/N0) )             (4.11)
                  ≈ Kb · Q0( 2cEb/N0 )                               (4.12)

where Kb = Kmin/m is the average number of nearest neighbors per bit, and c = d²/4Eb is defined as the nominal coding gain [37], a measure of the improvement of a given signal set compared to uncoded binary PAM, in which c = 1. For QPSK, there is no coding gain since d² = 4Eb, as shown in Figure 4-9. Also Kb = 1, and so the bit error rate (BER) is,

    BER ≈ Q0( 2Eb/N0 )                                               (4.13)

Additional coding gain can be attained with error-correction coding, which involves further separating the symbols in signal space. For example, QPSK with half-rate Viterbi error correction has c = 2, such that

    BER ≈ Q0( 4Eb/N0 )                                               (4.14)
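The scaling of the bit error rate with Eb/N0 in Eqns 4.13 and 4.14 is easy to tabulate. The sketch below is only an illustration of those two expressions: it adopts the Q0 convention used here (a squared, SNR-like argument, so that Q0(y) = (1/2) erfc(√(y/2))) and treats the half-rate Viterbi case as a pure nominal coding gain, which is an idealization.

    from math import erfc, sqrt

    def Q0(y):
        # Gaussian complementary distribution with a squared argument, chosen so that
        # Q0(d^2 / 4 sigma^2) reproduces (1/2) erfc(d / (2 sigma sqrt(2))) in Eqn 4.8.
        return 0.5 * erfc(sqrt(y / 2.0))

    for ebno_db in range(0, 12, 2):                  # Eb/N0 in dB (illustrative range)
        ebno = 10.0 ** (ebno_db / 10.0)
        ber_uncoded = Q0(2.0 * ebno)                 # Eqn 4.13: QPSK, c = 1
        ber_coded = Q0(4.0 * ebno)                   # Eqn 4.14: QPSK + half-rate Viterbi, c = 2
        print(f"Eb/N0 = {ebno_db:2d} dB : uncoded BER = {ber_uncoded:.2e}, "
              f"coded BER = {ber_coded:.2e}")

The nominal coding gain appears as a fixed horizontal shift of the BER curve, which is why the integrity of the detection process is said to scale exponentially with Eb/N0.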

Note that g(x) in Eqn 4.5 is the probability density function of the noise signal at the input to the detector that makes the decisions. This may differ from the density function of the noise at the input to the antenna, due to the effects of filters and amplifiers upstream of the detector. For example, consider a simple radar system. A positive radar detection is declared if the envelope (complex amplitude) of the received signal exceeds some predetermined threshold. A radar detector therefore includes an envelope detector, to measure the envelope of the signal, and a threshold detector to actually make the decisions. If the noise entering the envelope detector has a Gaussian probability density function with zero mean and variance σ², the probability density function of the noise at the output of the envelope detector is a Rayleigh distribution [39],

    g(x) = (x/σ²) exp(−x²/2σ²)                                       (4.15)


Figure 4-9: The signal space representation of QPSK. The four equal-power symbols [0,0], [0,1], [1,1] and [1,0] differ in phase while their amplitude is constant; each symbol carries two bits, with Eb = d²/4 and energy per symbol d²/2.

In this case, the probability of error, or false alarm, is given by Eqn 4.6 with Kmin = 1 and d/2 = vT, the threshold voltage, such that,

    Pr(false alarm) = ∫_{vT}^{∞} g(x) dx = exp(−vT²/2σ²)             (4.16)
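As a quick numerical check of Eqn 4.16 (an illustrative aside, with an arbitrary threshold), the closed-form false-alarm probability can be compared against a Monte Carlo estimate of how often the envelope of Gaussian noise crosses the threshold.

    import numpy as np

    rng = np.random.default_rng(0)

    sigma = 1.0        # standard deviation of the Gaussian noise entering the envelope detector
    v_t = 3.0          # assumed threshold voltage (illustrative value only)

    # Closed form from Eqn 4.16: Pr(false alarm) = exp(-vT^2 / (2 sigma^2))
    p_fa_analytic = np.exp(-v_t**2 / (2.0 * sigma**2))

    # Monte Carlo check: the envelope of zero-mean Gaussian in-phase/quadrature noise
    # is Rayleigh distributed, as in Eqn 4.15.
    n = 1_000_000
    envelope = np.abs(rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n))
    p_fa_simulated = np.mean(envelope > v_t)

    print(f"analytic  Pr(false alarm) = {p_fa_analytic:.4e}")
    print(f"simulated Pr(false alarm) = {p_fa_simulated:.4e}")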

4.3.5 Information Availability

The availability measures the instantaneous probability that information is being transferred through the network between a given number of known and identified origin-destination pairs at a given rate and integrity. The availability is a measure of the mean and variance of the isolation, rate and integrity supportable by the system, and as such is sensitive to worst-case scenarios.

Note that availability has a functional definition; it is the probability that the system can instantaneously perform specific functions. In this way, the availability is not a statement about component reliabilities. At any instant, the network architecture is defined only by its operational components, and so all networks are assumed to be instantaneously failure-free. Should a component fail, the network changes by the removal of that component. Generally, the capabilities of the new network will be different from those of the previous network. For a given network, the supportable isolation, rate and integrity, and hence the availability, can vary due to:

• The number of users simultaneously accessing the limited resources of the system. The availability of service to a given user will be poor if the total number of users approaches or exceeds the nominal operating capacity of the system.


• Viewing geometry and coverage variations. A system that cannot support continuous coverage of a region will have a low availability for real-time applications. The availability of high-accuracy navigation solutions (SEP ≤ 16 m, where the Spherical Error Probable is the sphere containing 50% of observations) using GPS is dependent on a favorable viewing geometry to several satellites. Spatial and temporal variations in this Geometrical Dilution of Precision (GDOP) dominate the operational availability of GPS. Imaging applications often require specific viewing geometries for each image, effectively limiting the availability of a LEO remote sensing system to those times that such a geometry occurs.

• Range variations due to the different elevation angles between the users and the satellites. This is especially true for LEO communication systems in which the range, and hence free space loss, changes dramatically as the satellite passes overhead.

• Signal attenuation from blockage, rain or clouds. Clearly, atmospheric attenuation can vary geographically and temporally, and the impact on the availability of service can be profound. Visible or ultra-violet imaging is impossible through cloud cover, limiting the availability of such systems. A mobile user of the Big-LEO communication systems will be very susceptible to signal fade from blockage, either by buildings or foliage.

• Statistical fluctuations due to noise or clutter. These random variations may be significant if the system is operating close to the limits of its capabilities.

4.4 Calculating the Capability Characteristics

These characteristics define the Capability of the system, that being the availability of providing an information transfer service between a given number of identified O-D pairs at a given rate and integrity. The Capability characteristics are probabilistic measures. The availability is a function of three variables: rate, integrity, and the number of users. For satellite applications, the information rate is usually a deterministic design decision. However, the integrity and the number of simultaneous users can be considered random variables, the former being sensitive to any variations in the signal power or noise, and the latter being dependent on the market. While it is often difficult to predict the statistics of the market, probability distribution functions for the signal power and the noise power can be predicted reasonably well from the statistics of the satellite's orbit and elevation angle, probabilistic blockage or rain attenuation models, and component performance specifications.


Calculating the Capability characteristics therefore involves tracking the statistics of the information signals delivered to the end-users. The network representation of satellite systems provides the framework for these calculations. Statistical distributions can be propagated through a network sequentially, calculating the changes to the distribution functions as a result of the transitions through each node along a path from source to sink.

Consider an arbitrary system component with input signals X and Y and an output signal Z, as shown in Figure 4-10. X and Y can be treated as random variables with distribution functions F1(x) and F2(y), and probability density functions f1(x) and f2(y), such that,

    F1(x) = Pr(X ≤ x) = ∫_{−∞}^{x} f1(v) dv                          (4.17)
    F2(y) = Pr(Y ≤ y) = ∫_{−∞}^{y} f2(v) dv                          (4.18)

Figure 4-10: A simple system with input signals X and Y, and an output signal Z.

If the output z = g(x) is a function of only one input x, then the random variable Z has a distribution function Fz(z) given by,

    Fz(z) = Pr(Z ≤ z) = Fz(g(x))                                     (4.19)
          = Fx(x)                                                    (4.20)

Generally, the output is a function of more than one input, such that z = g(x, y). If X and Y are independent,

    Fz(z) = Pr(Z ≤ z) = ∬_{g(x,y) ≤ z} f1(x) f2(y) dx dy             (4.21)

Provided the "transfer functions" g() of each component are known, these equations describe how to propagate the probability distribution functions for the signal power and noise power through the network. The probability distribution function for the integrity of decisions made at a detector can then be evaluated, again using Eqn 4.21, with the two random variables being Eb and N0.
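In practice, Eqn 4.21 is often evaluated by Monte Carlo sampling rather than by direct integration. The sketch below is a simplified stand-in for such a calculation: the lognormal signal and noise distributions are invented for illustration (they are not the link models used later), but they show how samples are pushed through a component "transfer function" to produce a distribution for the integrity.

    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(1)
    n = 100_000

    # Assumed input distributions (illustrative only): received energy per bit Eb and
    # noise density N0, both lognormal to represent attenuation and noise variability.
    eb = rng.lognormal(mean=np.log(1.0e-16), sigma=0.3, size=n)   # joules per bit
    n0 = rng.lognormal(mean=np.log(1.0e-17), sigma=0.2, size=n)   # watts per hertz

    # Component "transfer function": form Eb/N0, then map each sample to a BER using
    # Eqn 4.13 (uncoded QPSK) with Q0(y) = 0.5 * erfc(sqrt(y / 2)).
    ebno = eb / n0
    ber = 0.5 * erfc(np.sqrt((2.0 * ebno) / 2.0))

    # Empirical integrity distribution, e.g. the availability of a BER of 1e-3 or better.
    availability = np.mean(ber <= 1.0e-3)
    print(f"median Eb/N0    = {10 * np.log10(np.median(ebno)):.1f} dB")
    print(f"Pr(BER <= 1e-3) = {availability:.3f}")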

Note that some networks include several detectors that make interpretations of the information at intermediate points along the path from source to sink. Any information symbols that are interpreted erroneously by an intermediate detector will be received in error at the next detector before any interpretation is even performed. The net error probability (integrity) is the combination of the errors incurred at each detector. The probability distribution of these errors is once again calculated using Eqn 4.21, where the random variables are now the error probabilities for the decisions at each detector.

The probability distributions for the integrity of information transfers between a given number of identified O-D pairs at a variety of different rates can thus be calculated. These distributions define the availability of providing this information transfer service.

4.4.1 Example Capability Calculation for a Ka-Band Communication Satellite

Consider the information flow through a typical satellite from one of the proposed Ka-band communication systems. Figure 4-2 shows a possible network diagram for one such satellite. The modeled system parameters are given in Table 4.1, and correspond closely to those of a single satellite from the Spaceway system proposed by Hughes Communications Inc. [40], [41], [42].

Starting at the left hand side of this diagram, consider first the uplink from the users to the satellite. The modeled satellite employs a TDM/FDMA scheme for each of 48 uplink spot beams. This means that each user transmits information within a specified frequency band, and at specified times. This isolates the different users of each spot beam. Note that since the maximum transmitted power of the user terminals is limited, the energy per symbol depends on the user transmission rate.

Each signal then passes through the atmosphere, which attenuates the power (and introduces noise) by varying degrees depending on the local climate, the frequency of the RF carrier, and the elevation angle of the line of sight. The probability distribution for the likely attenuation can be predicted reasonably well using the familiar Crane rain attenuation model [43]. There is additional attenuation from free-space loss, again with a probability distribution due to the distribution of elevation angles for users within the field of view. The power of the signal arriving at the satellite antenna therefore has a statistical distribution.

Noise power from thermal noise and cross-source interference (imperfect signal isolation) leads to small average signal-to-noise ratios. The power of each signal entering the digital signal processor (DSP) is therefore weak and varying. The DSP must detect the information symbols, and reroute them to their destination. Recall that the integrity of the detection process scales exponentially with Eb/N0. There is therefore a statistical distribution for the BER of each signal leaving the DSP, and the distribution will be different for different


user information rates.

Table 4.1: System parameters for a modeled Ka-band communication satellite

Miscellaneous system parameters
  Mission                         Broadband communications
  Market                          Western European residential users
  Number of satellites            1
  Orbit                           25°E GEO

Uplink parameters
  Multiple access scheme          Spot beams + TDM/FDMA
  Modulation                      QPSK, 1/2-rate Viterbi error correction
  Frequency                       30 GHz
  USAT EIRP                       44.5 dBW
  Number of uplink spot beams     48
  Satellite antenna gain          46.5 dB
  System temperature              27.6 dBK
  Losses                          1.5 dB

Downlink parameters
  Multiple access scheme          Spot beams + TDMA
  Modulation                      QPSK, 1/2-rate Viterbi error correction
  Frequency                       20 GHz
  Number of downlink spot beams   48
  Channels per beam               1
  Channel bandwidth               125 MHz
  Channel capacity                92 Mb/s
  Satellite EIRP                  59.5 dBW
  USAT antenna gain               43 dB
  System temperature              24.4 dBK
  Losses                          1.5 dB

The downlink involves a single TDM wideband carrier for each of the 48 spot beams. The net information rate of this downlink is the sum of the rates for all users within the beam. This means that the energy per symbol of the downlink stream is a function of both the user information rate and the number of users. A larger number of users at a higher rate per user results in a lower energy per symbol.
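This dependence of the downlink energy per symbol on the aggregate traffic can be made concrete with a conventional link-budget sketch. The calculation below is illustrative only: it uses the downlink values of Table 4.1, assumes a free-space loss of roughly 210 dB for a 20 GHz GEO path, and ignores rain attenuation and interference, so it is not the statistical model described above.

    import math

    # Downlink quantities from Table 4.1 (decibel values), plus assumed path parameters.
    eirp_dbw = 59.5          # satellite EIRP
    g_usat_db = 43.0         # user terminal antenna gain
    t_sys_dbk = 24.4         # user terminal system temperature
    losses_db = 1.5          # implementation losses
    k_dbw = -228.6           # Boltzmann's constant, dBW/K/Hz
    fsl_db = 210.0           # assumed free-space loss for a 20 GHz GEO slant range

    def downlink_ebno_db(users_in_beam, rate_per_user_bps):
        # Eb/N0 of the shared TDM downlink carrier as the per-beam traffic grows:
        # Eb/N0 = EIRP - FSL + G - 10log10(Tsys) - losses - 10log10(k) - 10log10(R_total)
        total_rate = users_in_beam * rate_per_user_bps
        rate_db = 10.0 * math.log10(total_rate)
        return eirp_dbw - fsl_db + g_usat_db - t_sys_dbk - losses_db - k_dbw - rate_db

    for users in (10, 30, 60):
        print(f"{users:3d} users per beam at 1.544 Mb/s: "
              f"Eb/N0 = {downlink_ebno_db(users, 1.544e6):.1f} dB")

Each additional user lowers the energy per symbol of the shared carrier, which is the mechanism by which the availability eventually falls below requirements as the beam approaches its 92 Mb/s capacity.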

The downlink signal is also attenuated by the atmosphere, free space loss and interference. Individual end-users must demultiplex the received signal, extracting only the parts relevant to them. Here, isolation of the correct information signal depends on the stability of the oscillators in the user terminals. Extraction of the wrong information is effectively a multiple-symbol error. The subsequent interpretation of information symbols is sensitive to the received energy per symbol. Recall, however, that some symbols were interpreted erroneously by the satellites. These symbols are received at the user terminals in error before any interpretation is even performed. The net symbol error rate is therefore a combination of the errors incurred at the satellite and at the user terminals.

In this example, the rate of information transferred through the system for each O-D pair is a design decision. The integrity of that information, as measured by the symbol error rate, has a statistical distribution depending on the number of users and the rate at which they transmit. The resulting availability of service varies across the range of operating conditions. The Capability characteristics for this network are shown in Figure 4-11, for two different rates and two different numbers of users. The Capability characteristics shown here were calculated using elevation angle statistics for users distributed across Western Europe, accessing a Geostationary satellite located at 25°E longitude.

These characteristics can be used to determine the maximum number of users that the system can support at a particular rate and integrity. Note that the availability for 3000 users at T-1 rates (1.544 Mbit/s) is below 95% over all BERs of interest (BERs of 10^-9 or 10^-10 are generally considered acceptable for broadband services). This is a result of the demand exceeding the downlink capacity of the satellite. Users must then be queued, reducing their effective availability.

Figure 4-11: Capability characteristics for a modeled Ka-band communication satellite: availability versus integrity for 2500 and 3000 users, at user rates of 3.86×10^5 b/s and 1.544×10^6 b/s.

4.5 Generalized Performance

The formulation of the Capability characteristics allows us to calculate the generalized Performance of satellite systems. Performance is perceived in terms of satisfying the demands of a market. This demand is represented by a set of functional requirements, specific to an individual information transfer. The requirements specify minimum acceptable values for:

• Signal isolation
• Information rate
• Information integrity
• Availability of service at the required isolation, rate and integrity.

Since the definition of availability implicitly includes values for the other characteristics, these requirements simply enforce that, for a specified level of isolation, rate and integrity, the availability of service exceeds some minimum value. For instance, consider the market for mobile voice communication. Typically, the requirement is that individual users have at least a 95% probability of being able to transmit and receive from small, mobile terminals at a rate of no less than 4800 b/s with a maximum BER of 10^-3. Note that the isolation requirement enforces that the system be able to address each mobile, individual user within the distributed market. Also note that these functional requirements make no reference to


the size of the market being served; they simply specify the quality of service that must be provided to the users.

Performance should always be defined relative to these requirements. To be unambiguous and quantifiable, Performance should represent the likelihood that the system can satisfy the functional requirements for a certain number of users from a given market. In short,

The Performance of a system within a given market scenario is the probability that the system instantaneously satisfies the top-level functional requirements that represent the mission objectives.

It is important to note that Performance is distinct from Capability, although the two are related. The Capability characterizes a particular network's ability to transfer information between a given number of identified users at different rates and integrities. There is no implicit reference to requirements within the definition of Capability, and component reliabilities are not reflected. However, a measure of Performance should include all likely operating states, and so reliability considerations are necessary. The existence of component failures means that every system has many possible network architectures corresponding to failures in different components. Each network, or system state, is defined only by the components that are operational. Each of these states will have different capabilities. By specifying requirements on isolation, rate and integrity, the Capability characteristics can be used to determine the availability of service offered by each state, for different numbers of users. If the supported availability exceeds the minimum acceptable availability specified by the functional requirements, that system state is deemed "operational". The mathematical formulation of the generalized Performance follows immediately,

The generalized Performance for a given market scenario is simply the probability of being in any operational state.

The Performance can therefore be improved either by reducing the impact of any component failures that could occur, or by improving the component reliabilities so that these failures are less likely. The former approach effectively increases the number of operational states, while the latter reduces the probability of transitioning to a failure state. The impact of component failures, blockage, or rain/cloud cover can be reduced if there are redundant information paths. This redundancy can be provided by distributed architectures featuring multi-fold coverage. For example, a mobile communication user can select, from all of those in view, the operational satellite with the clearest line of sight. This can reduce service outages and improve availability. This concept extends across almost all applications.


4.5.1 Time Variability of Performance

The Performance can be quantified for each year over the lifetime of the satellite system to give the Performance profile. The Performance of the system generally changes in time as a result of three factors:

• There are typically different rate, integrity and availability requirements placed on a satellite system at different times within its life. Consequently, the functional requirements are properly specified as an availability profile.

• System components have finite failure probabilities that generally increase in time; once on orbit, a satellite system is difficult to repair. There is a higher probability of being in a failed state with a degraded availability late in the lifetime.

• The number of users targeted by the system will usually change over the lifetime. As shown in previous sections, the supported availability of a system is a strong function of the number of users.

These trends can compound to give large variations in the Performance over the system lifetime.

4.6 Calculation of the Generalized Performance

Since the context of its definition includes the notion of state probabilities, the calculation of the generalized Performance is well-suited to Markov modeling techniques that determine the probability of being in any particular state at a given time. In general, Markov calculations rely on the fact that the state probabilities Ps(t1) at some future time t1 depend only on the current state probabilities Ps(t0) and on the rate of state transitions [44],

    Ps(t1) = A · Ps(t0)                                              (4.22)

where A is the state transition matrix. Determination of this matrix requires the characterization of each state as an operational state or a failure state, since there are no transitions from failure states. Herein lies the only complication in calculating the generalized Performance compared to conventional Markov modeling. In order to ascertain whether a state is operational, the Capability characteristics of that state must be calculated and compared to the requirements. Since this is non-trivial, generation of the state transition matrix involves a large amount of computation, and in most cases dominates over the computations involved in the actual solution of Eqn. 4.22. The complexity of the Performance calculations therefore grows linearly with the number of possible states, since each must be investigated. However, the number of possible states increases geometrically with the number of failure transitions and the number of system components. For this reason, the models usually include fewer than ten failure transitions from a subset of the most critical system components. State aggregation techniques can also be used to reduce the number of computations.
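A minimal discrete-time version of Eqn 4.22 is sketched below. The three-state model (nominal, degraded-but-operational, failed) and its annual transition probabilities are invented purely for illustration; they simply show how the state probability vector is propagated and how the Performance emerges as the total probability of the operational states.

    import numpy as np

    # Illustrative three-state Markov model: state 0 = nominal, state 1 = degraded but
    # still meeting requirements, state 2 = failed (absorbing). Transition probabilities
    # per year are assumed values, not the satellite model used in the text.
    A = np.array([
        [0.90, 0.00, 0.00],   # remain nominal
        [0.07, 0.93, 0.00],   # nominal -> degraded; degraded persists
        [0.03, 0.07, 1.00],   # transitions into the failed state
    ])

    operational_states = [0, 1]          # states whose Capability satisfies the requirements
    p = np.array([1.0, 0.0, 0.0])        # start fully operational

    for year in range(1, 11):
        p = A @ p                        # Eqn 4.22: Ps(t1) = A . Ps(t0)
        performance = p[operational_states].sum()
        print(f"year {year:2d}: Performance = {performance:.3f}")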

4.6.1 Example Performance Calculation for a Ka-Band Communication Satellite

To illustrate calculation of the generalized Performance, return once again to the broadband communication system of Figure 4-2 and Table 4.1. For demonstration purposes, let us assume that the users of the system require an availability of at least 98% for communication at a data rate R = 1.544 Mbit/s and a BER of 10^-9. Using reasonable values for the failure rates of the most critical system components, the failure states corresponding to a violation of these requirements and the associated probabilities can be calculated. For this simple system, there are basically two different types of failure state: those that correspond to degraded payload operations that violate the requirements, and those that constitute a total loss of the satellite. These two scenarios can be modeled separately to simplify the analyses. Consider first the failure states corresponding to degraded operation of the satellite payload.

The most failure-prone components along the primary information path through the network are the satellite DSPs and the satellite transmitters. The system shown in Figure 4-2 features 48 channels for each of these; one pair for each spot beam, with cross-connections to remove serial failure modes. Note that this is not representative of the proposed Ka-band systems, which have multiple, redundant DSPs and transmitters. However, since this is only an example calculation to demonstrate the process, there is some merit in using a non-redundant configuration: it minimizes the number of failure and operational states and illustrates the impact of non-redundant designs.

Typical failure rates for a conventional communications payload, as given in SMAD [3], are λ ≈ 0.052 per year. This value would seem reasonable for a single transmitter channel using solid state power amplifiers. A higher failure rate of λ = 0.1 is chosen for each DSP channel, since it represents a new satellite technology. However, it is arbitrarily assumed that only one out of every five DSP failures is unrecoverable. The effective channel failure rates used in the calculation of Performance for this system are therefore λTX = 0.052 and λDSP = 0.02.

If the satellite targets 2500 users with the stated set of requirements, the resulting failure states and their probabilities over the system lifetime are shown in Figure 4-12.


Figure 4-12: Failure state probabilities for a modeled Ka-band communication satellite payload (R = 1.544 Mbit/s, BER = 10^-9, Av = 98%). The failure states are FS1 = (7 Tx); FS2 = (1 DSP); FS3 = (1 DSP, 1 Tx); FS4 = (1 DSP, 2 Tx); FS5 = (1 DSP, 3 Tx); FS6 = (1 DSP, 4 Tx); FS7 = (1 DSP, 5 Tx); FS8 = (1 DSP, 6 Tx); Pf denotes the net probability of system failure.


There are eight unique failure states. Seven of these states feature failures in a single DSP channel and up to six transmitter channels. The remaining state is characterized by seven transmitter failures. System failure occurs when either a DSP fails or seven transmitters fail, whichever occurs first. The system can tolerate up to six transmitter failures by allocating extra traffic through the remaining channels, up to the bandwidth limit of 92 Mbit/s. A DSP channel failure results in an unavoidable loss of throughput, since it is assumed that multiplexed data streams from the satellite receivers cannot be split up and redistributed upstream of the remaining DSP channels.

Notice from Figure 4-12 that, with the failure rates used, the probability of system failure is very high, approaching unity after only three or four years. This is dominated by the reasonably likely probability of a single DSP failure. This is the reason for implementing redundancy in all information paths through a network. The current plans for Spaceway include fully redundant cross-connected DSPs and 64-for-48 redundancy in the transmitters. Such levels of hardware redundancy, mated with technological improvements that reduce the component failure rates, result in a very small probability of payload failure (less than 0.1 over 10 years). It can be assumed that a sensible design would feature such redundancy, and at least for this example, we can ignore the effects of degraded payload operations.

The second type of system failure corresponds to a total loss in operational capability of the satellite. This satellite vehicle failure, SVF, can occur when the support modules, defined in Section 4.2.2, fail to provide the functional modules with essential resources. For example, the power and propulsion subsystems, the guidance and navigation subsystem (G&N), and the spacecraft control computer (SCC) must all work under normal operations. Calculating the probability of SVF simply involves building a simple model of the satellite bus resources. For this simple example, the spacecraft was modeled to include two parallel SCCs, two G&Ns, and an integrated bus module representing propulsion, power and structural components. One out of every ten failures in the G&N and SCC is assumed to be unrecoverable. The equivalent channel failure rates, again taken from SMAD [3], are λSCC = 0.0246, λG&N = 0.0136, and λbus = 0.072, all per year. The resulting probabilities of the SVF modes are shown as a function of time in Figure 4-13.
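Because the bus module is single-string while the SCCs and G&Ns are two-for-one redundant, the overall SVF probability can be approximated with simple exponential reliability expressions. The sketch below is such an approximation under the stated effective failure rates (independent failures, no repair); it is a shortcut rather than the full failure-state enumeration behind Figure 4-13.

    import math

    # Effective unrecoverable failure rates per year, from the example above.
    lam_scc = 0.0246
    lam_gn = 0.0136
    lam_bus = 0.072

    def p_svf(t_years):
        # Approximate probability of satellite vehicle failure by time t, assuming
        # independent exponential failures: the bus is single-string, while the SCC
        # and G&N each fail only if both redundant units have failed.
        p_unit = lambda lam: 1.0 - math.exp(-lam * t_years)
        p_scc_pair = p_unit(lam_scc) ** 2
        p_gn_pair = p_unit(lam_gn) ** 2
        p_bus = p_unit(lam_bus)
        p_survive = (1.0 - p_bus) * (1.0 - p_scc_pair) * (1.0 - p_gn_pair)
        return 1.0 - p_survive

    for t in (1, 5, 10):
        print(f"t = {t:2d} yr : P(SVF) = {p_svf(t):.3f}")

Consistent with Figure 4-13, the single-string bus dominates, and the cumulative failure probability passes 0.5 near the ten-year point.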

For this example, the failure probability is dominated by the probability of a bus failure. The overall probability of satellite failure exceeds 0.5 after 10 years. This value is perhaps high compared to existing Geostationary satellite systems, although several failures, attributed to SCCs and power subsystems, have recently been seen in similar designs after less than 5 years in orbit. The generalized Performance of this example satellite system is the complement of the net failure probability, dropping from unity at time 0 to a value just less than 0.5 after 10 years.


Figure 4-13: Failure state probabilities for a modeled Ka-band communication satellite (satellite vehicle failure modes). The failure states are FS1 = (2 SCC); FS2 = (1 G&N, 2 SCC); FS3 = (2 G&N); FS4 = (2 G&N, 1 SCC); FS5 = (1 Bus); FS6 = (1 Bus, 1 SCC); FS7 = (1 Bus, 1 G&N); FS8 = (1 Bus, 1 G&N, 1 SCC); Pf denotes the net probability of satellite vehicle failure.


4.7 The Cost per Function Metric

The Cost per Function (CPF) metric is perhaps the most important concept introduced within this analysis framework. Its definition is completely generalizable and straightforward:

The cost per function metric is a measure of the average cost incurred to provide a satisfactory level of service to a single O-D pair within a defined market. The metric amortizes the total lifetime system cost over all satisfied users of the system during its life.

The mathematical form of the metric follows immediately from this definition, and is the same across all applications,

    CPF = Lifetime Cost / Number of Satisfied Users                  (4.23)

Note that the number of users of the system is represented by the number of O-D pairs and the information symbols they exchange. For example, the number of users of a communication service is defined by the total number of bits transferred through the system. Equivalently, the number of users for a space based search radar is the total area that is searched. However, this alone is insufficient, since the definition of a market implicitly includes minimum requirements on the isolation between sources, and the rate and integrity of the information being exchanged. Users within the market are only satisfied when the information transfers occur between the correct O-D pairs at the correct rate and with the correct integrity. The metric is therefore based on the number of satisfied users, referring to the total number of symbols transferred through the system that satisfy requirements.

Before proceeding, it is helpful to introduce some examples of the CPF for different applications, in order to make the principal terms concrete. Table 4.2 summarizes the CPF for a mobile voice communication system [4], a broadband communication system [5], a surveillance radar system for the detection of ground moving targets (GMTI), and an astronomical telescope [6], [10].

Table 4.2: Cost per Function metrics for example applications

  Application               Cost per Satisfied User
  Mobile comms              Cost per billable voice-circuit minute
  Broadband comms           Cost per billable T1-minute
  GMTI radar                Cost per protected km² of theater
  Astronomical telescope    Cost per useful image

Both of the communication systems must support a quality of service that people will be willing to pay for; a service that is "billable". The market for voice requires symbol rates that can support a voice-circuit, defined as a full duplex voice connection of predetermined quality between two users. The quantity of these voice-circuits can be measured in minutes. For broadband service, the information rate must be higher, with multimedia applications requiring data rates around T1 (1.544 Mbit/s).

The surveillance radar must provide a level of service that allows a theater of a given size to be adequately protected. This requires that each square kilometer be "safety-checked" every minute. The total number of protected square-kilometers is then the total area protected each minute, multiplied by the number of these minute-long intervals in the lifetime of the system. As a result, the time dimension is not explicitly stated in the metric, but is implicit in the definition of "protected". Similarly, for the telescope, the concept of the "useful image" implies a satisfactory resolution, update rate and image integrity. Again, the time dimension does not appear explicitly, being swallowed by the "useful" construct.

In every case, the CPF has the dimensions of dollars-per-information-symbol. Recall however that information symbols represent users of the system. Therefore, although the dimensions of a symbol are strictly bits, a symbol generally has an interpretation, such as a voice-circuit or an image. Indeed, by definition, the dimensionality of the CPF metric must be equivalent to dollars-per-user.

4.8 Calculating the Cost per Function Metric

In order to calculate the Cost per Function metric, the impact of improved Performance on the cost of a system must be determined. If the value of Performance can be quantified, the system cost can be modified to correspond to a common level of Performance. The modified system cost should represent the total lifetime cost of a system, where lifetime cost is defined to be the total expenditure necessary to continuously satisfy the top level system requirements.

4.8.1 The System Lifetime Cost

The baseline cost Cs accounts for the design, construction, launch, and operation of the system components. This baseline cost does not, however, account for the expected cost of failures of system components. Since the system must satisfy requirements throughout its design life, expenditure will be necessary to compensate for any failures that cause a violation of this condition. These additional failure compensation costs Vf [3] must be added to the baseline system cost to give the total lifetime cost CL,

    CL = Cs + Vf                                                     (4.24)


As long as it is used consistently, any parametric cost model can be used to calculate the baseline system cost. Note that a premium is paid for more reliable components.

Since some costs are incurred at different times within the lifetime of the system, the cost is actually represented as a cost profile. This profile has to be modified to account for the time value of money. Costs incurred later in the system lifetime have a lesser impact on the overall system cost; a dollar is always worth more today than it is tomorrow, since capital expenditure can earn interest if invested elsewhere. The yearly costs are therefore discounted according to an assumed discount rate corresponding to an acceptable internal rate of return (IRR). In order to attract investors to commercial systems, the high risk associated with space ventures necessitates a high IRR of around 30% [45]. For government projects, a discount rate of 10% is often used in costing analysis [3].

The discounted cost profile cs(t) must then be integrated over the system lifetime to obtain the total baseline system cost Cs,

    Cs = Σ_life cs(t)                                                (4.25)

4.8.2 The Failure Compensation Cost

The failure compensation cost Vf can be estimated from an expected value calculation,

    Vf = E[Vf] = Σ_life ( Σ_states ps(t) · vs(t) )                   (4.26)

where ps(t) is the marginal probability of entering failure state s at time t and vs(t) is the sum of the economic resources required to compensate for the failure. Strictly, this calculation should involve all likely failure states. However, for complex systems this is prohibitive. A reasonable approximation is to truncate the model and include only the states representing the most likely failure modes.

Note that vs includes the costs of replacement satellites or components, launch costs, and any opportunity costs representing the revenue lost during the downtime of the system. The calculation of vs is architecture specific, and in most cases depends strongly on the nature of the failure mode. A failure mode and effects analysis (FMEA) may be required to estimate the replacement costs. Estimation of the opportunity costs is difficult, requiring a prediction of the failure duration. Despite these problems, vs can be estimated with reasonable confidence, using predictive methods for simple systems and simulations for more complex systems. Of course, good market models are also required.

The marginal probabilities ps(t) of the most likely failure states are the derivatives of the failure state probabilities Ps(t) that are evaluated during the Markov calculation for the generalized Performance. It is therefore through the failure compensation costs that Performance impacts the system lifetime costs. A higher Performance system will have a lower probability of transitioning to a failure state and consequently a lower expected value of the compensation costs.
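The expectation in Eqn 4.26 is simple bookkeeping once the marginal state probabilities and compensation costs are available. The sketch below uses invented numbers for both (a single dominant failure state, a fixed replacement cost, and an assumed year of lost revenue) purely to show the structure of the calculation.

    # Expected failure compensation cost, Eqn 4.26 (illustrative numbers only).
    years = range(1, 11)

    # Assumed marginal probability of entering the dominant failure state in each year.
    p_fail = {t: 0.07 * (0.95 ** (t - 1)) for t in years}

    # Assumed compensation cost if the failure occurs in year t ($M): a replacement
    # satellite and launch, plus roughly one year of lost revenue.
    replacement_cost = 180.0
    lost_revenue = 15.0
    v_s = {t: replacement_cost + lost_revenue for t in years}

    # Vf = sum over the lifetime of (sum over failure states of ps(t) * vs(t));
    # only one failure state is modeled here.
    V_f = sum(p_fail[t] * v_s[t] for t in years)
    print(f"expected failure compensation cost Vf = ${V_f:.1f}M")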

4.8.3 The System Capture

In a perfect market scenario, the system capture Mc (equivalent to the total number of satisfied users) can be chosen using the Capability characteristics and the Performance profiles. This would simply be the maximum number of users that the system could satisfy, given a set of requirements. Essentially, there is a trade-off between providing basic service to a large number of users or ensuring high Performance for a small number of users. For example, a system that can serve a small number of users with a high probability could instead target a larger number of users at a lower (but still satisfactory) availability. This strategy carries the risk of being more sensitive to component failures, essentially incorporating less performance redundancy. The optimum strategy depends on the expected revenue and the estimated compensation costs, and in particular the opportunity costs associated with dissatisfied customers.

Note however that it is usually incorrect to assume a perfect market, and it is then necessary to include a comparison to the size of the expected market. This step is called demand matching and is critical because a system cannot outperform the demand. Extra capacity beyond the market size brings no additional revenue or benefit, but may incur increased costs.

Comparing the design capacity to the size of the local demand and taking the minimum gives the achievable capacity of the system. This is defined as the market capture. Since the size of the local demand Q is almost always time and spatially varying, the demand matching calculation involves an integration over the entire coverage region for each year of the satellite lifetime, to give a market capture profile mc(t),

    mc(t) = Σ_market min[design capacity, Q]                         (4.27)

Recall that the Performance, and hence the failure compensation, depends strongly on the number of users addressed. Of course, the opportunity costs associated with lost revenue during downtime are also dependent on the number of addressed users. The market capture profile can be used to determine the maximum number of users that can be served at different times over the lifetime of the satellite.

A further complication arises if the system operation results in monetary income, as is the case for commercial communication systems. In this situation, the time value of money means that there is also a bias in the relative "value" of market capture, with a weighting toward the start of the system's lifetime. In general, revenue should be earned as close as possible to the time that the associated costs are incurred. For example, revenue earned from the transmission of bits early in the life of a communication satellite is more important than revenue earned late in the lifetime. For this reason, for each year of the lifetime of the satellite, the capture profile mc(t) must also be correctly discounted according to the same discount rate as was used to discount the costs.

The total number of satisfied users, or system capture Mc, is then calculated by summing the capture profile over the entire lifetime of the system,

    Mc = Σ_life mc(t)                                                (4.28)
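The demand matching of Eqn 4.27 and the capture summation of Eqn 4.28 can be sketched as follows; the design capacity, the demand profile and the 30% discount rate are all illustrative assumptions rather than outputs of a market model.

    # Demand matching (Eqn 4.27) and total system capture (Eqn 4.28), toy numbers only.
    design_capacity = 2800                                     # simultaneous users supportable
    demand = [400, 900, 1600, 2400, 3200, 3600, 3900, 4100]    # assumed market size per year
    discount_rate = 0.30

    # Yearly market capture: the system cannot outperform the demand.
    m_c = [min(design_capacity, q) for q in demand]

    # Discount each year's capture back to the project start before summing, mirroring
    # the treatment of the cost profile.
    M_c = sum(m / (1.0 + discount_rate) ** (t + 1) for t, m in enumerate(m_c))
    print(f"discounted total system capture Mc = {M_c:.0f} equivalent first-year users")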

Having determined the total system capture, the CPF can now be calculated,

    CPF = CL / Mc                                                    (4.29)

4.8.4 Example CPF Calculation for a Ka-Band Communication Satellite

The cost per billable T1-minute is the CPF metric used in the analysis of broadband satellite systems. It is the cost per billable T1-minute that the company needs to recover from customers, through monthly service fees, ground equipment sales, etc., in order to achieve a specific (30%) internal rate of return.

Once again referring to the example Ka-band system described in Table 4.1, the cost per billable T1-minute can be calculated from an estimate of the system's market capture and the system costs. The system is assumed to reach initial operating capability (IOC) in 1999 and be active through the year 2010, requiring a satellite lifetime of 12 years. The calculations are all performed in fiscal year 1996 dollars (FY$96), since this would represent a reasonable project inception date, given an IOC in 1999. All costs are adjusted using the Office of the Secretary of Defense estimates [3], and discounted back to a present value in 1996 with a 30% discount rate. Consider first the evaluation of the achievable market capture of the system.

The market capture depends on the size of the market accessible to the system and on the system Capability characteristics. The limiting effects of market demographics, access and exhaustion can be quantified only with an adequate market model. For an earlier study, Kelic, Shaw and Hastings [5] constructed several reasonable models for the global broadband communications market, based on current and projected internet usage and computer sales growth. Using these market models, computer simulations of several broadband satellite systems have been performed to estimate their market capture. Figure 4-14 shows the resulting market capture profile for the modeled satellite.

Figure 4-14: Market capture profile (T1-connections versus year) for a modeled Ka-band communication satellite. The two market models, an exponential growth model and a "last-mile" model, represent different projections for the size and distribution of the European residential broadband market.

The achievable capacity of the satellite initially grows as the market develops. After 2005, the market capture saturates at around 2800 simultaneous users. If additional users were addressed, the supported availability would drop below requirements, as seen in the Capability characteristics of Figure 4-11.

The total market capture is the sum over all years of the market capture profile, after discounting at a rate of 30% per year to represent the net value of the revenue stream in 1996. For the exponential market model, the resulting market capture in equivalent fiscal year 1996 simultaneous T1-users is only 2560. Note that this discounted total is smaller than the true value in any individual year from 2004 onwards. This is a direct result of the diminishing value, in real terms, of any revenue earned later in the lifetime of commercial projects. The value of Mc used in the cost per billable T1-minute metric is then simply this number of equivalent simultaneous T1-users, multiplied by the total number of minutes in a year, so that Mc = 1.346 × 10^9 T1-minutes.

The total baseline cost of the satellite system is estimated including recurring and non-recurring costs for development, construction, launch, insurance, gateway and control center operations, and terrestrial internet connections. The cost model used for this example is the same as that used by Kelic [5], drawing on industry experience and observed trends. The Theoretical First Unit (TFU) cost for communication satellites can be estimated reasonably well assuming $77,000 per kg of dry mass. The non-recurring development costs for commercial systems can be approximated at three to six times the TFU cost, depending on the heritage of the design. For this example, launch costs to GEO can be assumed at $29,000 per kg, with insurance at 20%. For linking to the terrestrial network, each OC-3 (155 Mbit/s) connection costs $8,500 for installation and $7,900 per month. This cost scales with the market capture.

The expected failure compensation costs are calculated from the SVF probability profile pf(t) shown in Figure 4-13 and the market capture curves of Figure 4-14. A satellite failure can be assumed to result in the loss of a single year's revenue, together with the cost of building and launching a replacement satellite. The calculation of the opportunity costs from lost revenue requires an assumption for the average service charge per user. A conservative estimate of $0.05 per T1-minute is used for this example. The baseline system cost and the failure compensation costs can be summed to give cL, the system cost profile. The baseline costs cs(t), failure compensation costs vf(t), and total system costs cL(t) are shown in Table 4.3.

Table 4.3: System cost profile for a single Ka-band communication satellite

  Year   cs ($M)   pf      vs ($M)   vf ($M)   cL ($M)
  1997   264.000   -       -         -         264.000
  1998   264.000   -       -         -         264.000
  1999   145.000   0.070   0.500     0.035     145.035
  2000   1.000     0.067   14.720    0.980     1.980
  2001   1.000     0.063   14.490    0.911     1.911
  2002   2.000     0.059   14.360    0.852     2.852
  2003   2.000     0.056   14.260    0.794     2.794
  2004   3.000     0.052   14.320    0.749     3.749
  2005   3.000     0.049   14.140    0.693     3.693
  2006   3.000     0.046   13.780    0.630     3.630
  2007   3.000     0.043   13.250    0.564     3.564
  2008   3.000     0.040   8.170     0.324     3.324
  2009   3.000     0.037   5.150     0.190     3.190
  2010   3.000     0.034   2.440     0.083     3.083

Discounting the system cost profile at 30% per year gives the net present value of the costs in fiscal year 1996 dollars. Summing the discounted profile over all years gives the total lifetime cost, CL = $429M. The cost per billable T1-minute metric for this system, in an exponentially growing broadband market, is therefore simply,

    Cost per billable T1-minute = CL / Mc = $0.32

This implies that the company must be able to charge users at least 32 cents per minute for this broadband service in order to obtain a 30% return on the investment.
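The arithmetic of this example can be reproduced directly from Table 4.3 and the stated capture. The sketch below discounts the total cost profile cL(t) at 30% back to 1996 and divides by the discounted capture of 2560 equivalent simultaneous T1-users; it recovers both the quoted lifetime cost of roughly $429M and the $0.32 per billable T1-minute figure.

    # Reproduce the example CPF from the Table 4.3 cost profile and the stated capture.
    c_L = {   # total system cost profile cL(t) in $M, from Table 4.3
        1997: 264.000, 1998: 264.000, 1999: 145.035, 2000: 1.980, 2001: 1.911,
        2002: 2.852, 2003: 2.794, 2004: 3.749, 2005: 3.693, 2006: 3.630,
        2007: 3.564, 2008: 3.324, 2009: 3.190, 2010: 3.083,
    }
    discount_rate = 0.30
    base_year = 1996

    # Net present value of the lifetime cost in FY$96.
    C_L = sum(cost / (1.0 + discount_rate) ** (year - base_year)
              for year, cost in c_L.items())

    # Discounted capture: 2560 equivalent simultaneous T1-users, expressed in T1-minutes.
    M_c = 2560 * 365.25 * 24 * 60

    cpf = C_L * 1.0e6 / M_c          # convert $M to $ before dividing
    print(f"lifetime cost C_L           = ${C_L:.0f}M")
    print(f"system capture M_c          = {M_c:.3e} T1-minutes")
    print(f"cost per billable T1-minute = ${cpf:.2f}")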


4.9 Utility of the Cost per Function Metric

Part of the utility of this cost per function metric is that it permits comparative analysis between different systems with large architectural differences, scaling their cost according to their Performance and market capture. Very large and ambitious systems can be fairly compared to smaller, more conservative systems. The cost per function metric can also be used to assess the potential benefits of incorporating new technology in spacecraft designs. New technology should only be included in the design of a new satellite if it can offer reduced cost or improved Performance. This can be evaluated with the metric, provided that both the cost and the expected reliability of the new technology can be estimated.

Commonly, the largest problem encountered with incorporating new technology in space

programs is schedule slip. This can have an adverse e�ect on the overall success of the

program, extending the period of capital expenditure, while delaying operations that bring

revenue. These e�ects can also be captured by the cost per function metric. Some typical

amount of program slip can be included in the cost pro�le cs (t) , and the corresponding

delay can be applied to the market capture pro�le. The combined e�ects of including the

new technology will then be apparent, by comparing the cost per function metric to those

corresponding to designs featuring more established technologies.

4.10 The Adaptability Metrics

The adaptability metrics judge how exible a system is to changes in the requirements,

component technologies, operational procedures or even the design mission. It is convenient

to de�ne two types of adaptability, di�erent in both their meaning and their mathematical

formulation.

� Type 1 adaptability assesses the sensitivity of the Capability, cost and Performance

of a given architecture to realistic changes in the system requirements or component

technologies. A quanti�able measure of this sensitivity allows the system drivers to be

identi�ed, and can be used in comparative analyses between candidate architectures.

As will be shown in this section, the mathematical form of the Type 1 adaptability

also makes it entirely compatible with conventional economic analyses of commercial

ventures. This adds enormous utility to the metric for investment decision-making

and business planning.

� Type 2 adaptability measures the exibility of an architecture for performing a

di�erent mission, or at least an augmented mission set. This is particularly important

for government procured systems. In todays budget controlled environment, expensive

123

Page 122: The Generalized Information Network Analysis Methodology for

military and civilian space assets must be able to ful�ll multiple mission needs cost

e�ectively.

Each of these two types of adaptability has a quanti�able mathematical de�nition that

is a simple extension of the CPF metric.

4.10.1 Type 1 Adaptability: Elasticities

Concisely stated, Type 1 adapatabilities represent the elasticity of the CPF metric with

respect to changes in the requirements or the component technologies. Elasticity is a math-

ematical construction most often used in microeconomics. To introduce and formalize no-

tation, it is valuable to brie y summarize the concept of elasticity within the conventional

context of microeconomics.

Elasticity is de�ned as the percentage change that will occur in one variable in response

to a one percent change in another variable [46]. For example, the price elasticity of demand

measures the sensitivity of the demand for a product to changes in its price, and can be

written

Ep =�Q=Q

�P=P=P

Q

�Q

�P(4.30)

where Q is quantity of demand and P is price. Most goods have negative elasticities since

price increases result in demand decreases. If the price elasticity is greater than one in

magnitude, the demand is termed price elastic because the percentage change in the quantity

demanded is greater than the percentage change in price. Consequently, a reduction in the

price results in an increase in the total expenditure since disproportionately more goods

are sold. An increase in the price results in a reduction of total expenditure as much fewer

goods are sold. Conversely, if the price elasticity is less than one in magnitude, the demand

is said to price inelastic, and the opposite trends are observed. Finally, a value of unity

for the elasticity implies that the total expenditure remains the same after price changes.

Any price increase leads to a reduction in demand that is just su�cient to leave the total

expenditure unchanged.

Eqn. 4.30 speci�es that the elasticity is related to the proportional change in P and Q.

The relative sizes of P and Q change at di�erent points on the demand curve. Therefore

the elasticity must be measured at a particular point, and will usually have very di�erent

values at di�erent points along the curve. This of course means that the elasticity for a

change in price from P1 to P2 can be quite di�erent from the elasticity calculated in the

other direction, from P2 to P1. To avoid this confusion, the arc elasticity represents the

average elasticity over a small range,

124

Page 123: The Generalized Information Network Analysis Methodology for

Ep =�Q= �Q

�P= �P=

(P1 + P2)

(Q1 +Q2)

�Q

�P(4.31)

The choice between using point elasticities and arc elasticities is really the prerogative

of the engineer. In general, the arc elasticity is a more consistent measure of sensitivity.

For the remainder of this document, the term elasticity is taken to imply arc elasticity, and

the overbar is omitted from equations.

Return now to the generalized analysis framework. Analogous to the elasticity of de-

mand, the elasticity of the CPF metric is the percentage change in its value in response to

a one percent change in some other relevant variable. The \relevant variable" here may be

a system requirement, or a system component parameter. Indeed, it is straightforward to

formulate the di�erent requirement elasticities of the CPF at a given design point,

Isolation Elasticity, EIs =�CPF=CPF

�Is=Is(4.32)

Rate Elasticity, ER =�CPF=CPF

�R=R(4.33)

Integrity Elasticity, EI =�CPF=CPF

�I=I(4.34)

Availability Elasticity, EAv =�CPF=CPF

�Av=Av(4.35)

where Is, R, I , and Av are the set of system requirements on isolation, rate, integrity and

availability.

Note that �CPF is the change in the CPF value as a result of changing a system

requirement, and is formed by direct subtraction of the CPF values for the two di�erent

cases. However, the denominator of the CPF metric carries an implicit reference to these

same system requirements, as discussed in Section 4.7. It is initially tempting therefore

to question the validity of simply subtracting two CPF values that have entirely di�erent

denominators. The solution to this apparent problem lies in the fact that the CPF metric

is de�ned as the cost per satis�ed user. The denominators in all CPF metrics are therefore

equivalent to a single user, and �CPF can be calculated directly. For example, consider the

service options that can be provided by a broadband communication system. The cost per

billable T1-minute can be compared directly with the cost per 14 -T1-minute without any

modi�cations. The di�erence in value �CPF represents the di�erence in cost that must be

charged to each broadband user if the data rate provided to them is changed.

In a similar fashion, the technology elasticities can be de�ned. These can be formed for

any particular component of the system that may have an impact of the overall performance

125

Page 124: The Generalized Information Network Analysis Methodology for

or cost. Example technology elasticities are shown below,

Launch Cost Elasticity, EClaunch=

�CPF=CPF

�Claunch=Claunch

(4.36)

Manufacture Cost Elasticity, ECmfr=

�CPF=CPF

�Cmfr=Cmfr

(4.37)

Reliability Elasticity, ERs=

�CPF=CPF

�Rs=Rs(4.38)

where Claunch is the budgeted launch cost for the system, Cmfr is the manufacturing cost,

and Rs is the satellite reliability. In each case, some technology is varied, while the system

requirements are held constant. Technology elasticities can be formed for each essential

system component, re ecting the likely changes in available technology, or the variations

in the system parameters that span the design trade space. This allows a quanti�able

assessment of design decisions and can identify the most important technology drivers.

Utility of the Elasticities for Economic Analysis

The mathematical form of the elasticities are identical to the conventional elasticities used

in econometric analysis. This allows the results from a generalized analysis of a proposed

satellite system to be used in the investment decision making process. For example, con-

sider a broadband communication system that had been originally planned to provide users

with 14T1 connections. The marketing department then suggests that providing a full T1

connection would give the company a competitive advantage over all others in the market-

place. In addition, they have all the demand curves to prove it. The system engineer can

respond by calculating the rate elasticity of the CPF, as described above, for a change from14T1 to full T1. Since the CPF represents the average cost to provide service to a user, it

can be taken to be a surrogate for price. The rate elasticity of CPF (or price) can therefore

be multiplied by the price elasticity of demand, calculated from the demand curves, to give

some number X that represents the change in demand in response to the increase in price

associated with improved service. Comparing this value to the rate elasticity of demand

exhibited by the demand curves, a decision can be made about the rate that maximizes

revenue. If X is higher than the rate elasticity of demand, then an increase in the rate

results in a disproportionately larger increase in the price, averting more customers than

are attracted by the improved service. Marketings' idea to o�er higher rates can be crushed

by the system engineer. Alternatively, if X is smaller than the rate elasticity of demand,

the engineer can con�rm marketings' suggestion with quantitative numbers. Either way,

the correct decision can be made, and the engineer looks good!

126

Page 125: The Generalized Information Network Analysis Methodology for

4.10.2 Type 2 Adaptability: Flexibility

Type 2 adaptability corresponds to the change in the CPF of a system as the design role is

changed or augmented. Recall that a mission is de�ned by a market and a set of associated

derived system requirements. A change in the design mission therefore represents a change in

the market addressed and all the system requirements. A classical elasticity formulation that

relates a proportional response to proportional variations in the input cannot be constructed

because there is no obvious scalar representation of the input variations. Instead, Flexibility

F is simply de�ned to be the proportional change in the CPF in response to a particular

mission modi�cation,

F jX =�CPF

CPF

����X

(4.39)

where X is just an identi�er to specify the mission modi�cation. This is a useful metric for

comparing competing designs since it measures just the sensitivity of the CPF to mission

modi�cations, normalizing any di�erences in the absolute values of the initial CPF's. The

exibility can be an important factor in deciding between alternate architectures during

the conceptual design phase of a program, especially if the mission is likely to change over

the lifetime. For example, an architecture that is highly optimized for the baseline mission

may have a low CPF but a very high exibility, implying it is very unsuited to perform any

other modi�ed mission. In all but the most predictable markets, a more prudent design

choice would be a less optimized system with a lower exibility, even at the expense of a

higher CPF.

4.11 Truncated GINA for Qualitative Analysis

For purely qualitative analysis, the GINA methodology can be truncated signi�cantly, while

still providing the engineer with valuable insight. In particular, mapping the application

into the generalized framework organizes the thought process and allows an unambiguous

comparison to be made between competing architectures. The most important discrimi-

nators between the systems will be clearly apparent, allowing attention to be focused on

the de�ciencies or bene�ts of each architecture. For example, Table 4.4 shows a qualitative

comparison of two very di�erent architectures that have been proposed for a space based

radar to detect ground moving targets.

Discoverer-II or simply \D-2", [47], proposed by Defense Advanced Research Projects

Agency (DARPA), the National Reconnaissance O�ce (NRO) and the Air Force, is a con-

stellation of 24 satellites in LEO, each operating independently. The nominal design features

satellites in the 1500kg class, with peak RF power of 2 kW and antenna area of 40 m2, each

127

Page 126: The Generalized Information Network Analysis Methodology for

costing less than $100M. Advanced radar processing techniques, such Space-Time Adaptive

Processing (STAP) will be used to cancel clutter for the Ground Moving Target Indica-

tor (GMTI) mission and principles of Synthetic Array Radar (SAR) will support terrain

imaging.

On the other hand, Techsat21 [48], as proposed by the Air Force Research Laboratory

(AFRL) features symbiotic clusters of small satellites (approximately 100 kg, 200W RF,

1 m2 of aperture) that form sparse arrays to perform the same mission. The number of

clusters is at the moment undecided, depending on the eventual coverage requirements, but

for comparison purposes can be taken to be the same as the number of satellites in D-2.

The concept was introduced in Section 3.1.2 and is the dedicated focus of Chapter 7. Table

4.4 shows that there are several signi�cant discriminators between these two architectures.

Table 4.4: Qualitative comparison between Techsat21 and Discoverer-II space basedradar concepts using truncated GINA. The discriminators between the architecturesare written in slanted font.

Discoverer-II Techsat21Classi�cation Collaborative constellation, ns = 1 Symbiotic clustellation, ns=8{16

IsolationCluttercompensation

Clutter cancellation though adaptiveclutter processing, nulling, etc.

Clutter rejection through sparse aper-ture synthesis giving narrow mainlobe and low sidelobes

Resolution Limited by PRF and antennadimensions

Limited by sparse aperturebeamwidth (cluster dimensions)

Rate (search rate) Large aperture has small FOV, and sosupports a small ASR, unless a smalldwell time can be tolerated

Small apertures have wide FOV thatcan be �lled with multiple receivebeams so ASR can be high

Integrity (PD) High power needed to overcome ther-mal noise

n2s coherent processing gain allowslower power transmitters

Availability Dominated by coverage statistics (ac-cess, range to target, grazing angle)

Dominated by coverage statistics (ac-cess, range to target, grazing angle)

Performance Depends on reliability and survivabil-ity of single satellite

Improved reliability from in-built re-dundancy and graceful degradation

CPF Moderate number of large satellitesleads to baseline costs around $3B.Poor performance leads to high fail-ure compensation costs

Large number of small satellites leadsto baseline costs around $3B. Higherperformance leads to smaller failurecompensation costs

Adaptability Resolution, rate, and integrity are�xed by power and aperture re-sources. System can easily supportSAR imaging, but cannot performairborne MTI

Capabilities can be improved by aug-menting with more cluster satellites.Imaging is supported, and AMTI ispossible with more satellites

128

Page 127: The Generalized Information Network Analysis Methodology for

4.12 The GINA Procedure { Step-by-Step

The systematic procedure for applying the GINA methodology is summarized:

1. De�ne the mission objective. What is the real application of the system, in terms of

the user needs?

2. Map the user needs into the generalized Capability parameters of isolation, rate,

integrity and availability. These de�ne the features of the information transfer that

are perceived by the users as quality of service.

3. Construct the network representation from a functional decomposition of the system.

4. Determine functional behavior of each module, in terms of what it does to impact

the isolation, rate, integrity and availability. The modules generally interact with the

information via the signal, noise and interference power.

5. Determine the statistical inputs to each module. Some of the modules require inputs

relating to the system characteristics or other parameters, such as elevation angle,

coverage or clutter statistics.

6. Choose a number of O-D pairs that will be served, and determine their isolation

characteristics (domain of separation, spacing in that domain, signal spectrum etc.)

7. For that number of O-D pairs, calculate the integrity of information transfers for a

variety of rates. These are the Capability characteristics.

8. Set values for the Capability parameters corresponding to user requirements for the

market.

9. Assign failure rates to each functional module that represents real hardware.

10. Use Markov modeling to calculate the state probabilities corresponding to di�erent

combinations of failed components. The sum of the probabilities for those states that

satisfy requirements is the generalized Performance. Those states that do not satisfy

requirements are the failure states.

11. Calculate lifetime cost as the sum of the baseline cost and the failure compensation

costs, which are the products of the failure state probabilities and the costs required

to compensate for the failures.

12. For a realistic market scenario, calculate the market capture as the maximum number

of users that can be addressed satisfactorily.

129

Page 128: The Generalized Information Network Analysis Methodology for

13. Calculate the CPF as the ratio of the lifetime cost and the market capture.

14. Calculate Adaptability metrics by repeating the analysis after changing either a re-

quirement or a technology.

4.13 Summary

A generalized analysis methodology has been developed that allows systems with dramat-

ically di�erent space system architectures to be compared fairly on the basis of cost and

performance. The initial motivation was to undertake quantitative analyses of distributed

satellite systems compared to traditional singular deployments. The framework is however

very generalizable, and can be applied to all satellite missions in communications, sensing or

navigation. The most important concepts of the Generalized Information Network Analysis

(GINA) can be stated concisely7:

� Satellite systems are information transfer systems that serve O-D markets for the

transfer of information symbols.

� The Capabilities of a system are characterized by the isolation, rate, integrity and

availability parameters.

� Each market speci�es minimum acceptable values for these Capability parameters.

These are the functional requirements placed on the system.

� Performance is the probability that the system instantaneously satis�es the top-level

functional requirements. It is here that component reliabilities make an impact.

� The Cost per Function (CPF) metric is a measure of the average cost to provide a

satisfactory level of service to a single O-D pair within a de�ned market. The metric

amortizes the total lifetime system cost over all satis�ed users of the system during

its life.

� The Adaptability metrics measure the CPF sensitivity to changes in the requirements,

component technologies, operational procedures or the design mission.

These concepts extend across almost all applications. In the next chapter the methodology

is validated by applying it to the existing GPS system, and is then used in a comparative

analysis of the proposed broadband communication systems, and �nally a design study of

a military space based radar.

7A more detailed summary is included in the conclusions of Chapter 8

130

Page 129: The Generalized Information Network Analysis Methodology for

Part II

Case Studies and Results

131

Page 130: The Generalized Information Network Analysis Methodology for
Page 131: The Generalized Information Network Analysis Methodology for

The previous chapters have introduced a generalized analysis framework for distributed

and traditional satellite systems and have de�ned metrics for the quanti�cation of cost and

performance, capability and adaptability. It now remains to demonstrate the application of

this methodology on some realistic space missions. All the results in the following chapters

were produced using \GINALab", a Matlab/Simulink implementation of the generalized

analysis methodology8

Throughout the previous chapters, the generality of the approach had been stressed. To

prove this claim, in the next chapters the GINA technique is applied to communications,

remote sensing and navigation missions. Furthermore, in addition to demonstrating the

utility of the GINA method for comparative analysis of di�erent systems that compete in

the same market, it is also shown how it may be used during the conceptual design process.

The proposed broadband communication systems provide the context for the comparative

analysis, while the design study addresses the military need for a space based radar to

detect ground moving targets. First though, the methodology must be validated by appli-

cation to an existing distributed satellite system, giving results that are not only meaningful

and reasonable, but also di�cult to obtain by less sophisticated analysis techniques. The

NAVSTAR Global Positioning System is ideal for this purpose, since it is a very complicated

system with a large archive of measured data.

8GINALab is publicly releasable software developed by the author. To obtain a copy of the source code,contact Prof David Miller, Space Systems Lab, Dept of Aeronautics & Astronautics, MIT, [email protected]

133

Page 132: The Generalized Information Network Analysis Methodology for

134

Page 133: The Generalized Information Network Analysis Methodology for

Chapter 5

The NAVSTAR Global Positioning

System

5.1 System Overview

This section introduces the operational concept of the Global Positioning System, and is

provided to familiarize the reader with the important issues before proceeding with the gen-

eralized analysis. A great deal of the text in this section is taken from the excellent references

\Global Positioning System: Theory and Applications" edited by Bradford Parkinson and

James Spilker [49] and \The Global Positioning System|A Shared National Asset", [50] a

National Research Council report on possible future improvements to the system.

\Over a long Labor Day weekend in 1973, a small group of armed forces o�cers and

civilians, sequestered in the Pentagon, were completing a plan that would truly revolution-

ize navigation. It was based on radio ranging (eventually with millimeter precision) to a

constellation of arti�cial satellites called the NAVSTARs. Instead of angular measurements

to natural stars, [a method used by mariners for six thousand years] greater accuracy was

anticipated with ranging measurements to the arti�cial NAVSTARs" [49]. The operational

objectives of GPS were to provide:

� High-accuracy, real-time position, velocity and time for military users on a variety of

platforms, some of which have high dynamics, e.g. high-performance aircraft. \High-

accuracy" implied 20 m three-dimensional rms position accuracy or better.

� Good accuracy to civilian users. The objective for civil user position accuracy was

originally taken to be 500 m or better in three dimensions.

135

Page 134: The Generalized Information Network Analysis Methodology for

� Worldwide, all-weather operation, 24 hours a day.

� Resistance to intentional (jamming) or unintentional interference for all users, with

enhanced jamming resistance for military users.

� A�ordable, reliable user equipment. This eliminates the possibility of requiring high-

accuracy clocks or directional antennas on user equipment.

A quarter century later, the Global Positioning System (GPS) is almost identical to that

proposed in 1973 (although achieves better performance) and consists of three segments:

the space segment, the control segment, and the user segment, as shown in Figure 5-1. The

control segment tracks each NAVSTAR satellite and periodically uploads to the satellite its

prediction of future satellite positions and satellite clock corrections. These predictions are

then continuously transmitted by the satellite to the user as a part of the navigation message.

The space segment consists of the 24 NAVSTAR satellites, each of which continuously

transmits a ranging signal that includes the navigation message stating current position

and time correction. The user receiver tracks the ranging signals of selected satellites and

calculates a navigation solution [49].

Figure 5-1: The NAVSTAR GPS architecture (courtesy of the Aerospace Corporation)[50]

The fundamental navigation technique for GPS is to use one-way ranging from the GPS

satellites. A ground receiver simultaneously tracks several satellites using a low gain antenna

136

Page 135: The Generalized Information Network Analysis Methodology for

feeding a bank of matched �lters. Pseudoranges are measured to at least four satellites

simultaneously in view by matching (correlating) the incoming signal with a user-generated

replica signal and measuring the received phase against the user's (relatively crude) crystal

clock [49]. The actual observable is a pseudorange since it includes the user clock bias,

ionospheric and tropospheric delays, plus relativistic e�ects and other measurement errors.

The ionospheric group delay can be corrected by using dual frequency signals. The delay is

proportional to the inverse square of the frequency, and so measurement at two frequencies

allows its e�ect to be calculated. With ranges to four satellites and appropriate geometry,

four unknowns can be determined: latitude, longitude, altitude, and a correction to the

user's clock. If altitude or time are already known, a lessor number of satellites can be used

[49].

5.1.1 The GPS Space Segment

The GPS space segment consists of 24 satellites, in 6 orbital planes. The period of the orbits

is 12 sidereal hours and the inclination is 55 degrees. This con�guration was determined from

the requirements of full global four-fold coverage of the Earth. Geostationary orbits were

not used so that, in addition to the code phase/delay measurements, carrier phase/Doppler

methods could also be used for navigation solutions.

The GPS satellites are three-axis stabilized, and use solar arrays for primary power. The

ranging signal is transmitted using a shaped beam antenna to illuminate the Earth with

the same signal power at all elevation angles. The satellite design is mostly doubly or triply

redundant, and the Phase I satellites demonstrated average lifetimes in excess of 5 years

(and in some cases over 12) [49]. The Block II/IIA satellites that currently populate the

constellation were built by Rockwell International Satellite and Space Electronics Division,

and were designed to operate for 7.5 years. A typical Block II/IIA GPS satellite is shown

in Figure 5-2.

One of the enabling technologies for GPS was the development of extremely accurate

timing sources that were portable enough to be placed on satellites. Indeed, the placement

of a very stable time reference in a position where users have maximum access is the basis

for modern satellite navigation. The rubidium and cesium atomic frequency standards used

in GPS allow all the satellite clocks to remain synchronized to within one part in 1013 over

periods of 1{10 days [49]. To give a sense of scale, this accuracy is equivalent to an error of

about 1mm in the distance between the Earth and the Sun.

137

Page 136: The Generalized Information Network Analysis Methodology for

Figure 5-2: A typical Block II/IIA GPS satellite (courtesy of the Aerospace Corpora-tion) [50]

5.1.2 The GPS Ranging Signal

Each of the satellites transmits a ranging signal consisting of a low rate (50bits/sec) naviga-

tion message spread over a large bandwidth by a high rate pseudorandom noise (PRN) code.

The resulting signal is used to modulate a carrier at two frequencies within the L-band: a

primary signal at 1575.42 MHz (L1) and a secondary broadcast at 1227.6MHz (L2). These

signals are generated synchronously, so that a user who receives both signals can directly

calibrate the ionospheric group delay and apply appropriate corrections.

The PRN spreading signals are chosen such that the signals from di�erent satellites are

orthogonal, providing a multiple access technique. Two di�erent PRN codes are generated:

1. C/A or Clear Acquisition Code This is a short PRN code with period of 1023

bits, broadcast at a bit rate of 1.023MHz. This is the principal civilian ranging signal,

and is always broadcast in the clear (unencrypted). It is also used to acquire the

much longer P-code. The use of the C/A code is called Standard Positioning Service

or SPS. It is always available, although it may be somewhat degraded. At this time,

and for the projected future, the C/A code is available only on L1 [49].

2. P or Precise Code This is a very long code with a period of 37 weeks (reset at the

beginning of each week) and a bit rate of 10.23MHz, ten times that of the C/A code.

Because of its higher modulation bandwidth, the code ranging signal is somewhat more

precise. This signal provides the Precise Positioning Service or PPS. The military has

138

Page 137: The Generalized Information Network Analysis Methodology for

encrypted this signal in such a way that renders it unavailable to the unauthorized

user. This ensures that the unpredictable code (to the unauthorized user) cannot be

spoofed1. This feature is known as antispoof or AS. When encrypted, the P code

becomes the Y code. Receivers that can decrypt the Y code are frequently called P/Y

code receivers [49].

The L1 signal carries both the C/A and the P signals as in-phase and quadrature

components. The L2 carrier is biphase modulated by either the C/A or the P signal.

The frequency spectra of the GPS ranging signals are shown in Figure 5-3. This �gure

is not exactly correct since the short period C/A code has a discrete spectrum with line

components spaced at the code epoch rate (the code repetition rate is 1KHz). The more

correct representation of the spectrum is discussed later during the generalized analysis.

Figure 5-3: Characteristics of the L1 and L2 (courtesy of the Aerospace Corporation)[50]

Once the C/A code has been received and decorrelated, the navigation message con-

tained therein speci�es the satellite location and the correction necessary to apply to the

spaceborne clock, the health of the satellite, the locations of the other satellites, and the

necessary information to lock on to the P code.

The military operators of the system have the capability to degrade the accuracy of the

C/A code intentionally by desynchronizing the satellite clock, or by incorporating small

errors in the broadcast ephemeris. This degradation is called Selective Availability, or

SA and is intended to deny an adversary access to the high accuracy (<20m) navigation

solutions. The magnitude of these ranging errors is typically 20m, and results in rms

1Spoo�ng involves an enemy creating a mimicked GPS signal that would provide incorrect navigationinformation to authorized users

139

Page 138: The Generalized Information Network Analysis Methodology for

horizontal position errors of about 50m [49]. Due to the ine�ectiveness of SA against

di�erential methods of navigation using GPS, and due to strong political pressures from

the FAA, the US government recently promised that SA will be turned o� as soon as an

alternate method of selective denial can be implemented. This would give civilian receivers

access to high accuracy code ranging during peacetime.

Most receivers can construct a replica of the GPS carrier, allowing relative carrier phase

to be tracked. This is much more accurate, although carries an ambiguous initial phase, so

can only be used for relative measurements.

5.1.3 System Requirements

As stated earlier, the operational GPS exceeds the capabilities originally envisioned. The

requirements on the navigation capabilities that must be provided by GPS as speci�ed by

the GPS Joint Program O�ce are summarized in Figure 5-4. The accuracies given in the

table are for a variety of di�erent availabilities. The drms value is the one-sigma deviation

away from the mean, while the 2drms represents approximately the 90th percentile. The

50th percentile accuracy is often called the Spherical Error Probable or SEP, and is equal to

the radius of the sphere that would contain 50% of the errors. The shaded boxes represent

the formal system requirements for the di�erent services provided.

Figure 5-4: PPS and SPS speci�ed accuracies (courtesy of the GPS JPO) [50]

Note that the SPS service is now formally speci�ed to guarantee 100m position accuracy

in two dimensions with a 90% availability. The large change since the initial speci�cation

140

Page 139: The Generalized Information Network Analysis Methodology for

in 1972 of 500 m arose due to the unexpected range accuracy provided by the C/A code.

In fact, were it not for selective availability, the SPS would easily satisfy much more strin-

gent accuracy requirements, similar to the values for the PPS. The 100m two-dimensional

positioning error (90%) for the SPS is now a matter of US federal policy and can only be

changed by order of the President of the United States.

5.1.4 Measured navigation performance

There are �ve worldwide monitoring stations that are part of the GPS Operational Control

Segment (OCS). Since each of these stations continuously measure the ranging errors to all

the satellites in view, these measurements provide a convenient statistic for the basic, static

accuracy of GPS. Table 5.1 summarizes over 11,000 measurements taken from 15 January

to 3 March 1991, during operation \Desert Storm". CEP is the circular error probable; the

two dimensional analogue of SEP.

Table 5.1: PPS measured accuracies in terms of SEP and CEP navigation errors. Mea-sured at the OCS monitor stations during Desert Storm [49]

Criteria All Colorado Springs Ascension Hawaii Diego Garcia KwajaleinSEP (m) 8.3 7.8 6.8 9.0 9.1 9.0CEP (m) 4.5 4.5 3.8 5.1 4.6 5.0

5.2 Fundamental Error Analysis for GPS

There are several sources of error that a�ect the accuracy of a GPS-derived position. Un-

derstanding the nature of these errors and quantifying their impact is an essential step in

the analysis of the system. This section, reproduced from Parkinson [49], develops the GPS

error equation, starting with the fundamental measurements, and includes the e�ects of the

di�erent error sources.

Ideal Measurement

The \true" measurement is the GPS signal arrival time. This is equal to the signal

transmission time delayed by the vacuum transit time and corrected for the true

additional delays caused by the ionosphere and the troposphere,

tA = tT + (D=c) + T + I (5.1)

where tA= true arrival time (s); tT= true transmit time (s); D= true range (m); c=

vacuum speed of light (m/s); T= true tropospheric delay (s); and I= true ionospheric

delay (s).

141

Page 140: The Generalized Information Network Analysis Methodology for

Measured Arrival Time

The measured arrival time re ects the user's clock bias and other measurement errors,

tAu = tA + bu + � (5.2)

where tAu= arrival time measured by the user (s); bu= user clock bias estimate (s); and

�= receiver noise, multipath and interchannel error (di�erent for each satellite).

Satellite Transmission Time

The satellite clock correction transmitted by the satellite can also be in error (the

dominant error may be due to selective availability),

tTs = tT + B (5.3)

where tTs= value speci�ed as the transmission time in the current satellite message (s);

and B= true error in the satellite's transmission time, including SA (s).

True Range

The true range is the absolute value of the vector di�erence between the true satellite

position and the true user position,

D = j�rs � �ruj =�ls � [�rs � �ru] (5.4)

where �rs= true satellite position; �ru= true user position; and �ls= true unit vector from

user to satellite. Note that the estimated user position can be used to calculate the unit

vector from the user to the satellite. Even errors of several hundred meters in user or

satellite location have a very small e�ect (less than 1mm) on the range calculated by the

dot product in Eqn. 5.4.

Pseudorange

The user receiver actually measures the \pseudorange" � given by the following,

� = c � (tAu � tTs) (5.5)

This is called the pseudorange because it is a linear function of the range to the satellite,

but it also corrupted by the user's clock bias, which must be estimated and removed. In

addition, it must be corrected for the satellite time bias and for variations in the speed of

transmission. Substituting Eqs. 5.3, 5.2 and 5.1 into Eqn. 5.5 gives the following result,

� = D + c � (bu � B) + c � (T + I + �) (5.6)

142

Page 141: The Generalized Information Network Analysis Methodology for

Using Eqn. 5.4 in this expression gives,

� =�ls � [�rs � �ru] + c � (bu �B) + c � (T + I + �) (5.7)

To account for the estimated value (^) and the estimate error (�), each of the terms in

the above equation can be broken into two parts,

�rs = �rs ���rs where �rs = satellite position reported

in the transmitted message (m)

�ru = �ru ���ru where �ru = users estimated position (m)

�ls = �ls ���ls where �ls = unit vector from user to satellite

estimated from �rs and �ru

bu = bu ��bu where bu = user clock bias estimate common

to a set of simultaneous measurements (s)

B = B ��B � S where B = satellite transmitted clock bias (s)

�B = error in control system prediction (s)

S = error in the transmit time due to SA (s)

T = T ��T where T = estimated tropospheric delay (s)

I = I ��I where I = estimated ionospheric delay (s)

Eqn. 5.7 can then be modi�ed to account for the estimated values,

�j =��lsj ���lsj

�� ��rsj ���rsj � �ru + ��ru

�(5.8)

+c ��bu ��bu � Bj + �Bj + Sj

�+c �

�Ij ��Ij + Tj ��Tj + �j

where the j subscript is the satellite number and has been added to point out quantities

unique to each satellite. Rearranging,

�lsj|{z} �(b)

�ru � c � bu| {z }(a)

� �lsj ���ru + c ��bu| {z }(d)

= �lsj � �rsj � �j + c ��Ij + Tj � Bj

�| {z }

(b) (5.9)

+ ��lsj ���rsj ���lsj ���rsj � �ru

�+ c � (�Bj + Sj)� c � (�Ij + �Tj) + c � �j| {z }

(c)

+ higher order terms

The terms grouped as (a) are the user's position and clock errors to be solved, the terms

(b) are estimated or measured by the user, and the terms (c) are unknown errors that

143

Page 142: The Generalized Information Network Analysis Methodology for

produce the solution errors given by (d). The right-hand portion of (b) includes �cj or

corrected pseudorange, de�ned as,

�cj = �j � c ��Ij + Tj � Bj

�(5.10)

Next, de�ne the following matrices for the K satellites in view (K = 4 is the minimum

number of measurements),

Gk�4 �

266666664

�lT

s1 1

�lT

s2 1...

...

�lT

sK 1

377777775

Ak�3K �

266666664

�lT

s1 0

�lT

s2

. . .

0 �lT

sK

377777775

�x4�1 �24 �ru

�c � bu

35 ��x4�1 �

24 ��ru

�c ��bu

35

�R3K�1 �

26666664

�rs1

�rs2...

�rsK

37777775 ��R3K�1 �

26666664

��rs1

��rs2...

��rsK

37777775

� ��cK�1 �

26666664

��1 + c ��I1 + T1 � B1

���2 + c �

�I2 + T2 � B2

�...

��K + c ��IK + TK � BK

37777775 =

26666664

��c1��c2...

��cK

37777775

�K�3K �

26666664

��lTs1 0

�lT

s2

. . .

0 ��lTsK

37777775

�P3K�1 �

26666664

�ru

�ru...

�ru

37777775

and � �B, �S, ��I, � �T , and �� are all obvious. Now Eqn. 5.9 can be rewritten in a

convenient matrix form (neglecting higher-order terms),

G � �x�G ���x = A � �R� ��c �A ���R (5.11)

+c � �� �B + �S ���I ���T + ���+ � � � �R� �P

�The user does not know the last terms of Eqn. 5.11, which are the errors, and calculates

144

Page 143: The Generalized Information Network Analysis Methodology for

the position based on the following,

G � �x = A � �R� ��c (5.12)

to �nd, in the general case when K > 4,

�x =�GTG

��1GT �A � �R� ��c

�(5.13)

using the pseudoinverse of G. This is the basic position calculation. Note that G, the

geometry matrix, is constructed from the set of approximate directions to the satellites, as

is the matrix A. The vector �R is constructed from the location of the satellite given in the

navigation message, and �c is the corrected pseudorange. Inserting Eqn. 5.13 into Eqn.

5.11 cancels appropriate terms and gives the fundamental error equation,

G ���x = c � ��� �B � �S +��I +� �T � ���� � � � �R� �P

�+A �� �R � ��� (5.14)

Thus, the right-hand side contains all of the ranging errors and calculation errors

expressed in meters. Finally,

��x =�GTG

��1GT��� (5.15)

where ��x is the position error in meters.

Geometric Dilution of Precision

Obviously, satellite geometry can a�ect the accuracy of a navigation solution, and the

impact is quanti�ed by the geometric dilution of position (GDOP). Its derivation is

straightforward. First de�ne the covariance of position (in meters),

cov(position) = E���x ���xT

�(5.16)

where E is the expectation operator. The calculation of the position covariance contains

the covariance of range E���� ����T

�. If all ranging errors have the same variance

��2R�

and are uncorrelated, with zero mean, then Eqn. 5.16 becomes,

cov(position) = �2R �hGTG

i�1(5.17)

Therefore,hGTG

i�1is the matrix of multipliers of ranging variance to give position

variance. It is known as the GDOP matrix. If the position coordinates are the ordered

145

Page 144: The Generalized Information Network Analysis Methodology for

right-hand set, east, north and up, then,

cov(position) = �2R �

2666664(EastDOP)2 covariance terms

(NorthDOP)2

(VerticalDOP)2

covariance terms (TimeDOP)2

3777775

(5.18)

The scalar GDOP is de�ned as the square root of the trace of the GDOP matrix.

Similarly, the PDOP =q((EastDOP)2 + (NorthDOP)2 + (VerticalDOP)2). The

fundamental error equation and the PDOP are essential concepts for the modeling and

subsequent generalized analysis of GPS.

5.3 GINA Modeling of GPS

To place GPS in the context of the GINA framework, it must be �rst appreciated that the

system does not directly address the market for navigation services, although it satis�es the

market's needs. Speci�cally, the information symbols required by the market for navigation

are user position and velocity solutions. Within the framework of GINA, the quality-of-

service provided by a system that serves a market demand is measured by the isolation,

rate, integrity and availability of the information symbols transferred between O-D pairs.

For the military navigation market, these translate to the following:

� Isolation is the access control, antispoo�ng, and jamming resistance of the system.

The providers of the navigation information would like to deny access to unauthorized

users; this sets an isolation constraint on information delivered to the destination

nodes. Even more importantly, the authorized users of the system must be con�dent

that the navigation information delivered to them did not originate from an enemy;

this is an isolation constraint on the information transferred from the origin nodes.

� Integrity is equivalent to SEP, and measures the error (in meters) of a user navigation

solution.

� Rate is equivalent to the update rate on navigation solutions. This must match the

dynamics of the user platform, allowing accurate navigation and velocity estimation.

� Availability is the probability that authorized users are able to obtain navigation

solutions with a given value for the SEP at a given update rate.

As previously stated, NAVSTAR GPS does not address this market directly; it delivers

to the users information about the range to a set of satellites. To satisfy the navigation

146

Page 145: The Generalized Information Network Analysis Methodology for

market's demand for position accuracy, these range measurements must correspond to a

subset of satellites with favorable geometry, according to the dilution of precision construct.

The PDOP is therefore the coupling term that allows the market for navigation to be

transformed into a market for range measurements.

To analyze the capabilities of GPS in the navigation market, all that is required is to

calculate its (statistical) capabilities in providing ranging signals to users anywhere on the

globe, and then combine this, through the joint-probability equations, with the statistics of

the supported PDOP.

The capabilities of GPS for providing ranging signals can be measured by the same

generalized quality-of-service parameters: isolation characterizes the ability to provide au-

thorized users with simultaneous measurements of the range to multiple satellites (with

acceptably low levels of interference) while also being able to deny access to unauthorized

users; the update rate speci�es how often independent range measurements can be made

(s); integrity measures the size of the ranging errors (m); and availability is the joint prob-

ability of achieving given values of the other parameters. The next sections quantify these

capability characteristics for GPS, and go on to calculate the Performance and the CPF

metrics.

5.3.1 GPS Network Architecture

In order to calculate the capability characteristics, the system is functionally decomposed

into the most important modules that contribute to the isolation, rate and integrity of the

information delivered to the users. The resulting network architecture is shown in Figure

5-5.

Spaceloss InterferersGPSTX

SatelliteClock

SinkIo/TropoGPSRX

Userclock

GPSprocessor

Constellation

Ephemeriserrors

Figure 5-5: The network representation of GPS used in GINA

Notice that the control segment is not included; the e�ects on the range error of inac-

curate corrections in the satellite position and time uploaded to the satellite are captured

in the \Satellite clock" and \Ephemeris errors" modules. During peacetime operations the

reliability of the control segment uplink can be assumed to be very high; of course, the well-

known location of Falcon Air Force Base makes this link the most vulnerable component

in the entire system during war-time operations. This fact is ignored in this analysis. Also

notice that there are no interconnections in the network, there being a single undivided

147

Page 146: The Generalized Information Network Analysis Methodology for

path from the source to the sink. By de�nition, this would suggest that the system be

classi�ed as collaborative. This disagrees with the truth that GPS, as a navigation sys-

tem, is most de�nitely symbiotic since at least 4 satellites are always needed to obtain a

navigation solution. However, recall that the goal is to quantify the capabilities of GPS

in delivering range signals. The navigation capabilities are calculated by combining the

ranging capabilities with the achievable PDOP statistics. For simple ranging calculations,

and assuming all range errors have the same variance, the system can be modeled as a

collaborative architecture, provided the PDOP is accounted for correctly.

5.3.2 The Constellation Module- Visibility and PDOP

The Constellation module provides three critical inputs to the model:

1. The statistics for the Visibility, de�ned as the number of satellites in view of each

ground location. This is important to determine the availability of the system, and

also the amount of multiple access interference that a GPS receiver experiences on

each satellite tracking channel.

2. The statistics for the PDOP corresponding to those satellites in view of each ground

location. This is important to relate the range errors to position errors.

3. The statistics for the average elevation angle of all satellites in view of each ground

location. This is important to calculate the likely signal attenuation from free space

loss.

These statistical distributions were calculated by simulating the orbits of the full constel-

lation, propagating the satellites over a whole day. For each hour, the elevation and azimuth

angles to each satellite, from every ground location within �60o latitude were recorded. Us-ing this data, the visibility, average elevation angle, and PDOP can be calculated for each

ground location at each time interval. Figures 5-6 and 5-7 show instantaneous snapshots of

the visibility and the PDOP for each ground location across the Earth.

Calculating the histogram of this data using all ground locations and all times gives

the probability distribution functions for the visibility, elevation angle and PDOP. These

distribution functions are shown in Figures 5-8, 5-9, 5-10, and are used as inputs to the

generalized analysis. Note that the �gures also show the distribution functions for degraded

GPS constellations, with only 22, 20 or 18 satellites operating. These curves were created

the using the same procedure as for the full GPS-24 constellation, with the exception that

random selections of satellites were removed from the orbit simulation prior to calculating

the histograms.

148

Page 147: The Generalized Information Network Analysis Methodology for

−150 −100 −50 0 50 100 150−60

−40

−20

0

20

40

60

Longitude

Latit

ude

2

3

4

5

6

7

8

9

10

11

12

13

Figure 5-6: A snapshot of the visibility of the GPS-24 constellation

5.3.3 Signal structure

The signal structure a�ects the capabilities through the magnitude of the multiple access

interference present in the output of the receiver decorrelators. To a lesser extent, the

signal characteristics determine the maximum timing accuracy that can be achieved with

the phase-locked loops.

The GPS P (or Y) code is a long pseudorandom noise code, with a period of 37 weeks,

reset at the beginning of every week. The spectral density of this code is a continuous

sinc-squared function,

GP(f) =1

fc

sin2 (�f=fc)

(�f=fc)2 (5.19)

where fc = 10:23MHz is the chipping rate, and the signal is �ltered to include only the

main lobe.

The C/A code is a much shorter Gold code, with a period of only 1023 chips. At the

chipping rate of 1.023MHz, the code repeats every 1ms. The power spectral density therefore

has discrete line components spaced at the code epoch rate (1KHz). The spectrum is the

product of the sinc2 corresponding to a single square chip of length (1=1:023�106) seconds,

and the characteristic spectrum of the pseudorandom noise sequence that constitutes the

Gold code. The spectrum for an ideal maximal length sequence sk of period P is a picket-

fence of delta functions [36] each of area (P + 1), with a low k = 0 component that has an

149

Page 148: The Generalized Information Network Analysis Methodology for

−150 −100 −50 0 50 100 150−60

−40

−20

0

20

40

60

Longitude

Latit

ude

1.5

2

2.5

3

3.5

4

4.5

5

5.5

6

Figure 5-7: A snapshot of the PDOP for the GPS-24 constellation

area 1=P 2. The power spectral density of the GPS C/A can thus be approximated as,

GC/A(f) =1

P 2�(f) +

1Xk=�1

(P + 1)

P 2

sin2 (�f=fc)

(�f=fc)2 �

�f � kfc

P

�(5.20)

In general the short Gold code only approximates an ideal maximal length sequence

and its spectral lines are not of equal area; variations in the line component amplitudes

of �3dB compared to the at spectrum are common. These variations are neglected in

this analysis, and spectra described by Eqs. 5.19 and 5.20 are created in the \SV clock"

module, propagated through the network and ultimately used to calculate the multiple

access interference.

5.3.4 Ephemeris and Satellite clock errors

The GPS control segment generates predicted satellite ephemerides and clock corrections

that are uploaded daily to the satellites. These predictions are then included in the broad-

cast navigation message and is eventually used by GPS receivers to estimate satellite coor-

dinates and clock corrections.

Ephemeris errors occur when the satellite does not broadcast the correct satellite coor-

dinates. The errors are typically small, and increase slowly with time from the last control

station upload. The errors in the prediction of each component of the satellite position have

150

Page 149: The Generalized Information Network Analysis Methodology for

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.91

24

68

1012

Vis

ibili

ty

Cumulative Probability

GP

S-2

4G

PS

-22

GP

S-2

0G

PS

-18

Figure

5-8:TheprobabilitydistributionfunctionforthevisibilityoftheGPSconstel-

lationbetween�60

olatitude

beenmeasuredsince

June1992bytheJetPropulsionLaboratory(JPL),usingaglobally

distributednetwork

of20-40precision

P-codereceivers[51].Thedatafrom

each

stationwas

usedto

compute

precise

orbitsolutionsthat

canbecompared

tothebroadcast

ephem

eris.

Ahistogram

oftherm

serrors

foreach

positioncomponentfortheperiodJuly

4,1993

through

October

23,

1993are

show

nin

Figure

5-11.They-axisherecorrespondsto

the

number

ofseparateobservations(days)of

each

rmserrorbetweenthebroadcastephem

eris

andtheprecise

orbitsolution.

Theimportance

oftheseerrorsto

thecapabilitiesofGPSarein

theirimpacton

ranging

errors.In

general,therelative

importance

ofeach

componentisdi�erentdueto

thedi�erent

projectionsof

theerrorsinto

therange

direction.Thisisquanti�ed

bytheA��

� Rterm

in

thefundam

entalerrorequation,andcanbeestimated

from

,

�2 u=kr�2 r+ky;a

� �2 y+�2 a

�(5.21)

where�uistheuserequivalentrange

error;�r,�yand�aaretheephem

eriserrors

inthe

radial,cross-track

andalong-track

directionsrespectively;krandky;ameasuretheimpactof

geom

etry.Itisshow

ninreference

[49]thatkr�0:959

andky;a�0:0204

aregoodestimates

foranaveragegeom

etry.

Withthisequationandthestandardjoint-probabilityintegral,thedataof

Figure

5-11

canbeusedto

calculate

thestatisticsoftherange

errordueto

ephem

eriserrors.Theresult

isshow

nin

Figure

5-12.

151

Page 150: The Generalized Information Network Analysis Methodology for

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.91

02

46

810

12

PD

OP

Cumulative Probability

GP

S-2

4G

PS

-22

GP

S-2

0G

PS

-18

Figure5-9:TheprobabilitydistributionfunctionforthePDOPoftheGPSconstellation

between�60

olatitude

Unmodeled

satelliteclock

errorshaveadirectimpacton

therange

error.In

fact,dueto

thefarfrom

pedestrianspeedof

light,even

verysm

alltimingerrors

canlead

tosigni�cant

range

errors.Thesesatelliteclock

errors

canbeseen

inthe�B

term

inthefundam

ental

errorequationanda�ectboththeSPSandPPSusers

equally.Theatom

icclocksusedby

GPSareextrem

elystable,butmeasurable

errors

stilloccur.

TheJP

Ltrackingnetwork

has

compared

thebroadcast

clock

solution

withprecise

solutions.

Zumberge

[51]

presents

asamplehistogram

fortheclock

errors

usingdatacollectedforeach

satellitefrom

July

4,

1993

toOctober

22,1993.Theseresultswereusedto

construct

theprobabilitydistribution

functionforrm

ssatelliteclock

errors,show

ninFigure5-12.Notethat

thee�ectsofSAwere

not

modeled,since

they

representarti�cial

degradation;theanalysisismeantto

quantify

theachievablecapabilities,notpurposefullydegraded

capabilities.

5.3.5

Spaceloss

Referringto

theGPSnetworkdiagram

ofFigure

5-5,

thenextfunctional

moduleaccounts

forfree-spaceloss.Thesignaltransm

ittedbytheGPSsatellites

isattenuated

byther2

spaceloss,whereristherange

tothesatellites,whichisafunctionof

theelevationangle.

Thestatistics

ofthisattenuationcanbecalculated,once

againusingthejoint-probability

equations,andtheelevationanglestatistics

ofFigure

5-10.Thesignal

pow

erisimportant

forSNR

calculationsin

theGPSreceiver

module

todetermineacquisitionandtracking

constraints,andalsoreceiver

trackingerrors. 152

Page 151: The Generalized Information Network Analysis Methodology for

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.91

020

4060

80

Ele

vatio

n an

gle

(deg

rees

)

Cumulative Probability

GP

S-2

4G

PS

-22

GP

S-2

0G

PS

-18

Figure

5-10:Theprobabilitydistributionfunctionfortheaverageelevationangle

of

GPSsatellitesin

viewofgroundlocationsbetween�60o

latitude

5.3.6 Ionospheric and Tropospheric errors

The presence of free electrons in the ionosphere means that GPS signals do not travel at the vacuum speed of light; the group velocity is delayed, while the phase velocity is advanced by the same amount. The velocity change is, to first order, proportional to the inverse of the carrier frequency squared [49]. SPS receivers correct for the ionospheric group delay using a simple internal diurnal model of the delays. The parameters of the model are updated using information in the navigation message, although the accuracy of these updates is not yet clearly established [49]. For the generalized analysis, the inaccuracy of the ionospheric model was assumed to give rise to rms range errors of 4 meters, a value quoted by Parkinson [49] as reasonable. PPS users can take advantage of dual frequency measurements, and partially cancel the effects of the ionosphere. For the generalized analysis, this technique was assumed to give rms ranging errors of 1 meter, again from Parkinson's estimates [49].

The troposphere also has an effect on the ranging accuracy. Variations in temperature, humidity and pressure affect the speed of light through the atmosphere. Both the code and carrier wave experience the same delays. Simple models for the impact of the troposphere are assumed accurate to within about 0.5 meter [49].

Figure 5-11: Comparison of the GPS broadcast ephemeris with the precise orbital solution for the period July 4, 1993 through October 22, 1993 [51] (number of days observed versus error in meters, for radial, cross-track and along-track errors)

5.3.7 Interferers

The interferers module simply accounts for the multiple satellite signals that enter each receiver tracking channel. The number of interfering sources is equal to the visibility of the constellation, given by the probability distribution function of Figure 5-8. The isolation characteristics of the system will determine the level of interference power induced by these interfering sources.

5.3.8 GPS receiver model

The user clock module in the network diagram is the oscillator that sets the update rate at which navigation solutions are desired, effectively determining the tracking bandwidth of each receiver channel. Its presence in the network diagram external to the receiver module is due only to format restrictions that Matlab/Simulink placed upon the model.

The GPS receiver performs many functions: (1) receiving the GPS satellite signals, filtering out-of-band noise, and downconversion to an intermediate frequency; (2) splitting the signal-plus-noise into multiple channels for signal-processing of multiple satellites simultaneously; (3) generating the reference PRN codes of the signals; (4) acquiring the satellite signals; (5) tracking the code and carrier of the satellite signals; (6) demodulating the navigation message from the satellite signals; (7) extracting code phase (pseudorange)

Figure 5-12: The probability distribution function for range errors attributable to ephemeris errors and unmodeled satellite clock errors, constructed from data in [51] (cumulative probability versus range error in meters, for the ephemeris and SV clock contributions)

measurements from the PRN code of the signals; (8) extracting carrier frequency (pseudorange rate) and carrier-phase measurements from the carrier of the signals; (9) estimating a relationship to global GPS system time; and (10) calculating navigation solutions.

For the generalized analysis, the most important functions to model are the receiver front-end that sets the SNR of signals entering the tracking loops, the acquisition and tracking of the code and carrier, the demodulation of the navigation data, and the extraction of code phase for pseudorange estimation. Multipath effects are not included in the modeling because proper placement and selection of an antenna should mitigate most multipath problems.

Starting at the receiver front-end, the satellite signals enter a zero gain antenna, and are filtered to 10.23 MHz before downconversion. The effective system noise temperature is assumed to be 290 K.

After passing through the intermediate frequency filters, the signals-plus-noise enter the signal-processing components. Here a bank of correlators are used to acquire and lock on to individual satellites. The correlators multiply the received signal with a delayed replica of each PRN code, shifting the reference phase until a peak correlation output is obtained. The code phase at the peak correlation identifies the signal timing; the despread signal can then be demodulated to obtain the navigation data.

For totally orthogonal signals, there is no cross-correlation between two different PRN codes. The GPS PRN codes are close to orthogonal, but there is a finite cross-correlation


signal that appears in the output of the correlators, acting as an interference term. This
interference degrades the effective SNR and can cause a false lock in the receiver code
search and acquisition operation. This can be made more severe if the user receiver antenna
happens to have more gain in the direction of the interfering satellite, or if the free space
loss is less for that satellite. Of course, if there are M satellites in view, there are (M − 1)
interfering signals for each channel. The amount of multiple access interference must be
estimated to quantify the system capabilities.

In general, when a PRN code with power spectral density G_{s1} is the input to a correlator tuned to a different PRN code with spectrum G_{s2}, the normalized output power spectrum is the convolution of the two input spectra,

G_{ma}(f) = \int_{-\infty}^{\infty} G_{s1}(\nu)\, G_{s2}(\nu - f)\, d\nu    (5.22)

The correlator output is despread and filtered to a very narrow band, equal to either the tracking bandwidth (< 10 Hz) or the data bandwidth (50 Hz). As a result, only the part of the interference spectrum near f = 0, G_{ma}(0), is passed through the filtering process. For the continuous spectra of the GPS P-code, this equation accurately predicts the multiple access interference from a single interfering satellite, with G_{s1} = G_{s2} = G_P from Eqn. 5.19. The value of this integral is 2/(3 f_c), which is negligibly small given the high chip-rate of 10.23 MHz.

For the discrete spectrum of the C/A code, things are a little (lot) more complicated.
The presence of line components means that the cross-correlation property is dependent
significantly on both Doppler offset and code offset. Summarizing, the correlator involves
the multiplication of two PRN codes, and for the C/A code, these are two short-period
balanced Gold codes. From the cycle-and-add property of these codes [36], the product is
itself a Gold code. Depending on the code offset, this new code will in general be unbalanced
in that it has unequal numbers of ones and zeros. The time averaged output in the absence
of any Doppler offset is therefore the time average of an unbalanced Gold code of length
1023 bits, which can result in worst-case cross-correlation side lobes of approximately -24dB.
Fortunately, there is an averaging effect in that not all signals will produce the worst case
cross-correlation at the same time. The result is that the cross-correlation output averages
to the random maximal length sequence result of 1/P² = -60dB (at f = 0), where P is the
code period. This is very similar to the result that would have been given by Eqn. 5.22.
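The numbers quoted above are easy to reproduce. The following Python fragment evaluates the average C/A-code cross-correlation floor 1/P² and, for comparison, the value of G_{ma}(0) = 2/(3 f_c) quoted for the P-code; both calculations use only quantities given in the text.

import math

P = 1023                                     # C/A-code period in chips
avg_floor_db = 10 * math.log10(1.0 / P**2)   # random-sequence average, ~ -60.2 dB
worst_case_db = -24.0                        # worst-case zero-Doppler sidelobe from the text
p_code_gma0 = 2.0 / (3 * 10.23e6)            # Gma(0) for the P-code, ~6.5e-8 (1/Hz)

print(f"C/A average cross-correlation floor : {avg_floor_db:.1f} dB")
print(f"C/A worst-case sidelobe (no Doppler): {worst_case_db:.1f} dB")
print(f"P-code Gma(0)                       : {p_code_gma0:.2e} per Hz")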

In the presence of Doppler offset between the codes, there exists the possibility that
different spectral line components of the product may fold into the pass band of the nar-
rowband filters. Recall that the PRN line components at frequencies away from f = 0 are

much higher (see Eqn. 5.20). The cross-correlation output in the presence of a Doppler


offset can therefore be very much worse; as high as -21.6dB. However, if there is a Doppler
offset, then the delay difference is by definition changing between the codes in time. Thus

the worst case cross-correlation sidelobes with Doppler are temporary in nature (of order

seconds). This may mean that one particular satellite cannot be acquired or tracked for a

short period, but if there are greater than 4 satellites in view this may not be important.

For the purposes of calculating the capabilities of GPS, the adverse but temporary impacts

of Doppler upon the cross-correlation output can be neglected.

Consequently, for the generalized analysis, the average cross-correlation output Gma(0)

for a single pair of satellites is estimated using Eqn. 5.22 for both C/A and P-codes. To

account for all satellites in view, the normalized multiple access interference is calculated as

a probability distribution function from the product of Gma(0) with the satellite visibility

distribution function of Figure 5-8 (after subtracting 1). The total multiple access inter-

ference power for a given channel is then calculated by combining this derived distribution

function with the statistics of the input signal power.

The resulting interference power distribution is added to the statistics of the thermal noise
density, to obtain the total effective noise density. This is combined with the probability
distribution of the signal power to obtain the statistics of the effective signal to noise density
ratio (C/N0).

The (C/N0) is used to check that individual C/A-code signals can be acquired, tracked
and demodulated. Acquisition involves searching through all possible code offsets, usually
in half-chip increments, to find the maximum correlation peak. The criterion for successful
acquisition is a SNR of 22dB, based on a noise bandwidth equivalent to the desired search
rate in chips per second. The tracking constraint is more stringent, requiring a SNR >25dB.
However, the effective noise bandwidth is now only a few Hertz, equivalent to the required
navigation update rate. Finally the data demodulation constraint checks for a SNR >10dB
over a noise bandwidth of 50Hz, to satisfactorily interpret the navigation message. This is
particularly important, since the navigation message contains the information required to
search and acquire the P-code. Any part of the distribution function that violates these
constraints is removed from consideration and will eventually be manifested as a loss of
availability.
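A sketch of this constraint screening is given below. The thresholds and noise bandwidth conventions follow the text; the particular C/N0 value and code search rate are illustrative assumptions only, since in the model these quantities are drawn from the derived probability distributions.

import math

def receiver_constraints(cn0_dbhz, search_rate_cps=50.0, track_bw_hz=2.0, data_bw_hz=50.0):
    """Check the acquisition, tracking and data-demodulation SNR constraints
    for a given carrier-to-noise density ratio (dB-Hz)."""
    snr = lambda bw_hz: cn0_dbhz - 10 * math.log10(bw_hz)  # SNR in the stated noise bandwidth
    return {
        "acquisition (>22 dB)": snr(search_rate_cps) > 22.0,
        "tracking (>25 dB)":    snr(track_bw_hz) > 25.0,
        "data demod (>10 dB)":  snr(data_bw_hz) > 10.0,
    }

print(receiver_constraints(cn0_dbhz=40.0))  # all three satisfied at this illustrative C/N0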

The receiver tracking error statistics can now be calculated. For a noncoherent Delay
Lock Loop, as is used in most GPS receivers, the rms noise-induced delay error (in meters)
is given by,

\sigma_{rx}^2 \approx \frac{c^2 B_t}{2 f_c^2 (C/N_0)}    (5.23)

where B_t is the tracking bandwidth. The receiver range error can then be root sum squared


with all the other range errors, to obtain the statistics of the total user equivalent range

error.
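As a point of reference, the following sketch evaluates Eqn. 5.23 for a nominal set of inputs; the chipping rate, loop bandwidth and C/N0 chosen here are assumptions for illustration, whereas the model propagates the full C/N0 distribution derived above.

import math

C_LIGHT = 2.998e8  # m/s

def dll_sigma_m(cn0_dbhz, chip_rate_hz, track_bw_hz):
    """RMS noncoherent DLL tracking error in meters, per Eqn. 5.23."""
    cn0 = 10 ** (cn0_dbhz / 10.0)  # linear, Hz
    var = (C_LIGHT**2 * track_bw_hz) / (2.0 * chip_rate_hz**2 * cn0)
    return math.sqrt(var)

# P-code (10.23 Mchip/s) with a 2 Hz loop at C/N0 = 40 dB-Hz -> ~0.3 m rms
print(dll_sigma_m(40.0, 10.23e6, 2.0))

Even at a modest C/N0 the noise-induced tracking error is only a few tenths of a meter, which is consistent with the insensitivity to update rate noted in the next section.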

Finally, the accuracy of the navigation solutions can be calculated from combining the
derived user equivalent range error statistics with the probability distribution of the PDOP,
shown earlier in Figure 5-9. The resulting distributions define the capability characteristics
of GPS for the military navigation mission, since they quantify the probability of achieving
given navigation accuracies for different values of the tracking bandwidth. The results of
this quantitative modeling are discussed in the next section.

5.4 GINA Capabilities of GPS for the Navigation Mission

Figures 5-13 and 5-14 show the capability characteristics for the full GPS-24 constellation

for the PPS and the SPS navigation missions. The Integrity axis is the positioning accuracy

(m) and the Availability is the probability of achieving this level of Integrity. The Rate

measure is the tracking bandwidth, and is related to the user dynamics. The number of

users is not relevant for GPS since it is a broadcast service, and so was set to zero.

The form of the curves for both services is similar. The differences between the PPS
and the SPS accuracies lie primarily in the ionospheric delay errors (that were modeled as
a constant delay) and in the receiver errors. The Capability characteristics for the SPS tend
to round off gradually at the higher availabilities, whereas the PPS curve has a steeper and
more sudden transition. There is little change in the capabilities with update rate, since
this only impacts the receiver tracking errors which are generally small. The achievable
SEP (50%) and 2drms (90%) accuracies are summarized in Table 5.2.

Table 5.2: Calculated PPS and SPS accuracies in terms of SEP (50%) and 2drms (90%) navigation errors.

Criteria     SPS (no SA)   PPS
SEP (m)      11            8
2drms (m)    20            13

Comparing these calculated results with the measured data given in Table 5.1 indicates

an agreement within 3% for the SEP supported by the PPS. This excellent result helps to

validate the GINA methodology.

5.5 GINA Performance of GPS

The benefit of the generalized analysis is that it allows the effects of architectural changes
to be modeled very easily. For GPS it is important to consider the capabilities of the system
after suffering several satellite failures.

Figure 5-13: The Capability Characteristics of GPS-24; PPS (Availability versus Integrity, for Rate = 2 and Rate = 10)

Figure 5-14: The Capability Characteristics of GPS-24; SPS; SA off (Availability versus Integrity, for Rate = 2 and Rate = 10)

The probability of failures occurring is a function of the satellite reliability. For this

analysis, the dominant satellite failure mode is assumed to be failures of the atomic clocks.

This is based on the fact that these are critical system components, and of the 9 GPS

satellites that have so far failed, 5 of those were attributed to the clock [49]. To provide

redundancy, each Block II/IIA Total Navigation Payload (TNP) has one cesium and two

rubidium atomic standards. The Cs GPS Block II clocks were designed to have a reliability

of 0.663 over 5.5 years, while the Rb clocks were required to have a reliability of 0.763 for

the same lifetime. This translates into a very low effective failure rate for the TNP of 0.0035

failures per year. Such a low value leads to a very low probability that more than two or

three satellites will fail over any reasonable system lifetime. Figures 5-15 and 5-16 show

the impact of two, four or six satellite failures on the Capability characteristics for the PPS

and the SPS.

These results show that the degradation of the capabilities with satellite failures is

measurable but not very large. A loss of two satellites hardly changes the SEP at all; losing

four satellites increases the SEP by 2-3 meters and the 2drms accuracy by about 7 meters;

six satellite failures degrades the SEP less than 4 meters, but can result in 10-12 meters

of additional error at the 90th percentile. In general, the losses due to satellite failures are

greater when higher availabilities are desired. This nonlinear difference in sensitivity means

that the performance of the system (probability of satisfying requirements) will be a strong

function of the availability requirement.

It can be seen from Figure 5-15 that even after six satellite failures, the achievable SEP

for the PPS exceeds the 16 meter requirement discussed in Section 5.1.3. The likelihood

of failing the requirements is therefore vanishingly small, requiring more than six satellites

be inoperative all at once. Note that if six satellites were to fail, there would be ground

locations where the navigation accuracy would be compromised a great deal and probably

much worse than the 16 m requirement. However, the SEP requirement only demands that

50% of the observations are compliant with this accuracy constraint. This is the reason for

the apparent survivability displayed by the system, implying that GPS has a generalized

performance of effectively unity.

It is insightful to consider a requirement imposed at a higher availability. The military

users of GPS have gotten very used to accurate navigation solutions being readily available

(greater than 90%). Should satellite failures cause the capability to degrade so much that

high accuracy solutions become less than 90% available, it is certain that pressure would be

placed on the DoD to take compensatory action. Therefore, for academic interest, consider

a requirement that the system satisfy the 16 meter accuracy at 90% availability. Now, two

satellite failures constitute a failure of the mission since the Capability characteristic shown
in Figure 5-15 has an Integrity of approximately 18m at 90%. The probability of losing two satellites as a

Figure 5-15: The Capability Characteristics of the PPS with 2, 4 or 6 satellite failures (Availability versus Integrity (m), for GPS-22, GPS-20 and GPS-18)

Figure 5-16: The Capability Characteristics of the SPS with 2, 4 or 6 satellite failures (Availability versus Integrity (m), for GPS-22, GPS-20 and GPS-18)

function of time, given the failure rates discussed earlier, is shown in Figure 5-17. This chart
suggests there is a 20% probability of failing system requirements after 10 years, representing

a generalized performance of 80%.
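The 20% figure can be approximated with a simple Poisson model, sketched below. The sketch treats clock failures as independent events at the quoted rate of 0.0035 per satellite per year and assumes all 24 satellites are exposed for the whole interval, whereas Figure 5-17 follows the actual launch manifest; it nevertheless reproduces the order of magnitude.

import math

RATE = 0.0035   # effective TNP failure rate, failures per satellite-year (from the text)
N_SATS = 24

def p_at_least_two(years):
    """Probability of two or more failures across the constellation."""
    lam = RATE * N_SATS * years
    return 1.0 - math.exp(-lam) * (1.0 + lam)

for t in (2, 5, 10):
    print(t, f"{p_at_least_two(t):.2f}")
# ~0.21 after 10 years, consistent with the ~20% quoted above.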

Figure 5-17: The Performance of GPS-24 SPS in satisfying 2drms (90%) navigation accuracy; Satellite failure rate = 0.0035 per year (probability of no failures, one failure, and at least two failures, 1989-1999)

5.6 The CPF Metric for GPS

Given the Capability and Performance results described in the previous sections, the Cost

per Function can be calculated, where the function is to provide users with satisfactory

navigation solutions. There is a complication here, since GPS is a broadcast system and its

capabilities do not depend on the number of user receivers that utilize the service. However,

the GPS requirements clearly state that navigation services must be provided worldwide.

In addition, it was shown earlier that GPS actually addresses a market to deliver range

signals to ground locations. The notion of the \user" can therefore be taken to be ground

locations between �60o latitude. The CPF metric is then the Cost per Navigated km2 per

day, where the \Navigated" construct captures the navigation requirements of the users.

Since the generalized analysis of GPS is primarily presented here for validation purposes,

the CPF metric has very little implication, but is included for completeness.

The GPS system cost can be estimated from published sources or from basic cost models.

Using data from the U.S. General Accounting Office [52] that describes the fixed-price

multi-year procurement contract between the US Air Force and Rockwell International,

the average unit satellite cost can be estimated at approximately FY95$53 million. This

agrees reasonably well with a calculated estimate of FY95$66 million, based on a TFU cost


of $77K per kg of dry mass and a learning curve discount factor of 15% realized during

the production run of 29 satellites. The nonrecurring costs for the development of the

Block II/IIA satellites can be assumed at seven times the TFU, based on trends in military

satellite programs [3]. Each GPS satellite was launched on a dedicated Delta II, at a cost

of approximately FY95$50 million. The launch manifest is well published [49], and so the

baseline cost profile including the effects of development, production, and launch is easily

obtained. For this analysis, operations costs have been omitted. The constant year costs

must then be discounted back to their net value at the contractual award date, since this is

when the government had to begin paying for the system. The discount rate used here is

the 10% commonly used for government programs. Finally, for convenience, the cost profile
is multiplied by estimated inflation rates to give values in FY98$. The resulting baseline
cost profile cs for a 10 year period is shown in Table 5.3.

Table 5.3: System cost profile for GPS

Year    cs (FY98$M)   vf (FY98$M)   cL (FY98$M)
1987    370.34                      370.34
1988    336.67                      336.67
1989    982.14                      982.14
1990    491.97        0.24          492.21
1991    85.24         0.62          85.86
1992    448.45        0.88          449.33
1993    389.63        1.05          390.68
1994    202.50        1.15          203.65
1995                  1.21          1.21
1996                  1.22          1.22
1997                  1.20          1.20
1998                  1.17          1.17
1999                  1.11          1.11
Total                               3,317

The failure compensation costs are calculated assuming that a compensatory action is

needed in the event of a failure to satisfy the (artificial) requirement for a 2drms position

accuracy of 16 meters. Since this can occur when two or more satellites fail, the form of this

action involves the launch and possibly construction of any replacement satellites. It has

been assumed that there are 6 ground spares already constructed and available for launch.

The probabilities of losing different numbers of satellites for each year are then multiplied
by the corresponding cost of the replacements to give the expected failure compensation
cost profile vf, shown in Table 5.3.

With this costing information, and noting that the Earth has approximately 4.4 × 10^8
square kilometers between ±60° latitude, the Cost per Navigated km² per day is $0.0021.
This value, at around 1/5 of a cent per square-km per day, seems a very small cost for the
benefits offered by global navigation.
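This figure follows directly from the totals above; the short calculation below reproduces it from the summed cost profile of Table 5.3, the quoted coverage area and the 10 year analysis period.

# Cost per Navigated square-kilometre per day
total_cost_fy98 = 3.317e9   # $, total of the cL column in Table 5.3
area_km2 = 4.4e8            # km^2 between +/-60 degrees latitude
days = 10 * 365             # 10-year analysis period

cpf = total_cost_fy98 / (area_km2 * days)
print(f"${cpf:.4f} per navigated km^2 per day")  # ~$0.0021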


5.7 Improvements by Augmenting GPS

There have been several plans for augmenting the GPS-24 constellation to improve its ca-

pabilities. One possible method for augmenting GPS is to add an equatorial plane of three

Geostationary satellites to improve the visibility at mid latitudes. Quantifying the impact

of this augmentation is easily done within the framework of GINA. All that is required is

to construct new distribution functions for the PDOP and visibility, in the same way as

described earlier. The rest of the analysis is identical. The resulting Capability charac-

teristics for the augmented PPS are shown in Figure 5-18. The most important feature in

these curves is that the capabilities do not degrade as significantly with satellite failures

as compared to the case without the GEO satellites. Even after the GPS constellation

is reduced to only 18 operational satellites, the augmented system can provide PPS users

with navigation accuracies of better than 16m with 90% availability. This means that the

performance of the augmented system in satisfying these requirements is essentially 100%.

Compare this to the previous result of 80% for the baseline GPS-24 constellation that is

shown in Figure 5-17; the benefits of the augmentations are very clear. The question to be
asked is whether the cost of the extra Geostationary satellites outweighs these benefits. Answering
this question is precisely where the CPF metric is very useful, although its application here

is left for future work.

Figure 5-18: The Capability Characteristics of the PPS service for GPS augmented with 3 GEO satellites, after zero, two, four, or six satellite failures (curves for GPS-24+GEOs through GPS-18+GEOs)

5.8 Summary

The Global Positioning System is a remarkable national asset, and for many different rea-
sons, not the least of which are the enormous benefits it has brought to both the military

and the general public. Perhaps equally remarkable however is that, despite its incredi-

ble complexity and sophistication, it actually works, and better than had been originally

planned.

Quantifying the capabilities of GPS is most easily done empirically, buying a receiver for

a few hundred dollars and taking measurements. Predicting the system's capabilities, and its

variations, is a much more difficult task, requiring detailed models for many different system
components. The GINA framework allows the analysis to proceed logically, modeling each
component separately and accounting only for the flow of information to the users. The

capabilities of GPS calculated using GINA and presented in this chapter agree to within

3% of measured capabilities. This would appear in itself remarkable. It must however be

appreciated that some of the inputs to the analysis had been derived from measured data

(the satellite clock and ephemeris errors). This does not take anything away from the value

of the result, especially since the goal was to validate the methodology; using a controlled

input to obtain a controlled result is essential for validation.

With the methodology in place, and at least partially validated by successful application

to GPS, we can now proceed on to the comparative analysis of some proposed broadband

communication systems.


Chapter 6

Comparative Analysis of Proposed

Ka-Band Satellite Systems

Teledesic is the kind of thing that James Bond used to have to stop.

Matt Bacon Associate Editor, Virgin.Net, U.K.

In September 1995, the Federal Communications Commission (FCC) accepted appli-

cations for licenses to construct and launch broadband satellite communications systems

operating in the Ka-band. Fourteen companies filed, six of which proposed systems that
would provide global coverage. Of these, Spaceway [40, 41, 42] (Hughes), CyberStar (SS/L)
[53], and Teledesic [54] (Gates/McCaw) appear to have significant corporate support, mak-
ing them serious competitors in the broadband telecommunications market. In June 1997,
Motorola Inc. joined the fray by filing to build the Celestri system [55]. Since then, in
May 1998, Teledesic LLC and Motorola Inc. announced that they would become partners in
the further development and deployment of the Teledesic system, combining the technical
efforts that had been made toward the "Internet-in-the-Sky" system proposed by Teledesic
and the Celestri broadband satellite system conceived by Motorola. As a result, Motorola
will redirect design and development work from Celestri to the new joint effort. The terms

of this agreement, and the relative maturity of the Celestri design compared to that of the

original Teledesic plan would suggest that the system that is eventually launched will more

resemble the 63 satellite Celestri system than the 288 satellite Teledesic design.

In this chapter, the GINA methodology is used to present a comparative analysis of these

systems, representing different architectural concepts and a range in the level of corporate

investment. Spaceway is an eight satellite GEO system, designed to serve the global market;

Cyberstar is a smaller three satellite GEO system, addressing several regional markets

across the Earth; and Celestri/Teledesic (Celedestric??) will be a very large LEO system


representing an enormous corporate commitment in excess of $9 billion. To place this in

context, the global market value for the providers of broadband services is estimated by

Bellcore to reach $1.5 trillion in the first decade of the new millennium [55]. Assuming that
70-80% of this demand is served by terrestrial fiber and cable leaves an unserved global
demand exceeding $300 billion. This is the goldmine that is driving the rapid development
of these systems, and is the reason that a comparative analysis between the systems is both
academically and financially interesting.

The organization of this chapter is to first introduce the modeled systems, describing

their physical characteristics and their representation within the GINA framework. The

Capability characteristics of each system are then presented, followed by their generalized

performance relative to a set of market-derived requirements. As discussed in Chapter 4,

calculating the CPF for these commercial systems requires an estimate of the achievable

market capture. The simulation program that was developed to perform this step is brie y

described, although the focus remains on the results. After costing each system, the rele-

vant CPF is calculated and used as the primary discriminator in the comparative analysis.

Finally, the Adaptability metrics for each system are introduced, calculated and interpreted.

6.1 The Modeled Systems

Three different systems are modeled: Cyberstar is a reasonably small system, covering

just the well developed market regions; Spaceway is a larger venture, with as many as

eight geostationary satellites covering most of the world's potential markets; and the new

Teledesic/Motorola joint venture, modeled using the parameters of the previously proposed

Celestri system. The decision to go with the Celestri architecture was made partly because

this is the most likely eventuality, and partly because Celestri has more detailed published

specifications. The system parameters for each of the modeled systems were obtained
directly from the FCC filings.

6.1.1 Proposed System Specifications: Cyberstar [53]

CyberStar, proposed by Space Systems Loral, consists of three geostationary satellites, one

each servicing North America, Europe, and Asia. CyberStar plans to provide on-demand,

high-speed compressed digital communications to residential, educational and commercial

users for a variety of applications including telephony and video conferencing. The system

will have a 12 year lifetime, with the first launch planned for 1999 and initial operating

capability in 2000.

The CyberStar satellites will be three-axis stabilized spacecraft, operating at 110° WL,
29.5° EL and 105.5° EL in the Ka frequency band. The uplink and downlink will use a total


of 750 MHz of frequency bandwidth for a total theoretical capacity of 4.9 Gb/s per satellite.

The CyberStar satellite design employs polarization and frequency reuse to achieve a total

effective bandwidth of 6.75 GHz. Each regional satellite will provide coverage through 27
regional beams, with each beam size adjusted to compensate for the high rain attenuation
areas. In addition, it is planned that each satellite have intersatellite links to carry traffic

destined between regions. The satellites will have on-board processing and switching.

Each of the 27 regional beams on the satellites will have two transponder channels of 125

MHz bandwidth in two orthogonal polarizations. Each transponder will use traveling wave

tube amplifiers (TWTAs) with an output power of 60 Watts. The power subsystem has been
sized to allow simultaneous operation of all 54 transponders at saturation for a minimum
of 12 years with 100% eclipse capability. The gain of the downlink beams varies between

39dB and 42dB, depending on their target location. Each downlink beam will operate a

single Time Division Multiplexed (TDM) carrier at a data rate of 92 Mb/s. The uplink will

employ FDM/TDMA (frequency division multiplexing/time division multiple access) at a

nominal data rate of 384 Kb/s. Both uplink and downlink will employ frequency reuse using

spatial separation and orthogonal polarizations. The goal for the end-to-end bit error rate

for communication through CyberStar was to be 10�10. The system parameters, extracted

from the information contained in the FCC �ling, are summarized in Table 6.1.

6.1.2 Proposed System Specifications: Spaceway [40, 41, 42]

Hughes Communication's Spaceway is a network of eight geostationary satellites to provide

two-way voice, data, image, video and video telephony communications to business and

individual users. Spaceway has two satellites in each of four orbital slots, servicing North

America, Europe/Africa, South America and Asia/Pacific. The system will have a 15 year
lifetime, and the first launch was originally planned for 1998, with initial operating capability

in 1999. This schedule has probably now slipped a year or so, although for the purposes of

this analysis it has been assumed to have been maintained.

The Spaceway satellites will be three-axis stabilized spacecraft based on the HS 702

series. The orbital locations servicing the four regions are 101° WL (North America), 25°
EL (Europe/Africa), 101° WL (S. America) and 110° EL (Pacific Rim). From each of these

positions, two satellites will each use 500 MHz of bandwidth in the Ka band for uplink

and downlink via 48 narrow spot beams, each with a gain of 46.5 dB. The satellites use

spatial separation and orthogonal polarizations on the spot beams to achieve a twelve fold

frequency reuse, resulting in a total effective bandwidth of 6 GHz and a theoretical data

throughput of 4.6 Gb/s. There are also options for Ku-band wide-area beams to serve

low density markets. The Spaceway system will use on-board digital signal processing and


Table 6.1: System parameters for Cyberstar [53]

Miscellaneous System Parameters
  Mission                        Broadband communications
  Market                         Regional residential users
  Number of satellites           3
  Orbit                          GEO (110° WL, 29.5° EL, 105.5° EL)
Uplink parameters
  Multiple access scheme         Spot beams + TDM/FDMA
  Modulation                     QPSK, 1/2-rate Viterbi error correction
  Frequency (GHz)                30
  USAT EIRP (dBW)                44.5
  Number of uplink spot beams    27 × 2
  Satellite antenna gain (dB)    42 (average)
  System temperature (dBK)       27.8
  Losses (dB)                    1.5
Downlink parameters
  Multiple access scheme         Spot beams + TDMA
  Modulation                     QPSK, 1/2-rate Viterbi error correction
  Frequency (GHz)                20
  Number of downlink spot beams  27
  Channels per beam              2
  Channel bandwidth (MHz)        125
  Channel capacity (Mb/s)        92
  Satellite EIRP (dB)            59.5
  USAT antenna gain (dB)         41
  System temperature (dBK)       24.4
  Losses (dB)                    1.5

switching, and may employ intersatellite links for inter-regional traffic routing.

Each of the 48 high-powered spot beams has a single 125 MHz transponder channel

in one of two opposing circular polarizations. The transponder channels use solid state

power amplifiers (SSPA) to provide an output RF power of 20 W. The power subsystem

has been sized to allow simultaneous operation of all 48 transponders at saturation for the

satellite lifetime with 100% eclipse capability. Each downlink beam will operate a single

TDM carrier at a data rate of 92 Mb/s. The uplink will receive FDM/TDMA carriers at a

nominal burst data rate of 384 Kb/s. The end-to-end bit error rate for data communication

through Spaceway is designed to be 10^-10. The system parameters, extracted from the
information contained in the FCC filing, are summarized in Table 6.2.


Table 6.2: System parameters for Spaceway [41, 42]

Miscellaneous System Parameters
  Mission                        Broadband communications
  Market                         Global residential users
  Number of satellites           8
  Orbit                          GEO (4 @ 101° WL, 2 @ 25° EL, 2 @ 110° EL)
Uplink parameters
  Multiple access scheme         Spot beams + TDM/FDMA
  Modulation                     QPSK, 1/2-rate Viterbi error correction
  Frequency (GHz)                30
  USAT EIRP (dBW)                44.5
  Number of uplink spot beams    48
  Satellite antenna gain (dB)    46.5
  System temperature (dBK)       27.6
  Losses (dB)                    1.5
Downlink parameters
  Multiple access scheme         Spot beams + TDMA
  Modulation                     QPSK, 1/2-rate Viterbi error correction
  Frequency (GHz)                20
  Number of downlink spot beams  48
  Channels per beam              1
  Channel bandwidth (MHz)        125
  Channel capacity (Mb/s)        92
  Satellite EIRP (dB)            59.5
  USAT antenna gain (dB)         43
  System temperature (dBK)       24.4
  Losses (dB)                    1.5

6.1.3 Proposed System Specifications: Celestri [55]

The Celestri system, originally proposed by Motorola, and perhaps adopted by Teledesic,

comprises 63 LEO satellites and the associated terrestrial gateways and is designed to

deliver a full range of multimedia and real-time connection services (video, data and voice)

to consumers and small businesses anywhere in the world. The first launches are planned

for 2001, with initial operating capability in 2003.

The Celestri constellation design consists of 63 operational satellites, placed in 7 orbital

planes of 9 equally-spaced satellites per plane, with up to 7 on-orbit spares. The orbits are

circular, at an altitude of 1400km and an inclination of 48°. The satellites use 1GHz of

bandwidth in the Ka band in each of the uplink and downlink directions. This bandwidth

is used to serve distributed users with 432 spot beams on the uplink and 260 spot beams for

the downlink. This supports very high levels of frequency reuse and a maximum satellite

throughput exceeding 13Gbit/s. The directional gain of the spot beams is varied across the


satellite footprint to compensate for the additional path loss at lower elevation angles. The

Celestri satellites will use on-board digital signal processing and switching, and will use 6

intersatellite links for network connectivity and traffic routing.

Each of the 260 high-powered downlink beams is allocated a section of the 1GHz band-

width according to demand, up to about 100MHz. The assigned bandwidth can be used to

create a variety of different QPSK-modulated FDM/TDM channel options; three 32MHz

channels supporting a rate of 16.384 Mbit/s for residential users, or a single 97MHz chan-

nel supporting a data rate of 51.84 Mbit/s for small businesses. The service uplinks will

use demand-assigned FDM/TDMA channels at a variety of data rates up to a maximum

of 51.84Mbit/s. Using sophisticated forward error correction coding, the end-to-end bit

error rate for data communication through the Celestri network is designed to be 10^-9.
The system parameters, extracted from the information contained in the FCC filing, are

summarized in Table 6.3.

Table 6.3: System parameters for Celestri [55]

Miscellaneous System Parameters
  Mission                        Broadband communications
  Market                         Global residential users
  Number of satellites           63
  Orbit                          LEO (7 planes at 1400 km, 48°)
Uplink parameters
  Multiple access scheme         Spot beams + FDM/TDMA
  Modulation                     QPSK, forward error correction (6 dB coding gain)
  Frequency (GHz)                30
  USAT EIRP (dBW)                39
  Number of uplink spot beams    432
  Satellite antenna gain (dB)    35.6
  System temperature (dBK)       27.6
  Losses (dB)                    1.5
Downlink parameters
  Multiple access scheme         Spot beams + FDM/TDM
  Modulation                     QPSK, forward error correction (6 dB coding gain)
  Frequency (GHz)                20
  Number of downlink spot beams  260
  Channels per beam              variable
  Channel bandwidth (MHz)        up to 100
  Channel capacity (Mb/s)        51.84
  Satellite EIRP (dB)            44.8
  USAT antenna gain (dB)         35.6
  System temperature (dBK)       28.2
  Losses (dB)                    1.5
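Before moving to the network representations, it is worth noting how parameters such as those in Tables 6.1 to 6.3 enter the capability calculations. The sketch below assembles a clear-sky downlink budget for a single Cyberstar TDM carrier using the values of Table 6.1; the slant range is an assumed nominal GEO value, and rain attenuation, interference and pointing losses, which are treated by separate modules in the GINA model, are omitted here.

import math

EIRP_DBW  = 59.5    # satellite EIRP (Table 6.1)
G_USAT_DB = 41.0    # user terminal antenna gain
T_SYS_DBK = 24.4    # system noise temperature
LOSSES_DB = 1.5
FREQ_HZ   = 20e9
RANGE_M   = 38.6e6  # assumed nominal GEO slant range for a mid-latitude user
RATE_BPS  = 92e6    # TDM carrier rate
BOLTZMANN = -228.6  # dBW/K/Hz

lam = 2.998e8 / FREQ_HZ
fsl_db = 20 * math.log10(4 * math.pi * RANGE_M / lam)
cn0 = EIRP_DBW + G_USAT_DB - T_SYS_DBK - fsl_db - LOSSES_DB - BOLTZMANN
ebn0 = cn0 - 10 * math.log10(RATE_BPS)
print(f"FSL = {fsl_db:.1f} dB, C/N0 = {cn0:.1f} dB-Hz, Eb/N0 = {ebn0:.1f} dB")

The resulting clear-sky margin is what the rain attenuation and interference statistics subsequently erode, producing the availability curves presented later in this chapter.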


6.1.4 Information network representations: Cyberstar and Spaceway

The information networks for collaborative systems are characterized by decoupled, parallel

paths from the sources through a single satellite to the sinks. In the absence of cross-regional

traffic, the geostationary communication systems studied here feature this property and can
be classified as collaborative. Although the designs for Spaceway and Cyberstar feature
intersatellite links, it can be assumed that the fraction of the total uplink that will be routed

through the crosslinks to the other satellites will be small. This is because the largest

number of consumers will use the broadband satellite systems only as a means of connecting

to the terrestrial network, requiring simply a link to the regional gateway. A consequence

of this is that intersatellite links can be ignored and only a single satellite is needed in

the network diagrams for the geostationary systems. Each satellite of the system can be

treated independently, with its own set of capabilities and its own generalized performance.

Later, the CPF metric is calculated to reflect the entire system, since the baseline and

failure compensation costs, and the total market capture, include contributions from all the

satellites.

The information network for Ka-band communications through a single satellite from a

GEO system is shown in Figure 6-1. The same network applies to all eight satellites of the

Spaceway system and all three satellites of the Cyberstar system, with the only differences

being the values of the input parameters for each functional module. Some of these inputs

are detailed in Tables 6.1 and 6.2, and the others are discussed in a later section.

Figure 6-1: Information network for Ka-band communications through Cyberstar or Spaceway satellites

6.1.5 Information network representations: Celestri

The network diagram for Celestri is different since the system relies heavily on intersatellite
links for network connectivity. Essentially, the region instantaneously served by a given
satellite is very small, and so traffic routed through intersatellite links makes up a significant
fraction of the total uplink. The system must be classified as general, since in most cases


more than one satellite is involved in the pathway from source to sink. The dynamic nature

of the constellation means that the routing from source to sink changes over time. It would

seem then that the network model for the system is very complicated. However, there are

some basic model reduction techniques that can be used to simplify the analysis.

Consider a single information exchange. Each satellite uplinks data symbols from a

source node and can route them either to a downlink spot beam, or more likely, to the

rest of the Celestri network via an intersatellite link. The symbols that are routed through

the network pass through an arbitrary number of satellites before they eventually arrive

at a satellite that is in view of the end-user. On receiving the data from an intersatel-

lite link, this satellite streams the symbols to a downlink spot beam. Although the path

through the network can be arbitrarily complicated, for every information exchange there

must be a single uplink satellite and a single downlink satellite that actually addresses the

users. The model therefore only really needs modules representing the uplink process, the

interconnected "network", and the downlink process.

Fortunately, since each satellite has 6 intersatellite links, there are a very large number
of redundant paths by which information may be routed through the network. Con-

sequently, the availability and reliability of the network as an information pipeline can be

assumed to be almost 100%. The capabilities and performance of the system are therefore

dominated by the uplink and downlink characteristics, and only the functional modules in

these two sections of the path need to be modeled to a high degree of fidelity. The resulting

network representation for a path through the Celestri system is shown in Figure 6-2.

6.1.6 The Capability Characteristics

The quality of service variables that define the capabilities of communication systems are:
the isolation or multiple access constraint which enforces that multiple users can access the
system simultaneously, and with minimal interference; the symbol rate which measures the

quantity of the information delivered to the users; the Integrity or bit error rate (BER)

which measures the quality of the information; and the Availability which measures the

likelihood of obtaining service at these values.

For these communication systems, the rate of information transferred through the system

for each O-D pair is a design decision. The integrity of that information, as measured by

the symbol error rate, has a statistical distribution depending on the number of users, and

the rate at which they transmit. The resulting availability of service varies across the range

of operating conditions, defining the Capability characteristics.

The calculation procedures for evaluating the Capability characteristics of a Ka-band

communication satellite system were described in Chapter 4 to demonstrate the methodol-

Figure 6-2: Information network for Ka-band communications through the Celestri system

ogy. The same procedures are used here to calculate the capabilities for Cyberstar, Spaceway

and Celestri. So that this section remains concise, only the components and inputs that

discriminate the three different systems are discussed.

Elevation Angle Statistics

The elevation angle of the line of sight to the satellite from a user on the ground is an

important input to several functional modules. For example, the rain attenuation, a critical

issue for Ka-band systems, is heavily dependent on the elevation angle since this determines

the extent of the atmosphere traversed by a radio signal [43]. Similarly, the free space loss

increases greatly at low elevation angles. The quality of service provided to a user is therefore

very sensitive to the elevation angle from that user to the satellite. Of course, the di�erent

users within the �eld of view of a satellite have very di�erent elevation angles to it, and

the distribution will depend on the relative geometry between the satellite system and the

addressed region.

The elevation angle probability distribution for users accessing the satellites in the Cy-
berstar system is shown in Figure 6-3. These curves were generated by calculating the
elevation angles to each satellite from all ground locations served by that satellite. These
ground locations are defined by the coverage of the particular satellite's spot beams, as
specified in the FCC filings. Cyberstar 2, covering North America, has the worst elevation
angle statistics since the satellite location in orbit is significantly further west (110° WL)


than most of its addressed market.

The equivalent curves for the first four Spaceway satellites are shown in Figure 6-4.
Note that for Spaceway satellites 1, 2 and 4 which serve North America, Europe and the
Pacific Rim, the maximum elevation angle is between 40° and 45°. This is because the

addressed market is located in the mid-latitudes, and must always look south to view a

satellite. Satellite 3, which addresses the South American market has some higher elevation

angles corresponding to coverage nearer the equator.

Figure 6-3: The probability distribution function for the elevation angle to a Cyberstar satellite from the ground locations served by the system (curves for Cyberstar 1, 2 and 3)

Figure 6-4: The probability distribution function for the elevation angle to a Spaceway satellite from the ground locations served by the system (curves for Spaceway 1-4)

For the Celestri constellation, users distributed anywhere between ±60° latitude will


choose from among all those in view, the satellite at the highest elevation. The distribution

function of this maximum elevation angle is shown in Figure 6-5. Notice that the curves

are much smoother, since the satellites can be in view at all elevation angles between zero

and 90°. This figure also shows the elevation angle statistics for the constellation after
different numbers of satellites have failed. This will be used later in the calculation of the

performance, since the degraded elevation statistics after failures may result in a quality of

service that violates the market requirements.

Figure 6-5: The probability distribution function for the elevation angle of the highest Celestri satellite in view of each ground location between ±60° latitude (curves for Celestri-63 down to Celestri-55)

Rain attenuation module

Each uplink and downlink signal passes through the atmosphere, which attenuates the power

(and introduces noise) by varying degrees depending on the local climate, the frequency of

the RF carrier, and the elevation angle of the line of sight. For a given region and a given

elevation angle the probability distribution for the attenuation can be predicted reasonably

well using the familiar Crane rain attenuation model [43]. The model in general predicts

that about 98% of the time there will be little or no attenuation, with increasing levels

of attenuation up to a worst case that occurs with a very small probability (<0.01%).

This worst case attenuation can be very severe at Ka-band, particularly in regions with

characteristically high rainfall, resulting in extremely poor SNR for received signals.

Each of the geostationary satellites being modeled will in general address a different
geographical location with a different climate. Each climate region has an associated rain
rate distribution that is used to calculate the likely signal attenuation. For the geostationary
satellites, an "average" rain rate distribution was assumed for the entire coverage region of


the satellite; a moderate level of rainfall for USA and Europe (ITU rain region D, Temperate

[43]), with higher rain rates in S. America and the Pacific Rim (ITU rain region G, Tropical

[43]). For the LEO system, a worldwide average rain rate distribution (corresponding to the

temperate climate of the largest market segments) gave reasonable results. This leads to an

optimistic estimate for the attenuation in the heavy rain regions around South East Asia.

However, Celestri plans to use dynamic power control to counteract the attenuation here,

and so the estimate may not incur significant errors in terms of the achievable availability.
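The structure of this module can be illustrated with a deliberately simplified stand-in for the Crane model: a power-law specific attenuation in rain rate, multiplied by an elevation-dependent slant path through an assumed rain height. The coefficients and rain height below are rough illustrative values at 20 GHz, not those of the model actually used [43]; the sketch only shows how rain rate and elevation angle combine.

import math

K_COEF, ALPHA = 0.075, 1.10   # assumed power-law coefficients near 20 GHz
RAIN_HEIGHT_KM = 4.0          # assumed effective rain height

def rain_attenuation_db(rain_rate_mm_hr, elev_deg):
    gamma = K_COEF * rain_rate_mm_hr ** ALPHA              # dB/km specific attenuation
    path_km = RAIN_HEIGHT_KM / math.sin(math.radians(elev_deg))  # slant path in rain
    return gamma * path_km

# Heavy rain at a low elevation angle is far more damaging than at a high one.
print(rain_attenuation_db(25.0, 20.0), rain_attenuation_db(25.0, 60.0))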

Capability Characteristics: Cyberstar

The resulting capability characteristics for the three Cyberstar satellites are shown in Fig-
ures 6-6 to 6-8, for two different symbol rates (1/4-T1 and T1) and two different numbers of
users (2500 and 3000).

The complicated structure of these curves is the result of the combinations of several

independent probability distributions. The important trends in these charts can be sum-

marized:

• Cyberstar 1, covering Western Europe, can support availabilities exceeding 98% for
rates up to 1/4-T1 and BER > 10^-10. The availability is poor at T1, dropping below
90% at a BER of only around 10^-4. This is a direct result of the fact that Cyberstar
was not designed to support T1 rates on the uplink. The satellite antenna gain is
some 4-5 dB lower than that of Spaceway, leading to Eb/N0's that are just too small
to support high-integrity interpretation of the symbols.

• Cyberstar 2, covering North America, has capabilities that are considerably worse
than Cyberstar 1, dropping below 95% at a BER of 10^-9 for 1/4-T1 connections.
This is due mostly to the poorer elevation angle statistics, as shown in Figure 6-3.

• Cyberstar 3, covering Asia and the Pacific Rim, has similar capabilities to Cyberstar
1. It has marginally lower availabilities at high BER's due to the higher rain rates
exhibited by this region, but has improved availability at high levels of Integrity
(BER < 10^-10) due to the higher average elevation angles (see Figure 6-3) that limit
the free space loss.

Capability Characteristics: Spaceway

The Capability characteristics of the first four Spaceway satellites are shown in Figures 6-9
to 6-12, for the same two symbol rates (1/4-T1 and T1) and numbers of users (2500 and
3000). Only the first four satellites are shown, since Satellites 5-8 address the same market

Figure 6-6: The Capability Characteristics of Cyberstar1 in addressing the broadband communications market in Western Europe (Availability versus Integrity for 2500 and 3000 users, at 1/4-T1 and T1 rates)

Figure 6-7: The Capability Characteristics of Cyberstar2 in addressing the broadband communications market in North America (Availability versus Integrity for 2500 and 3000 users, at 1/4-T1 and T1 rates)

Figure 6-8: The Capability Characteristics of Cyberstar3 in addressing the broadband communications market in the Pacific Rim (Availability versus Integrity for 2500 and 3000 users, at 1/4-T1 and T1 rates)


segments, providing additional capacity as the market develops. Their capabilities should

therefore be the same as the first four satellites.

The capabilities for Spaceway are in general better than for Cyberstar. The extra gain

of the satellite antenna means that the SNR is higher and the system can support higher

integrities at higher availabilities. Summarizing the behavior across the four Spaceway

satellite regions:

• Spaceway 1 and 2 that address the North American and Western European markets
can provide 2500 users with very high availabilities (over 98%) for both 1/4-T1 and
T1 rates, over all BER's up to 10^-15. For 3000 users there is a drop in the availability
at T1 due to the queueing delays (the transponder capacity has been exceeded).

• Spaceway 3, serving South America, has poor capabilities at T1 rates, due to the
high rain rate and the fact that the elevation angle statistics have a significant tail
at low elevation angles (15% probability of being less than 30°). The combination of
these effects means that high levels of attenuation are likely and so the availability of
service at high rates and high levels of integrity drops to below 90%. This behavior is
somewhat artificial, since the real system will feature dynamic power control for low
elevation angles, and this has not been modeled here.

• Spaceway 4, targeting the Pacific Rim, exhibits behavior similar to Spaceway 1 and 2.
Despite a high rain rate, the angle of elevation varies over only a small range between
40°-50°. The attenuation is therefore limited, and the availability for 2500 users is
over 97% for all BER's of interest.

Capability Characteristics: Celestri

Finally, the Capability characteristics for the Celestri network are shown in Figure 6-13,

for three different symbol rates up to 2.048 Mbit/s, and for 2000 users and 3500 users per

satellite.

The most noticeable feature of the characteristics for Celestri is the apparent insen-

sitivity to both symbol rate and BER. Recall that Celestri is designed to cater to users

demanding high symbol rates (around 2Mbit/s), and so there is a great deal of margin

available for communication at lower rates. Notice however, that even at the low rates, the

availability does not exceed 97%. This is entirely a result of the coverage statistics sup-

ported by Celestri. As shown in Figure 6-5 there is a finite (approximately 3%) probability
that the highest satellite in view is still below 15°, meaning that those ground locations

lie outside of the antenna pattern of the satellite. Celestri therefore provides more even

capabilities over the Earth with no penalties for providing high rates, but the maximum

182

Page 181: The Generalized Information Network Analysis Methodology for

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway1. Number of users =2500

Rate=3.86E+05

Rate=1.544E+06

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway1. Number of users =3000

Rate=3.86E+05

Rate=1.544E+06

Figure 6-9: The Capability Characteristics of Spaceway1 in addressing the broadbandcommunications market in North America

183

Page 182: The Generalized Information Network Analysis Methodology for

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway2. Number of users =2500

Rate=3.86E+05

Rate=1.544E+06

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway2. Number of users =3000

Rate=3.86E+05

Rate=1.544E+06

Figure 6-10: The Capability Characteristics of Spaceway2 in addressing the broadbandcommunications market in Western Europe

184

Page 183: The Generalized Information Network Analysis Methodology for

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway3. Number of users =2500

Rate=3.86E+05

Rate=1.544E+06

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway3. Number of users =3000

Rate=3.86E+05

Rate=1.544E+06

Figure 6-11: The Capability Characteristics of Spaceway3 in addressing the broadbandcommunications market in South America

185

Page 184: The Generalized Information Network Analysis Methodology for

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway4. Number of users =2500

Rate=3.86E+05

Rate=1.544E+06

10-15

10-10

10-5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Spaceway4. Number of users =3000

Rate=3.86E+05

Rate=1.544E+06

Figure 6-12: The Capability Characteristics of Spaceway4 in addressing the broadbandcommunications market in the Paci�c Rim

186

Page 185: The Generalized Information Network Analysis Methodology for

10−15

10−10

10−5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Celestri. Number of users =2000

Rate=3.86E+005

Rate=1.544E+006

Rate=2.048E+006

10−15

10−10

10−5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:Celestri. Number of users =3500

Rate=3.86E+005

Rate=1.544E+006

Rate=2.048E+006

Figure 6-13: The Capability Characteristics of the Celestri network in addressing theglobal broadband communications market (assumed average rain rate for ITU regionD: Temperate)

187

Page 186: The Generalized Information Network Analysis Methodology for

availability is compromised somewhat. It would �rst seem that a 3% loss of availability is

insigni�cant, but it must be realized that this represents almost 45 minutes per day when

the system would not be available. The question as to whether this is acceptable can be

answered only when a set of market-derived requirements are set. This is the subject of the

next section.

6.2 Generalized Performance

The performance of the broadband systems can only be measured relative to a set of re-

quirements that specify the minimum acceptable levels for the quality of service provided to

the users. The broadband systems being compared are marketed as high speed data connec-

tions for multimedia communications, and for these applications, T1 rates (1.544 Mbit/s)

are considered acceptable and competitive with terrestrial services. It can be assumed that

the broadband users will require at least 95% availability of service at these rates, with

BER no greater than 10�9.

Unfortunately, the results of the previous section showed that Cyberstar cannot satisfy

these requirements, due to a link budget that provides insu�cient Eb=N0 for these low

BER's. The satellite antenna gain for the spot beams as speci�ed in the FCC �ling are

suspiciously 4{5dB lower than those of Spaceway, which target similar geographical loca-

tions from the same orbital altitude. It is the authors belief that: (1) the satellite gains

speci�ed in the �ling are unrepresentative of the real system; or, (2) Loral and Qualcomm (a

likely partner in this venture) will use a sophisticated error correction scheme to achieve an

additional 3-4 dB of margin. This is the approach that Celestri has taken, using advanced

convolutional codes to give approximately 6 dB of coding gain. An assumption is therefore

made that Cyberstar will make changes to the design speci�ed in the FCC �ling such that

it can satisfy reasonable market requirements for broadband applications. This assumption

at least allows us to proceed with a comparative analysis based on cost and performance.4

The capabilities of Spaceway3 (South America) are also calculated to be inadequate at

T1 rates due to the low angle of elevation to some ground locations, but as stated earlier,

this behavior may be corrected with dynamic power allocation.

Recall that the performance measures the probability of being in an operational state,

where \operational" is de�ned to be a system state that complies with the system require-

ments. This is one area where there is a di�erence between the GEO and LEO systems.

Consider �rst the GEO systems, in which there are basically two di�erent types of failure

4Specifying requirements for a rate of 386Kbit/s would have obviated the need for this assumption.However, the market models that will be later used to calculate the CPF metric were constructed based onthe number of T1 users

188

Page 187: The Generalized Information Network Analysis Methodology for

state; those that correspond to degraded payload operations that violate the requirements,

and those that constitute a total loss of the satellite.

As was discussed in Chapter 4, the level of payload redundancy (SSPA's etc.) on the

GEO satellites is so high as to make degraded payload operations an unlikely failure mode;

Spaceway, for example, has 64 transmitter ampli�ers driving the 48 spot beams, and so a

total of 16 transmitters can fail before even a single spot beam is lost. For the GEO systems

then, the dominant failure mode is that which involves a total loss of the satellite operation.

These satellite vehicle failures, SVF, can involve failures in the propulsion subsystems, the

guidance and navigation subsystem (G&N), and the spacecraft control computer (SCC)

etc. Modeling the spacecraft bus to include two parallel SCC's, two G&N's, and an inte-

grated bus module representing propulsion, power and structural components, and using

the failure rates given in SMAD [3], as discussed in Chapter 4, the probability of failure for

a representative GEO broadband communication satellite is plotted as a function of time in

Figure 6-14. Each of the di�erent subsystems contribute to the overall probability of failure,

but with the failure rates used, the system performance is dominated by the reliability of

the integrated bus module. This results in a system failure probability that exceeds 50%

after 10 years on orbit.

For this analysis it can be assumed that each of the GEO satellites from either Spaceway

or Cyberstar have similar reliability pro�les, approximated by Figure 6-14. This sets the

performance of the GEO systems for the broadband mission. Note that the satellite must

be able to satisfy requirements when everything works, but beyond that, the performance is

insensitive to the speci�c requirement values, because the satellite either works or it doesn't.

In this way, setting less stringent system requirements has no impact on the performance

of the system.

This behavior is in stark contrast to that of the LEO Celestri system. Here, the avail-

ability is directly related to the number of operational satellites, and to a certain degree,

the Celestri constellation can su�er some number of satellite failures without compromising

the availability requirement. Eventually however, the coverage statistics of the degraded

constellation is so bad that the availability drops below the minimum acceptable level. For

the speci�ed availability requirement of 95% at T1, 10�9 BER, this occurs after the con-

stellation loses eight or more satellites, that is, all of its seven spares and any others. This

is shown by the Capability characteristics for the degraded constellation, plotted in Figure

6-15. This result was calculated in the same way as the other characteristics, but with the

elevation angle statistics of the 62 satellite constellation.

The probability of losing a total of 8 satellites from the entire constellation is plotted in

Figure 6-16. In creating this chart, it has been assumed that one out of every ten failures

in the G&N, SCC, power or propulsion units results in a loss of the spacecraft. The failure

189

Page 188: The Generalized Information Network Analysis Methodology for

Failure state1 = (2*SCC)Failure state2 = (1*G&N,2*SCC)Failure state3 = (2*G&N)Failure state4 = (2*G&N,1*SCC)Failure state5 = (1*Bus)Failure state6 = (1*Bus,1*SCC)Failure state7 = (1*Bus,1*G&N)Failure state8 = (1*Bus,1*G&N,1*SCC)

0 1 2 3 4 5 6 7 8 9 100

0.1

0.2

0.3

0.4

0.5

0.6

0.7

Time (years)

Pro

babi

lity

FS1FS2FS3FS4

FS5

FS6

FS7FS8

Pf

Figure 6-14: Failure state probabilities for a typical (modeled) Ka-band GEO commu-nication satellite

190

Page 189: The Generalized Information Network Analysis Methodology for

10−15

10−10

10−5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:celestri62. Number of users =2000

Rate=3.86E+005

Rate=1.544E+006

Rate=2.048E+006

10−15

10−10

10−5

100

0.9

0.91

0.92

0.93

0.94

0.95

0.96

0.97

0.98

0.99

1

Integrity

Ava

ilabi

lity

Model:celestri62. Number of users =3500

Rate=3.86E+005

Rate=1.544E+006

Rate=2.048E+006

Figure 6-15: The Capability Characteristics of the degraded Celestri network afterlosing all seven spares and any other satellite

191

Page 190: The Generalized Information Network Analysis Methodology for

rates, taken from SMAD [3], are �SCC = 0:0246, �G&N = 0:0136, �power = 0:036, and

�prop=0.035, all per year.

0

0.2

0.4

0.6

0.8

1

2002 2003 2004 2005 2006 2007 2008 2009 2010

Year

Pro

babi

lity

8 or more SVF

Figure 6-16: Failure probability for the Celestri constellation, relative to a 95% avail-ability requirement for T1 connections, 10�9 BER.

The probability of losing these 8 satellites and failing requirements is signi�cant, and

approaches unity after only a few years. After this time, regular replacements must be

launched to maintain operations.

6.3 The CPF Metric: The Cost per Billable T1-Minute

The cost per billable T1-minute is the CPFmetric used in the analysis of broadband satellite

systems. It is the cost per billable T1-minute that the company needs to recover from

customers through monthly service fees, ground equipment sales, etc., in order to achieve

a speci�c (30%) internal rate of return. The cost per billable T1-minute can be calculated

from an estimate of the system's market capture and the system costs. The market capture,

or achievable capacity, depends on the size of the market accessible to the system and on

the system capability characteristics.

6.3.1 Modeling the Broadband Market

Recently a urry of studies have being conducted in order to quantify user behavior on

the internet. Many corporations are interested in tapping into the sales potential that

exists on the internet, thus studies range from internet usage to user demographics and

purchasing patterns. Historical data exists on the tra�c that traversed the old National

192

Page 191: The Generalized Information Network Analysis Methodology for

Science Foundation (NSF) backbone between 1991 and 1995. Drawing from these stud-

ies and through independent research, market models for the broadband communications

systems were constructed by Kelic, Shaw and Hastings [5] for a 1995 study of these same

satellite systems.

Three di�erent market scenarios were developed to attempt to simulate the potential

growth of the broadband market. A third-order growth model and an exponential growth

model were developed by projecting the NSF data forward. The third order model is a

very conservative estimate; the growth of internet commerce and the beginnings of internet

telephony would suggest that the third order market is an unlikely scenario. The results

presented in this chapter do not include the third order market model. The exponential

model is considered an upper bound since the internet is still in its infancy and growth

rates of technology are typically exponential in the early years and then begin to level o�.

Since the third order and exponential market models are based on projections of the growth

of internet tra�c, they represent a volume of data symbols in the market. To obtain the

market growth models for broadband users, the total volume in bits was divided by the

connection speed for a typical user, that being T1 (1.544 Mbit/s).

The �nal market model is based on computer growth trends. This \last-mile" market

is the di�erence between the growth of computers worldwide and the growth of internet

hosts. Providing the \last-mile" link from local providers to these unconnected computers

is a potential market for satellite broadband data services. This market model predicts the

number of broadband users directly. These three growth models are shown in Figure 6-17,

in terms of the total number of simultaneous broadband T1 connections (1.544 Mbit/s).

1.0E+02

1.0E+03

1.0E+04

1.0E+05

1.0E+06

1.0E+07

2000 2002 2004 2006 2008 2010

Year

Num

ber

of s

imul

tane

ous

T1-

conn

ectio

ns

Exponential

Last-mile

Third-order

Figure 6-17: Broadband market growth models

193

Page 192: The Generalized Information Network Analysis Methodology for

Data for each of the markets were globally distributed among countries based on either

GDP or GDP per capita.5 Population density was used to distribute the market within

countries. The result is a map of the predicted broadband market, discretized into 5o

longitude/latitude cells, for each year from 2000|2012. An example distribution is shown

in Figure 6-18. These market models are used to calculate the market capture of the

modeled systems.

−180−150−120−90 −60 −30 0 30 60 90 120 150 180−90

−60

−30

0

30

60

90

10

100

1000

Figure 6-18: The last-mile market in 2005, GDP distribution

6.3.2 Calculating the market capture

The limiting e�ects of market demographics, access and exhaustion can be quanti�ed only

with a simulation of the satellite system in a realistic market scenario. For an earlier study

[5], a computer tool was developed that performs this function, simulating system operation

and calculating the achievable capacity.6

The program propagates satellites and projects spot beams, with beamwidths corre-

sponding to antenna gains, onto the Earth from the satellite positions. The beam patterns

5The results presented in this chapter re ect only the GDP distributions, since the previous study showedonly minor di�erences in the results for the two di�erent distributions.

6This program has been updated and modi�ed to run under Matlab on any PC. It is a gui-clad completesimulation package including the calculations of link budgets, rain attenuation, cross-channel interference,and market access. The SkynetPro executable is publicly releasable, and may be obtained by contactingProf David Miller, MIT Space Systems Lab, [email protected]

194

Page 193: The Generalized Information Network Analysis Methodology for

Figure 6-19: Cyberstar's market capture map; exponential market model in 2005, GDPdistribution:2400GMT

for the systems were modeled using information given in the FCC �lings. The market ac-

cessible to each beam is then calculated, using the market models described in the previous

section. The realistically achievable capacity for each channel is the minimum of the sup-

portable capacity of the beam, in terms of users, and the size of the market to which it

has access. Example outputs from this program for Cyberstar and Celestri are shown in

Figures 6-19 and 6-20 respectively.

Referring �rst to Figure 6-19, the image shows the projection of Cyberstar's 27 spot

beams onto the Earth for the European, North American and Asian regional satellites,

de�ning the coverage regions used to calculate the achievable capacity. Notice that the

beamwidths vary across the patterns, since some beams have higher gains to counteract

the higher rain attenuation in those regions. The shading of these beams indicates the

amount of market captured at a given instant (in this case 2400GMT), with lighter colors

representing a larger number.

Figure 6-20 is the corresponding result for the Celestri model, but at 1200GMT. Only

the satellites over land masses are shown in this plot. Note that the spot beams are very

small and numerous. Note that at this instant the system is well used in Europe where it

is noon-time, but not in the United States, where it is still very early morning.

The total market capture for a particular instant is the sum of the achievable capacities

195

Page 194: The Generalized Information Network Analysis Methodology for

Figure 6-20: Celestri's market capture map; exponential market model in 2005, GDPdistribution; 1200GMT

for all of the spot beams. This is calculated at several times over the day and then averaged

to account for daily usage behavior. The simulation is performed for each year of the system

lifetime for each market model to give the market capture pro�les.

Market Capture: Cyberstar

CyberStar was simulated for the di�erent market scenarios over its expected lifetime. The

years of the simulation ran from 2000 to 2012. The deployment strategy assumed for the

simulations is the same as that outlined in the FCC �ling: the North American satellite is

deployed in 2000 and the European and then Asian satellites are launched in 2002 and 2003.

The achievable system capacity, assuming this nominal deployment strategy, is shown as a

function of time in Figure 6-21.

The exponential and last mile market models result in similar achievable capacity pro-

�les. Initially the system capacity is small over all the market models, with only the North

American market being accessed. During this early period when the market is immature,

the available market is generally small compared to the link capacity of the spot beams.

The accessible market is therefore small, and even with only one satellite operational, the

system is under-utilized. This of course implies that the system will bring in poor revenue

during the early years, a fact only compounded by the large expenditures incurred during

196

Page 195: The Generalized Information Network Analysis Methodology for

0.0E

+00

1.0E

+09

2.0E

+09

3.0E

+09

4.0E

+09

5.0E

+09

6.0E

+09 20

0020

0120

0220

0320

0420

0520

0620

0720

0820

0920

10

Yea

r

T1-minutes per year

Exp

onen

tial

Last

-mile

Figure

6-21:

Themarketcapture

pro�lefortheCyberstarsystem

thebeginningof

theprogram

.Thelast

milemarkets

givethelargestcapacityduringthe

earlyyears.Duringthedeploymentperiod,theachievablecapacityof

thesystem

increases

rapidly

forthislast

milemarket

scenario.Bythetimethetotalcomplimentof

satellites

hasbeenlaunched

in2003,

theachievable

capacityof

thesystem

has

begunto

approach

thesaturateddesigncapacity.

Thesystem

has

aslow

erincrease

incapacityunder

the

exponential

model,approachingsaturation

later,in

2009.

MarketCapture:Spaceway

Spacew

aywas

simulatedforthedi�erentmarketscenariosover

itsexpectedlifetimefrom

1999to

2012.Thedeploymentstrategy

assumed

forthesimulation

isthesameas

that

outlined

intheFCC�ling:

�1999

-NorthAmerica(1)and(5),Europe(2),Asia(4),South

America(3)

�2000

-Europe(6),Asia(8),South

America(7)

Theachievablesystem

capacityforthefulldeploymentof

Spacew

ayisshow

nas

afunction

oftimein

Figure

6-22.

Eachof

thethreemarketmodelsresultin

adi�erent,sm

oothcurve,withnodiscernible

perform

ance

plateaus.

Thisbehaviorisadirectresultof

theearlydeploymentof

thefull

Spacew

ayconstellation.Once

allsatelliteresources

areon

orbitby2000,theachievable

capacitycloselyfollow

sthematuration

curveofthemarkets,untilsystem

saturation

occurs.

Thelast

milemarketgives

thelargestcapacityduringtheearlyyears(1999-2003),and

increasessteadilytoward

saturation

in2008.

Thesystem

capacityfortheexponential

197

Page 196: The Generalized Information Network Analysis Methodology for

0.00

E+

00

2.00

E+

09

4.00

E+

09

6.00

E+

09

8.00

E+

09

1.00

E+

10

1.20

E+

10 1998

2000

2002

2004

2006

2008

2010

Yea

r

T1-minutes per year

Last

-mile

Exp

onen

tial

Figure

6-22:

Themarketcapture

pro�lefortheSpacewaysystem

market

increasesin

acorrespondinglyexponential

way

through

themiddle

periodof

the

simulation

(2002-2005),andreaches

saturation

around2007.

MarketCapture:Celestri

TheCelestrisystem

was

simulatedassumingthedeploymentschedule

intheFCC

�ling,

givingan

IOCin

2003.Theachievablemarket

capture

pro�lesforbothmarket

modelsare

show

nin

Figure

6-23.

1.0E

+10

2.0E

+10

3.0E

+10

4.0E

+10

5.0E

+10

6.0E

+10

7.0E

+10

8.0E

+10 20

0320

0420

0520

0620

0720

0820

0920

10

Yea

r

T1-minutes per year

Exp

onen

tial

Last

-mile

Figure

6-23:Themarketcapture

pro�lefortheCelestrisystem;both

marketmodels;

GDPdistribution

Neitherofthecapacitypro�lesshow

saturation.Thismeansthat

overtheentirelifetime

198

Page 197: The Generalized Information Network Analysis Methodology for

of the system, Celestri has a larger link capacity than the global market can support. The

trends shown in the �gure are a direct consequence of this. The last mile market gives

the largest achievable capacity until 2005. The capacity for the exponential market is the

largest after 2005. It is interesting to compare these trends with the trends of the actual

market growth models, shown in Figure 6-17. After accounting for the di�erent scales

(T1-connections or T1-minutes per year) the graphs are almost identical in shape and are

very close in magnitude. This means that Celestri basically swallows most of the market

available, over the entire globe. The implication of these trends is that at least for the early

years of the broadband market, Celestri is over-designed. In actuality, this gives Celestri

a lot of headroom to compliment revenue with the telephony market. It is a fact that the

largest market for satellite telephony lies in the same underdeveloped regions of the world

in which Celestri has spare link capacity. This is an e�cient use of the available resources

and should give a large potential revenue.

Market Capture by Each Satellite

For additional insight, and to assist in the calculation of failure compensation in the next

section, these market capture pro�les can be broken down into that of each individual

satellite. These are shown in Figures 6-24{6-26.

500

1000

1500

2000

2500

3000

3500

2000 2002 2004 2006 2008 2010 2012

Year

Sim

ulta

neou

s T

1-co

nnec

tions

Sat1: Europe

Sat2: Noth America

Sat3: Asia

Figure 6-24: The market capture pro�les of the Cyberstar satellites; exponentialmarket;GDP distribution

For example purposes, consider the Spaceway satellites shown in Figure 6-25. After 2005,

both of the USA and one of European satellites saturates at around 2800 simultaneous

users. If additional users were addressed, the supported availability would drop below

requirements, as seen in the Capability characteristics of Figure 6-9. However, in the same

199

Page 198: The Generalized Information Network Analysis Methodology for

0.00

E+

00

5.00

E+

02

1.00

E+

03

1.50

E+

03

2.00

E+

03

2.50

E+

03

3.00

E+

03 2000

2002

2004

2006

2008

2010

2012

Yea

r

Number of simultaneous usersS

at1:

US

AS

at2:

Eur

ope

Sat

3: S

.Am

eric

aS

at4:

Pac

ific

Rim

Sat

5: U

SA

Sat

6: E

urop

e/A

fric

aS

at7:

S. A

mer

ica/

US

AS

at8:

Pac

ific

Rim

Figure6-25:Themarketcapturepro�lefortheSpacewaysatellites;exponentialmarket;

GDPdistribution

year

theSouth

Americansatellitehas

averysm

allmarketcapture

dueto

theimmaturity

ofthemarketthere.

Thiswould

suggestthat

someresources

havebeenallocatedunwisely.

Satelliteresourcesarebeingwasted

overSouth

Americawherethey

areunder-utilized,while

themarketsin

theUSAandEuropecould

supportan

increasedservice.

Adecisionmade

toreallocate

resourceswould

surelyresultin

anincreasedsystem

capacityifmorespectrum

could

bemadeavailable.

6.3.3

System

cost

Thecalculationsareallperform

edin

�scal

year

1996

dollars

(FY$96)

since

thisrepresents

theprojectinceptiondate

foratleastSpacew

ayandCyberstar.Allcostsareadjusted

using

theO�ce

oftheSecretary

ofDefense

estimates

[3],anddiscountedbackto

apresentvalue

in1996witha30%

discountrate.

Thetotalbaselinecost

ofeach

satellitesystem

isestimated

includingrecurringand

non-recurringcostsfordevelopment,construction,launch,insurance,gatewaysandcontrol

centeroperations,andterrestrialinternetconnections.Thecostmodelusedforthisexam

ple

isthesameas

that

usedbyKelic[5],drawingon

industry

experience

andobserved

trends.

TheTheoreticalFirst

Unit(T

FU)cost

forcommunicationsatellites

canbeestimated

rea-

sonablywellassuming$77,000per

kgofdry

mass.Thenon-recurringdevelopmentcostsfor

commercialsystem

scanbeapproximated

atthreeto

sixtimes

theTFUcost,dependingon

theheritageof

thedesign.Launch

coststo

GEO

canbeassumed

at$29,000per

kg,

with

insurance

at20%

.Celestricanexpectlaunch

costsaround$10,000per

kgto

LEO,with

thesame20%

insurance.Forlinkingto

theterrestrialnetwork,each

OC-3

(155

Mbit/s)

200

Page 199: The Generalized Information Network Analysis Methodology for

0

500

1000

1500

2000

2500

3000

3500

2000

2002

2004

2006

2008

2010

Yea

r

Simultaneous T1-connections

Ave

rage

Cel

estr

i SV

Figure

6-26:

Themarketcapture

pro�le

foratypicalCelestri

satellite;exponential

market;GDPdistribution

connection

costs$8,500

installationand$7,900

per

month.Thiscostscales

withthemarket

capture.

Theexpectedfailure

compensation

costsarecalculatedfrom

thesatellitefailure

proba-

bilitypro�lesandthemarketcapture

curves.For

theGEO

system

s,asatellitefailure

can

beassumed

toresultin

thelossofasingleyears'revenue,together

withthecostofbuilding

andlaunchingareplacementsatellite.

Thecalculation

oftheopportunitycostsfrom

lost

revenuerequires

anassumption

fortheaverageservicecharge

per

user.

Aconservative

estimateof$0.05per

T1connection

isusedforthisexam

ple.For

theLEOCelestrisystem

,

therearenoopportunitycostsandreplacementsaremadeonlyafter8satellites

arelost,but

must

then

continuethroughoutthesystem

lifetimeto

maintain

aconstellation

ofat

least

63satellites.Thebaselinesystem

cost

andthefailure

compensation

costscanbesummed

togivec L,thesystem

costpro�le.

Thebaselinecostsc s(t),failure

compensation

costsv f(t),andtotalsystem

costsc L(t)

areshow

nforeach

system

(for

anexponential

market)

inTables6.4{6.6.

Discountingthe

system

cost

pro�lesat30%

per

yeargivesthenet

presentvalueof

thecostsin

�scal

year

1996

dollars.

Summingover

allyears

ofthediscountedpro�legives

thetotallifetimecost,

CL.Theseare

givenbelow

inTable6.7.

6.3.4

Cost

perBillableT1-M

inute

Results

Thesystem

lifetimecostsandthetotalmarketcapture

isusedto

calculate

theCPFmetric.

TheCostperbillableT1-minuteforeach

ofthesystem

s,acrosstheexponentialandlast-m

ile

market

scenarios,areshow

nin

Figure

6-27. 20

1

Page 200: The Generalized Information Network Analysis Methodology for

Table 6.4: System cost pro�le for Cyberstar; constant year FY96$

Year cs ($M) vf ($M) cL1996 81.96 0.00 81.961997 329.39 0.00 329.391998 377.25 0.00 377.251999 192.23 0.00 192.232000 262.22 1.86 264.082001 55.53 13.46 69.002002 185.33 17.10 202.432003 188.40 34.03 222.432004 33.94 45.14 79.082005 34.24 43.23 77.472006 34.31 40.88 75.182007 34.30 38.38 72.682008 34.30 36.18 70.482009 34.30 22.53 56.832010 34.30 10.49 44.79

This is perhaps the most important chart in this chapter, and is deserving of some

discussion. As can be seen, there is only a small di�erence in absolute terms in the cost per

billable T1-minute across the systems, varying by at most 10 cents. This is characteristic

of the high �xed costs that dominate these ventures. Summarizing the trends in the chart:

� Spaceway shows a large variation in the CPF between the market scenarios, with

the exponential market giving high values for the CPF, due to the early deployment

schedule of the system. Before 2005, the exponential market is immature and so the

system can achieve only a low market capture to o�set the high net value of the

costs incurred before IOC. The last-mile market is more developed in the early years,

leading to a higher utilization of the system and consequently a considerably lower

CPF. Spaceway is therefore very sensitive to how the market develops, and should

revise their deployment schedule to match the future predictions of the market as it

develops.

� Cyberstar shows a smaller variation in the CPF across markets, but has a higher

average value. The small variation is due mostly to the delayed deployment of space

assets, but is also a result of the fact that Cyberstar saturates very quickly under

almost any reasonable market scenario. The system is relatively modest compared

to the other systems, and does not need to capture a large market share to fully

saturate its transponders. Of course, the smaller capacity means that the system

cannot amortize the high �xed costs over as many users, and so the average CPF

is higher, perhaps leading to smaller pro�t margins in a competitive environment.

Essentially the modest size of Cyberstar makes the venture a little less risky, but

comes at the cost of smaller returns.

202

Page 201: The Generalized Information Network Analysis Methodology for

Table 6.5: System cost pro�le for Spaceway; constant year FY96$

Year cs ($M) vf ($M) cL1996 286.78 0.00 286.781997 817.88 0.00 817.881998 650.44 0.00 650.441999 636.18 2.19 638.382000 324.11 91.96 416.072001 36.02 143.79 179.802002 39.56 140.76 180.322003 43.29 138.54 181.832004 47.40 136.88 184.272005 50.85 135.26 186.112006 52.30 130.14 182.442007 52.87 122.74 175.612008 53.12 72.86 125.982009 53.10 45.27 98.362010 53.20 21.04 74.25

Table 6.6: System cost pro�le for Celestri; constant year FY96$

Year cs ($M) vf ($M) cL1999 593.33 0.00 593.332000 2272.57 0.00 2272.572001 3024.83 0.00 3024.832002 3505.16 0.00 3505.162003 55.02 351.49 406.512004 67.83 1114.44 1182.272005 86.66 691.72 778.382006 107.22 591.96 699.192007 127.19 536.74 663.932008 145.75 487.26 633.012009 161.43 442.39 603.822010 175.39 0.00 175.39

� Celestri achieves lower CPF's and smaller variations than either Cyberstar or Space-

way. This double bene�t comes as a result of a late deployment, allowing the market

to develop before expending costly assets, and an immediately massive market capture

to quickly o�set the �xed costs. This is the ideal strategy, provided the system is able

to capture users from the other systems that are already in place.

An interesting conclusion drawn from these trends is that architectural di�erences are

not as signi�cant as either the deployment strategy or the overall market capture. Cyberstar,

Table 6.7: Lifetime costs CL for the modeled systems (net present value in FY96$)

Cyberstar $0.667 BillionSpaceway $1.48 BillionCelestri $2.33 Billion

203

Page 202: The Generalized Information Network Analysis Methodology for

$0.1

0

$0.1

2

$0.1

4

$0.1

6

$0.1

8

$0.2

0

$0.2

2

$0.2

4

$0.2

6

Cost per billable T1-minute

Exp

onen

tial

Last

-mile

Exp

onen

tial

$0.2

4$0

.24

$0.1

5

Last

-mile

$0.1

5$0

.20

$0.1

6

Spa

cew

ayC

yber

star

Cel

estr

i

Figure

6-27:TheCost

perbillable

T1-m

inute

metric

forCyberstar,

Spacewayand

Celestri

withthesameGEOarchitectureasSpacew

aybutwithadelayed

deployment,islesssensitive

tomarketvariations.ThehighthroughputofSpacew

ayandCelestriresultin

loweraverage

CPFvalues.

Based

ontheCost

per

billable

T1-minute

metric,

allof

thesystem

sstudiedhavethe

potentialto

becompetitive,withCelestrihavingaslightadvantage.Although

theresults

arenot

presentedhere,thesimulation

tooliscapableofmodelingacompetitiveenvironment

inwhichsystem

scompeteforthesamemarket.Theearlierstudy[5]show

edthat

atleast

twoofthesesystem

scouldco-existandstillobtaina30%internalrateofreturn

under

these

situations.

6.4

Type1AdaptabilityMetrics

Type1adaptabilitiesrepresenttheelasticity

oftheCPFmetricwithrespectto

changesin

therequirem

ents

orthecomponenttechnologies.

6.4.1

TheRequirementElasticities

For

thebroadbandcommunicationsystem

s,changesin

thesystem

requirem

entscorrespond

todi�erentserviceoptionsthatcanbeprovided

totheusers.Theimpactof

thesechanges

onthelikelycostper

billableminute

canbemeasuredwiththerequirem

entelasticities.As

de�ned

inChapter4,therequirem

entelasticities

oftheCPFat

agivendesignpointare,

204

Page 203: The Generalized Information Network Analysis Methodology for

Isolation Elasticity, EIs =�CPF=CPF

�Is=Is(6.1)

Rate Elasticity, ER =�CPF=CPF

�R=R(6.2)

Integrity Elasticity, EI =�CPF=CPF

�I=I(6.3)

Availability Elasticity, EAv =�CPF=CPF

�Av=Av(6.4)

where Is, R, I , and Av are the set of system requirements on isolation, rate, integrity and

availability. For communication systems, altering the isolation requirement simply changes

the multiple access speci�cations. For systems limited by self-interference, such as CDMA

systems, this may result in changes to the CPF, but has no e�ect for the systems considered

here. Changing the rate provided to users is obviously a design alternative that a�ects

the CPF. Conversely, there is no real bene�t in o�ering improved Integrity, and the very

nature of multimedia application prohibits BER's higher than about 10�7. Changing the

integrity is therefore not an option. Finally, assessing the impact of changing the availability

requirement is valuable, especially for Celestri which su�ers from availability problems in

the event of satellite failures. The corresponding elasticities are discussed in the following

sections.

Rate Elasticity of the CPF

One option open to the designers of the broadband systems is to lower the standard data

rate provided to the users. This could be expected to improve the supportable integrity

and, more importantly, allow more users to be served. Since broadband users essentially

just require connection services at \broadband" rates, it can be assumed that they will still

purchase services at rates marginally lower than T1 if the price is right. Provided that the

increase in the number of users served is more than enough to compensate for the reduction

in the price charged per user, the net revenue of the system will be increased. It is valuable

therefore to consider the impact on the cost per user (CPF) of lowering the rates to 1=4-T1,

as measured by ER, the Rate Elasticity of the CPF.

Calculating ER involves repeating all the analysis that led to the cost per billable T1-

minute metric, but with the rate changed to 386 Kbit/s. The cost per billable 1=4-T1-minute

then can be compared directly with the cost per T1-minute to calculate ER. The di�erence

in value (�CPF) represents the di�erence in cost that must be charged to each broadband

user if the data rate provided to them is changed.

The largest change in the calculation of the cost per billable 1=4-T1-minute is in the

205

Page 204: The Generalized Information Network Analysis Methodology for

estimation of the market capture. Strictly, to estimate the number of 1=4-T1-minutes

captured by a system, a market model for the number and distribution of 1=4-T1 users is

required. However, the market for 1=4-T1 users can be assumed to be the same as the market

for T1 users, since the notion of a user is in this case a human consumer. Ignoring the e�ects

of elasticity of demand (lower rates may deter consumers from purchasing service), the total

number of consumers in the marketplace should not change signi�cantly by changing the

rate o�ered to them. The same simulations can therefore be used to evaluate the market

capture of 1=4-T1 users. The most important features of these simulations compared to

those for T1 users are:

� Early in the lifetime, there can be no increases in the number of 1=4-T1 users served

compared to T1 users, since the systems are market limited.

� The saturation point occurs later, since more users can be addressed at lower rates.

� In general, the total market capture of 1=4-T1 users over the lifetime is greater than,

but not 4 times greater than, the market capture of T1-users.

The baseline system costs are the same, but the failure compensation costs must re ect

the fact that the requirements for quality of service have changed, resulting in a lower

probability of failing system requirements due to degraded operations. Having accounted

for all these issues to calculate the new CPF, the ER can be formulated. The resulting ER

for Spaceway, Cyberstar and Celestri are shown in Figure 6-28.

0.70

0.75

0.80

0.85

0.90

0.95

1.00

Rat

e-E

last

icity

of C

PF

Exponential

Last-mile

Exponential 0.89 0.99 0.88

Last-mile 0.87 1.00 0.72

Spaceway Cyberstar Celestri

Figure 6-28: The rate elasticity of the CPF for Cyberstar, Spaceway and Celestri

The results shown in this chart must be interpreted carefully. An ER = 1 indicates

that the reduction in the cost per user exactly compensates for the increased number of

206

Page 205: The Generalized Information Network Analysis Methodology for

users served. Referring to the chart, this means that Cyberstar can charge 1=4-T1 users

a quarter of the price charged to T1 users, and obtain the same revenue. Equivalently, if

they charged more than this, perhaps only half the price, reducing the rate of service results

in a doubling of their revenue stream. This works for Cyberstar because the transponders

are almost always saturated, even for low rate users. This is not the case for Celestri and

Spaceway. These systems have ER < 1, and so must charge each of the 1=4-T1 users more

than a quarter of the price of a T1 user if the revenue stream is to be preserved (note

that if ER � 0, the cost per user is not e�ected by rate, and lower-rate service cannot be

o�ered at a discount price). Therefore, the smaller size of the Cyberstar system presents

a greater opportunity for marketing the system at lower rates. Celestri (and to a lesser

extent Spaceway) must hope that the market demand for broadband services is rate elastic,

meaning that they will be able to attract more high-price users at the higher rates to increase

their revenue stream.

Availability Elasticity of the CPF

Consider the impact of lowering the availability requirement to 92%. This may be an option

if the systems are sold as bulk-data transfer systems that don't need to provide continu-

ously available real-time access. The GEO systems are una�ected by this reduction in the

availability requirement, since the dominant failure mode is a total loss of the spacecraft.

Celestri however could operate with fewer satellites if the availability requirement were low-

ered. This should improve the CPF. In fact, the availability elasticity of CPF for Celestri

has been calculated to be 0.174. This is a surprisingly low value, much lower than the

rate elasticities calculated in the previous section. Taken in context with the GEO systems

which have CPF's largely independent of availability, this result implies that broadband

communication satellite systems are reasonably insensitive to the availability requirement.

6.4.2 The Technology Elasticities

The technology elasticities can be de�ned for any particular component of the system that

may have an impact of the overall performance or cost. This allows a quanti�able as-

sessment of design decisions and can identify the most important technology drivers. For

communication systems, the program components that seem to dominate cost are launch,

manufacture and reliability. The corresponding elasticities are,

Manufacture Cost Elasticity, ECmfr=

�CPF=CPF

�Cmfr=Cmfr

(6.5)

207

Page 206: The Generalized Information Network Analysis Methodology for

Launch Cost Elasticity, EClaunch=

�CPF=CPF

�Claunch=Claunch

(6.6)

Failure Rate Elasticity, E�s =�CPF=CPF

��s=�s(6.7)

where Claunch is the budgeted launch cost for the system, Cmfr is the manufacturing cost,

and �s is the (average) satellite failure rate.

These elasticities are calculated simply by re-costing the system, including the ex-

pected failure compensation costs, after changing (reducing) the relevant variable by a

small amount, say 20%. The resulting elasticities for each of these technologies have been

calculated for Spaceway, Cyberstar and Celestri are are shown in Figures 6-29{6-31.

0.50

0.55

0.60

0.65

0.70

0.75

Man

ufac

ture

Cos

t Ela

stic

ity o

f CP

F

Exponential

Last-mile

Exponential 0.60 0.71 0.62

Last-mile 0.60 0.71 0.74

Spaceway Cyberstar Celestri

Figure 6-29: The manufacture cost elasticity of the CPF for Cyberstar, Spaceway andCelestri

The most important features of these charts are summarized:

� For all systems, manufacturing cost savings are the most important to the CPF, with

ECmfr� 0.6{0.75. This is almost twice as large as the sensitivity to savings in the

launch cost, in which EClaunch� 0.2{0.45. The e�ect of improving failure rate is

relatively insigni�cant, having an elasticity less than 0.1 for all systems. The reason

for the relative importance of manufacture costs is that they are incurred at the very

start of the program. The time value of money biases these up-front costs as the most

signi�cant components to the system lifetime cost.

� Of the three systems, Spaceway is the least sensitive to manufacturing costs. This

is because Spaceway, with eight satellites, realizes a larger bene�t from production

learning than Cyberstar, without having to build 70 satellites like Celestri.

208

Page 207: The Generalized Information Network Analysis Methodology for

0.20

0.25

0.30

0.35

0.40

0.45

0.50

Launch Cost Elasticity of CPF

Exp

onen

tial

Last

-mile

Exp

onen

tial

0.46

0.21

0.37

Last

-mile

0.45

0.22

0.37

Spa

cew

ayC

yber

star

Cel

estr

i

Figure6-30:Thelaunch

costelasticityoftheCPFforCyberstar,SpacewayandCelestri

�Thelaunch

costelasticityforSpacew

ay(E

Claunch=0:46)ismuch

higherthan

thevalue

forCyberstar

(0.21),even

though

bothsystem

sinvolvelaunchinglargesatellites

to

GEO.Thereasonisthedi�erence

indeploymentschedule.Cyberstar

has

adelayed

launch

manifest,andas

soonas

thesatellites

arelaunched,they

becom

ealmostfully

utilized.Spacew

ay,ontheother

handspendsagreatdealof

money

launchingsatel-

litesearlyin

theproject

life,when

money

has

anincreasedvalue,even

though

these

satellites

generate

littlerevenuefor4or5years.

Celestriismoderatelysensitive

to

launch

costssimply

because

they

haveto

loftat

least70

satellites

plusreplacements

toorbit.

�Thefailure

sensitivityisalmostinsigni�cantcompared

totheother

twotechnology

elasticities,butthereare

someimportanttrends.Spacew

ayisunexpectedly

themost

sensitive

tofailure

rates.

Thereason

isagainthedeploymentschedule,since

the

Spaceway

satellites

areon

orbitearlierandso

arelikely

tofailearlier,givinghigher

expectedreplacementcosts.

Anyimprovements

tothefailure

rate

canreduce

the

likelihoodoftheseexpenditures.

Celestrihas

ahigher

failure

rate

elasticity

than

Cyberstarbecause

thesheernumber

ofsatellites

meanthat

failuresareverylikely.

6.5

Summary

Thischapterhas

described

adetailedcomparativeanalysisof

threeproposed

broadband

communicationsatellitesystem

susingtheGINA

methodology.

Modelswereconstructed

forCyberstar,Spaceway

andCelestribased

onthedesignslisted

intheirFCC�lings.Using

209

Page 208: The Generalized Information Network Analysis Methodology for

0.00

0

0.02

0

0.04

0

0.06

0

0.08

0

0.10

0

Failure Rate Elasticity of CPF

Exp

onen

tial

Last

-mile

Exp

onen

tial

0.07

70.

030

0.04

5

Last

-mile

0.08

30.

033

0.04

5

Spa

cew

ayC

yber

star

Cel

estr

i

Figure6-31:Thefailure

rate

elasticityoftheCPFforCyberstar,SpacewayandCelestri

thesemodels,theCapabilitycharacteristicsforeach

system

werecalculated.Theresults

suggestthatCyberstar,

asit

appears

inthe�ling,

isunsuited

forprovidingbroadband

communicationsat

rateshigher

than

386K

bit/s.BothSpacew

ayandCelestriareable

to

supporthighrate(T

1)services

withhighlevelsofintegrity(BER�10

�10)andavailabilities

exceeding97%.

Thecostper

billableT1-minute

metricisusedto

comparethepotentialforcommercial

successofeach

system

.Itisthecostper

billableT1-minute

that

thecompanymustrecover

from

custom

ersthroughfees

inorder

toachieve

a30%

internal

rate

ofreturn.

Itwas

assumed

that

improvem

entsare

madeto

Cyberstar

inorder

foritto

beableto

competein

thismarket.ThecalculationsoftheCostper

billableT1-minute

involvedthedevelopment

ofseveralmarketmodelsbasedon

currentinternet

andcomputersalesgrow

thtrends.

Simulationsoftheoperationsofeach

ofthesystem

swithin

therealisticmarket

scenarios,

accountingforthee�ectsof

market

penetration,access

andexhaustionwerecarriedoutto

evaluatethemarket

capture

pro�les.

Thesystem

lifetimecostswereestimated

including

thecontributionsfrom

satelliteconstructionanddevelopment,launch,insurance,gateways,

internet

connectionhardware,gateway

andcontrolcenteroperations,andexpectedfailure

compensationcosts.Theresultingcostper

billableT1-minutemetrics

show

edthat

allthree

system

swillbeable

too�er

competitivelypricedservices

tousers.Celestriachievedthe

lowestcostperbillableT1-minute,andhad

thesm

allestvariationacrossmarketmodels.

An

importantconclusion

tocomefrom

theseresultsisthat

deploymentismoreimportantthan

architecture

forthismarket.

Thedi�erencesbetweenarchitectures(G

EO

versusLEO)do

not

impact

thecost

per

billable

T1-minute

asmuch

asthee�ectivedeploymentstrategies

210

Page 209: The Generalized Information Network Analysis Methodology for

or the overall market capture. A well designed deployment strategy, tailored to match the

predicted market growth is less sensitive to variations in that market, while a large overall

throughput allows the high �xed costs to be amortized over more users.

The Type 1 adaptability metrics were calculated, and basically emphasized the impor-

tance of a sensible deployment strategy. Contrary to popular belief, achieving lower launch

costs is not as e�ective for commercial bene�t as lowering the cost of the manufacture pro-

cess. The driving requirement for the broadband systems is data rate, and smaller systems

o�er the potential for discounted service at lower rates that may realize higher revenues

through increased yield.

These results are clearly signi�cant, and can be readily applied to an economic analysis

of the systems. This is a very nice feature of the objective, quantitative nature of the

analysis methodology. By judging the systems only on how well they address a de�ned

market, and by scaling their cost accordingly, the GINA methodology has enormous utility

for comparative analysis. It now remains to demonstrate the usefulness of GINA for the

design process.

211

Page 210: The Generalized Information Network Analysis Methodology for

212

Page 211: The Generalized Information Network Analysis Methodology for

Chapter 7

Techsat21; A Distributed Space

Based Radar for Ground Moving

Target Indication

The Air Force has traditionally built and operated very large and complex satellites such as

MilStar, the Defense Support Program, and the Defense Metereological Satellite Program.

Recently though, the Air Force has recognized the potential for low cost and improved

capabilities that may be possible with distributed systems of small satellites. In the New

World Vista's report [56], published in 1996, the Air Force Scienti�c Advisory Board Space

Technology Panel identi�ed the development and implementation of systems featuring for-

mation ying satellites that create sparse apertures for remote sensing and communications

as an important goal for the Air Force in the next century. To this end, the Air Force

Research Laboratory (AFRL) has initiated the Techsat21 program, an innovative concept

for performing the Ground Moving Target Indication (GMTI) radar mission using a dis-

tributed satellite system. The key to the concept is a cluster of microsatellites (less than

100kg) that orbit in close proximity, each with a receiver to detect coherently not only the

return from its own transmitter, but the bistatic responses received from the orthogonal

transmit signals of the other satellites as well [48]. This provides the opportunity to collect

many independently sampled radar signals, each with di�erent phase, that can be used to

form a large, post-processed sparse coherent array with a very high resolution and large

main-lobe gain. This chapter describes some of the work that has been carried out at MIT

in collaboration with the AFRL to assist in the conceptual design phase of this project.

In particular, Sections 7.1 and 7.2 introduce the most important principles of space based

radar, Section 7.3 describes the Techsat21 concept, and Sections 7.4 and 7.5 present a gen-

eralized analysis and some design trades for the system. This demonstrates the application

213

Page 212: The Generalized Information Network Analysis Methodology for

of the GINA methodology for a real design study.

7.1 Space Based Radar

There are clear advantages to placing radars in space for the purpose of surveillance. since

the �eld of view is increased allowing large areas to be searched. The major disadvantages

are the increases in range leading to large signal attenuation from free space loss, and more

importantly, the fact that the large area of the Earth illuminated produces severe clutter

conditions. In the Space Based Radar Handbook [57], Andrews states that,

. . . the fundamental design considerations for a space-based radar (SBR) de-

signed for air or surface search are: (1) the radar must have enough power-

aperture product to detect the radar cross section of the targets of interest at

the search rate required for the application; (2) the radar must have enough

angular and range resolution to locate the target with the required accuracy;

and (3) the radar must be capable of rejecting clutter returns from the earth

and interference from other electromagnetic transmissions to detect targets in

the presence of these usually much larger unwanted signals.

Note that these three criteria describe precisely the same quality-of-service parameters

de�ned by the GINA methodology. The probability of detecting targets is equivalent to the

Integrity since missed detections or false alarms constitute errors in the interpretation of

signals, the search rate is obviously the generalized rate measure, and the ability to locate

the target and reject unwanted clutter signals de�nes the Isolation characteristics of the

radar. A space-based radar therefore bene�ts from a built-in capability for searching large

areas very quickly (rate), but faces characteristic problems with detectability (Integrity)

and clutter rejection (Isolation). These issues are discussed in the following section.

7.2 Detecting Moving Targets in Strong Clutter Backgrounds

A detailed treatment of radar detection in the presence of noise and clutter is not necessary

for a basic appreciation of the issues and problems that face SBR systems. An engineering

discussion of the essential concepts and mathematics of the detection process will su�ce.

Nevertheless, if readers wish to gain a further understanding, there are many excellent texts

that describe radar detection in great detail; Skolnik [58] is considered the classic text, but

some very readable alternatives are Blake [59], Barton [39] and Tzannes [60], which is

214

Page 213: The Generalized Information Network Analysis Methodology for

interesting not only because of the author's relaxed writing style but also for its generalized

approach to radar and communication systems (does this seem familiar?).

Proceeding then with the engineering discussion, all radar systems emit electromagnetic

(EM) radiation and extract information from the re ected energy. Sometimes, the absence

of re ection is used to characterize the medium through which the electromagnetic wave

propagated, but the vast majority of radars analyze the re ected signal. There are four

aspects to the information in the re ected, received signal: presence of a re ector, the EM

wave's travel time to the re ector, frequency content of the received signal, and polarization

of the re ection. Polarization is seldom used since antennas are typically designed for

only one polarization. The presence of a re ected signal is used to detect the presence of

targets. The time delay between the transmitted and received signal gives the range to the

target if the speed of the EM wave is known for the medium. Finally, the spectrum of the

received signal can be used to indicate the velocity of the target relative to the radar by the

phenomenon of Doppler shift. Every radar uses these aspects of information in the received

signal in di�erent ways, depending upon the function and role of the radar. For example,

an air tra�c control radar relies primarily upon the detection and ranging aspects to �nd

and locate air tra�c. A moving target indicator (MTI) radar relies heavily on the signal

spectrum to separate moving targets from stationary clutter. MTI radar also uses range

information to locate target position. A synthetic aperture radar (SAR) for imaging uses

signal time delay to resolve objects in range and frequency information to resolve objects in

cross-range. A range/cross-range image is constructed based on the presence and strength

of re ection.

For the GMTI mission, the presence of a return and its frequency spectrum are used to

detect the moving targets in a strong clutter background. The target velocity is a direct

output of this process. For most MTI radars the target's position is estimated in terms of

range from the antenna by time-gating the received signal to form range bins, and can be

located in azimuth only to within the width of the radar beam.

7.2.1 Locating the Target

The acronym \radar" stands for RAdio Direction And Ranging, and of course the eventual

goal of all surveillance radars is to locate targets in range and direction. In the simplest

terrestrial or airborne radars, the radar scans through an arc in the horizontal plane, and

the direction measurement comes from knowing where the radar is pointed when the echo

is received. In this way, the con�dence in locating the target in the azimuth direction is

limited to the beamwidth of the antenna. The range to the target is linearly related to the

time between transmitting a pulse and receiving the echo. The range resolution is related

215

Page 214: The Generalized Information Network Analysis Methodology for

to the radar pulse duration � . Consider a pair of targets with a separation along the line of

sight of exactly half a pulse length. The echos from these targets will overlap in time, and

the return will be the superposition of the re ection from each of the targets. As a result

the targets cannot be unambiguously isolated, and so the range resolution of the radar is

c�=2, where c is the speed of light in the medium.

For a radar that transmits a sequence of pulses, there also exist range ambiguities. An echo received at the radar could be the pulse that was just transmitted after being reflected from a nearby target, or alternatively it could be a pulse transmitted earlier, having been reflected from a more distant target. In fact, the radar has no way of knowing which of its emitted pulses caused the echo, and therefore cannot know from what range it was reflected. These range ambiguities are related to the pulse repetition frequency PRF, and are separated by a distance R_amb = c/(2 PRF).
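As a simple illustration of these two relationships, the short Python sketch below evaluates the range resolution cτ/2 and the ambiguity spacing c/(2 PRF) for assumed, purely illustrative values of pulse length and PRF:

    # Illustrative sketch (assumed values): range resolution and
    # range-ambiguity spacing for a simple pulsed radar.
    C = 3.0e8            # speed of light (m/s)

    def range_resolution(tau):
        """Range resolution c*tau/2 for pulse length tau (s)."""
        return C * tau / 2.0

    def range_ambiguity_spacing(prf):
        """Separation of range ambiguities c/(2*PRF) for repetition frequency prf (Hz)."""
        return C / (2.0 * prf)

    if __name__ == "__main__":
        tau = 1.0 / 12000.0   # example pulse length (s)
        prf = 3000.0          # example PRF (Hz)
        print("range resolution  : %.1f m" % range_resolution(tau))
        print("ambiguity spacing : %.1f km" % (range_ambiguity_spacing(prf) / 1e3))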

The situation for a space-based radar is essentially the same, with a few complications, as shown in Figure 7-1. The location in azimuth AZ of a non-stationary target is still determined from the angle at which the beam is pointed, and for monostatic (single antenna) radars the resolution of this measurement is equal to the azimuthal beamwidth, θ_AZ. As a consequence of the increased range, this azimuthal uncertainty translates into a very large position uncertainty on the ground (or in the air for airborne targets). This is obviously a disadvantage of space-based radar systems.

As in all radars, a target's range from the space-based radar is measured by a time of flight calculation, but now the line of sight is not horizontal. Referring to Figure 7-1, the radar beam illuminates an elliptical spot on the ground (distorted by the curvature of the Earth) and horizontal distances across this spot are related to the range along the line of sight by a sec(ψ) multiplier, where ψ is the grazing angle between the line of sight and the local surface. Note that in the communications community, this angle is called the elevation angle. However, in radar circles, the term "elevation angle" refers to the angle above the nadir direction that the radar beam is pointed. Within this chapter, the conventions of radar will be used consistently. The range resolution on the ground is therefore cτ/(2 cos ψ), and for a flat Earth approximation this is cτ/(2 sin EL). The "range bins" on the ground are therefore of this width and aligned perpendicular to the line of sight.

The same geometrical factor is applied to the range ambiguities, so that they are now separated by c/(2 PRF cos ψ). Usually, the space-based radar designer would like the radar to be range unambiguous, meaning that there are no range ambiguities within the projected radar footprint. The length of this footprint depends on the satellite altitude, the beamwidth


of the radar, and the elevation angle EL such that, using a flat Earth approximation,

L_{foot} = \frac{h \sin(\theta_{EL})}{\cos^2(EL)}    (7.1)

where θ_EL is the beamwidth in the elevation direction. This effectively places a constraint relationship on the maximum allowable PRF for a given beamwidth, altitude and elevation angle,

PRF_{max} = \frac{c \cos(EL)}{2 h \sin(\theta_{EL}) \tan(EL)}    (7.2)

This turns out to be a very crippling constraint for space-based radar for moving target indication, since it limits the PRF to reasonably low values (thousands of Hz) for any reasonably sized antenna at useful radar frequencies. A PRF higher than this results in range ambiguities across the footprint. The number of ambiguities is simply the ratio between the chosen PRF and the PRF_max calculated above. Higher PRF's are desirable for detecting moving targets in strong clutter backgrounds, as shall be shown in later sections.

Figure 7-1: Space-based radar geometry
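The following Python sketch evaluates Eqns. 7.1 and 7.2 for an assumed altitude, elevation beamwidth and elevation angle (illustrative values only, not Techsat21 parameters), showing how quickly the unambiguous PRF falls into the low-kHz range:

    import math

    # Illustrative sketch of Eqns. 7.1 and 7.2 (flat-Earth approximation);
    # the numerical values below are assumptions, not a system design.
    C = 3.0e8   # speed of light (m/s)

    def footprint_length(h, theta_el, el):
        """L_foot = h*sin(theta_EL)/cos^2(EL)  (Eqn. 7.1)."""
        return h * math.sin(theta_el) / math.cos(el) ** 2

    def prf_max(h, theta_el, el):
        """PRF_max = c*cos(EL) / (2*h*sin(theta_EL)*tan(EL))  (Eqn. 7.2)."""
        return C * math.cos(el) / (2.0 * h * math.sin(theta_el) * math.tan(el))

    if __name__ == "__main__":
        h = 1000e3                     # satellite altitude (m)
        theta_el = math.radians(1.0)   # elevation beamwidth
        el = math.radians(60.0)        # elevation angle from nadir
        print("footprint length: %.1f km" % (footprint_length(h, theta_el, el) / 1e3))
        print("PRF_max         : %.0f Hz" % prf_max(h, theta_el, el))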


7.2.2 The Radar Range Equation

The radar range equation relates the maximum range at which targets can be detected

to the transmitter power, antenna gain and area, signal to noise ratio, signal integration,

system losses, thermal noise, and target radar cross section. The signal power to noise power

ratio (SNR) is an important factor in the detection of valid radar returns in the presence

of noise. The most general form of the radar range equation is [58],

SNR = \frac{P_t G A_e \sigma_T}{(4\pi)^2 R_s^4 L_s}    (7.3)

where P_t is the transmitter power, G is the antenna gain, A_e is the effective aperture area of the antenna, and L_s are the system losses.

The radar cross-section (RCS or σ_T) of a target is the effective area that reflects power back to the radar receiver. Typical values for the average RCS of common targets range from 10 m² for a small boat, to 200 m² for a pickup truck [57]. In actuality, the RCS of a target depends strongly on the wavelength and on the azimuth and grazing angle perspective.
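A minimal numerical sketch of Eqn. 7.3 is given below; the received echo power in the form of the equation is divided by an assumed thermal noise power (k T_s B_n, an addition made here purely for illustration) to give a single-pulse SNR. All parameter values are assumptions, not a specific design:

    import math

    # Illustrative evaluation of the radar range equation (Eqn. 7.3).
    # All numbers below are assumed, round values, not a specific radar design.
    def radar_snr(pt, gain, ae, sigma_t, r_s, losses, noise_power):
        """Single-pulse SNR: received echo power of Eqn. 7.3 divided by noise power."""
        signal = pt * gain * ae * sigma_t / ((4.0 * math.pi) ** 2 * r_s ** 4 * losses)
        return signal / noise_power

    if __name__ == "__main__":
        k_boltzmann = 1.38e-23
        t_sys, bandwidth = 500.0, 1.0e5            # system temperature (K), noise bandwidth (Hz)
        noise = k_boltzmann * t_sys * bandwidth    # assumed thermal noise power (W)
        snr = radar_snr(pt=1.0e4, gain=1.0e5, ae=10.0, sigma_t=10.0,
                        r_s=8.0e5, losses=2.0, noise_power=noise)
        # A low single-pulse value like this is why pulse integration matters.
        print("single-pulse SNR = %.1f dB" % (10.0 * math.log10(snr)))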

7.2.3 Detecting the Target

To best understand the radar detection problem, it will be posed in a very simple mathe-

matical context. The radar receives a return signal r(t) and a decision must be made as to

whether,

r(t) = N(t) (no echo present) (7.4)

or

r(t) = N(t) + s(t) (echo present) (7.5)

where N(t) is noise and interference, and s(t) is the target return. This is of course equivalent to the general detection problem discussed in Chapter 4. In that chapter, it was also stated that the best way to process (prior to detection) a signal corrupted by noise and interference was using a matched filter, since it rejects everything except the signal-plus-noise components that exist in the same signal subspace as s(t). In less mathematical terms, this just means that the filter only lets through the signal-plus-noise components that "look like" the expected target return. For radar this involves correlating the return with replicas of the transmitted signal, since the return should look like what was transmitted, only delayed, attenuated, frequency-shifted and phase-distorted. Since the noise and interference N(t) is in general a random variable, the output of the matched filter will also be a random variable. The detection process then involves making a decision based on samples of a random variable measured from the output of the matched filter.


7.2.4 Noise-Limited Detection

If the system can be assumed to be noise-limited (with little or no clutter or interference)

N(t) will be zero-mean Gaussian white noise. The decision process then simply reduces to

determining whether the measured samples came from a zero mean Gaussian pdf, or from

a Gaussian pdf with a non-zero mean equal to the energy S of the desired signal s(t). If we

deduce that the mean is zero, then we have decided there is no target echo; otherwise we

have decided that there is a target. Errors can occur by declaring a false alarm, or worse,

by missing a detection. The probabilities of these errors are a function of the decision rule,

which in turn depends on the type of detector.

A basic incoherent detector (peak detector) measures the amplitude (envelope) of the received signal to determine whether it exceeds a predetermined threshold. As discussed in Chapter

4, a positive radar detection is declared if the envelope of the received signal exceeds the

threshold voltage vT . The peak detector is therefore actually two detectors: an envelope

detector to measure the envelope of the signal, and a threshold detector to make the decision.

The probability of a false alarm for each decision made by the threshold detector is given

by,

Pr(false alarm) = P_{fa} = \int_{v_T}^{\infty} g_0(x)\,dx    (7.6)

where g0 is the probability density function of the noise entering the threshold detector.

Similarly, the probability of detection for the incoherent peak detector is,

P_D = \int_{v_T}^{\infty} g_1(x)\,dx    (7.7)

where g1(x) is the probability density function of the signal-plus-noise envelope at the input

to the threshold detector in the case when an echo is present. The form of g1(x) depends

on the nature of the transmitted radar signal and on the signal processing performed ahead

of the detector.

Single pulse detection

For a decision based on a single measurement, no further processing is done between the

envelope detector and the threshold detector. In this case, g0(x) and g1(x) are the same as

the pdf's at the output of the envelope detector.

If the Gaussian noise N(t) entering the envelope detector has variance σ², the probability density function of the noise at the output of the envelope detector, g_0(x), has a Rayleigh distribution [39],

g_0(x) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right)    (7.8)

and so the probability of false alarm for each decision process is

Pr(false alarm) = P_{fa} = \int_{v_T}^{\infty} g_0(x)\,dx = \exp\left(-\frac{v_T^2}{2\sigma^2}\right)    (7.9)

If the matched filter output is compared continuously to the threshold, independent

samples of noise at a rate of Bn will give an average false-alarm rate FAR = BnPfa, where

Bn is the noise bandwidth. This false alarm rate is an important measure of the integrity

of radar systems, and is often stated as its inverse, the false alarm time.

Considering now the signal component, if the signal, after downconversion to an intermediate frequency (IF), is a sinusoid with a peak amplitude A, such that s(t) = A cos(2π f_{IF} t), the output envelope of the signal-plus-noise will have a Rician distribution [39],

g_1(x) = \frac{x}{\sigma^2} \exp\left[-\frac{x^2 + A^2}{2\sigma^2}\right] I_0\left(\frac{A x}{\sigma^2}\right)    (7.10)

where I_0 is the modified Bessel function of the first kind of order zero (the Bessel function with imaginary argument). The probability of detection given by the integral of Eqn. 7.7 with the Rician pdf has no closed form solution. There are many approximate solutions in the literature [39], [59], of which the North approximation is perhaps the simplest,

P_D = \frac{1}{2}\,\mathrm{erfc}\left(\sqrt{\ln(1/P_{fa})} - \sqrt{SNR + 1/2}\right)    (7.11)

As expected from the understanding of the generalized Integrity, the probability of

correctly detecting a target is a strong function of the SNR. In actuality, it is a function

of the energy in the information symbol component compared to the energy in the noise,

for each decision process. For the case when only one signal measurement is used to make

the decision, this is equivalent to the average SNR, and the equation above can be used to

calculate the Integrity (PD). However, detection based on a single measurement is not often

used in modern radar, and by definition not for MTI. The reason for this is that decisions made using only an instantaneous sample provide no information about the time-varying (motion) properties of the target and do not take advantage of the fact that the transmitted signal may have reflected from the target for an extended period. This, of course, increases the total energy from the target that is available for making a decision. Large improvements can be obtained by processing more samples of the reflected signal.
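For a numerical feel for these single-pulse relations, the sketch below (an illustration only, using SciPy's erfc) computes the threshold implied by Eqn. 7.9 and the North approximation of Eqn. 7.11 for a few arbitrary SNR values:

    import math
    from scipy.special import erfc

    # Minimal sketch of single-pulse, noise-limited detection: the threshold
    # implied by Eqn. 7.9 and the North approximation of Eqn. 7.11.
    def threshold_from_pfa(pfa, sigma):
        """Invert Eqn. 7.9: v_T = sigma*sqrt(2*ln(1/Pfa))."""
        return sigma * math.sqrt(2.0 * math.log(1.0 / pfa))

    def pd_north(snr, pfa):
        """North approximation (Eqn. 7.11); snr is a linear ratio."""
        return 0.5 * erfc(math.sqrt(math.log(1.0 / pfa)) - math.sqrt(snr + 0.5))

    if __name__ == "__main__":
        pfa = 1e-6
        print("v_T / sigma = %.2f" % threshold_from_pfa(pfa, 1.0))
        for snr_db in (6.0, 10.0, 13.0):
            snr = 10.0 ** (snr_db / 10.0)
            print("SNR = %4.1f dB -> PD = %.3f" % (snr_db, pd_north(snr, pfa)))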


Noncoherent integration

For pulsed radar, as is our interest here, the transmitted signal is a sequence of pulses (modulated on a carrier frequency) and the number of pulses that reflect from the target is the product of the pulse repetition frequency (PRF) and the dwell time, T_d,

n_p = T_d \cdot PRF    (7.12)

There are several ways to use these np pulses to improve the detection. The simplest is

to use some kind of linear-weighted integrator after the envelope detector to smooth the

variations in the noise. This is noncoherent integration since all phase information has been

removed by the envelope detector.

The most common type of noncoherent integration is the uniform-weight integrator,

discussed by Marcum [61] and Swerling [62]. If si is the voltage at the output of the

envelope detector after receiving the ith radar pulse, then the uniform-weight integrator

computes the sum,

s = \sum_{i=1}^{n_p} s_i    (7.13)

The effect of this operation is to lower the SNR (of each sample) that is required for detection. This is understood by noting that noncoherent integration is a smoothing process [59]. When n_p independent signal-plus-noise samples are summed, the standard-deviation-to-mean ratio is reduced by √n_p relative to the variation of the individual samples. It is this smoothing of the noise that improves the detection performance of noncoherent integrators. Specifically, since the hard-decision rule is based on a threshold value, smaller variations

allow the threshold to be placed closer to the mean value of the signal-plus-noise while

maintaining the same false-alarm probability. With this smaller threshold-to-mean ratio, a

smaller signal can produce a threshold crossing and the sensitivity of the system is improved.

The improvement of noncoherent integration for detection is therefore primarily deter-

mined by how well the noise variation can be reduced, and to some extent is independent of

the actual signal characteristics. This means that noncoherent integration provides a rea-

sonable processing gain even when the signal has a random phase and is rapidly fluctuating,

as is the case with dynamic targets [59]. This distinguishes it from coherent integration (dis-

cussed in the next section) that requires the signal to have predictable or measurable phase

characteristics.

The probability of detection for a noncoherent integrator followed by a threshold de-

tector is still given by Eqn. 7.7, but now the probability density function g1(x) is more

complicated and the actual form of g1(x) depends on the statistics of both the signal and


the noise. A great deal of literature exists to estimate the effects of target fluctuations, and the classifications of the severity of the fluctuations as defined by Swerling [62] have become the accepted standard. Within this classification system, the Case 2 Swerling model, corresponding to a rapidly fluctuating target that gives signal fluctuations from pulse to pulse, is considered the most likely (worst case) scenario for the detection of moving targets. There are a confusing number of published approximations to the integral of Eqn. 7.7 under these conditions. A well accepted approximation that appears to match observations and exact (numerical) solutions very closely is given by Neuvy [63]. For noncoherent integration of n_p pulses the detectability of a Swerling 2 target can be approximated by,

\log_{10}\left(\frac{1}{P_D}\right) = \left(\frac{\log_{10}(n_{fa})}{n_p^{2/3}\, SNR}\right)^{1/\alpha}    (7.14)

where α = (1/6) + exp(−n_p/3) and n_{fa} is the false alarm number,

n_{fa} = \frac{\ln 0.5}{\ln(1 - P_{fa})}    (7.15)

The probability of detection calculated from this relationship is plotted versus the SNR (per pulse) in Figure 7-2 for P_{fa} = 10^{-6} with various values of n_p.

Figure 7-2: The Neuvy approximation [63] for the probability of detection of a Swerling 2 target using noncoherent integration of pulses and a simple peak detector; P_{fa} = 10^{-6}
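A small Python sketch of Eqns. 7.14 and 7.15 is given below; for an assumed 10 dB per-pulse SNR it reproduces the general behaviour of Figure 7-2 for a few values of n_p:

    import math

    # Minimal sketch of the Neuvy approximation (Eqns. 7.14-7.15) for a
    # Swerling 2 target with noncoherent integration of np pulses.
    def false_alarm_number(pfa):
        """n_fa = ln(0.5) / ln(1 - Pfa)  (Eqn. 7.15)."""
        return math.log(0.5) / math.log(1.0 - pfa)

    def pd_neuvy(snr, n_pulses, pfa):
        """Probability of detection from Eqn. 7.14; snr is the per-pulse linear SNR."""
        alpha = 1.0 / 6.0 + math.exp(-n_pulses / 3.0)
        x = (math.log10(false_alarm_number(pfa)) /
             (n_pulses ** (2.0 / 3.0) * snr)) ** (1.0 / alpha)
        return 10.0 ** (-x)

    if __name__ == "__main__":
        pfa, snr = 1e-6, 10.0 ** (10.0 / 10.0)   # 10 dB per pulse (assumed)
        for n_pulses in (2, 4, 8):
            print("np = %d -> PD = %.3f" % (n_pulses, pd_neuvy(snr, n_pulses, pfa)))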


Coherent integration

In many modern radar systems it is possible to control, or at least compensate for, the phase characteristics of the signal. Coherent integration of the pulses is obtained by matching the IF filter to the entire pulse sequence from the target, requiring that the signal has a predictable phase relationship (coherence) over this period. A distinction is made between coherent integration and coherent processing; the former is a special case of the latter that involves coherently summing sequential samples of the signal before the envelope detector. Coherent processing simply specifies that multiple samples are processed ahead of the envelope detector, utilizing the phase information in the signal to improve detection. Other types of coherent processing are pulse compression, in which the bandwidth of the pulses is spread to improve range resolution, and synchronous detection, in which the actual decision making is performed coherently, with no envelope detection whatsoever.

Coherent integration requires that the phase response of the filter brings all signal components into the same phase when they are added [39]. For a pulse sequence, the filter must be matched to the pulse-to-pulse phase variation of the target. The random fluctuations in the starting phase between pulses of the transmission can be compensated in the receiver using a reference signal from a stable oscillator locked to each transmit pulse. However, phase variations also occur due to the target's motion (through the Doppler shift), and this is not known a priori. Coherent integration must therefore involve several parallel receive channels tuned to slightly different frequencies to account for all possible doppler shifts. This is the basic principle behind pulse-doppler radar, to be discussed in more detail later. For now, it suffices to appreciate that only radar systems that account for target motion effects can practically achieve coherent integration. The consequence of this is that there is an effective limit on the duration over which the integration can be performed for high-dynamic targets. It is a common rule of thumb that a receiver can maintain coherence over time scales of around 50 ms for most conventional targets.

The benefit of coherent integration is an n_p-fold increase in the SNR compared to single pulse incoherent detection. Essentially, the signal amplitudes add coherently so that the amplitude of the resulting signal is n_p times the amplitude of each signal pulse, and the signal power is increased by a factor of n_p². Receiver noise has a random phase and amplitude

from pulse to pulse. As a result, the summed noise amplitude may or may not exceed the

amplitude of individual pulses. On average, the power level of the noise is increased by a

factor of np as a result of the coherent summing, hence the np-fold improvement in terms

of SNR.
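This n_p-fold behaviour is easy to verify numerically. The following Monte-Carlo fragment (illustrative assumptions: a constant-phase echo in complex Gaussian noise) shows the SNR of the coherent sum rising by roughly a factor of n_p over the single-pulse value:

    import numpy as np

    # Monte-Carlo check (illustrative): coherent summation of np constant-phase
    # pulses raises signal power by np^2 and noise power by np, i.e. SNR by np.
    rng = np.random.default_rng(0)
    n_pulses, n_trials = 10, 100_000
    amplitude, sigma = 1.0, 1.0

    # Complex Gaussian receiver noise, independent from pulse to pulse.
    noise = (rng.normal(0.0, sigma, (n_trials, n_pulses)) +
             1j * rng.normal(0.0, sigma, (n_trials, n_pulses)))

    snr_single = amplitude ** 2 / np.mean(np.abs(noise[:, 0]) ** 2)
    summed_noise_power = np.mean(np.abs(noise.sum(axis=1)) ** 2)
    snr_coherent = (n_pulses * amplitude) ** 2 / summed_noise_power

    print("single-pulse SNR : %.2f" % snr_single)
    print("coherent-sum SNR : %.2f  (about np times higher)" % snr_coherent)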

As noted earlier, noncoherent integration benefits from reducing the variation of the noise fluctuations. Coherent integration, however, is more simply an improvement in the


SNR. Coherent integration does not in fact reduce the noise variation, since both the mean

noise power and the standard deviation increase by the same factor np. Thus, to achieve

the same Pfa, the threshold setting must be the same as for the single pulse detection.

However, the signal power has increased, and the detection capability is the same as that

which would be achieved with a single pulse that was np times longer.

The probability of detection can then be calculated using Eqn. 7.11, but with a SNR that

is a factor np higher than for a single pulse. This is not however the only option. Although

the coherent dwell time is restricted to be within the coherence time of the target, there

is no such restriction placed on noncoherent integration dwell time. The two techniques

can thus be combined; after performing some level of coherent integration, limited by the

requirement for coherence, several integrated pulses can be added noncoherently to further

improve the detection characteristics. The resulting probability of detection for Swerling 2

targets can be approximated by substituting the coherently integrated SNR into Eqn. 7.14,

\log_{10}\left(\frac{1}{P_D}\right) = \left(\frac{\log_{10}(n_{fa})}{n_i^{2/3}\, n_c\,(SNR)}\right)^{1/\alpha}    (7.16)

where n_i is the number of "pulses" that are summed noncoherently, each having a signal to noise ratio of n_c·SNR from the coherent integration of n_c separate radar pulses with signal to noise SNR. This is the method most often used in military surveillance radar since it achieves many of the benefits of each type of integration.

A final alternative for the detection is to not use an envelope detector at all. The detection process can take place completely coherently. By demodulating the filtered received signal with a synchronous replica of the transmitted carrier, the output is the baseband pulse modulation multiplied by a sinusoid at the beat frequency between the reference and the phase distorted carrier. The presence of this beat frequency is an immediate indication that the target is moving, since a stationary target will not distort the phase of the carrier wave. This is the basic principle behind MTI radar, that will be discussed later. Note that there is a difference between the MTI mission, that simply specifies that the radar identify moving targets, and the MTI radar, that represents one such solution to this mission need.

7.2.5 Clutter-Limited Detection

So far all the discussion has involved detection that is limited by the effects of thermal noise. Unfortunately, for radars designed to look toward the ground, as for the GMTI mission, the noise power is insignificant compared to the power of the clutter returns.

Main-lobe clutter is the energy backscattered from the Earth's surface within the footprint of the main beam. Main-lobe clutter is particularly severe since it is amplified by


the same antenna gain as the signal itself. The received signal-to-clutter ratio (SCR) for

main-lobe clutter is therefore,

\frac{S}{C} = \frac{\sigma_T}{\sigma_c}    (7.17)

where σ_T and σ_c are the effective radar cross sections (RCS) of the target and the clutter respectively [57]. The effective RCS of the main-lobe clutter is given by,

\sigma_c = A_c\, \sigma^0    (7.18)

where A_c is the area of the Earth's surface illuminated by the radar, and σ^0 is the average clutter cross section per unit area. From Figure 7-1, the illuminated area for each range bin is,

A_c = R_s\, \theta_{AZ}\, N_a\, \frac{c\tau}{2}\, \sec\psi    (7.19)

where Rs is the radar range to the surface along the line of sight, and Na is the number

of range ambiguities in the antenna footprint. The main-lobe clutter is therefore directly

proportional to the number of range-ambiguities in the footprint. The clutter power C can

be computed with a variation of the radar range equation [58],

C = \frac{P_t G A_e \sigma_c}{(4\pi)^2 R_s^4 L_s}    (7.20)

where P_t is the transmitter power, G is the antenna gain, A_e is the effective aperture area of the antenna, and L_s are the system losses.
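To give a sense of scale, the fragment below evaluates Eqns. 7.17-7.19 for assumed geometry and an assumed clutter reflectivity σ^0 (in practice σ^0 would be taken from empirical tables such as those in [57]); the resulting signal-to-clutter ratio is strongly negative in dB, which is the clutter problem in a nutshell:

    import math

    # Illustrative sketch of the main-lobe clutter terms (Eqns. 7.17-7.19);
    # all numbers are assumed for the example.
    def mainlobe_clutter_rcs(r_slant, theta_az, n_amb, tau, grazing, sigma0):
        """sigma_c = A_c*sigma0, A_c = R_s*theta_AZ*N_a*(c*tau/2)*sec(psi)."""
        c = 3.0e8
        area = r_slant * theta_az * n_amb * (c * tau / 2.0) / math.cos(grazing)
        return area * sigma0

    def signal_to_clutter(sigma_t, sigma_c):
        """S/C = sigma_T / sigma_c  (Eqn. 7.17)."""
        return sigma_t / sigma_c

    if __name__ == "__main__":
        sigma_c = mainlobe_clutter_rcs(r_slant=1.0e6, theta_az=math.radians(0.1),
                                       n_amb=4, tau=1.0e-6,
                                       grazing=math.radians(30.0), sigma0=0.01)
        scr = signal_to_clutter(sigma_t=10.0, sigma_c=sigma_c)
        print("clutter RCS = %.0f m^2, SCR = %.1f dB" % (sigma_c, 10.0 * math.log10(scr)))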

Sidelobe clutter is the energy backscattered from the Earth's surface outside the main-

beam footprint, and enters the antenna through the sidelobes. The sidelobe clutter level

relative to the main-lobe clutter is determined by the antenna's main-lobe-to-sidelobe gain ratio. To

minimize this component of clutter, antennas with very low sidelobe gain are therefore

desirable.

The surface clutter cross section per unit area, σ^0, is a function of many parameters

including the type of terrain or sea conditions, frequency, polarization, and grazing angle,

as well as radar parameters, such as angular resolution, bandwidth, pulse waveform and

clutter processing techniques. The enormous number of variables involved in characterizing

the clutter has meant that most of the research on this issue is empirically based. Extensive

databases for different values of σ^0 for many possible conditions exist in the literature. A

great deal of this is reproduced in the Space Based Radar Handbook [57].

One important aspect of the variability of σ^0 is that in general, it increases dramatically at large grazing angles. This means that space-based radars characteristically have a "nadir hole" extending 20°-30° from the nadir direction, in which the SCR is too low for reliable


detection [57].

Clutter statistics

For analysis and prediction of the performance of radar in the presence of clutter, it is

necessary to have models for the statistics of the clutter amplitude. The simplest and

most analytically convenient model of the clutter amplitude is to assume it has Rayleigh

statistics. This is basically an assumption that the clutter return is the superposition of

the returns from a large number of independent, random scatterers. The in-phase and

quadrature amplitude fluctuations are then described by a Gaussian pdf, and the envelope

of the clutter has a Rayleigh distribution. The validity of this assumption is dependent on

the terrain, and on the radar resolution; the radar resolution cell (range and azimuth) must

be large compared to the characteristic size of the scatterer variations. This is also sensitive

to frequency, since the dominant scatterers are frequency dependent.

The convenience of assuming Rayleigh clutter statistics is that target detection in the

presence of clutter can be treated in exactly the same way as for Gaussian noise. The

equations of the previous sections can all be applied, with the only modification that the SNR is replaced by the total signal-to-interference ratio (SIR),

SIR = \frac{S}{N + C}    (7.21)

where S, N and C represent the signal, noise and clutter powers respectively.

If Rayleigh conditions cannot be met, a log-normal distribution of clutter has been proposed

[58]. This is not discussed further, since all the analysis performed in this chapter assumed

Rayleigh clutter statistics.

Impact of clutter

The real problem with clutter is that it is usually coherent between pulses. This is partic-

ularly true for the returns from stationary clutter, which is more coherent pulse-to-pulse

than the moving targets that the radar is trying to detect. As a result, the integration

processes described earlier do not help at all in terms of improving the SCR. Clutter, cov-

ering the entire resolution cell, and not being designed to be stealthy, has returns that are

characteristically much larger than the target, and can simply overpower the target signal.

There is however a way to attack the clutter problem, and that is to use the fact that

the targets move, but the clutter does not. Recall from Chapter 4 that effective Isolation

requires signals that are separable in some domain; motion, leading to frequency separation

is the key to dealing with clutter, and it is applied using pulse-doppler radar.


7.2.6 Pulse-Doppler Radar

Pulse doppler processing is an implementation of coherent integration and uses the fact that the targets are assumed (or known) to have a radial velocity v_t relative to the radar antenna. This velocity gives rise to a Doppler frequency shift Δf = 2v_t/λ in the reflected signal that can be used to assist the detection process and identify the target. Essentially, the radar "looks" for signal components that are frequency shifted, and performs the detection processing on each of these components separately. The process by which this occurs in a standard pulse-doppler radar is simply explained.

The key to understanding pulse-Doppler processing lies in understanding the form of the frequency spectrum of the received radar signal, an example of which is shown in Figure 7-3. This continuous spectrum is the Fourier transform of a single square pulse convolved with an infinite impulse train (representing the pulsed transmission) and multiplied by a square window corresponding to the length of time T_d that the radar dwells upon the target. The peaks of this spectrum are spaced by the PRF, and are of width 1/T_d.

Figure 7-3: Frequency spectrum of a sequence of square radar pulses; PRF = 3000 Hz, pulse length τ = 1/12000 seconds, and dwell time T_d = 1/300 seconds (10 pulses)

After downconversion to IF, this received signal is split by a bank of n_r range gates, as shown in Figure 7-4. These range gates are filters that open and close at intervals of time corresponding to different flight times, equivalent to target range.

The output from each of these gates is fed into a system of n_f narrow bandpass filters


Figure 7-4: Simplified block-diagram for pulse-doppler radar processing

(NBF) centered around the middle of the spectrum of the signal (the carrier frequency shifted to IF). Each filter has a bandwidth f_D equal to the width of the peaks 1/T_d in the signal spectrum, with a center frequency at some anticipated shift, left or right, away from the middle peak. The idea is that one of the filters will "capture" the Doppler-shifted central peak. The output of this bank of filters is essentially the n_f-point discrete Fourier transform of the input signal, and is in fact often implemented digitally with an FFT. Detection processing is performed on the narrowband output from each of these filters, and the system measures the Doppler shift, and hence the target's radial velocity, by identifying the filter that declares a detection.
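Conceptually, this range-gate-plus-filter-bank structure is just a DFT across the pulse dimension in every range gate. The sketch below (a toy illustration, not the processing chain of any real system; the injected target parameters are arbitrary) builds such a range-Doppler map with NumPy:

    import numpy as np

    # Toy sketch of pulse-Doppler processing: arrange the received samples into
    # range gates and FFT across the pulse dimension, which implements the bank
    # of narrowband Doppler filters. All parameters below are assumed.
    def range_doppler_map(samples):
        """samples: complex array of shape (n_range_gates, n_pulses).
        Returns the magnitude of the Doppler spectrum in every range gate."""
        return np.abs(np.fft.fft(samples, axis=1))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_gates, n_pulses, prf = 64, 32, 3000.0
        data = (rng.normal(size=(n_gates, n_pulses)) +
                1j * rng.normal(size=(n_gates, n_pulses)))      # receiver noise
        # Inject a moving-target echo in range gate 20 at Doppler bin 5.
        t = np.arange(n_pulses) / prf
        data[20, :] += 10.0 * np.exp(2j * np.pi * (5 * prf / n_pulses) * t)
        gate, dbin = np.unravel_index(np.argmax(range_doppler_map(data)),
                                      (n_gates, n_pulses))
        print("detection in range gate %d, Doppler bin %d of %d" % (gate, dbin, n_pulses))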

Since each filter has a bandwidth f_D equal to the width of the peaks in the spectrum, the maximum number of filters that can be used to measure the shifted central peak is limited by the spacing of the peaks in the spectrum. As described earlier, this is equal to the PRF, and so the maximum number of filters is,

n_f = \frac{PRF}{f_D} = PRF \cdot T_d = n_p    (7.22)

Adding more filters than this simply places a filter directly over one of the other peaks in the spectrum, and provides no additional information about the signal but represents an ambiguity about which peak is being measured. Even with the correct number of filters, these ambiguities can occur due to the Doppler shift of the target; it is assumed that a large output from one of the filters measures a Doppler-shifted central peak, but we cannot be sure that it is not due to a shifted version of one of the other peaks. For example, the return from a very fast moving target may have shifted the entire spectrum by an amount equal to the PRF. The large output from the filter centered at zero frequency (relative to the IF) would incorrectly indicate that the target has zero radial velocity. This Doppler ambiguity is unavoidable, but its impact can be reduced by increasing the PRF such that the peaks are so far apart that no reasonable target can have such a speed as to create ambiguities [60]. Unfortunately, this comes at a price, since increasing the PRF gives more range ambiguities. The radar engineer is stuck between a rock and a hard place and can only hope to obtain a compromise. Of course, all this trading of ambiguities (or accuracies) in range (time) or velocity (frequency) is actually a statement of the Uncertainty Principle; it is simply impossible to reduce the total amount of ambiguity in both domains.

It is perhaps possible that higher PRF's can be used if, as part of a pulse compres-

sion scheme, each sequential pulse is spread in bandwidth and encoded with a different orthogonal PRN code. For instance, pulse 1 is encoded with code 1, pulse 2 with code 2,

etc. The number of separate codes that are used determines how often they repeat, which

sets the number of range ambiguities. More codes translate linearly into fewer ambiguities.

After range gating the encoded pulses, the code-modulation can be removed with a set of

synchronous reference codes, and sequential pulses can be integrated as usual in the Pulse-

Doppler processor. The number of Doppler ambiguities is therefore unchanged. This would

seem to achieve exactly what was just stated as being impossible; uncertainty has been

removed in both domains. How can this be? Well, the answer is, as always straightforward

when one understands the problem correctly. The Uncertainty Principle has not been vio-

lated since the initial conditions have been changed. By modulating the pulses with a set

of orthogonal codes, the total amount of information in the radar signal has been increased

linearly. We could expect therefore, a linear reduction in the ambiguities. Whether or not

this innovative (crazy?) idea can be realized in practice is not known, but the potential

benefits it offers are great, since the PRF can be given any value to optimize detection.

If the radar platform is moving relative to the ground, as all non-geostationary satellites

do, the clutter returns from the ground also have a Doppler shift. The absolute Doppler

shift is related to the projection of the platform velocity vector into the line of sight. Using

the coordinate system of Figure 7-1, the absolute doppler shift for a point on the ground

in the direction (AZ, EL) from the satellite is therefore given by (ignoring Earth rotation


and curvature),

f_d = \frac{2 v_p}{\lambda} \cos(AZ) \sin(EL)    (7.23)

This means that the velocity, and hence the Doppler shift, varies across the beam footprint, since different parts of the ground have different radial velocities compared to the antenna. For example, with a side-looking radar, the ground at the leading edge of the beam is moving toward the radar, while the ground at the trailing edge is moving away. In this case, the doppler spread Δf_d across a beam of half-width Θ can be determined by calculating Eqn. 7.23 at the leading and trailing edges of the beam, where AZ_leading = π/2 + Θ and AZ_trailing = π/2 − Θ, and differencing the result,

\Delta f_d = \frac{4 v_p}{\lambda} \sin(EL) \sin(\Theta)    (7.24)

The same principle holds for forward-looking radars, where there is a velocity difference between the heel and toe of the beam due to a different elevation angle. In both cases, the spread in Doppler shifts across the beam is directly related to the beam width, either in azimuth (for side-looking) or in elevation (forward-looking). Clutter signals entering the receiver through sidelobes are shifted by an amount different to the main lobe clutter due to the greater angle away from boresight. The net result is that the clutter signal is spread over a finite bandwidth, and this fact is important for MTI.
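The following fragment evaluates Eqns. 7.23 and 7.24 for assumed platform speed, wavelength and beamwidth (illustrative numbers only):

    import math

    # Illustrative evaluation of Eqns. 7.23 and 7.24 with assumed values.
    def clutter_doppler(v_p, wavelength, az, el):
        """Absolute Doppler shift of a ground point at (AZ, EL)  (Eqn. 7.23)."""
        return 2.0 * v_p / wavelength * math.cos(az) * math.sin(el)

    def doppler_spread_side_looking(v_p, wavelength, el, half_beamwidth):
        """Main-beam clutter Doppler spread for a side-looking radar  (Eqn. 7.24)."""
        return 4.0 * v_p / wavelength * math.sin(el) * math.sin(half_beamwidth)

    if __name__ == "__main__":
        v_p, lam = 7500.0, 0.03                    # platform speed (m/s), X-band wavelength (m)
        el, half_bw = math.radians(45.0), math.radians(0.05)
        print("clutter Doppler at beam centre: %.0f Hz" %
              clutter_doppler(v_p, lam, math.pi / 2.0, el))
        print("main-beam clutter spread      : %.0f Hz" %
              doppler_spread_side_looking(v_p, lam, el, half_bw))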

Recall that pulse-doppler radar splits the input signal into n_f separate narrowband signals, and so a different part of the clutter spectrum is passed through each filter. This has several consequences:

• It is usually assumed that the filters that capture the main-beam clutter (around zero frequency relative to IF, after compensating for platform motion) are swamped by this clutter, since it has been amplified by the high gain of the antenna. These filters would have been the ones to detect very slow moving targets, since a stationary target has the same absolute velocity as the clutter. As a result, the frequency spread of the main-beam clutter determines the smallest radial velocity that a target can have and still be detected. This minimum detectable velocity (MDV) is hence directly related to the beamwidth of the antenna, and can be estimated by equating the Doppler shift of a target with radial velocity v_t = MDV to the Doppler shift of the beam-edge clutter. For a side-looking radar this is,

MDV = v_p \sin(\Theta) \sin(EL)    (7.25)

• The amount of clutter that competes with the signal from a target is the sum of the component of the clutter spectrum within the bandwidth of a single NBF, and the aliased clutter components that have Doppler shifts equal to the translates of this passband by integer multiples of the PRF. These aliased clutter components fold into the passband of the NBF's through the Doppler ambiguities of Figure 7-3. Clearly, smaller filter bandwidths, corresponding to long dwell times, reduce the total amount of clutter that can compete with a target signal. Recall there is a practical limit of approximately 50 ms placed on the dwell time from considerations of phase-coherence. Also, higher PRF's reduce the number of aliased clutter components that can fold into the filter bandpass. Of course, this comes at the expense of range ambiguities.

The actual detection for each channel can be performed by a basic peak detector, and

most often features an additional level of noncoherent integration on the (coherent) output of

the pulse-doppler processor, as described in earlier sections. An alternative to the incoherent

peak detector is to implement a synchronous detector. Adding this to the pulse doppler

output creates the so-called MTI radar. The advantage of this approach is that it can be used to obtain very accurate measurements of target velocity. The cost is a loss in processing gain, due to a mismatch loss [59], and an extremely complicated implementation.

7.2.7 The Potential of a Symbiotic Distributed Architecture

There are therefore some implicit characteristics of space-based radar that make the detec-

tion of ground moving targets very difficult. The most important of these are the isolation

and integrity problems of reliably detecting and locating the targets to a high spatial res-

olution, while rejecting clutter and other interferers. These problems have led to proposed

spacecraft designs that feature either very large apertures or very complicated and expensive

(adaptive) clutter-processing schemes.

As was suggested in Chapter 3, symbiotic architectures can offer improvements in both isolation and integrity compared to singular deployments, and can do so using reasonably modest satellite resources. It was suggested in that chapter that for the search mission, the most beneficial architecture is one that uses independent wide-angle beams on transmit (to achieve a high search rate) but coherently forms many simultaneous receive beams using the signals from all the satellites, to achieve high gain and clutter rejection. The creation of a large sparse array from a symbiotic cluster of formation flying small-satellites can therefore lead to improved capabilities by supporting a very narrow main-lobe beamwidth. This has the effect of increasing the ground resolution, reducing the main-lobe clutter and the MDV. Also there will be no range ambiguities in the main-lobe, and the PRF can be increased. This last point is somewhat countered by the characteristically high sidelobes of sparse arrays, so that even range ambiguities in the sidelobes contribute significantly to clutter. Nevertheless, the possible improvements offered by such a system, together with the potential for cost savings from using smaller satellites at lower orbital altitudes, make


it worthy of investigation. The Air Force Research Laboratory has begun a study to do

exactly this, incentivized by the Scientific Advisory Board's suggestion that developing and

deploying these distributed technologies is a primary goal for the Air Force in the 21st

century. Techsat21 is the name of the proposed design, and it basically involves using

symbiotic clusters of small (or micro-) satellites to perform the GMTI mission.

7.3 The Techsat21 Concept

Techsat21 relies on forming large sparse arrays from clusters of formation flying satellites, each weighing less than 100 kg, for the detection of ground-moving targets in a strong clutter background. To achieve the desired level of coverage, several clusters can be deployed, so the system can be classified as a clustellation within the GINA framework. An artist's impression of the system on orbit is shown in Figure 7-5, based on the current conception of the design. The gravity gradient satellites will feature several state-of-the-art technologies including Micro-Electro-Mechanical Systems (MEMS), advanced solar arrays and batteries, modular transmit/receive modules, and, most importantly, very fast microprocessors.

Figure 7-5: Artist's impression of the operational Techsat21 system [48]

Operationally, the satellites receive and process the returns not only from their own

transmitters, but also the bistatic responses from the orthogonal transmit signals of the

other satellites in the cluster. Since each satellite has a different geometry to the target, the phases of the sampled radar signals are different for each satellite-ground-satellite path. This permits multiple simultaneous high resolution, high gain receive beams to

be created during post-processing, supporting a greatly enhanced isolation and integrity

capability. The key phenomenology is therefore that of sparse signal-processing arrays, and

its principles must be understood before any further analysis or design can be introduced.


7.3.1 Signal Processing Arrays

In many modern remote sensing systems, the directional properties of the system (angular resolution, spatial filtering) are not only a function of the antenna, but also of the processing of the signals received or transmitted [34]. These "signal-processing antennas" include, but are not limited to: synthetic aperture antennas, which sweep out a large synthetic aperture using the motion of a real aperture; interferometers, which combine the signals from two widely spaced apertures; and sparse arrays, the primary interest here, in which antenna patterns equivalent to large filled apertures are reproduced using far fewer, widely separated antenna elements.

The possible methods by which sparse arrays can be formed are numerous. Radio

astronomers have long used the concept of a multiplying array in which signals from two

sub-arrays are multiplied during post-processing. This is in fact the reason that sparse

arrays are classified as signal-processing arrays. However, the concepts discussed in this chapter involve only additive arrays, in which the signals from array elements are added coherently. Before discussing the different types of sparse arrays considered, it is helpful to

review the mathematics that are used to calculate the directional properties of all arrays.

Arrays and the concept of spatial frequency

An array is an aperture excited at only discrete points or localized areas. The array consists

of small radiators or collectors called elements. If the element radiation signal strengths are

I_n and are located at positions x_n, then the general aperture excitation can be written:

I(x) = \sum_n I_n\, \delta(x - x_n)    (7.26)

This can also be written as:

I(x) = i(x)\, s(x)    (7.27)

where i(x) is an underlying current density and s(x) is a sampling function:

s(x) = \sum_n \delta(x - x_n)    (7.28)

The far field radiation pattern f(u) of such an array is given by the Fourier transform of the excitation. If all elements are illuminated with uniform strength, I(x) = s(x), and so,

f(u) = \mathcal{F}\{I(x)\} = \int I(x)\, e^{jkxu}\, dx    (7.29)

     = \int \sum_n^N \delta(x - x_n)\, e^{jkxu}\, dx    (7.30)

     = \sum_n^N e^{jk x_n u}    (7.31)

where the Fourier variable u = sin θ. Note that f(u) is the far field radiation pattern in terms of the electric field intensity. To convert to a power response that would be measured by a square-law detector, f(u) must be squared. Alternatively, from basic Fourier transform relationships, the square law process corresponds to an auto-convolution of the aperture excitation, yielding the spatial frequency spectrum. The far field power response can be

obtained directly by taking the Fourier transform of the spatial frequency spectrum. These

relationships are shown in Figure 7-6.

Figure 7-6: The relationship between the aperture distribution, the far-field amplitude response, the spatial frequency and the power response.
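Eqn. 7.31 is straightforward to evaluate numerically. The sketch below (with an assumed wavelength and an arbitrary aperiodic set of element positions, chosen purely for illustration) computes the array factor and its power response over u = sin θ:

    import numpy as np

    # Minimal sketch of Eqn. 7.31: far-field array factor of point elements at
    # positions x_n, evaluated over u = sin(theta). Wavelength and positions
    # are assumed for illustration.
    def array_factor(x_positions, wavelength, u):
        """f(u) = sum_n exp(j*k*x_n*u), with k = 2*pi/wavelength."""
        k = 2.0 * np.pi / wavelength
        return np.exp(1j * k * np.outer(u, x_positions)).sum(axis=1)

    if __name__ == "__main__":
        wavelength = 0.03
        x_n = np.array([0.0, 0.7, 2.3, 3.1, 5.2])          # element positions (m)
        u = np.linspace(-0.5, 0.5, 2001)
        power = np.abs(array_factor(x_n, wavelength, u)) ** 2
        print("main-lobe power (= N^2)      : %.1f" % power.max())
        print("mean far-sidelobe level (dB) : %.1f" %
              (10.0 * np.log10(power[np.abs(u) > 0.05].mean() / power.max())))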

The previous development assumed that the array consisted of discrete point-source

apertures, but accounting for directional elements is simple. If the array consists of identical

antenna elements defined by a current density i_e(x), then the far-field pattern of a single element is given by the Fourier transform F{i_e(x)}:

e(u) = \mathcal{F}\{i_e(x)\} = \int i_e(x)\, e^{jkxu}\, dx    (7.32)

Since the elements are located at positions x = x_n, the current density (excitation) across the array is:

I(x) = \sum_n^N i_e(x - x_n)    (7.33)


The radiation pattern of the whole array is the Fourier transform of this current density:

g(u) = \int \sum_n^N i_e(x - x_n)\, e^{jkxu}\, dx    (7.34)

Substituting y = x - x_n gives:

g(u) = \int \sum_n^N i_e(y)\, e^{jk(y + x_n)u}\, dy    (7.35)

     = \sum_n^N e^{jk x_n u} \int i_e(y)\, e^{jkyu}\, dy    (7.36)

     = f(u)\, e(u)    (7.37)

The first term is the array pattern and is defined by the geometric properties of the array. The second term is the element pattern, and is a function of the excitation of the element. Eqn. 7.37, called pattern multiplication, therefore decomposes the array pattern into properties due to the array geometry and properties due to element excitation. This shows that grating lobes can be suppressed if they lie outside of the element pattern. Eqn. 7.37 is general and can be applied to many scenarios. For example, multiple elements can be treated as a single super-element (or subarray). Their compound pattern can then be defined as the pattern of the super-element. Stacking these super-elements together and applying Eqn. 7.37, we can determine the far-field pattern of a two-dimensional planar array. Each column of N elements of the planar array is considered a single super-element. The two-dimensional array is treated as a linear array of M super-elements, having pattern f2(u). By the principle of pattern multiplication, the array pattern is f(u, v) = e(u, v) f1(v) f2(u), where each element has pattern e(u, v) and f1(v) is the pattern of a single super-element, i.e. of a column of N elements.
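Pattern multiplication is easy to verify numerically. In the fragment below (assumed geometry: four uniform elements on a regular grid), the directly computed pattern of the full excitation matches the product of the point-element array factor and the element pattern to machine precision:

    import numpy as np

    # Numerical check of pattern multiplication (Eqn. 7.37) for an array of
    # identical uniform elements. Geometry and wavelength are assumed.
    wavelength = 0.03
    k = 2.0 * np.pi / wavelength
    x_n = np.arange(4) * 2.0 * wavelength                 # element centres (m)
    d = 0.5 * wavelength                                  # width of each uniform element (m)
    u = np.linspace(-0.4, 0.4, 401)

    # Direct pattern: transform of the finely sampled full excitation.
    xi = np.linspace(-d / 2.0, d / 2.0, 201)              # sample points within one element
    x_full = (x_n[:, None] + xi[None, :]).ravel()         # every excited point in the array
    g_direct = np.exp(1j * k * np.outer(u, x_full)).mean(axis=1)

    # Factored form: array factor times element pattern (each normalized at u = 0).
    f_u = np.exp(1j * k * np.outer(u, x_n)).mean(axis=1)
    e_u = np.exp(1j * k * np.outer(u, xi)).mean(axis=1)

    print("max |g(u) - f(u)e(u)| =", np.max(np.abs(g_direct - f_u * e_u)))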

Sparse arrays

Having now laid out the mathematics, two possible options for forming sparse arrays ap-

plicable to Techsat21 can be introduced. The common link between both of the sparse

array concepts presented here is that their element spacings are aperiodic. If the elemental

spacings were periodic, there would be unwanted grating lobes in the far field response.

• Random Arrays

The random array is a sparse array with random positions of the array elements. Consider a linear array of N elements, their positions x_n being randomly distributed along a line of length D_a. Assume that all elements, regardless of their locations, are properly phased such that they form a main lobe of maximum strength along some direction θ_0 (redefine u = sin θ − sin θ_0). The complex far field response is given by Eqn. 7.31, where the x_n are randomly distributed. The main lobe amplitude is N, at u = 0, independent of the random locations. The width of this main lobe is mostly unaffected compared to a regularly spaced array, scaling with λ/D_a. Away from the main lobe, however, the phase angle k x_n u is a random variable, due to the randomness of x_n. Hence the unit vectors combine with random phases. The RMS amplitude grows as √N and the power as N. Thus the average power in the random sidelobes relative to that in the main lobe is N/N² = 1/N.

• Minimum Redundancy Arrays

Consider a conventional array of N elements, as in Figure 7-6. The spatial frequency of the array shows how the response is made up from constituent components, each related to the inter-element spacing of pairs of elements. It can be seen that for a regular array there are redundancies in the spatial frequencies. The short spatial frequencies have many components, corresponding to numerous pairs of elements with small separations. Conversely, the longer spatial frequencies, which can only be created by pairs of elements at each end of the array, have fewer components. Since each of the pairs that correspond to a given spatial frequency contribute identically to the eventual radiation pattern, it can be argued that element spacings should not be duplicated, as this corresponds to a waste of elements [64]. In terms of the response to a single source, this is quite true. An array that does not duplicate its inter-element spacing, having only one spatial component for each line in its spectrum, is known as a minimum redundancy array. For example, a 4 element minimum redundancy array has elements in positions x1 = 0, x2 = 1, x3 = 4 and x4 = 6. This arrangement, given the notation {·1·3·2·} to indicate the spacings, has all elemental separations between one and six, the same coverage as a filled array spanning the same aperture, but achieves it with fewer elements (see the sketch after Table 7.1). The obvious benefits in cost and complexity from having fewer elements have made the minimum redundancy arrays popular with the interferometry community [65]. For the spacecraft array concepts, such as Techsat21, reductions in the number of elements translate directly into a reduction in satellites and can dramatically lower costs. For this reason they are worth pursuing.

The problem of arranging N elements along a line such that their spacings are non-redundant was first addressed by Leech [66] in the context of number theory to define "restricted difference bases". In this work, the spacings for minimum redundancy arrays are given up to N = 11, and are reproduced in Table 7.1. Note that there are two types of minimum redundancy array; the Unrestricted (or General) array, in which the maximum separation is allowed to increase to whatever value is necessary in order to maximize the total number of sampled spatial frequencies; and the Restricted array, in which the spacings are set to maximize the number of contiguous spatial frequencies, accepting a penalty that some spatial frequencies are duplicated. The best option for forming sparse arrays for remote sensing is not clear, and a goal of this study was to quantify the characteristics of each option for the Techsat21 system.

Table 7.1: Minimum redundancy arrays, up to N = 11 elements; the number sequence indicates relative spacings

N    Unrestricted                                         Restricted
3    {·1·2·}                                              {·1·2·}
4    {·1·3·2·}                                            {·1·3·2·}
5    {·3·1·5·2·}, {·4·1·2·6·}                             {·1·3·3·2·}, {·1·1·4·3·}
6    {·4·1·1·7·3·}, {·6·1·2·2·8·},                        {·1·1·4·4·3·}, {·1·5·3·2·2·},
     {·1·3·6·2·5·}, {·1·7·3·2·4·}                         {·1·3·1·6·2·}
7    {·6·3·1·7·5·2·}, {·8·1·3·6·5·2·},                    {·1·1·4·4·4·3·}, {·1·1·1·5·5·4·},
     {·14·1·3·6·2·5·}, {·13·1·2·5·4·6·}                   {·1·1·6·4·2·3·}, {·1·1·6·4·3·2·}
8    {·8·10·1·3·2·7·8·}                                   {·1·1·9·4·3·3·2·}, {·1·3·6·6·2·3·2·}
9                                                         {·1·1·12·4·3·3·3·2·}, {·1·3·6·6·6·2·3·2·},
                                                          {·1·2·3·7·7·4·4·1·}
10   {·16·1·11·8·6·4·3·2·22·},                            {·1·2·3·7·7·7·4·4·1·}
     {·7·15·5·1·3·8·2·16·7·}
11   {·18·1·3·9·11·6·8·2·5·28·}                           {·1·2·3·7·7·7·7·4·4·1·}
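As a quick sketch of what minimum redundancy means in practice (using the 4-element {·1·3·2·} example quoted above; the helper function is purely illustrative), the fragment below counts how many element pairs produce each spacing for a unit-spaced filled array and for the minimum redundancy arrangement:

    from collections import Counter
    from itertools import combinations

    # Count how many element pairs produce each inter-element spacing (the
    # spatial-frequency redundancy) for a filled array and for the 4-element
    # minimum redundancy arrangement {.1.3.2.}.
    def spacing_counts(positions):
        return Counter(abs(a - b) for a, b in combinations(positions, 2))

    filled = range(7)                 # unit-spaced filled array covering spacings 1..6
    mra = [0, 1, 4, 6]                # minimum redundancy array {.1.3.2.}

    print("filled :", dict(sorted(spacing_counts(filled).items())))
    print("MRA    :", dict(sorted(spacing_counts(mra).items())))

The filled array samples the short spacings many times over, while the minimum redundancy arrangement samples every spacing from one to six exactly once.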

Both of the array concepts presented in this section have involved additive processing.

An alternative is to use multiplicative arrays in which the signals from pairs of apertures, or

from sub-arrays, are fed into a circuit that produces an output proportional to the product of

the two input signals. This has the advantage that it can provide angular resolution twice

as �ne as additive arrays but can often lead to increased sidelobes [64]. The di�erences

between additive and multiplicative processing are most easily seen by comparing their

outputs in terms of the signals received by their antennas. For a two element additive array


receiving signals e1 and e2,

Power output = e_1^2 + e_2^2 + 2 e_1 e_2

For omnidirectional antennas, the first two terms in this equation do not contribute to the

high resolution angular information. The cross-term is the only contributor to the angular

resolution, and this is, in fact, precisely the output of a multiplicative array [64]. Note that

the two self-product terms do contribute to the SNR, and so assist in the detection of a

target (integrity), but not to its angular location (isolation). Nothing further will be said

concerning multiplicative arrays, since they were not considered for this study, although

future research should address their applicability to the Techsat21 program.

7.3.2 Overall System Architecture

The previous sections have introduced the concepts that will be used to choose an array type

for the Techsat21 clusters, but little has been said concerning the overall implementation

of the concept.

Implementing the sparse array with a satellite cluster

Most of the published work on sparse arrays assumes that there are hard-wired electrical

connections between the separate antenna elements. This is, of course, not the case for

Techsat21, in which each individual satellite represents an element of the array. The prob-

lems and issues that this fact raises are primarily concerned with coherence, bandwidth and

processing load. Consider the basic Techsat21 architecture shown in Figure 7-7.

Each of the n_s satellites transmits a different orthogonal radar signal that is received

at every other satellite. The ns time domain signals received at each of the ns satellites

must be eventually delivered to the location at which the array signal processing will be

performed. For now, do not worry too much about where and what this processor will be,

since it will be discussed later; simply assume that it exists, and that the signals have to be

transmitted there. To be able to form the coherent array, and for pulse doppler processing,

the signals cannot undergo any integration prior to their delivery to the processor. This

means that the cluster satellites must be able to digitally record each of the received signals,

preserving all the carrier phase information. This can be done at IF after mixing (provided

that the phase of the original carrier can be reconstructed) and this reduces the processing

and storage requirements somewhat. After recording the signals, the digital data can be

used to modulate a high frequency carrier for transmission to the processor node/nodes. To

create the sparse array directivity pattern, the array processor must then reconstruct all n_s²

radar signals at their carrier frequency, and coherently sum their amplitudes. This is a non-


Figure 7-7: Simplified Techsat21 Radar Architecture

trivial task at militarily useful radar frequencies (X-band, ~10 GHz). Nevertheless, assuming

that this is performed satisfactorily, the target information can then be extracted from the

single channel output of the array processor using a standard pulse-doppler processor.

Returning now to the question of where to place the array processor, the available

options are that it could reside on the ground, on a single satellite, or be distributed among

the cluster satellites. The �rst two options require enormous processing power and represent

a single point of failure, while the latter involves a great deal of complexity. One of the

goals of the generalized analysis is to quantify the impact of processor placement in terms

of performance and cost.

Dimensionality of the array

The dimensionality of the array has not yet been mentioned. In actual fact, the Techsat21

clusters could feasibly be deployed and maintained in one, two, or even three dimensions,

representing an array along a line, across a 2D plane, or within a 3D volume. The optimum


architecture will depend strongly on the mission requirements, the number of satellites in

the cluster and the capabilities of each satellite, as well as orbital parameters and issues

with the formation flying.

One-dimensional clusters are perhaps the least complicated option, and can be formed

by a simple train of satellites traveling in a single orbital plane. By looking to the side,

perpendicular to the ight direction (AZ = �=2), a sparse array is formed over the maximum

extent of the satellite cluster, from the lead satellite to the trailing satellite. The length

Dc of the cluster can be chosen freely to maximize radar performance, and since there are

little or no tidal forces that act to distort it, the con�guration is static in time. This has

an important bene�t in that it reduces the variability of the detection capabilities, thereby

improving the availability. In addition, there are no real propulsion requirements beyond

those necessary for absolute station-keeping, and the cluster is fail-safe, in that it requires

no active control to maintain its con�guration. The disadvantage of this architecture is that

the array provides good angular resolution only in the azimuth direction,

�AZ � �=Dc (7.38)

while the beamwidth in elevation corresponds to the aperture size Ds of the individual

satellites,

�EL = �=Ds (7.39)
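For a rough sense of scale (an illustrative calculation with assumed values, not a requirement from the text): at X-band with λ ≈ 3 cm, a Dc = 100 m cluster with Ds = 2 m apertures gives

    θAZ ≈ 0.03/100 = 3 × 10⁻⁴ rad,    θEL ≈ 0.03/2 = 1.5 × 10⁻² rad,

a factor of fifty between the azimuth and elevation beamwidths, which is why the main lobe is so much wider across-track than along-track.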

Range ambiguities in the main-lobe are therefore not suppressed (since the main lobe has a large cross-track extent) and the PRF is limited by Eqn. 7.2 to small values. The fine resolution in azimuth does however provide for a very small MDV. The clutter suppression in any range bin is a function of the sidelobe level, which is a strong function of the number of satellites and their spacing.

The next level of complexity would be to form two-dimensional arrays. These offer a huge benefit, namely two-dimensional angular resolution, so that at least in theory, the radar can be operated in a range-ambiguous mode (range ambiguities outside the main-lobe are suppressed by low sidelobe gain). This permits high PRFs and consequently improved clutter suppression. The MDV can be tiny, and the location accuracy of the target greatly improved over anything possible with singular deployments.

All of these benefits come at the cost of increased complexity and more difficult cluster management. It has been shown [28] that realistically achievable cluster configurations can be formed in free-orbits, provided some amount of array tilt can be tolerated. The main problem with using two-dimensional clusters in free orbits is the dynamic nature of the array. Although array distortion can be limited by proper orbit selection, and actively controlled using propulsion, there will be times when the cluster is in a sub-optimal configuration.


The sensitivity of the radar capabilities to distortion or rotation of the array has not yet been determined. It is hoped that future studies will address this issue.

There are therefore many system variables that are important to the success of Techsat21. Just summarizing the ones that have already been discussed (in no particular order):

• The number of satellites in the cluster obviously affects the directivity and gain of the coherent array, which can impact all aspects of the radar capabilities.

• The array configuration, in terms of extent, spacing and dimensionality, is critical for the same reasons.

• The number of clusters deployed in the clustellation determines the statistics of the coverage over a theater.

• The PRF is the critical parameter in the clutter suppression, being related to the number of Doppler and range ambiguities that add to competing clutter. Large PRFs reduce the Doppler ambiguities, but small PRFs reduce the range ambiguities. The overall effect is strongly coupled to the array pattern, the waveform, the dwell time and the clutter variations.

• The dwell time on a target sets the number of pulses that can be integrated coherently and also the bandwidth of the Doppler filters, controlling the velocity sensitivity and, to a large extent, the clutter suppression. The dwell time is of course limited by the available time over a target, which is itself a function of the orbital parameters. Together with the number of incoherent pulses that are integrated and the beamwidth of the satellite transmit antenna, the dwell time effectively specifies the area search rate of the radar.

• The aperture on each satellite affects the transmit beamwidth that dominates the search rate. On the receive side, it affects the roll-off of the array sidelobes. Aperture is obviously important for SNR considerations, but the relative significance of this can only be appreciated after also accounting for clutter.

• The transmitter power on each satellite directly impacts the noise-limited capabilities through the radar range equation, but has no impact on the clutter-limited capabilities.

• The location of the processing can dominate the feasibility, cost and reliability of the system. Single-node processing may be the least complicated option, but requires enormous processing power and is a single point of failure.


This large, but not complete, list demonstrates the complexity of even a simplified system analysis of Techsat21. Design is even more challenging, since the coupling between the different variables is not immediately obvious, and it is unclear what the impact is of changing any one variable. These are precisely the conditions under which the GINA methodology can help. By performing system level analysis, accounting for all the important functionality of the system, the impacts of different variables can be fully appreciated. The preliminary design can then reflect all the different coupled effects, reducing the potential for costly surprises later in the project.

7.4 Using GINA in Design Trades for Techsat21

The Techsat21 project is still in its infancy, and even the preliminary architectural design has not yet been finalized. To assist AFRL in the definition of a workable architecture, the GINA methodology has been applied to the problem. This section describes the modeling of Techsat21 within the GINA framework and presents the predicted capabilities for a wide range of possible architectures. The following section takes the candidate architectures that have the best capabilities, and assesses their CPF and Adaptability in order to make intelligent design suggestions.

7.4.1 Goals of the Study

A complete analysis and evaluation of all the architectural options available for Techsat21 is most definitely beyond the scope of this study, and is probably more suited to an entirely dedicated research program. However, the design process has to start somewhere, and if nothing else, an investigation limited to a subset of all possible architectures has merit in eliminating alternatives or identifying viable candidates. This study is therefore limited to the evaluation of designs featuring one-dimensional clusters, and considers only the minimum redundancy arrays discussed in Section 7.3.1. Further work will assess two-dimensional clusters and other sparse array types.

Thus, the primary goals of the GINA study for Techsat21 are to quantify the relative importance of the most significant architecture variables for 1D cluster configurations. These are: (1) cluster size, in terms of the number of satellites; (2) array configuration (restricted or unrestricted) and extent; (3) PRF; (4) transmitter power; (5) aperture size of each satellite; and (6) processing location.

The real emphasis is on how each of these variables impacts the capabilities of the system, rather than on the performance and cost. The reason for this is that, at present, the system requirements have not been well defined. Furthermore, only by considering the capability characteristics can a feasible architecture be chosen. A shortlist of candidate architectures


has been selected, and some performance and CPF results are presented, based on an approximate set of system requirements that were agreed upon in meetings with the AFRL.

The approach taken in the analysis is to model each alternative architecture within a full test-matrix that covers reasonable ranges of each design variable. This method is not exactly elegant, but it is comprehensive and guarantees that the important trends are captured. The test matrix for the analysis is shown in Table 7.2.

Table 7.2: Test Matrix for Analysis of Techsat21

    Variable                      Test values
    Number of cluster satellites  {4, 8, 11}
    Array type                    {Unrestricted, Restricted}
    Cluster diameter              {100 m, 200 m}
    PRF                           {1500 Hz, 3000 Hz}
    Aperture size                 {1 m, 2 m, 4 m}
    Transmitter power             {100 W, 200 W, 400 W}
    Processor location            {single satellite, distributed}

Note that there are some combinations of variables that cannot be realized. For example, the smallest inter-element separation for an 11 element unrestricted array is 1/45th of its total length (see Table 7.1). This means that for a cluster that is 100 m long, satellite apertures no larger than approximately 2 m can be used. Also, although the location of the processor has an impact on the performance and cost, it should have no impact on the capabilities of the system. The trade study for the processor location can therefore be carried out after selecting the candidate architectures that have acceptable capabilities. Note that even with these reductions, the test matrix still has over 200 elements.
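The test matrix itself is small enough to enumerate and prune directly, as the Python sketch below illustrates. Only the 1/45th spacing limit for the 11-element unrestricted array is taken from the text; the other minimum-separation fractions are placeholder assumptions standing in for Table 7.1, which is not reproduced here.

```python
from itertools import product

# Test values from Table 7.2
n_sats      = [4, 8, 11]
array_types = ["unrestricted", "restricted"]
diameters   = [100.0, 200.0]          # m
prfs        = [1500.0, 3000.0]        # Hz
apertures   = [1.0, 2.0, 4.0]         # m
powers      = [100.0, 200.0, 400.0]   # W

# Smallest inter-element separation as a fraction of total array length.
# Only the 11-element unrestricted value (1/45) is quoted in the text;
# the others are assumed placeholders for illustration.
MIN_SEP_FRACTION = {(11, "unrestricted"): 1.0 / 45.0,
                    (8,  "unrestricted"): 1.0 / 23.0,   # assumed
                    (4,  "unrestricted"): 1.0 / 6.0}    # assumed

def feasible(ns, atype, d_c, aperture):
    """An aperture must fit within the smallest inter-element spacing."""
    frac = MIN_SEP_FRACTION.get((ns, atype))
    if frac is None:
        return True                    # no constraint recorded
    return aperture <= frac * d_c

cases = [c for c in product(n_sats, array_types, diameters, prfs, apertures, powers)
         if feasible(c[0], c[1], c[2], c[4])]
print(len(cases), "feasible architectures")   # a little over 200
```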

The modeled system parameters that are constant across all cases are given in Table 7.3. These values are the result of conversations with AFRL. The orbital altitude is chosen to be 800 km based on a desire to keep the free-space attenuation low, while being high enough that the satellites have a wide FOV, and restricted to lie below the radiation belts. The radar bandwidth, at 15 MHz, is similar to previous designs. A noise temperature of 290 K is typical of ground-looking receivers, and the receiver losses are assumed to be approximately 1.5 dB. Finally, the target RCS is conservatively chosen to be 10 m² to represent a small automobile.

7.4.2 Transformation of the GMTI mission into the GINA framework

Within the framework of GINA, the definition of the market and the quality-of-service parameters for the GMTI mission are specified below:

• The market is to detect, locate and track moving targets within specified ground


Table 7.3: Modeled Techsat21 system parameters held constant across all cases

    Parameter                   Value
    System altitude             800 km
    Radar bandwidth             15 MHz
    Coherent dwell time         50 ms
    Target RCS σT               10 m²
    Receiver noise temperature  290 K
    System losses               1.5 dB

regions. The "users" are therefore ground locations, or cells, of a given size. The system must transfer information regarding the existence of moving targets from these ground cells to military command centers. Operated in a "track-while-search" mode, the radar can construct and maintain the tracks of targets by repeated revisits.

• Isolation is specified in terms of the ground resolution (the cell size), the MDV and velocity precision of detected targets. The amount of interference from clutter and ambiguities is also related to the isolation capabilities, since they can result in a declaration of a target in an incorrect cell.

• Rate is equivalent to the revisit (search) rate of the ground cells within a theater of interest. This flows directly from the need to track moving targets. The update rate must match the expected target dynamics, since slow movers pose a less serious threat and can be updated slowly, while fast moving targets must be updated often to maintain track. Note that this revisit rate specifies the update rate of each cell during the times when the theater is being searched. Any periods of time when the theater is not being accessed are omitted from this analysis. These coverage considerations are largely uncoupled from the radar issues of interest, and are only a function of the constellation/clustellation design.

• Integrity is strictly the sum of the probability of detection and the probability of false alarm for each radar interrogation of each ground cell. This represents the total probability of error. However, search radars are conventionally operated in a Constant False Alarm Rate (CFAR) mode, in which the rate of false alarms is held constant and the detection threshold floats accordingly. For comparative purposes, the Integrity is therefore defined as just the probability of detection, at the specified CFAR.

• Availability has the consistent definition of being the probability of achieving given values of the other capability parameters.


7.4.3 Modeling Techsat21

The network architecture for Techsat21 is, by definition, a representation of the system architecture. The topology of these networks is unaffected by the array configuration (restricted/unrestricted), since the routing of information is not a function of satellite spacing. However, architectures with different numbers of satellites have different network topologies. The network diagrams used for the generalized analysis for 4, 8 and 11 satellites are shown in Figures 7-8 to 7-10.

[Figure: network modules — Radar Source, Radar TX, 2-Way Spaceloss, Target & Clutter, four Cluster Separation blocks feeding four Radar RX blocks, ISLs into a Mux, TechSat Radar Processing, Sink.]

Figure 7-8: Network diagram for Techsat21 with ns = 4 satellites

[Figure: same network topology as Figure 7-8 but with eight Cluster Separation / Radar RX / ISL paths.]

Figure 7-9: Network diagram for Techsat21 with ns = 8 satellites

[Figure: the entire receiving array is drawn as a single "Cluster of 11" block between the Target & Clutter module and the TechSat Radar Processing module.]

Figure 7-10: Network diagram for Techsat21 with ns = 11 satellites

Starting at the left hand side of each of these diagrams, the "Radar Source" module represents the signal generator for the individual radars. Although only a single module is shown, this represents all ns of the cluster satellites. This method of model reduction is possible because the orthogonal transmit signals from the ns satellites remain uncoupled through the network until being combined by the processor module. Each channel exhibits the same behavior through the system, and only the effects of channel failures need to be modeled.

The next module represents the ns radar transmitters, in which the transmit power and aperture size are specified. The aperture controls not only the transmission gain of the signals, but also the one-way far-field radiation pattern that illuminates the theater.

The "Two-way Spaceloss" module calculates the r² attenuation of the signal power that is experienced in each direction, to and from the target. This depends on the constellation altitude and on the grazing angle between the line of sight to the cluster and the target's local horizon. The grazing angle is actually represented as a probability distribution function, as shown in Figure 7-11. This distribution function was obtained by creating a histogram of the grazing angles above a mask angle of 15° for all ground locations within the field of view of a cluster. There is no nadir-hole constraint placed upon the grazing angle since the ability of the system to detect targets at all angles will be calculated; the grazing angles that lead to SCRs unfavorable enough to hinder detection will show up in the results as losses of availability. Figure 7-11 thus correctly represents the viewing statistics of a cluster that is actively searching a theater of interest. Of course, there will be times when the cluster is not in view of a theater, but this is not important since the capabilities calculated in this chapter refer only to the times when a given cluster is actively searching.

The fact that the elevation angle is represented statistically is the reason that the spaceloss is modeled as a "two-way" loss, and not as two separate "one-way" losses; the statistics for each direction are not independent, and so the net loss has the same statistics as either one alone.

Returning to the network diagrams, the next module accounts for the target characteristics (σT) and the clutter returns (σ°). The power reflected from the target is simply the product of the incident power and σT. Since σT is assumed to have a constant value (10 m²), this does not change the nature of the signal's power statistics.

[Figure: probability distribution plotted against grazing angle from 15° to 90°.]

Figure 7-11: Grazing angle probability distribution function
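Statistics like Figure 7-11 can be regenerated from simple spherical geometry. The Python sketch below is a simplified, assumption-laden stand-in for the thesis calculation: it samples ground cells uniformly in area over the cluster's field of view, converts Earth-central angles to grazing (elevation) angles with the standard flat-Earth-target relation, and histograms everything above the 15° mask.

```python
import numpy as np

R_E, ALT, MASK_DEG = 6378.0, 800.0, 15.0   # km, km, degrees

def grazing_angle(central_angle):
    """Grazing (elevation) angle seen from a ground point toward a satellite at
    altitude ALT, given the Earth-central angle to the sub-satellite point."""
    r = R_E + ALT
    return np.arctan2(np.cos(central_angle) - R_E / r, np.sin(central_angle))

# Earth-central angle at which the grazing angle falls to the 15 degree mask
lam = np.linspace(1e-4, 0.5, 20000)
lam_max = lam[np.argmin(np.abs(np.degrees(grazing_angle(lam)) - MASK_DEG))]

# Sample ground cells uniformly in area over the visible cap (cos(lam) uniform),
# then histogram the resulting grazing angles, as in Figure 7-11.
rng = np.random.default_rng(1)
cos_lam = rng.uniform(np.cos(lam_max), 1.0, size=200_000)
graze_deg = np.degrees(grazing_angle(np.arccos(cos_lam)))
pdf, edges = np.histogram(graze_deg, bins=np.arange(MASK_DEG, 91, 5), density=True)
```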

At this stage, the clutter return is actually represented as the clutter power per unit area, for reasons that will become clear later. This is given by the product of the incident signal power and the average clutter cross section per unit area σ°. As described in Section 7.2.5, σ° varies strongly with many parameters, but particularly with terrain and grazing angle. Published data [57] for measured variations in σ° with grazing angle for a typical terrain are plotted in Figure 7-12. The results presented in the chapter use the σ° variation for farmland terrain. This variation is combined with that of the incident power to obtain the clutter power per unit area as a function of grazing angle. Its statistics can also be determined from the grazing angle probability distribution function.

[Figure: clutter reflectivity σ° in dB (m²/m²) versus grazing angle (10°-60°) for farmland, woodland, cities and desert; values range from roughly -28 dB to -12 dB.]

Figure 7-12: Clutter reflectivity, σ°, as a function of grazing angle, for several terrain environments

Again referring to the network diagrams, the output of the target/clutter module is split into ns separate information paths for input to the ns satellite receivers via their associated "Cluster Separation" modules. The number of these paths through the receiving array is the only difference between the diagrams of Figures 7-8 to 7-10. Note that Figure 7-10 is drawn with only a single module that represents the entire receiving array; this was done for diagrammatic simplicity, and the actual topology "underneath" this block looks just like the other networks but with 11 satellites.

The "Cluster Separation" modules represent the array configuration and are used to input the relative positions of the different satellites. They appear in the diagram only because the receiver modules are standardized modules¹ with no input field that specifies position. The receiver modules specify the receiver antenna size (usually the same as the transmit antenna size), the noise temperature, and the circuit losses.

¹All the network diagrams are screen-shots from the software used to perform the analysis.

It was stated earlier that the capabilities should be unaffected by the location of the processor, and so for simplicity it is modeled as a single module that could be a satellite, a ground station or a parallel computer formed from the ns satellites. For the performance calculations described later, account is taken of the reliability and cost implications of the different processor implementations.

The processor module receives the inputs from the ns satellite receivers, with each input carrying as many as ns separate channels (the different transmitter signals). The first operation is to calculate the far-field power response of the sparse array. Using the two-way antenna pattern for each satellite as e(u), and the Fourier transform of the satellite positions as f(u), the far-field amplitude response is calculated from Eqn. 7.37. Squaring this gives the power response, as a function of azimuth angle. Sample antenna patterns for 100 m diameter unrestricted arrays consisting of 4, 8 or 11 satellites, each with a 2 m aperture, are shown in Figures 7-13 to 7-15.
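Since Eqn. 7.37 is not reproduced in this section, the following Python sketch only indicates the structure of the pattern calculation: an array factor formed as the Fourier sum over the satellite positions, multiplied by an assumed two-way element pattern e(u) (a squared uniform-aperture sinc). The element model, normalization and example positions are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

WAVELENGTH = 0.03  # m, X-band (~10 GHz)

def element_pattern(u, aperture):
    """Assumed two-way element amplitude pattern for a uniform aperture; u = sin(theta)."""
    return np.sinc(aperture * u / WAVELENGTH) ** 2      # squared one-way sinc

def far_field_power(u, positions, aperture):
    """Normalized far-field power response of the sparse receiving array.

    u:         array of sin(theta) values (azimuth, broadside = 0)
    positions: element positions along the array axis, metres
    """
    k = 2.0 * np.pi / WAVELENGTH
    array_factor = np.exp(1j * k * np.outer(u, positions)).sum(axis=1)   # f(u)
    amplitude = element_pattern(u, aperture) * array_factor              # e(u) f(u)
    power = np.abs(amplitude) ** 2
    return power / power.max()

# Example: 4-element minimum-redundancy spacings {0, 1, 4, 6} scaled to a 100 m cluster
positions = np.array([0.0, 1.0, 4.0, 6.0]) / 6.0 * 100.0
u = np.linspace(-0.015, 0.015, 2001)
pattern_db = 10.0 * np.log10(far_field_power(u, positions, aperture=2.0))
```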

The maximum directivity at the boresight of the sparse array corresponds to an effective processing gain (after accounting for the noncoherent summing of noise) of ns². This is multiplied by the target signal power, assuming the target is located at the boresight. The clutter also undergoes pattern amplification, but the gain varies across the pattern. A map of the clutter amplification factor (due to array processing) for each ground cell at coordinates (AZ, EL) can be calculated. Multiplying this map by the clutter power per unit area for each grazing angle, and by the calculated area of each cell, gives a map of the array-processed clutter power for all ground locations.

[Figure annotations: Altitude = 800 km; Average range = 1388 km; Average 3 dB resolution = 167 meters; Average MDV = 1 m/s; PRF = 1500 Hz; Minimum grazing angle to avoid range ambiguities = 16°. Axes: normalized power response (dB) versus sin(θ).]

Figure 7-13: Far field power response for an unrestricted minimum redundancy array; ns = 4; Dc = 100m; Ds = 2m

Recall that the ground-clutter returns also have Doppler shifts that are dependent on

the positions of the clutter source relative to the radar. It is possible therefore to calculate

a Doppler map giving the Doppler shifts for each ground cell at coordinates (AZ,EL).

Consider now the pulse-Doppler processing. The coherent dwell time has been assumed to be 50 ms, which together with the PRF specifies the number of integrated pulses and hence the number of Doppler filters used by the pulse-Doppler processor. The target can be assumed to have a radial velocity and associated Doppler shift that would place its signal within the bandpass of any of these filters with equal probability.

For each filter, and each range bin (corresponding to a specific value of EL), the clutter that competes with the signal is equal to the sum of the array-processed clutter powers for each of the ground cells that have the correct Doppler shift to pass through the filter. This includes the Doppler ambiguities. The probability distribution function of this competing clutter can also be calculated by combining all the relevant statistics for each of the variables in the calculation, correctly accounting for those that are independent and those that are not.

The SIR values (and statistics) at the output of the pulse-Doppler processor can now

[Figure annotations: Altitude = 800 km; Average range = 1388 km; Average 3 dB resolution = 208 meters; Average MDV = 1 m/s; PRF = 1500 Hz; Minimum grazing angle to avoid range ambiguities = 16°. Axes: normalized power response (dB) versus sin(θ).]

Figure 7-14: Far field power response for an unrestricted minimum redundancy array; ns = 8; Dc = 100m; Ds = 2m

be determined with Eqn. 7.21 using the calculated probability distributions for the signal

power, the noise power and the clutter power.
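The bookkeeping just described — accumulating array-processed clutter into Doppler-filter/range-bin cells and forming the SIR — can be sketched as follows. Eqn. 7.21 is not reproduced here, so the SIR expression and the placeholder maps below are illustrative assumptions only; Doppler ambiguities are folded in simply by wrapping clutter Doppler frequencies modulo the PRF.

```python
import numpy as np

def competing_clutter(clutter_power_map, doppler_map, range_bin_map,
                      prf, n_filters, n_range_bins):
    """Sum array-processed clutter power into (Doppler filter, range bin) cells.

    clutter_power_map: array-processed clutter power of each ground cell (W)
    doppler_map:       Doppler shift of each ground cell (Hz)
    range_bin_map:     ambiguous range bin index of each ground cell
    """
    filt = np.floor((doppler_map % prf) / prf * n_filters).astype(int)
    filt = np.clip(filt, 0, n_filters - 1)
    grid = np.zeros((n_filters, n_range_bins))
    np.add.at(grid, (filt, range_bin_map), clutter_power_map)
    return grid

def sir(signal_power, noise_power, clutter_grid):
    """Signal-to-interference ratio in each Doppler/range cell."""
    return signal_power / (noise_power + clutter_grid)

# Toy usage with random placeholder maps for 10,000 ground cells
rng = np.random.default_rng(2)
n_cells = 10_000
clutter = rng.gamma(2.0, 1e-14, n_cells)             # W, placeholder
doppler = rng.uniform(-20e3, 20e3, n_cells)           # Hz, placeholder
rbin = rng.integers(0, 64, n_cells)
grid = competing_clutter(clutter, doppler, rbin, prf=1500.0,
                         n_filters=75, n_range_bins=64)
sir_grid = sir(signal_power=1e-13, noise_power=1e-15, clutter_grid=grid)
```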

The effects of noncoherent integration are then modeled. The number of pulses (each with the calculated SIR) that can be integrated is the ratio between the total allowable dwell time for each cell and the coherent dwell time. The total allowable dwell time is estimated from the theater size, the transmit beam footprint and the required revisit rate. The resulting number of pulses integrated is such that there is just enough time to sample every ground cell in the theater within the required revisit interval. If the revisit rate is specified too high, the radar does not have time to visit every location even once, and the probability of achieving any particular SIR is reduced linearly by the fraction of cells that cannot be addressed.
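That time budget can be written compactly as below; the footprint area used in the example is an assumed value for illustration, not the thesis's footprint model.

```python
def integration_budget(theater_km2, footprint_km2, revisit_s, coherent_dwell_s=0.05):
    """Noncoherent integration allowed by the search-time budget.

    Returns (n_pulses, covered_fraction): the number of coherent dwells that can
    be integrated on each footprint, and the fraction of the theater that can be
    visited at least once within the revisit interval.
    """
    n_footprints = theater_km2 / footprint_km2            # beam positions to visit
    dwell_per_footprint = revisit_s / n_footprints        # total allowable dwell (s)
    n_pulses = int(dwell_per_footprint // coherent_dwell_s)
    if n_pulses >= 1:
        return n_pulses, 1.0
    # Revisit rate too high: not every cell can be visited even once
    return 1, dwell_per_footprint / coherent_dwell_s

# Example: 5e5 km^2 theater, assumed 1000 km^2 footprint, 60 s revisit, 50 ms dwell
print(integration_budget(5e5, 1000.0, 60.0))
```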

With the SIR and the number of pulses to be integrated, the probability of detection can be calculated from Eqn. 7.16 for any specific Pfa. The results presented in this chapter use a Pfa corresponding to a constant false alarm rate of one false alarm per 1000 seconds for each km² of theater. This value was chosen during conversations with AFRL as being a reasonable starting point for

[Figure annotations: Altitude = 800 km; Average range = 1388 km; Average 3 dB resolution = 208 meters; Average MDV = 1 m/s; PRF = 1500 Hz; Minimum grazing angle to avoid range ambiguities = 16°. Axes: normalized power response (dB) versus sin(θ).]

Figure 7-15: Far field power response for an unrestricted minimum redundancy array; ns = 11; Dc = 100m; Ds = 2m

preliminary analysis. The availability of the calculated detection capability is defined by the probability distribution function of the corresponding SIR.
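Because Eqn. 7.16 is not reproduced in this section, the following sketch uses a common stand-in simply to indicate the shape of the calculation: the exact single-pulse Swerling I relation PD = Pfa^(1/(1+SIR)), with noncoherent integration approximated crudely as a √n gain in SIR. Both choices are assumptions for illustration, not the thesis's detection model.

```python
import numpy as np

def detection_probability(sir, pfa, n_pulses=1):
    """Approximate P_D for a fluctuating (Swerling I) target.

    sir:      single-dwell signal-to-interference ratio (linear, not dB)
    pfa:      probability of false alarm per detection test
    n_pulses: dwells combined noncoherently, modeled here as a sqrt(n) SIR gain
    """
    effective_sir = sir * np.sqrt(n_pulses)      # crude integration gain
    return pfa ** (1.0 / (1.0 + effective_sir))

# Example: 13 dB SIR, Pfa = 1e-6, 2 integrated dwells
sir_lin = 10 ** (13.0 / 10.0)
print(detection_probability(sir_lin, 1e-6, n_pulses=2))
```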

7.4.4 Capability Results

The Capability characteristics relating PD to Availability have been calculated for each architectural option in the test-matrix and with the following quality-of-service parameters:

• Theater size (number of users) = 5 × 10⁵ km² and 10⁶ km²

• Revisit time = 60 seconds, 100 seconds, and 120 seconds

• False alarm rate = 1/1000 seconds for each km² of theater

The results (over 200 of them) are included in the attached Appendix. To preserve the

reader's sanity, the important trends displayed by these characteristics are summarized in

the following sections, organized according to the PRF, since it turns out that this has one

of the largest impacts on the capabilities.


Capabilities of systems with PRF=1500Hz

At this lowest PRF, the footprint is range unambiguous, and the dominant signal degradation comes from the large clutter returns attributable to a large number of Doppler ambiguities.

• Restricted Arrays versus Unrestricted Arrays

There are no significant differences in the capabilities between the Restricted and Unrestricted (Generalized) array configurations. Basically, the differences in the far-field patterns cause variations in the clutter accepted in any particular Doppler filter, but these average out when considered over all possible target velocities. The largest differences amount to less than 5% variation in availability, with the Unrestricted arrays being a little better than the Restricted arrays. The reason for this small improvement is a slightly finer angular resolution that limits the main-lobe clutter.

• Number of Satellites

The configurations featuring 4 satellites have very poor capabilities, with availabilities less than 10% for all useful detection probabilities (PD > 0.5). The problems are many grating lobes and high sidelobes that amplify clutter significantly. Since all the architectures with ns = 4 have such poor capabilities, only a small selection of them are included in the Appendix, just to show how bad they really are.

Increasing the cluster size to 8 satellites improves the capabilities to within the realms of usefulness by suppressing grating lobes and reducing sidelobe levels. The availabilities reach as high as 77% at a PD = 0.5, with a 2 minute update of the small theater. This is for a system featuring small apertures (1 m) and high power (400 W).

The largest cluster size of 11 satellites offers the highest capabilities. For the same two minute update of the small theater, the best 11 satellite clusters can support a PD = 0.5 with 95% availability. This is a militarily useful capability. The architectures that achieve these capabilities involve small apertures (1 m) and medium to high powers.

In general, increasing the number of satellites results in improved capabilities through greater sidelobe suppression (see Figures 7-13 to 7-15). This reduces the impact of the Doppler ambiguities, and transitions the system so that it is more evenly noise-and-clutter limited.

• Power and Aperture

The individual aperture size has a profound impact on the results. A smaller aperture provides quadratic increases in the search rate, and hence quadratic increases in the number of integrated pulses, but incurs only a linear penalty in the number of Doppler


ambiguities. Conversely, a large aperture produces dreadful results, reducing the number of pulses that can be integrated. The conclusion drawn is that a smaller aperture offers much greater PD, and the best architectures in every category feature the smaller aperture size. The small apertures are compromised a little in availability at the low PDs, since this regime corresponds to very heavy clutter backgrounds (high grazing angles) in which the extra Doppler ambiguities have an impact. The high PD regime corresponds to lower clutter powers, and hence a slower antenna roll-off is not a problem. Since the aperture area goes as the square of the diameter, the smaller apertures obviously have lower SNRs. This is not too important if the power is equal to or greater than 200 W, since the detection is then clutter-limited. However, reducing the power to only 100 W, while also having a small aperture, means that noise becomes relatively more significant (compared to clutter) and results in a noticeable (20%) drop in the availability. Because the detection is clutter-limited, increasing the power beyond 200 W has a very limited impact on the capabilities.

• Array Diameter

Longer baselines result in finer main-lobe resolution, but can mean that more grating lobes fold into the pattern. This essentially produces "blind spots" in the response, where targets cannot be detected. Other than this, which is only an issue for a few of the 8 satellite architectures, there are no penalties for spreading the cluster over the longer baselines. The only benefit is to marginally lower the theoretical MDV, although both options provide MDVs that are lower than can be practically achieved.

• Summary for PRF=1500Hz

The best option is to choose a small aperture (very important), with a high power transmitter (less important), on as many satellites as can be afforded. The array diameter does not really matter.

The candidate architectures with a PRF of 1500 Hz that have the best capabilities are:

• 8 satellites, 100 m Generalized array, 1 m aperture, 400 W

• 11 satellites, 100 m Generalized array, 1 m aperture, 200 W

The Capability characteristics for each of these are shown in Figures 7-16 and 7-17. Across all values of PD, the availability is a strong function of the update rate, since high rates reduce the time available for noncoherent integration. The range in variation with update rate is indicative of the significance of thermal noise, since this is suppressed by integration. The overall shape of the curves is dominated by clutter effects. Note that the availability drops to very low values for high PDs, since they can be achieved only during


the rare circumstances when the geometry leads to low clutter returns. The value of the availability at the elbow at low PDs is somewhat representative of the relative significance of the clutter to detection, since this corresponds to the worst case clutter returns.

The corresponding antenna patterns for these two architectures are shown in Figures 7-18 and 7-19. These plots give the average resolution (over all likely grazing angles) to be less than 200 m and the MDV to be a very low 1 m/s. This last value is probably not achievable in practice since clutter motion effects (due to wind etc.) that have not been modeled begin to dominate at low Doppler frequencies.

Capabilities of systems with PRF=3000Hz

With a PRF of 3000 Hz, there are range ambiguities across the footprint for all aperture sizes. This has the detrimental effect of adding competing clutter to all range bins, and in practice would require some additional processing to resolve the ambiguity in target location. For this analysis, the effects of the additional clutter have been modeled, but not the problems of correct target location. Since the number of range ambiguities is proportional to the length of the footprint, it could be expected that smaller apertures are penalized by having longer footprints.

• Restricted Arrays versus Unrestricted Arrays

Once again, there is no difference in the capabilities of systems featuring Restricted or Unrestricted arrays.

• Number of Satellites

The 4-satellite clusters have unacceptably low capabilities and are not discussed further.

The differences between the capabilities of the 8 and the 11 satellite clusters are not as great at 3000 Hz as they were at the lower PRF. The reason is that the detection process is now dominated by range ambiguities, and with one-dimensional arrays, the roll-off in the response in the range direction is entirely a function of the elemental aperture size. As a result, suppressing sidelobes in azimuth by adding more satellites does not really improve the capabilities of the system. Quantitatively, the extra signal power placed on the target improves the SNR enough to give a 5-10% improvement in the availability in going from 8 satellites to 11 satellites. The shape of the characteristics does not, however, change noticeably.

• Power and Aperture

As has been suggested, the aperture size is a critical parameter for the capability of the systems in the presence of range ambiguities.

[Figure: two panels of Availability versus Integrity (PD) for model TS8G100hpsa, for theaters of 500,000 and 1,000,000 users, with curves for Rate = 0.0083, 0.01 and 0.0167.]

Figure 7-16: Capability Characteristics for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF = 1500Hz

[Figure: two panels of Availability versus Integrity (PD) for model TS11G100sa, for theaters of 500,000 and 1,000,000 users, with curves for Rate = 0.0083, 0.01 and 0.0167.]

Figure 7-17: Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF = 1500Hz

[Figure annotations: Altitude = 800 km; Average range = 1388 km; Average 3 dB resolution = 250 meters; Average MDV = 1 m/s; PRF = 1500 Hz; Minimum grazing angle to avoid range ambiguities = 20°. Axes: normalized power response (dB) versus sin(θ).]

Figure 7-18: Far field power response for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF = 1500Hz

The slow roll-off in the patterns for small apertures causes very large range and Doppler ambiguity problems. This has a significant effect on the capability characteristics, in both shape and magnitude. For low to mid values of PD, in conditions dominated by clutter returns at large grazing angles, the extra range ambiguities of 1 m-aperture systems cause 10-20% losses in availability compared to 2 m-aperture designs. At the high values of PD, achievable only in conditions with weaker clutter (small grazing angles), the small aperture allows a longer total dwell time and so more pulses can be integrated to improve SNR. This benefit is not enough, however, to outweigh the losses due to range ambiguities.

The only conditions under which it makes sense to have a smaller aperture are if the driving requirement is for a high search rate; under these conditions, systems with larger apertures (2 m or 4 m) do not have time for noncoherent integration, and their capabilities are worsened beyond that of the (already poor) capabilities of the small-aperture system. In fact, the systems with the largest aperture (4 m) have this problem even at the lower search rates. Consequently, the availability supported by the systems with the 4 m apertures is almost 50% worse than mid-sized aperture systems in the useful

[Figure annotations: Altitude = 800 km; Average range = 1388 km; Average 3 dB resolution = 250 meters; Average MDV = 1 m/s; PRF = 1500 Hz; Minimum grazing angle to avoid range ambiguities = 20°. Axes: normalized power response (dB) versus sin(θ).]

Figure 7-19: Far field power response for a candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF = 1500Hz

ranges of PD. Essentially, the improvements in the SNR and the sidelobe clutter suppression do not help detection as quickly as an almost total loss of noncoherent integration hinders it.

The interesting conclusion is that there appears to be an optimum aperture size for the system when using a range ambiguous PRF, and for a PRF of 3000 Hz this optimum is around 2 m.

Transmitter power, conversely, has a very limited impact. Within the range of values that were modeled (100 W, 200 W and 400 W), each doubling of the power resulted in approximately a 5% improvement in the availability at useful PDs. This logarithmic behavior is typical of systems operating in the linear region of the SNR versus PD curves of Figure 7-2.

• Array Diameter

As for the lower PRF, the array diameter has a very limited effect on the capabilities of the system.

• Summary for PRF=3000Hz

Operating in a range ambiguous mode, the architectures with the best capabilities have mid-sized apertures and high powers. The benefits offered by a higher power transmitter are limited and may not be worth the extra cost that it represents.

Of those modeled with a PRF of 3000 Hz, the architecture with the best capabilities has 11 satellites, each with a 2 m aperture and 400 W of transmit power. The capability characteristics for this system are shown in Figure 7-20.

Comparing these characteristics with those of the best architectures at a PRF of 1500Hz,

it can be seen that the higher PRF has worse capabilities. The range ambiguities are simply

too damaging. For this reason, none of the architectures with a PRF of 3000Hz were carried

through to the CPF part of the analysis.

Notice that for even a moderate sized theater of a half-million square kilometers, none

of the architectures presented can support availabilities exceeding 90% at any useful update

rate (around 1 minute) for any PD greater than 0.5. These values represent something close

to the minimum acceptable capabilities for a military GMTI mission, meaning that the use

of one-dimensional clusters featuring minimum-redundancy arrays is probably inappropri-

ate for an operational theater surveillance system. However, their simplicity makes them

suitable for a demonstration-class mission, and their capabilities could be militarily useful

for smaller-sized theaters.

7.5 The Performance, CPF and Adaptability for Techsat21

Candidate Architectures

The architectures with the best capabilities are now analyzed in terms of their generalized performance and cost. In addition, the issue of how best to implement the signal processing is addressed. Performance and cost will be used as discriminators for choosing whether to deploy a single dedicated processing satellite per cluster, or to distribute the processing among the cluster satellites themselves. The first option requires an extra satellite which must be very capable (to be able to handle the enormous processing load) and reliable (to avoid single-point failure modes). The second option adds complexity to the system, in terms of the intersatellite communication, parallelization of the algorithms and load-balancing between the satellites, but has no single point of failure.

7.5.1 Performance

The first step in quantifying the performance is to establish a set of system requirements so that the concept of mission failure can be defined. These requirements represent minimum

[Figure: two panels of Availability versus Integrity (PD) for model TS11G100hp, for theaters of 500,000 and 1,000,000 users, with curves for Rate = 0.0083, 0.01 and 0.0167.]

Figure 7-20: Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 400W; Ds = 2m; PRF = 3000Hz

acceptable values for the isolation, rate, integrity and availability of system operations in a given market. For the GMTI mission this translates into the availability at specific values for the MDV and location accuracy of the target, the revisit rate of a theater of a given size, and the PD and FAR. Values for these requirements have been chosen based on conversations with the AFRL, and represent reasonable estimates that are appropriate for a preliminary study. These are:

• Theater size = 10⁵ square kilometers

• Revisit time = 1 minute

• MDV = 3 m/s, Location accuracy (resolution) = 1 km

• PD = 0.75

• Availability = 90%

Two candidate architectures were selected, representing a small cluster (8 satellites) and a large cluster (11 satellites):

1. 8 satellites, Ds = 1 m, Pt = 400 W

2. 11 satellites, Ds = 1 m, Pt = 200 W

These architectures are considered the "baseline" systems for evaluation, and to show that they can satisfy the requirements, their capability characteristics are reproduced in Figure 7-21. Also shown in the figure are the capabilities for modified versions of the 11 satellite architecture with higher and lower transmit powers. These alternatives offer different levels of margin by which the capabilities exceed the requirements, and may result in increased performance or reduced costs. This will be discussed later in the Adaptability section. Note that the capability characteristics are independent of the processor implementation, and each of the candidate architectures could be deployed with either type of processor. The list of candidate architectures considered for the rest of the study is therefore:

1. 8 satellites, Ds = 1 m, Pt = 400 W, with distributed processing

2. 11 satellites, Ds = 1 m, Pt = 200 W, with distributed processing

3. 8 satellites, Ds = 1 m, Pt = 400 W, with centralized processor

4. 11 satellites, Ds = 1 m, Pt = 200 W, with centralized processor

One of the important differences between these lies in the probability of continued system operation, measured by the generalized performance. Reliability models are therefore needed for each architecture. Consider first the systems with distributed processors.

[Figure: Availability versus Integrity (PD) over 0.5-1.0 for four options: nc = 8, Ds = 1m, Pt = 400W; nc = 11, Ds = 1m, Pt = 200W; nc = 11, Ds = 1m, Pt = 100W; nc = 11, Ds = 1m, Pt = 400W.]

Figure 7-21: Capability Characteristics for candidate Techsat21 architectures at a 1 minute update of a 10⁵ km² theater; requirements are PD = 0.75, Availability = 0.9

To properly treat the problem of distributing detection-processing across a satellite cluster is a worthy subject for its own doctoral thesis, and involves the fields of computer science, antenna theory, signal theory and of course space systems engineering. We cannot hope to do justice to this problem within the confines of this chapter, and after all, the goal here is to demonstrate the application of GINA for design. All that is really needed is an approximation of its impact on the performance and cost. An assumption is therefore made that the technology exists to perform the processing, whether distributed or on a dedicated satellite, and that the reliability of the processor itself is unity. For comparing between architectures, the assumption on the reliability is equivalent to an assumption that the processing is equally reliable (but less than unity) in either case, and unity is just more convenient. Equating the reliabilities of the processors is justifiable because they essentially have to perform the same functionality. The dedicated processor will have to be implemented as a parallel computer anyway, because the load is too high for any envisioned single processor. The only difference then is in the networking. Both architectures rely on intersatellite links, and the additional connectivity required of the distributed processor on the one hand adds complexity, and on the other, redundancy. These compensate each other, and the net result is that, at least to first order, the reliability of the processing


is independent of its implementation. The assumption then is that the performance is dominated by satellite failures.

Satellite failures are modeled to occur at a constant failure rate, calculated from the failure rates of the important subsystems. Each cluster satellite is modeled as comprising a structural bus module, a propulsion system, a communications payload (for connectivity) and a "special payload" representing the radar package. Using data from SMAD [3] and assuming that one in 10 failures results in a satellite loss, the equivalent satellite failure rate is approximated as λs = 0.026 per year. The resulting state probabilities for different numbers of satellite failures for the 8 satellite cluster are shown in Figure 7-22.

[Figure: probability versus time (0-10 years) with curves for no failures through 7 failures.]

Figure 7-22: The state probabilities for different numbers of satellite failures in the 8 satellite cluster; λs = 0.026
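Under the stated constant-failure-rate model with independent, identical satellites, the state probabilities of Figure 7-22 follow from a binomial over exponential failure times, and the generalized performance is then the probability of remaining within the tolerable number of failures. The Python sketch below reproduces that structure; the independence of satellite failures is the assumption stated above.

```python
import numpy as np
from math import comb

LAMBDA_S = 0.026   # satellite failure rate, per year

def state_probabilities(n_sats, years, lam=LAMBDA_S):
    """P(k of n_sats satellites have failed by time t), for k = 0..n_sats."""
    p_fail = 1.0 - np.exp(-lam * years)              # per-satellite failure prob.
    return np.array([comb(n_sats, k) * p_fail**k * (1.0 - p_fail)**(n_sats - k)
                     for k in range(n_sats + 1)])

def generalized_performance(n_sats, tolerable_failures, years, lam=LAMBDA_S):
    """Probability that the cluster still meets requirements at time t."""
    return state_probabilities(n_sats, years, lam)[: tolerable_failures + 1].sum()

# 8-satellite cluster tolerating zero failures, after 10 years:
print(generalized_performance(8, 0, 10.0))    # ~0.13, matching the text
# 11-satellite cluster tolerating up to 3 failures, after 10 years:
print(generalized_performance(11, 3, 10.0))
```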

Mission failures occur if the cluster capabilities degrade, through satellite failures, to the point that requirements can no longer be satisfied. Analysis showed that the 8 satellite cluster cannot tolerate a single satellite failure if it is to satisfy the requirements given above. Therefore, the performance of the 8 satellite cluster is given by the curve corresponding to zero satellite failures in Figure 7-22, resulting in a value of only about 13% after 10 years, which is too low for most military applications. Of course, the assumed failure rates are not particularly accurate, but the real factor that drives the performance is that all 8 satellites must work for the system to satisfy requirements. This would force the scheduling of regular replenishment launches to maintain the performance at higher levels. This will be captured later in the lifetime cost calculations.


Increasing the number of satellites to 11 improves things a great deal. Provided the satellites have at least 200 W of power, as many as 3 failures can be tolerated by reconfiguring the array after each failure. The resulting performance curves are plotted in Figure 7-23, showing that the performance can be increased to around 65%. Also shown in this figure are the performance curves for the architectures featuring centralized processing. They are worse than the corresponding distributed processing options because the centralized processing satellite adds an additional mission failure mode, that being the single point failure of its satellite bus. In producing these curves, the failure rate for the extra processing satellite has been modeled as the same as that of the cluster satellites.

[Figure: probability versus time (0-10 years) for five cases: 8 sats; 11 sats; 11 sats, low power; 8 sats + centralized processor; 11 sats + centralized processor.]

Figure 7-23: The generalized performance of the different architectures subject to requirements for a 1 minute update of a 10⁵ km² theater with PD = 0.75 and Availability = 0.9

7.5.2 The CPF Metric and the System Lifetime Cost

The CPF metric for a military search radar system is the cost per protected square kilometer, where "protected" indicates compliance with the detection requirements. The total system lifetime cost, used to calculate the CPF, accounts for the baseline costs of building and launching the satellites, and also the failure compensation costs needed in the event of a violation of system requirements. In this way, the system lifetime cost captures the performance of the system. Furthermore, since all the systems being considered in this study address the same number of square kilometers over the same lifetime with the same


requirements, the CPF is actually nothing more than a scaled version of the system lifetime cost. Since the lifetime costs are easier to comprehend than the CPFs, which have very small absolute values, they will be used as surrogates for the CPF.

The system lifetime cost can be estimated using simple cost models and the failure probability profiles of the last section. The models assume a three year cost spreading, with the first launch in 2004 and IOC in 2005. The system is assumed active through the year 2014.

The baseline cost is the sum of the satellite costs, and the launch and insurance costs. The total satellite costs increase with the number of satellites per cluster and the number of clusters. Each of the systems considered is assumed to be deployed with 48 separate clusters in polar orbits to achieve revisits to a theater at approximately 30 minute intervals. Version 8.0 of the Aerospace Corporation Small Satellite Cost Model [13] is used to calculate the TFU satellite bus cost as,

    Csat = 6.47 PEOL^0.1599 Δθ^(−0.356)    (7.40)

where PEOL is the end-of-life payload power, conservatively assumed to be twice the RF power of the transmitter, and Δθ is the pointing requirement. For Techsat21, AFRL has established a value of 2° as the pointing requirement, allowing gravity gradient stabilization.

To this bus cost must be added an estimate of the payload cost. For Techsat21 featuring distributed processing, the payload cost of each satellite is assumed to be dominated by the processors, since these represent the most advanced components. The cost of the satellite processors scales with the number of floating point operations per second (FLOPS). For the Techsat21 concept, the total number of FLOPS can be estimated as the sum of the array processing load and the pulse-Doppler processing load. To form each beam, the array processing involves summing ns² signals that are bandlimited to 15 MHz and (at least) Nyquist sampled. From the ratio of the antenna pattern roll-off to the maximum resolution, the number of simultaneous beams is approximately Dc/Ds, and so an estimate for the array processing load for the entire cluster is,

    FLOPS_array proc = 2 × 15 MHz × ns² × Dc/Ds    (7.41)

The pulse-Doppler processing is implemented with an np-point FFT that involves approximately np log₂(np) operations for each coherent dwell period of length (np/PRF). Accounting for all range bins, each pulse is actually a digitized 15 MHz signal of duration (1/PRF), and so the total number of FLOPS involved in the pulse-Doppler processing for the entire cluster is,

    FLOPS_p-D proc = 2 × 15 MHz × (1/PRF) × np log₂(np) × (PRF/np) × Dc/Ds
                   = 2 × 15 MHz × log₂(np) × Dc/Ds    (7.42)

For Techsat21 at a PRF of 1500 Hz, np ≈ 64 and so ns² >> log₂(np). Consequently the array processing dominates, and the pulse-Doppler processing can be neglected. The total processing load is therefore estimated at:

• 11.5 Giga-FLOPS for the 8 satellite cluster

• 21.8 Giga-FLOPS for the 11 satellite cluster

For equal load-balancing, each satellite is accountable for an equal share of this processing. The payload cost per satellite can then be estimated by scaling processing costs from conventional military satellite programs. Canavan [18] states that the cost density for processing was approximately $0.001 per FLOP in 1996. Assuming this price halves every two years gives a cost density of $8.84 × 10⁻⁵ per FLOP in 2003.

For the cases featuring centralized processing, the payloads of the cluster satellites are assumed to cost a fixed $0.5M, and the above scalings are used to calculate the payload cost of the processing satellite, noting that it is responsible for the entire processing load.

The sum of the payload cost and the bus cost defines the TFU, and the non-recurring costs are estimated as being four times this value. The recurring costs assume a learning curve discount of 15% over the entire production run. Launch is modeled as costing $8000 per kg of wet mass, and the satellites and dispensers are conservatively assumed to weigh 200 kg. Insurance costs are 20% of the satellite and launch vehicle costs. The resulting baseline costs, in fixed year FY94$ for the baseline architectures, with and without centralized processing, are given in Tables 7.4 to 7.7.

The failure compensation costs are the expected costs required to build and launch

any replacement satellites in the event of a violation of system requirements. These are

calculated by an expected value calculation, using the state probability curves similar to

Figure 7-22 and the average satellite costs. Finally, the lifetime costs are the sum of the

baseline costs and the expected failure compensation, and discounted back to 2002 at 12%

per year to account for the time value of the money. This is labeled as NPV Cost in the

tables, indicating that it is the result of a Net Present Value calculation.
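The cost build-up described above can be strung together in a few lines, as sketched below: the Eqn. 7.40 bus cost, an added payload cost, a 4× non-recurring factor, an 85% learning curve, $8000/kg launch for 200 kg satellites, 20% insurance, and 12% discounting back to 2002. The cost-spreading profile and the example payload cost are simplifying assumptions; the sketch is not intended to reproduce the yearly figures of Tables 7.4 to 7.7 exactly.

```python
import math

def bus_cost_fy94(p_eol_watts, pointing_deg=2.0):
    """TFU satellite bus cost ($M FY94) from Eqn. 7.40."""
    return 6.47 * p_eol_watts**0.1599 * pointing_deg**-0.356

def recurring_costs(tfu_cost, n_units, learning=0.85):
    """Per-unit recurring costs on an 85% learning curve (15% discount)."""
    b = math.log2(learning)
    return [tfu_cost * (i + 1)**b for i in range(n_units)]

def baseline_cost(n_sats, n_clusters, rf_power_w, payload_cost_each,
                  launch_per_kg=8000.0, wet_mass_kg=200.0):
    """Build + launch + insurance cost ($M) following the stated model."""
    tfu = bus_cost_fy94(2.0 * rf_power_w) + payload_cost_each   # P_EOL = 2 x RF power
    n_units = n_sats * n_clusters
    recurring = sum(recurring_costs(tfu, n_units))
    non_recurring = 4.0 * tfu
    launch = n_units * launch_per_kg * wet_mass_kg / 1.0e6      # $M
    insurance = 0.20 * (recurring + launch)
    return non_recurring + recurring + launch + insurance

def npv(cash_by_year, base_year=2002, rate=0.12):
    """Net present value of a {year: $M} stream discounted to the base year."""
    return sum(c / (1.0 + rate)**(y - base_year) for y, c in cash_by_year.items())

# Example: 48 clusters of 11 satellites at 200 W RF, assumed $1M payload per satellite
print(baseline_cost(11, 48, 200.0, payload_cost_each=1.0))
```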


Table 7.4: System lifetime costs for Architecture 1 (8 sats)

    Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
    2002   429.86                0.00                       429.86
    2003   859.73                0.00                       767.61
    2004   1147.83               0.00                       915.04
    2005   717.97                67.55                      559.11
    2006   0.00                  62.39                      39.65
    2007   0.00                  57.54                      32.65
    2008   0.00                  52.98                      26.84
    2009   0.00                  48.68                      22.02
    2010   0.00                  44.66                      18.04
    2011   0.00                  40.87                      14.74
    2012   0.00                  37.31                      12.01
    2013   0.00                  33.98                      9.77
    2014   0.00                  30.85                      7.92

Table 7.5: System lifetime costs for Architecture 2 (11 sats)

    Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
    2002   520.27                0.00                       520.27
    2003   1040.54               0.00                       929.05
    2004   1228.62               0.00                       979.45
    2005   708.35                0.09                       504.26
    2006   0.00                  1.18                       0.75
    2007   0.00                  4.10                       2.33
    2008   0.00                  8.76                       4.44
    2009   0.00                  14.58                      6.60
    2010   0.00                  20.89                      8.44
    2011   0.00                  27.06                      9.76
    2012   0.00                  32.62                      10.50
    2013   0.00                  37.27                      10.71
    2014   0.00                  40.83                      10.48

7.5.3 Lifetime Cost Results

The system lifetime costs for the baseline architectures with distributed or centralized processing are plotted in Figure 7-24. The costs for all the systems are reasonable for a mission of this type, and are comparable to the projected cost of the "Discoverer-II" system [47] proposed by DARPA, the Air Force and the National Reconnaissance Office to address a similar mission. However, the absolute values of the costs are of less interest than the relative trends.

Note that the cheapest system is the 8 satellite architecture with distributed processing, followed closely by the 11 satellite architecture, again with distributed processing. This means that it is marginally cheaper to deploy the lower performance system and maintain operations through regular replenishment, rather than build the reliability into the system up front. However, the small relative difference in cost between these two options is


Table 7.6: System lifetime costs for Architecture 3 (8 sats, Centralized Processor)

    Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
    2002   476.42                0.00                       476.42
    2003   952.84                0.00                       850.75
    2004   1322.27               0.00                       1054.10
    2005   845.85                78.25                      657.76
    2006   0.00                  73.09                      46.45
    2007   0.00                  68.22                      38.71
    2008   0.00                  63.63                      32.24
    2009   0.00                  59.29                      26.82
    2010   0.00                  55.22                      22.30
    2011   0.00                  51.36                      18.52
    2012   0.00                  47.73                      15.37
    2013   0.00                  44.32                      12.74
    2014   0.00                  41.10                      10.55

Table 7.7: System lifetime costs for Architecture 4 (11 sats, Centralized Processor)

    Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
    2002   616.24                0.00                       616.24
    2003   1232.48               0.00                       1100.43
    2004   1466.51               0.00                       1169.10
    2005   850.27                25.08                      623.06
    2006   0.00                  25.39                      16.14
    2007   0.00                  27.33                      15.51
    2008   0.00                  30.81                      15.61
    2009   0.00                  35.32                      15.98
    2010   0.00                  40.26                      16.26
    2011   0.00                  45.10                      16.26
    2012   0.00                  49.43                      15.91
    2013   0.00                  52.97                      15.23
    2014   0.00                  55.57                      14.26

probably within the uncertainty of the cost model, suggesting that both architectures (8 or 11 satellites) have approximately the same lifetime cost. Furthermore, the results presented here do not capture the effects of down-time that would follow the failure of a single satellite from the smaller cluster. Since continuity of service is critical for military systems used in war-time operations, this effectively rules out the 8 satellite architecture.

Adding a centralized processing satellite results in higher costs, due to a combination of

higher spending during initial deployment, and increased failure compensation. Essentially,

all the reliability benefits of a distributed sensor (path redundancy, reconfigurability, etc.)

have been lost by adding a single point of failure in the data processing. Based just on

these results, it does not seem sensible to implement centralized processing.


[Figure 7-24 is a bar chart of system lifetime cost ($B) by architecture: 8 sat, High power, Small aperture - 2.86; 11 sat, Small aperture - 3.00; 8 sat, High power, Small aperture & Processor - 3.26; 11 sat, Small aperture & Processor - 3.65; Staged Deployed Processor - 3.96.]

Figure 7-24: The system lifetime cost of different Techsat21 architectures subject to requirements for a 1 minute update of a 10^5 km^2 theater with PD = 0.75 and Availability = 0.9

Staged deployment of the centralized processor

Proponents of the centralized processor concept for distributed satellite systems have claimed that one potential benefit is easy upgrading of capabilities. The rationale is that of all satellite technologies, the fastest advances are occurring in the field of computing. As a result, it may make sense to deploy the processor separately, supporting an easy upgrade as new and improved computers are developed. It has also been assumed that since the staged deployment of the upgrade processor occurs later in the lifetime of the project, the present value of the cost of the upgrade is low. Techsat21 provided an excellent test case for this concept. A two-stage deployment of the system was investigated, in which an 8-satellite architecture with centralized processor is deployed first, followed 5 years later by a new processor satellite and an additional three receiver satellites. The results show that although these augmentations assist failure compensation and improve the capabilities, the net effect is to increase the system lifetime cost, as shown in Figure 7-24.

7.5.4 Adaptability

Since the 11 satellite configuration has some margin in capabilities, it is possible to ascertain its performance and lifetime cost under more stringent system requirements. In this way, the sensitivity of the system cost to mission requirements can be measured. Alternatively, for the same requirements, the system may be "down-sized" by reducing the transmitter power to 100W in an attempt to reduce costs while still meeting the mission goals. To be complete, it is also worth considering adding even more power to provide increased


performance. This is a good idea if the resulting reductions in the failure compensation

costs outweigh the increases in the baseline costs. All these issues can be addressed with

the Adaptability metrics.

Requirement Elasticities for the 11-satellite Techsat21 cluster

As defined in Chapter 4, the requirement elasticities of the CPF at a given design point are,

Isolation Elasticity,    $E_{Is} = \dfrac{\Delta CPF/CPF}{\Delta Is/Is}$    (7.43)

Rate Elasticity,         $E_{R} = \dfrac{\Delta CPF/CPF}{\Delta R/R}$    (7.44)

Integrity Elasticity,    $E_{I} = \dfrac{\Delta CPF/CPF}{\Delta I/I}$    (7.45)

Availability Elasticity, $E_{Av} = \dfrac{\Delta CPF/CPF}{\Delta Av/Av}$    (7.46)

Recall that these represent the percent change in the CPF (in this case lifetime cost) in

response to a 1% change in the requirement values. For the Techsat21 space-based radar, the

system requirements that most influence the system lifetime cost are the rate, integrity and
availability specifications. The isolation requirement, equivalent to the MDV or the ground
resolution, is related to the array configuration (spacing) and as such, does not directly
impact the system cost. Also, while the rate requirement most definitely has a
profound impact on the design of feasible architectures, the designer has very little
flexibility in choosing a rate requirement, since it is implicitly related to the dynamics of

the targets. However, the integrity and the availability requirements are more tradeable, in

that they are related to the customer's perceptions of quality. The probability of detection

requirement is derived from the real mission requirement that the system be able to maintain

a target track. The actual value of PD that is needed to do this effectively depends on the

algorithms used to analyze the detection data. A slight change in PD may have a small

impact on the overall success of the mission, but can have a huge impact on the lifetime

cost of the system. Also, although it is unlikely that the DoD would approve a theater surveillance
system with an availability lower than 90%, there is a very real possibility that decision-
makers in the DoD would arbitrarily choose a value higher than this as the requirement. In
many cases, the value chosen is not the result of extensive research to find
the value below which military operations are compromised; rather, the requirement is
chosen ad hoc based on judgement and politics. It is useful, then, to quantify the financial

impact of changing the requirements on PD and availability.
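The sketch below shows how a requirement elasticity (Equations 7.43-7.46) can be evaluated as a finite difference once the lifetime costs before and after a requirement change are known; the cost figures in the example call are hypothetical placeholders, not results from the Techsat21 model.

    # Requirement elasticity as a finite difference: the fractional change in
    # lifetime cost (the CPF for this mission) divided by the fractional change
    # in the requirement that caused it.
    def requirement_elasticity(cpf_base, cpf_new, req_base, req_new):
        d_cpf = (cpf_new - cpf_base) / cpf_base
        d_req = (req_new - req_base) / req_base
        return d_cpf / d_req

    # Hypothetical illustration: lifetime cost rising from $3.00B to $3.15B when
    # the availability requirement is tightened from 0.90 to 0.95.
    E_av = requirement_elasticity(3.00, 3.15, 0.90, 0.95)
    print(f"Availability elasticity: {E_av:.2f}")  # (0.05)/(0.0556) = 0.90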

The 11-satellite cluster can support higher values for both PD and availability, making

it flexible to changes in the system requirements. However, the performance changes, and


consequently the failure compensation increases so that there is a measurable impact on

the lifetime cost. Consider a change in the PD requirement from 0.75 to 0.9. Although the

11 satellite cluster can satisfy the requirements with a full complement of satellites, if two

satellites fail, the resulting 9 satellite cluster cannot satisfy this new PD requirement. The

performance of the system is therefore lower than for the previous requirement, and this

results in increased failure compensation costs. Increasing the required availability to 95%

has a similar impact. To quantify these effects, the corresponding requirement elasticities

of the lifetime cost are plotted in Figure 7-25.

[Figure 7-25 is a bar chart of the elasticities: E(Pd) = 0.250, E(Av) = 0.899, E(Power) = 0.029.]

Figure 7-25: Elasticities of the lifetime cost for the 11 satellite cluster; PD: 0.75 → 0.9; Availability: 0.9 → 0.95; Pt: 200W → 100W

The fact that both elasticities are positive in sign is to be expected, since an increase

in either requirement leads to increased costs, through failure compensation. The magnitudes
of the elasticities show that the availability requirement is a much larger cost driver

than the PD. In fact, the availability elasticity is almost unity (0.9), and so even small

increases in the availability requirement result in measurable cost increases, whereas the

cost is reasonably insensitive to changes in the PD. This result emphasizes the importance

of correctly specifying the requirements during system definition; whereas engineers would

likely specify the PD requirement, based on knowledge of what is needed to construct target

tracks, the availability requirement, which has a much larger impact on the system cost, is

most likely chosen by high ranking decision-makers with little appreciation for the impact

of their choice. If the Techsat21 system is to be deployed cost-effectively, the engineers and


the military planners must work together in defining the availability requirement, based on
what is really needed for military utility, and no more.

Technology (Power) Elasticity for Techsat21 cluster

Before evaluating the technology-elasticity metric corresponding to modifications in the

transmitter power, valuable insight can be gained from considering just the performance

implications of these changes.

Referring to the performance profiles plotted in Figure 7-23, the 11-satellite baseline
system benefits from being compliant with requirements even after a loss of 3 satellites.
This would correspond to the 8-satellite cluster after reconfiguration. It was already stated

that the 8 satellite cluster cannot tolerate a single satellite failure, even with 400W of

transmit power. Therefore, the 11 satellite cluster with Pt = 400W can tolerate the same

number of satellite failures as the baseline system that has Pt = 200W . Their performances

will be equal and so there is no benefit in deploying the larger power system since it will

assuredly cost more to build and launch.

Conversely, if the satellites have only 100W of transmit power, there is less margin for

error, and 2 satellite failures result in a violation of the requirements. This reduces the

performance to approximately 25% over 10 years, but also reduces the system cost. Now,

there is an engineering trade to be made, and the elasticity metric can be used to guide

the decision. Specifically, a positive value for the power-elasticity indicates that it is cost
effective to deploy less power than the nominal 200W, since it would imply that a reduction

in power results in a reduction of cost. Also the absolute value of the elasticity measures

the relative importance of this decision on the overall system cost. In actual fact, the power

elasticity is positive but is almost zero, as shown in Figure 7-25. This means that although

the baseline cost is less with the smaller power transmitters, this change is almost exactly

counteracted by increases in the failure compensation needed to maintain operations. There

is therefore only a very slight benefit in deploying smaller power transmitters. It is probably

more prudent to accept the slightly higher costs of the 200W system, since it buys extra

margin reducing the probability of system downtime.

7.5.5 Conclusions of Design Trades

The generalized analysis of the Techsat21 concept has uncovered some very interesting

trends:

- One dimensional spacecraft clusters using minimum-redundancy arrays are feasible for
a GMTI mission, provided that at least 8 satellites are deployed. The capabilities of
these systems are adequate for a demonstration program, but probably not sufficient
for an operational system.

- At the chosen operating altitude of 800km, a range-unambiguous PRF of 1500Hz
gives the best capabilities. Higher PRF's suffer from high clutter return from the

ambiguities.

- At this PRF, each satellite should be equipped with a small antenna, to improve the
search rate, and enough transmitter power to satisfy SNR constraints, accounting for
the n_s^2 processing gain. For an 8 satellite cluster, this corresponds to an aperture of
approximately 1 m^2 and a power of 400W. For an 11 satellite cluster with apertures
of 1 m^2, the power can be as low as 100W for reasonable capabilities.

- Both the 8 satellite cluster and the 11 satellite cluster can satisfy requirements for
90% availability in searching a 10^5 km^2 theater within 1 minute, at a PD = 0.75 and
a FAR of 1000 seconds per square km. Both systems can support MDV's as low as
1 m/s and location accuracies of 100m on the ground.

- Extra margin in the capability is useful for improving performance, since it allows

satellite failures to occur without violating requirements. The performance of the 11

satellite system is almost 65% over 10 years for the given system requirements.

- The most cost effective option, accounting for all the effects of failure compensation,

is the 8 satellite cluster with distributed processing. The most prudent (and only

marginally more expensive) is the 11 satellite cluster.

- A centralized processor is a bad idea for Techsat21, incurring high costs and low

performance.

- The 11 satellite cluster is insensitive to changes in the PD requirement, but very
sensitive to changes in the availability requirement. Consequently, the availability
requirement should be set very carefully to match what is actually needed operationally.
Reducing the power on the 11 satellite cluster does not significantly reduce costs, since
the savings in the baseline cost are counteracted by high failure compensation costs.

7.6 Summary

The Techsat21 concept is very difficult to understand without a great deal of prior ex-
perience in radar, antenna theory, signal processing and orbital mechanics. The analysis
presented here, covering around 60 pages, is nothing more than a first-cut at the problem.


Nevertheless, some interesting trends have been discovered, which will guide the next level

of design.

Most notably, it appears that one-dimensional minimum-redundancy arrays cannot pro-
vide sufficient capability to be used as the sole asset for GMTI theater surveillance. This
statement must however be qualified, lest the reader misunderstand its implications. The

conclusion is based on the analysis presented in this chapter, and as such is sensitive to

the assumptions that were made to simplify the model. The most important of these is

the clutter-processing model, that was based on simple pulse-Doppler radar techniques. In

a real system, additional levels of clutter rejection and suppression could be implemented,

improving the capabilities beyond that predicted. However, if complicated clutter rejec-

tion and suppression techniques are required, the total processing load (including the array

processing) would become very prohibitive for small satellite platforms. However, there

are possibilities that have not yet been explored. Preliminary results (not presented)
suggest that the use of arrays in which the ratio of the element spacings is prime offers a
definite potential for improvement over the minimum-redundancy arrays. Furthermore, if

the encoded-pulse idea is proved workable, high PRF's around 10kHz give marked improve-

ments in the capabilities even for 8-satellite minimum-redundancy clusters. This work is in

progress at AFRL.

The most important thing to realize, though, is that the inadequacies of the one-dimensional
clusters are not characteristic of two-dimensional clusters. In particular, clutter returns are
suppressed in both range and azimuth by the low pattern amplification in the sidelobes of
the sparse array. This reduces the total clutter power entering each receive-beam by orders
of magnitude (approximately 10dB for a 4 × 4 randomly distributed cluster) compared to
one-dimensional arrays with the same number of satellites. This allows high PRF's to be
used, further reducing the Doppler ambiguities and clutter. It is almost certainly true that
the capabilities and performance of two-dimensional clusters will be significantly better than

the one-dimensional architectures studied here. As a result, future work should address the

application of two-dimensional arrays for Techsat21.

In concluding the discussion about Techsat21, it must be emphasized that the concept rep-
resents an extraordinarily elegant approach for performing a GMTI mission. Recall that
the most difficult problem for a GMTI system is an isolation issue: how to isolate slow
moving targets from stationary clutter. Techsat21 attacks this problem not with massive
amounts of processing to correct for poorly suited sensing, as is the approach taken by
Discoverer-II, but instead asks Mother Nature to work in its favor. By spreading out its
apertures, it implicitly improves its ability to isolate signals from different locations since
the differences in the arrival phase of the signals will be increased. This makes the isolation
task far easier, and decouples it from the now trivial detection process.


This chapter was intended as a demonstration of the GINA methodology for a realistic

design study, and the Techsat21 space-based radar provided a challenging example. As has

been stressed many times, the level of complication in this design is intimidating. However,

by considering only what is actually important to the mission, in terms of the generalized

quality-of-service parameters that define the capabilities, and by decomposing the system
into only those functional modules that affect these capability parameters, a reasonably
simple model was constructed. This yielded significant results in predicting capabilities,
showed important trends relating these capability characteristics to changes in the system
parameters, identified architectures that are well suited to the mission while eliminating
architectures that are not effective, and allowed the cost drivers to be identified. This is

precisely the conceptual design process.


Chapter 8

Conclusions and Recommendations

The goal of this research was to develop a systematic approach to analyze modern satellite

systems having any architecture, distributed or monolithic, for any likely mission.

A generalized analysis methodology for satellite systems has been developed, and it

can be used for the analysis of any space system architecture addressing any mission in

communications, sensing or navigation. The generalization is possible because, for each of

these applications, the overall mission objective is to transfer information between remote

locations, and to do so effectively and economically. The analysis methodology is therefore
a hybrid of information network flow analysis, signal and antenna theory, space systems

engineering and econometrics. The most important concepts of the Generalized Information

Network Analysis (GINA) can be summarized:

- Satellite systems are information transfer systems
All current satellite systems essentially perform the task of collection and dissemination of information.

- Information transfer systems serve O-D markets
These markets are defined by a set of origin-destination pairs, and specific information symbols that must be transferred between them.

- Satellites and ground stations are nodes in a network
Information must flow through the nodes, to connect the O-D pairs that define the market. At any instant, the network is defined only by its operational components, and so all networks are assumed to be instantaneously failure-free. Should a component fail, the network changes by the removal of that component.


- The capabilities of the system are characterized by the isolation, rate, integrity and availability parameters
  - Isolation characterizes the system's ability to isolate and identify the signals from different sources within the field of view.
  - Information Rate measures the rate at which the system transfers information symbols between each O-D pair. Information must be sampled at a rate that matches the dynamics of the source or end-user.
  - Integrity measures the error performance of the system, characterizing the probability of making an error in the interpretation of a symbol.
  - Availability is the instantaneous probability that information symbols are being transferred through the network between known and identified O-D pairs at a given rate and integrity. It is a measure of the mean and variance of the other capability parameters. It is not a statement about component reliabilities.

- Each market has associated requirements on isolation, rate, integrity and availability
Users of the system are satisfied only when information transfers occur that are compliant with these requirements. Therefore, these are the functional requirements placed on the system. A network satisfying these requirements is deemed operational.

- Performance is defined relative to mission requirements
The performance of a system within a given market is the probability that the system instantaneously satisfies the top-level functional requirements. This is simply the probability of being in an operational state. It is here that component reliabilities make an impact.

- The Cost per Function metric
This is a measure of the average cost to provide a satisfactory level of service to a single O-D pair within a defined market. The metric amortizes the total lifetime system cost over all satisfied users of the system during its life.
The lifetime system cost includes the baseline cost and the expected failure compensation costs. Baseline costs account for the design, construction, launch and operation of the system. The failure compensation costs represent expenditure necessary to compensate for any failures that cause a violation of requirements. Since the likelihood of failure is the complement of the generalized performance, it is through the failure compensation costs that performance impacts the CPF metric.


The number of satisfied users is determined by the capability characteristics of the system and by market effects. The system capabilities define the maximum number of users that can be supported at the required rate, integrity and availability. The number of satisfied users is the smaller of the supportable capacity and the size of the local market (a minimal numerical sketch of the performance and CPF calculations follows this list).

- The Adaptability metrics
These measure how sensitive a system is to changes in the requirements, component technologies, operational procedures or even the design mission.
  - Type 1 adaptabilities are the elasticities of the CPF with respect to realistic changes in the system requirements or component technologies. This allows the system drivers to be identified, and can be used in comparative analyses between candidate architectures.
  - Type 2 adaptability measures the flexibility of an architecture for performing a different mission, or at least an augmented mission set.
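As a concrete, if highly simplified, illustration of the Performance and Cost per Function concepts above, the sketch below computes a generalized performance for a cluster that is assumed to be operational whenever at least k of its n satellites are working (independent, identical satellite reliabilities), and a CPF that amortizes a lifetime cost over the satisfied users; all numbers and names are illustrative and not taken from the case studies.

    from math import comb

    def performance(n, k, p_sat):
        """Probability of being in an operational state: at least k of n
        independent satellites (each with reliability p_sat) still working."""
        return sum(comb(n, i) * p_sat**i * (1 - p_sat)**(n - i)
                   for i in range(k, n + 1))

    def cost_per_function(lifetime_cost, supportable_capacity, market_size):
        """Amortize lifetime cost over satisfied users: the smaller of the
        supportable capacity and the size of the market."""
        satisfied_users = min(supportable_capacity, market_size)
        return lifetime_cost / satisfied_users

    # Illustrative numbers only.
    print(performance(n=11, k=8, p_sat=0.95))        # generalized performance
    print(cost_per_function(3.0e9, 2.0e6, 1.5e6))    # $ per satisfied user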

The benefits of the GINA methodology are several-fold. First of all, it is completely
compatible with, and supportive of, formal SE practices. By being based on a functional
decomposition of the real architecture, GINA builds upon basic functional analysis, adding
unambiguous, objective quantification to predict capabilities, performance, cost and risk.
In addition, the CPF and the Adaptability metrics support comparative analyses between
competing systems with large architectural differences. GINA bases judgment on how well
a system addresses a defined market and scales the cost accordingly. In this way, very large

and ambitious systems can be fairly compared to smaller, more conservative systems.

The mathematical forms of the elasticities (Adaptability metrics) are identical to the
conventional elasticities used in econometric analysis. This allows the quantitative results
of systems engineering analyses to be integrated with financial analyses so that they may

be applied in forming business-cases for satellite programs, or can be used in the investment

decision-making process.

The formalism of the GINA framework has been used to obtain significant quantitative
and qualitative results for a variety of applications and has allowed a comprehensive charac-
terization of satellite systems in general. The most important of these classifications relate
to distributed satellite systems. A distributed satellite system is defined as any system that
uses more than a single satellite to address the market, and the cluster size specifies how

many satellites are in view of a common ground location. The categories of distribution are

based on the network topology: (1) Collaborative systems feature parallel uncoupled paths

through the network, from source to sink; and (2) Symbiotic systems feature interconnected


paths through several satellites before arrival at the sink.

Posed within the GINA framework, and organized using the generalized classifications,
the benefits offered by distributed systems are easily appreciated. Summarizing just the
most significant of these:

- Symbiotic architectures offer greatly improved isolation by separating sensors over large cluster baselines, thus exploiting a different collection geometry to separate and identify different signals in phase or frequency. Collaborative clusters do not improve isolation capabilities.

- Distribution offers improvements in rate and integrity due to the ergodic property of noise, such that integrating over several collectors is equivalent to integrating over time, but incurs no penalty in rate. A higher net rate of information transfer is possible with collaborative clusters by combining the capacities of several satellites in order to satisfy the local and global demand. This is simply a result of task division. Signal to noise ratios can be improved linearly with collaborative clusters, through task division, and super-linearly (quadratically or cubically) with symbiotic clusters, through coherent integration. Both of these effects give exponential improvements in integrity compared to singular deployments (a brief scaling sketch follows this list).

- Distributed systems can exhibit higher availabilities through a reduced variance in the coverage of target regions. This reduces the need to "overdesign" and provides more opportunities for a favorable viewing geometry.

- A staged deployment of space assets, matched to the development of the market, can effectively lower the baseline costs of distributed systems compared to monolithic designs, due to the time value of money, and a reduced level of financial risk.

- Distributed systems require only fractional levels of redundancy to be deployed in order to achieve high reliabilities. Thus only marginal increases in the up-front costs are needed to gain large savings in the failure compensation costs. More importantly, due to the separation of important system components among many satellites, only those components that break need replacement. This greatly reduces the failure compensation costs.
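The signal-to-noise scaling in the rate-and-integrity item above can be summarized compactly. The relations below are an idealized sketch only (identical satellites, independent receiver noise); the quadratic case corresponds to the n_s^2 processing gain used in the Techsat21 analysis, and the cubic case assumes the transmitted signals can also be combined coherently on target.

    % Idealized SNR scaling with cluster size n_s
    \mathrm{SNR}_{\mathrm{collaborative}} \propto n_s
        \qquad \text{(task division / incoherent combination)}
    \mathrm{SNR}_{\mathrm{symbiotic}} \propto n_s^{2} \;\text{to}\; n_s^{3}
        \qquad \text{(coherent integration)}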

Based on these arguments, deduced from organized, qualitative analysis using the GINA
methodology, it would appear that the potential offered by distributed systems is very great,
and their further development is strongly encouraged. These significant conclusions, which
are straightforward and easily understood, were made possible by the structured approach

provided by GINA.


Qualitative analysis is, however, insufficient for the SE process. To demonstrate the

applicability of GINA for quantitative analysis, and to prove the claim of generality, the

GINA methodology has been applied to three detailed case studies, covering a range of

applications.

Validation of the technique was provided by analysis of the NAVSTAR Global Posi-

tioning System. The system was modeled to comprise modules for the satellite navigation

payload, the downlink transmitters, the effects of free-space loss, satellite visibility, cov-

erage geometry, multiple-access interference, and the user receiver functions. Inputs to

the model were based on simulations of the constellation, or on measured statistics. The

50th-percentile capabilities of GPS calculated using GINA agree to within 3% of the mea-

sured 50th-percentile capabilities. This is an excellent result, providing an "acid-test" for

the validity of the GINA approach. The generalized analysis also suggests that the GPS

architecture is extremely robust, with the navigation accuracy degrading by only a few

meters after two or three satellite failures. Since the original system requirements (16m

position accuracy at the 50th-percentile) are easily satisfied by the current constellation,
this degradation is insufficient to cause system failure, and the generalized performance is

very high. Augmenting the system with an additional three satellites, placed in GEO, adds

even greater performance. For the augmented system, military users could achieve 16m

position accuracy with 90% availability, even after 6 satellite failures. This corresponds to

a performance of approximately 100%.

A comparative analysis of three proposed broadband satellite systems demonstrated

the utility of GINA for a competitive assessment of commercial viability. Models were

constructed for Cyberstar, Spaceway and Celestri based on the designs listed in their FCC

filings. The most important results of the study are summarized below:

- Cyberstar, as it appears in the filing, is unsuited for providing broadband communications at rates higher than 386Kbit/s, while Spaceway and Celestri will be able to support high rate (T1) services with high levels of integrity (BER ≤ 10^-10) and availabilities exceeding 97%.

- The cost per billable T1-minute is the metric used to compare the potential for commercial success of each system: it is the cost per billable T1-minute that the company must recover from customers through fees in order to achieve a 30% internal rate of return. Assuming improvements are made in Cyberstar so that it may compete in this market, the calculated cost per billable T1-minute metrics for all the systems are between $0.15 and $0.25, implying that all three systems will be able to offer competitively priced services to users. Celestri has a slight competitive advantage since it supports the lowest cost per billable T1-minute, and also has the smallest variation across market models (a brief sketch of the metric's arithmetic follows this list).

- Deployment strategies and market capture have a larger impact on commercial success than architecture. The difference in the cost per billable T1-minute between the GEO and the LEO architectures is not as large as the differences due to a more efficient deployment strategy that is tailored to match the development of the market. Increasing the size of the market captured means that the high fixed costs can be

amortized over more paying customers.

- Contrary to popular belief, lower launch costs are not as effective for commercial benefit as lower manufacturing cost.

- For smaller systems such as Cyberstar, offering lower rate services at discounted rates offers the potential for larger revenues, through increased yield. This is not the case for Celestri, which maximizes revenue by providing high-rate services. Basically, there are not enough paying customers in the market to efficiently utilize the resources of Celestri at low data rates.
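A brief sketch of the arithmetic behind the cost per billable T1-minute metric referred to above: if revenue is the price per billable T1-minute multiplied by the billable minutes carried, then requiring a 30% internal rate of return is equivalent to discounting both the cost stream and the billable-minute stream at 30% and taking their ratio (ignoring taxes and any other revenue). The yearly figures below are hypothetical placeholders, not values from the Cyberstar, Spaceway or Celestri models.

    # Cost per billable T1-minute under the 30% IRR condition.
    IRR = 0.30
    yearly_costs   = [800e6, 900e6, 400e6, 300e6, 300e6]  # $ per year (hypothetical)
    yearly_minutes = [0.0, 1.5e9, 4.0e9, 6.0e9, 7.0e9]    # billable T1-minutes per year

    pv_costs   = sum(c / (1 + IRR)**t for t, c in enumerate(yearly_costs))
    pv_minutes = sum(m / (1 + IRR)**t for t, m in enumerate(yearly_minutes))

    price = pv_costs / pv_minutes
    print(f"Required cost per billable T1-minute: ${price:.2f}")  # ~$0.23 with these made-up figures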

In the final case study, GINA was applied to a preliminary design of TechSat21, a dis-
tributed space-based radar concept. Techsat21 features symbiotic clusters of small satellites
(approximately 100 kg) that fly in close formation, creating sparse arrays to detect ground
moving targets in a theater of interest. The GINA methodology allowed the capabilities to
be predicted, accounting for the effects of coverage variations, clutter and noise power, and
most importantly, the sparse aperture synthesis. The results are significant:

- One-dimensional minimum-redundancy arrays provide insufficient capability to be used as the sole asset for GMTI theater surveillance, unless additional levels of (unmodeled) clutter processing are implemented. There are some other options for one-dimensional sparse arrays that have not been explored, and they may offer improvement over the minimum-redundancy arrays. Furthermore, it would appear qualitatively that two-dimensional clusters offer significant potential. It should be noted that the capabilities of the one-dimensional arrays are definitely within the bounds of an effective concept demonstrator, and since the cluster can be augmented and reconfigured, traceability to an operational system is guaranteed. The efficacy of the Techsat21 concept will thus be confirmed.

- Small apertures and high powers are preferable at low PRF's, while larger apertures

are needed at higher (ambiguous) PRF's. The smaller apertures have a large FOV

and allow more rapid searching of the theater. However, at higher PRF's, a wide FOV

gives rise to range ambiguities and the clutter returns in the range ambiguities hinder

detection.


- At an operating altitude of 800km, a range-unambiguous PRF of 1500Hz gives the best capabilities. At this PRF, each satellite should be equipped with a small antenna and enough transmitter power to satisfy SNR constraints, accounting for an n_s^2 processing gain. For an 8 satellite cluster, this corresponds to an aperture of approximately 1 m^2 and a power of 400W. For an 11 satellite cluster with apertures of 1 m^2, the power can be as low as 100W for reasonable capabilities.

- Both the 8 satellite cluster and the 11 satellite cluster can satisfy requirements for 90% availability in searching a 10^5 km^2 theater within 1 minute, at a PD = 0.75 and a FAR of 1000 seconds per square km. Both systems can support MDV's as low as 1 m/s and location accuracies of 100m on the ground.

- Based on cost-effectiveness and performance, centralized processing on a dedicated satellite is a bad idea for Techsat21. The additional satellite adds to baseline costs and the single point of failure reduces the performance and increases the failure compensation costs.

- The system is almost insensitive to changes in the probability of detection (PD) requirement, but very sensitive to changes in the availability requirement. It is important therefore to set the availability requirement very carefully to represent the real operational needs of the mission. This necessitates significant collaboration between engineers and policy-makers, to balance what is needed with what can be afforded.

These case studies show how the generalized analysis methodology has real applica-

tion in the SE process, since significant results were produced in a very short period of
time (the case studies were all conducted within a 2 month period). By standardizing the
representation of the overall mission objective, in terms of the generalized quality of service
parameters, GINA organizes, prioritizes and focuses the engineering effort expended

in satellite system analysis. The formal methodology means that the analysis of a new

mission is straightforward once the mission parameters have been mapped into the GINA

framework. Following the procedures described in this thesis should produce quantitative,

relevant and meaningful results. Finally, a word of caution. GINA is not an excuse for

ignorance or carelessness. Successful application of GINA is contingent upon a solid under-

standing of the system and the mission to which it is addressed. Although it may give the

recipe, it neither bakes the cake, nor teaches the cook.

8.0.1 Recommendations

The procedures described in Chapter 4 of this thesis are the product of over two and

a half years of careful thought in refining concepts and eliminating irrelevant features,


so that the methodology is focused on only what is really important. Clearly the main

contribution of the research is the development of this methodology, and in that, there are

no recommendations for improvements per se (if I knew of any short-comings, I would have

surely corrected them before writing the thesis).

The main recommendation then, is to further validate the GINA methodology through

repeated application to more missions and satellite systems. Only through continued use

can GINA gain acceptance into the toolbox of the space systems engineer. Some specific

examples of where GINA should be applied, representing extensions of the case studies

presented in the thesis, are given below:

1. GINA should be applied for a comparative analysis of the mobile communication
satellite systems that will soon begin offering service. This should assist validation,
and will provide an opportunity to demonstrate GINA for near real-time strategic
planning. As the market develops, and the satellites become aged, the models can be
updated and refined. This would allow suggestions to be made regarding constellation

replenishment and augmentation, in order to optimize the cost per billable voice-

circuit minute.

2. GINA should be used for an analysis of the Discoverer-II space based radar, so that

it may be fairly compared to Techsat21. The results would greatly assist decision-

making for budget allocation. However, since the D-2 system parameters are classified

secret, this analysis must be done outside of an academic environment.

3. The FAA plans to deploy the geostationary Wide Area Augmentation System to improve
the capabilities of GPS so that satellite navigation can be adopted as the primary
means of navigation for commercial air traffic. GINA is perfectly suited to predict the
improvements that this will bring, and could even model the capabilities of augmented
operations using GPS, inertial navigation, and existing VHF Omnidirectional Ranging
(VOR) equipment. Evaluating the capabilities of such a complex system will be difficult,

but certainly tractable.

It is also suggested that the applicability of the method for missions other than commu-

nications, navigation and sensing be determined. There is a growing interest in the DoD to

deploy weapons in space, using either directed (photon) energy or mass-drivers. A question

arises as to whether GINA could be applied in the design of such systems. Strictly, neither

of these concepts feature information flow, and so GINA cannot be applied directly. How-

ever, at least with the laser weapons, the mission objective can still be posed in terms of

delivering an energy signal to a sink (target) using a satellite network. The sources of the

energy signal are the satellites, but they would likely receive command information from


allied commanders on the ground. The system is thus a hybrid network, in which infor-

mation is delivered to a set of satellites, that then act to deliver energy signals to a set of

targets. There does not seem to be any reason why the GINA methodology could not be

adapted to address these types of applications.

Note that, whereas the GINA procedure took over two and a half years to develop, the

implementation of the methodology, that being the development of the software used to

calculate the results presented in Part 2 of the thesis, lasted only nine months. This is

an area where it is recommended that improvements be made. Specifically, the functional
behavior of some modules can be improved to account for higher-order effects (non-Rayleigh
clutter, better rain attenuation prediction, improved ionospheric models, etc.). Also, Mat-

lab/Simulink is probably not the optimum platform for implementing GINA due to its re-

strictive format control on the vectors connecting functional modules. A better choice would

have been one of the commercial Computer Based Systems Engineering (CBSE) tools that

are more suited to functional flow concepts.¹ Integration with one of these existing tools

would be the main recommendation for further development of the GINA implementation.

¹ See INCOSE's web site: http://www.incose.org/


Bibliography

[1] San Francisco Bay Area Chapter International Council on Systems Engineering. Sys-

tems Engineering Handbook. Technical report, INCOSE, January 1998.

[2] Shigeru Mizuno and Yoji Akao, editors. QFD: The Customer-Driven Approach to

Quality Planning and Deployment. Asian Productivity Organization, 1994.

[3] W. J. Larson and J. R. Wertz, editors. Space Mission Analysis and Design. Microcosm,

Inc. and Kluwer Academic Publishers, second edition, 1992.

[4] C. Gumbert, M. Violet, D. E. Hastings, W. M. Hollister, and R. R. Lovell. Cost per Billable Minute Metric for Comparing Satellite Systems. Journal of Spacecraft and Rockets, 34(12):837-846, December 1997.

[5] A. Kelic, G. B. Shaw, and D. E. Hastings. A Metric for Systems Evaluation and

Design of Satellite-Based Internet Links. Journal of Spacecraft and Rockets, 35(1),

January-February 1998.

[6] C. Jilla and D. Miller. A Reliability Model for the Design and Optimization of Sep-

arated Spacecraft Interferometer Arrays. In 11th Annual AIAA/USU Conference on

Small Satellites, Utah, September 1997.

[7] Douglas Wickert, G. B. Shaw, and D. E. Hastings. The Impact of a Distributed

Architecture for a Space Based Radar Replacement to AWACS. Journal of Spacecraft

and Rockets, 35(5), September-October 1998.

[8] The Satellite Remote Sensing Industry. KPMG Peat Marwick LLP, 1996.

[9] A. H. Greenaway. Prospects for Alternative Approaches to Adaptive Optics. In D.M.

Alloin and J.M. Mariotti, editors, Adaptive Optics for Astronomy, NATO-ASI, pages

287-308. NATO Publishers, 1993.

[10] R. Stephenson, D. Miller, and E. Crawley. Comparative System Trades Between Struc-

turally Connected and Separated Spacecraft Interferometers for the Terrestrial Planet


Finder Mission. Technical Report SERC 3-98, The MIT Space Engineering Research

Center, Massachusetts Institute of Technology, Cambridge, MA 02139, 1998.

[11] E. M. C. Kong. Optimal Trajectories and Optimal Design for Separated Spacecraft

Interferometry. Master's thesis, Department of Aeronautics and Astronautics, Mas-

sachusetts Institute of Technology, February 1999.

[12] T. J. Cornwell. A Novel Principle for Optimization of the Instantaneous Fourier Plane

Coverage of Correlation Arrays. IEEE Transactions on Antenna and Propagation,

36(8), August 1988.

[13] David Bearden. Cost Modeling. In Reducing Space Mission Cost. Microcosm Press,

1996.

[14] Jerry Sellers and Ed Milton. Technology for Reduced Cost Missions. In Reducing Space

Mission Cost. Microcosm Press, 1996.

[15] J. R. Wertz and W. J. Larson, editors. Reducing Space Mission Cost. Microcosm Press,

1996.

[16] Robert Parkinson. Introduction and Methodology of Space Cost Engineering. AIAA

Short Course, April 28-30 1993.

[17] Rick Fleeter. Design of Low-Cost Spacecraft. In J. R. Wertz and W. J. Larson, editors,

Space Mission Analysis and Design. Microcosm, Inc, second edition, 1993.

[18] G Canavan, D. Thompson, and I. Bekey. Distributed Space Systems. In New World

Vistas, Air and Space Power for the 21st Century. United States Air Force, 1996.

[19] R. F. Brodsky. Defining and Sizing Payloads. In J. R. Wertz and W. J. Larson, editors,

Space Mission Analysis and Design. Microcosm, Inc, second edition, 1993.

[20] M. Socha, P. Cappiello, R. Metzinger, D. Nokes, C. Tung, and M. Stanley. Development

of a Small Satellite for Precision Pointing Applications. Technical report, Charles Stark

Draper Laboratory, 1996.

[21] Project Foresight. 16.89 Space Systems Engineering Final Report, Department of

Aeronautics and Astronautics, Massachusetts Institute of Technology, Spring 1997.

[22] Nancy Lynch. Distributed Algorithms. Morgan Kaufmann Publishers, 1996.

[23] System Reliability and Integrity. Infotech International Limited, 1978.


[24] Robert Schwarz. A Probabilistic Model of Satellite System Automation on Life Cy-

cle Costs and System Availability. Master's thesis, Department of Aeronautics and

Astronautics, Massachusetts Institute of Technology, June 1997.

[25] E. S. Dutton. Effects of Knowledge Reuse on the Spacecraft Development Process.

Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute

of Technology, June 1997.

[26] Bob Preston. Plowshares and Power, The Military Use of Civil Space. National Defense

University Press, 1994.

[27] Greg Yashko and D. E. Hastings. Analysis of Thruster Requirements and Capabilities

for Local Satellite Clusters. In 10th Annual AIAA/USU Conference on Small Satellites,

Utah, September 1996.

[28] Ray Sedwick, E. Kong, and D. Miller. Exploiting Orbital Dynamics and Micropropul-

sion for Aperture Synthesis Using Distributed Satellite Systems: Applications to Tech-

Sat21. In AIAA Civil Space and Defense Technologies Conference, number AIAA-98-

5289, Huntsville, AL, October 1998.

[29] G. W. Hill. Researches in the Lunar Theory. American Journal of Mathematics,

1(1):5-26, 1878.

[30] C. Swift and D. Levine. Terrestrial Sensing with Synthetic Aperture Radiometers.

IEEE MTT-S International Microwave Symposium Digest, 1991.

[31] Bernard D. Steinberg. Principles of Aperture and Array System Design. John Wiley

and Sons, 1976.

[32] R. K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network Flows. Theory, Algorithms and

Applications. Prentice Hall, 1993.

[33] R. N. Bracewell. The Fourier Transform and its Applications. McGraw-Hill, second

edition, 1986.

[34] S. Drabowitch, A. Papiernik, H. Griffiths, et al. Modern Antennas. Chapman and

Hall, 1998.

[35] J. M. Wozencraft and I. M. Jacobs. Principles of Communication Engineering. Wiley,

New York, 1965.

[36] E. A. Lee and D. G. Messerschmitt. Digital Communication. Kluwer Academic Pub-

lishers, second edition, 1994.


[37] D. G. Forney. Principles of Digital Communication. Printed notes for MIT class 6.451, 1996.

[38] H. J. Landau and H. O. Pollak. Prolate spheroidal wave functions, Fourier analysis, and uncertainty - III: The dimension of the space of essentially time- and band-limited signals. Bell System Technical Journal, 41:1295, 1965.

[39] D. K. Barton. Modern Radar System Analysis. Artech House, 1988.

[40] E. J. Fitzpatrick. Spaceway. Providing affordable and versatile telecommunications solutions. Pacific Telecommunications Review, September 1995.

[41] Hughes Communications Galaxy Inc. Application of Hughes Communications Galaxy,

Inc. for Authority to Construct, Launch and Operate Spaceway, a Global Intercon-

nected Network of Geostationary Ka-Band Fixed-Service Communications Satellites.

FCC Filing, July 26 1994.

[42] Hughes Communications Galaxy Inc. Application of Hughes Communications Galaxy,

Inc. Before the Federal Communications Commission for Galaxy Spaceway, a Global

System of Geostationary Ka/Ku Band Communications Satellites - System Amend-

ment. FCC Filing, September 29 1995.

[43] R. K. Crane. Prediction of Attenuation by Rain. IEEE Transactions on Communications, com-28(9):1717-1733, September 1980.

[44] Philip S. Babcock IV. An Introduction to Reliability Modeling of Fault-Tolerant Systems. Technical Report CSDL-R-1899, The Charles Stark Draper Laboratory, Cambridge, MA, 1986.

[45] Robert Lovell. The Design Trade Process. Lecture notes from MIT 16.89 Space Systems Engineering class, 1995.

[46] R. Pindyck and D. Rubinfeld. Microeconomics. Prentice Hall, fourth edition, 1998.

[47] Discoverer-II: Briefings to Industry. DARPA Tactical Technology Office Presentation, June 1998. Available on the World Wide Web at http://www.arpa.mil/tto/dis2-docs.htm.

[48] Techsat21 - Space Missions Using Satellite Clusters. Air Force Research Lab-

oratory Factsheet, September 1998. Available on the World Wide Web at

http://www.vs.afrl.af.mil/factsheets/TechSat21.html.


[49] Bradford Parkinson and James Spilker Jr., editors. Global Positioning System: Theory and Applications, Volume 1, volume 163 of Progress in Astronautics and Aeronautics. AIAA Inc., 1996.

[50] Aeronautics and Space Engineering Board, National Research Council. The Global Positioning System - A Shared National Asset. National Academy Press, 1995.

[51] J. F. Zumberge and W. I. Bertiger. Ephemeris and Clock Navigation Message Accuracy. In Global Positioning System: Theory and Applications, Volume 1. AIAA Inc., 1996.

[52] U.S. General Accounting Office. Satellite Acquisitions: Global Positioning System Acquisition Changes After Challenger's Accident. U.S. Government Printing Office, September 1987.

[53] Loral Aerospace Holdings Inc. Application of Loral Aerospace Holdings, Inc. to Construct, Launch and Operate a Global Communications Satellite System in the Fixed-Satellite Service - The CyberStar Communications System. FCC Filing, September 29, 1995.

[54] Teledesic Corporation Inc. Application of Teledesic Corporation for Authority to Con-

struct, Launch and Operate a Low Earth Orbit Satellite System in the Domestic and

International Fixed-Satellite Service. Amendment. FCC Filing, July 13 1995.

[55] Motorola Global Communications Inc. Application for Authority to Construct,

Launch and Operate the Celestri Multimedia LEO System, a Global Network of Non-

Geostationary Communications Satellites Providing Broadband Services in the Ka-

Band. FCC Filing, June 1997.

[56] New World Vistas - Air and Space Power for the 21st Century, 1995. USAF Scientific

Advisory Board Study.

[57] Leopold Cantafio, editor. Space Based Radar Handbook. Artech House, 1989.

[58] M. I. Skolnik, editor. Radar Handbook. McGraw Hill, New York, 1970.

[59] Lamont Blake. Radar Range-Performance Analysis. Artech House, 1986.

[60] Nicolaos Tzannes. Communication and Radar Systems. Prentice-Hall, 1985.

[61] J. I. Marcum. A Statistical Theory of Target Detection by Pulsed Radar - and Mathematical Appendix. IRE Transactions IT-6, (2):59-267, July 1963.

[62] P. Swerling. Probability of Detection for Fluctuating Targets. IRE Transactions IT-6, (2):269-308, April 1960.


[63] J. Neuvy. An Aspect of Determining the Range of Radar Detection. IEEE Transactions

on Aerospace and Electronic Systems, AES-6(4), July 1970.

[64] A.W. Rudge, K. Milne, A.D. Olver, and P. Knight, editors. The Handbook of Antenna

Design, volume 2. Peter Peregrinus Ltd., 1983.

[65] A. T. Moffet. Minimum-Redundancy Linear Arrays. IEEE Transactions on Antennas and Propagation, AP-16(2):172-175, 1968.

[66] John Leech. On the Representation of 1, 2, ..., n by Differences. Journal of the London Mathematical Society, 31:160-169, 1956.
