
FIVE CORE METRICS

The Intelligence Behind Successful Software Management

Also Available from Dorset House Publishing

Adaptive Software Development: A Collaborative Approach to Managing Complex Systems
by James H. Highsmith III foreword by Ken Orr
ISBN: 0-932633-40-4 Copyright ©2000 392 pages, softcover

The Deadline: A Novel About Project Management
by Tom DeMarco
ISBN: 0-932633-39-0 Copyright ©1997 320 pages, softcover

Dr. Peeling's Principles of Management: Practical Advice for the Front-Line Manager
by Nic Peeling
ISBN: 0-932633-54-4 Copyright ©2003 288 pages, softcover

Measuring and Managing Performance in Organizations
by Robert D. Austin foreword by Tom DeMarco and Timothy Lister
ISBN: 0-932633-36-6 Copyright ©1996 240 pages, softcover

Peopleware: Productive Projects and Teams, 2nd ed.
by Tom DeMarco and Timothy Lister
ISBN: 0-932633-43-9 Copyright ©1999 264 pages, softcover

Project Retrospectives: A Handbook for Team Reviews
by Norman L. Kerth foreword by Gerald M. Weinberg
ISBN: 0-932633-44-7 Copyright ©2001 288 pages, softcover

The Psychology of Computer Programming: Silver Anniversary Edition
by Gerald M. Weinberg
ISBN: 0-932633-42-0 Copyright ©1998 360 pages, softcover

Quality Software Management Vol. 2: First-Order Measurement
by Gerald M. Weinberg
ISBN: 0-932633-24-2 Copyright ©1993 360 pages, hardcover

For More Information

• Contact us for prices, shipping options, availability, and more.

• Sign up for DHQ: The Dorset House Quarterly in print or PDF.

• Send e-mail to subscribe to e-DHQ, our e-mail newsletter.

• Visit Dorsethouse.com for excerpts, reviews, downloads, and more.

DORSET HOUSE PUBLISHING
An Independent Publisher of Books on Systems and Software Development and Management. Since 1984.

1-800-DH-BOOKS 1-800-342-6657
212-620-4053 fax: 212-727-1044

[email protected] www.dorsethouse.com

FIVE CORE METRICS

The Intelligence Behind Successful Software Management

Lawrence H. Putnam and Ware Myers

DORSET HOUSE PUBLISHING
3143 BROADWAY, SUITE 2B

NEW YORK, NEW YORK 10027

Library of Congress Cataloging-in-Publication Data

Putnam, Lawrence H.
Five core metrics : the intelligence behind successful software management / Lawrence H. Putnam, Ware Myers.
p. cm.
Includes bibliographical references and index.
ISBN 0-932633-55-2 (soft cover)
1. Computer software—Development—Management. 2. Computer software—Quality control. I. Myers, Ware. II. Title.
QA76.76.D47P8675 2003
005.1'068—dc21
2003046157

All product and service names appearing herein are trademarks or registered trademarks or service marks or registered service marks of their respective owners and should be treated as such.

Cover Design: David W. McClintock
Cover Illustrations, from Figures 7-03 and 7-04: Elisabeth Thayer

Copyright © 2003 by Lawrence H. Putnam and Ware Myers. Published by Dorset House Publishing Co., Inc., 3143 Broadway, Suite 2B, New York NY 10027

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the publisher.

Distributed in the English language in Singapore, the Philippines, and Southeast Asia by Alkem Company (S) Pte. Ltd., Singapore; in the English language in India, Bangladesh, Sri Lanka, Nepal, and Mauritius by Prism Books Pvt., Ltd., Bangalore, India; and in the English language in Japan by Toppan Co., Ltd., Tokyo, Japan.

Printed in the United States of America

Library of Congress Catalog Number: 2003046157

ISBN 13: 978-0-932633-55-2 12 11 10 9 8 7 6 5 4

Digital release by Pearson Education, Inc., June, 2013

Acknowledgments

We want to thank our colleagues who have been most helpful in providing their insights and opinions concerning the material in this book. They have been generous in sharing examples from their experience and a rich lore of stories from their consulting work in a wide variety of different companies, industries, and environments. Our thanks go to Doug Putnam, Larry Putnam Jr., Lauren Thayer, Kate Armel, Michael Mah, Ira Grossman, Bill Sweet, Stan Rifkin, Jim Greene, Anthony Hemens, and Hans Vonk.

Many of the figures in this book were specially prepared by people at Quantitative Software Management or were inspired by figures presented in QSM training materials. These resources were greatly appreciated. In particular, Doug Putnam revised and adapted many of the figures; his help was invaluable. Elisabeth Thayer drew the gnomes that appear on the cover and are used in figures to illustrate some of the key ideas. Again, we are thankful for all these efforts on our behalf.

Many people think that the role of editor and publisher is merely to check the spelling and to correct the grammar. Not so! The work done at Dorset House by Nuno Andrade, Vincent Au, Wendy Eakin, and David McClintock, in suggesting many meaningful changes to the text, has improved the style, consistency, integration of the ideas, and readability. We are deeply indebted to them for their diligent work in tending to these matters.


Finally, we thank our consulting and tools clients with whom we've worked over the past decade or so. These professionals have demonstrated the efficacy and value of good metrics programs leading to useful estimates up front, to control of projects under way, and to quantified benefits of process improvement initiatives. They have succeeded in making these ideas work.

Contents

Introduction 3

THE EVOLUTION OF THE METRICS 5
Computing Nuclear Weapons Effects 6
Computer Budgets in the Pentagon 7
Applying the Rayleigh Concept to New Projects 10
The Rayleigh Concept Leads to the Software Equation 12
Second Key Relationship: Manpower Buildup Equation 15
The Rayleigh Curve As a Process Control Vehicle 16

Part I What Software Stakeholders Want 19

Chapter 1 Some Software Organizations Are Doing Very Well 21

MATURITY ASSESSMENTS REFLECT HOPE 24

HAS SOFTWARE DEVELOPMENT MIRED DOWN? 26

WAY DOWN LOW 27

INTELLIGENCE CAN GUIDE US 28

Chapter 2 A Finite Planet Makes Measurement Essential 31

WHAT MAKES METRICS EFFECTIVE? 33
What the Right Metrics Are 33
How the Core Metrics Relate 34

THE MANAGEMENT OF SOFTWARE PROJECTS IS VERY DIFFICULT 34

Specifications 35
Capability Maturity Model 35
Process 36

MEASURE WHAT HAS BEEN DONE 36

THESE FIVE METRICS WORK TOGETHER 37

Chapter 3 Integrate Metrics with Software Development 40

METRICS METER LIMITED RESOURCES 40

WHAT IS THE PROCESS? 42
Phase One: Inception 44
Phase Two: Elaboration 45
Phase Three: Construction 48
Phase Four: Transition 50

Chapter 4 "I Want Predictability" 52

THE SOFTWARE SITUATION IS SERIOUS 53

DEPARTMENT OF DEFENSE CALLS FOR PREDICTABILITY 54

THE UNDERLYING REASONS ARE COMPLEX 54

COMPETITION STIRS THE POT 55

THE LIMITS OF THE POSSIBLE 56

THERE ARE CERTAIN LIMITS 60

Part II The Metrics Needed for Effective Control 63

Chapter 5 The Measurement View 65

THE PROFOUND DIFFERENCE 67

ONE FACT IS MINIMUM DEVELOPMENT TIME 68

THE CORE MEASUREMENTS UNDERLIE "GETTING WORK DONE" 68

THE KEY CONCEPTS 69

EXPRESSING THE KEY CONCEPTS IN METRICS 71
Time 71
Effort 71
Quality 72
Amount of Work 72
Process Productivity 73

Chapter 6 Estimating Size as a Measure of Functionality 77

SHANNON'S PATH 78

REPRESENTING THE AMOUNT OF FUNCTIONALITY 81

SIZING FUNCTIONALITY BY CALIBRATION 82

SIZING IMPLICATIONS OF REUSE 83

ESTIMATE SIZE 84
Get the Facts 85
Get the People 86
Get the Data 86
Allow Time 86
Employ an Estimating Method 86

Chapter 7 Penetrating the Software Productivity Jungle 89

FINDING THE RIGHT RELATIONSHIP 90
The Relationship of Time to Size 90
The Relationship of Effort to Size 91
The Software Equation 92

THE EFFECT ON PRODUCTIVITY 92
Conventional Productivity Varies Widely 93
Schedule Is a Factor in Productivity 94
How Conventional Productivity Behaves 94
How Process Productivity Behaves 97

OBTAIN PROCESS PRODUCTIVITY BY CALIBRATION 98

DON'T DO IT THIS WAY! 101

DO IT WITH PROCESS PRODUCTIVITY 101

Chapter 8 Defect Rate Measures Reliability 103

THE FIFTH CORE METRIC 104

SOFTWARE DEVELOPMENT IS IN TROUBLE 106

WHY DO WE COMMIT ERRORS? 107

PLAN PROJECTS TO MINIMIZE ERROR-MAKING 107
Limit Functionality 108
Allow Sufficient Schedule Time 108
Allow Sufficient Effort 109
Improve Process Productivity 109

FIND DEFECTS 110

SOME DEFECTS REMAIN AT DELIVERY 110

Part III Control at the Project Level 113

Chapter 9 Do the Hard Stuff First—Establish Feasibility 115

THE VISION COMES FIRST 116

WHAT TO BUILD 116
Risk 117
Economic Constraints 117
An Endless Task 117

HOW WE GOT INTO THIS LEAKY BOAT 119

THE INCEPTION PHASE ESTABLISHES FEASIBILITY 120
Delimit Scope 121
Select an Architecture Possibility 122
Mitigate Critical Risk 123
A Feasibility Decision, 150 Years Later 123

MAKE THE BUSINESS CASE 124
Where Do the Resources Come From? 126
Were Our Ancestors Shrewder? 127

Chapter 10 Do the Tough Stuff Next—Functional Design 128

THE TOUGH STUFF 129

MESHING THE ACTIVITIES 131

FORMULATING REQUIREMENTS IS NO LONGER SIMPLE 132
Difficulties Fixing Requirements 133
What We Need to Do 134

KEY REQUIREMENTS LEAD TO FUNCTIONAL DESIGN 134

IDENTIFY SIGNIFICANT RISKS? 135

THE BUSINESS CASE AT PHASE TWO 136

SUPPORTING PHASE TWO ITSELF 136

NEARING THE BID DECISION 137

Chapter 11 The Power of the Trade-Off 138

AVOID THE IMPOSSIBLE REGION 140

STAY OUT OF THE IMPRACTICAL REGION 14

TRADE-OFF TIME AND EFFORT 141

TRADE-OFF TIME AND DEFECTS 142

SMALL IS BEAUTIFUL 143

ONCE MORE WITH EMPHASIS: SMALL IS BEAUTIFUL! 144

SELECTING THE SOFTWARE RELATIONSHIP 151
There Must Be a Relationship 152
The Relationship Must Be Reasonably Accurate 153
Management Must Use the Relationship 156

THE BID IS DIFFERENT FROM THE ESTIMATE 158

Chapter 12 Turning Your Range Estimate Into Your Client's Point Bid 159

THE UNCERTAINTY DRIVERS 160

THE TIME-EFFORT CURVE IS UNCERTAIN 160

ESTIMATORS FACE CONSTRAINTS 161

MANAGE ESTIMATING RISK 163
Get Out of the Uncertainty Trap 164
Work with Uncertainty, Not in Defiance of It 166

Chapter 13 The Main Build Leads to Operational Capability 169

STAFF ALLOCATION 171

PROJECTING FUNCTIONALITY 175

DETECTING DEFECTS IN TEST 177

APPLY STATISTICAL CONTROL TO THE CORE METRICS 177

TRACK SYSTEM SIZE 180

TRACK DEFECTS 182

THE ESSENTIAL POINT: CONTROL 185

Chapter 14 The Operation and Maintenance Phase 186

WHERE DOES PHASE FOUR BEGIN? 187

WHAT GOES ON IN THE PHASE? 188
Find and Fix Defects 189
Improve Reliability 189
Adapt the System 190
Modify the System 190

WHEN DOES THE PHASE END? 191

HOW TO ESTIMATE THE PHASE 192

Chapter 15 Replan Projects in Trouble 195

THE AWARD DIFFERS FROM THE ORIGINAL ESTIMATE 196

THE AWARD REFLECTS THE ESTIMATE 199

REPLAN TROUBLED PROJECTS 200

REPLAN PERIODICALLY 201

Part IV Control at the Organization Level 203

Chapter 16 A Telecommunications Company Evaluates Its Software Suppliers 205

LESSONS FROM CONSTRUCTION 206

LACK OF FACTS LEADS TO PAIN 206

LOWER TIER, THE SAME 208

PTT TELECOM BV GETS REAL 209
The PTT Faces Competition 210
The Telecom Considers What to Do 210
How to Do It? 211
Purchasing Alerts Management 212
The Telecom Gets Results 213
Results on Sample Projects 215

Chapter 17 Evaluate Bids on the Facts 217

THE REALITY IS RESEARCH AND DEVELOPMENT 217

GETTING TO THE FACTS 218

THE BIDDER COOPERATES 219
The Evaluators Consider Schedule 219
The Evaluators Consider Effort 222
The Evaluators Consider Reliability 223

THE WORLD OF WILD BIDDING 223
The Vendor Tries to Buy In 224
The Vendor Gold Plates 225

EXPERIENCE SUPPORTS THE FACTUAL APPROACH 225

WHO NEEDS THE FACTS? 226

Chapter 18 Managing a Portfolio of Projects 227

A SENIOR MANAGER CAN MASTER-PLAN 228

GOOD PROJECT PLANS UNDERLIE A GOOD MASTER PLAN 229

WHAT A MASTER PLAN DOES 232
The Master Curve Shows Declining Staff 232
The Master Curve Shows Increasing Staff 233
Control of Project Backlog 233
Coping with Powerful Project Managers 234

PLANNING STAFF AT THE ENTERPRISE LEVEL 234

Chapter 19 Improving the Software Development Process 236

PERFORMANCE COMES FROM THE ENTIRE PROCESS 237
Pre-Phase: Needs 238
Inception Phase (or Feasibility) 239
Elaboration Phase (or Functional Design) 240
Construction Phase (or Main Build) 241
Transition (or Operation and Maintenance) 241
Phases Are Implemented by Workflows 241

PROCESS IMPROVEMENT COMES HARD 242
Process Productivity Rises Slowly 243
Process Productivity Extends Over a Great Range 243

WHY HAS PROCESS IMPROVEMENT BEEN HARD? 244
Trapped in the Economy 245
Need for Client Participation 245
Software Development Is Difficult 246

SOME ORGANIZATIONS ARE IMPROVING THEIR PROCESS 247

THE PLACE TO START IS WHERE YOU ARE 250

Chapter 20 Managing Reusable Components 251

THE FIVE STAGES OF REUSE 252
Hip-Pocket Reuse 252
Reuse from a Repository 252
Product-Line Reuse 254
ERP Systems Are a Form of Reuse 255
Architecture-Wide Reuse 255

WHAT ARE THE ESSENTIAL SUPPORTS FOR REUSE? 257
Architecture Accommodates Components 257
Interfaces Enable Inter-Component Communication 258
Variation Mechanisms Multiply Adaptability 258
Process Provides Locations for Selection 258
Modeling Language Provides the Means 259
Tools Implement These Needs 260
Internet-Hosted Services 260

ESTIMATING IN THE AGE OF REUSE 260
Components from a Repository 261
Product-Line and ERP Reuse 261

ANY METRIC MEASURING FUNCTIONALITY WILL DO 261
Finding That Metric 261
Calibration to the Rescue 262
Fundamental Principle Reinforced 263

ARCHITECTURE-WIDE COMPONENTS INCREASE REUSE 264

FUNCTIONALITY OR PROCESS PRODUCTIVITY? 264
Size Adjustment Seeks "Effective" Size 266
Second Approach Adjusts Process Productivity 266

Chapter 21 Metrics Backstop Negotiation 269

NEGOTIATION BRIDGES GAPS 270
The Principal Gaps 272
Different Interests at Stake 273
A Job for Negotiation Man! 273

THE CORE METRICS SUPPORT NEGOTIATION 274
Software Development Depends on Negotiation 276
Negotiation Depends Upon Metrics 277

Chapter 22 Motivating Participants in the Software Development Process 278

PEOPLE ARE THE ULTIMATE RESOURCE 279
Hire Good People 279
Build Organizations 280
Keep People 280
Give People Time 281

FIVE ESSENTIAL MOTIVATING FACTORS 282

EXTENDING ESTIMATING WISDOM MORE WIDELY 282

TEN GREAT TRUTHS 284
The Five Great Truths About Metrics 284
The Five Great Truths About Software Development 285

Appendix A Good Enough Is Better Than Floundering 287

Appendix B Behavior of the Core Metrics 290

STUDY RESULTS 291
Size 292
Effort 293
Time 294
Staff 295
Process Productivity 296
Mean Time To Defect 297

WILL THE NEXT PERIOD REVERSE AGAIN? 298

Bibliography 301

Index 305


Introduction

Processes, methodologies, and methods for developing software have become the center of much activity in recent years. Today's practices for developing software are much more effective than those historically employed. However, these practices are complex, and they rely heavily on software tools. Neither processes nor tools are easy to adopt. Mastering them takes time over a period of years, and the measurement of improvement year by year keeps the ever-changing goal in management's sights.

Moreover, a process improvement effort does not rest solely on the enthusiasm of a solitary developer here or there. It encompasses the entire software organization and even its clients, in-house or out. A program of this magnitude depends on the understanding and support of the management structure—not only at the beginning, when enthusiasm runs high, but also over the long haul.

But what can sustain such interest, if not wild enthusiasm, over time without end? Let us look to the history of business for guidance. Double-entry bookkeeping did not arrive with the first Homo sapiens, tens of thousands of years ago. It was invented rather recently—in fourteenth-century Venice. Bookkeeping is said by some to be the epitome of dullness. Yet for six centuries, the principal output of the accounting process—the profit metric—has maintained the interest and excited the daily effort of businesspeople. It is a way of keeping score in the great game of business.

Software development is part of this great game. At some point, profit tells the participants whether the software game is going well. Unfortunately, profit comes rather late in the game. The project may have crashed and burned long before the bean counters' profit-and-loss statement reveals that the company is wallowing in loss.

Therefore, software development organizations need a more immediate measure of how they are doing. This measure is especially needed to motivate senior management to persist in its support of process improvement. We need this measure to enable project managers and the developers themselves to persist in their efforts to control software development. We need it to enlighten clients and users.

Some approaches use periodic assessment to motivate improvement, such as the Capability Maturity Model (CMM) of the Software Engineering Institute (SEI) and specification 9001 of the International Standards Organization (ISO). The attention they have gained supports our belief that many managers feel the need, not only for the guidance that these models provide, but also for the assurance that assessment offers.

Unfortunately, assessments are of necessity periodic, years apart. When performed, an assessment does indeed give a jolt to the pursuit of process improvement, but in the long intervals between assessments, motivation falters. Further, assessments are imprecise. They depend upon the judgment of the assessors, who may be swayed by the artifices of those assessed.

Metrics, in contrast, are continuous, that is, weekly or monthly—as often as measures are made. They are reliable, since they rest upon counting definable elements of software production. They meet the need for immediate control at the project level. They also meet the need to measure process improvement. Being accurate and frequent, they quiet management's nerves—or excite management to action—every week or month.

Many observers proclaim passionately that people are important. Measures don't solve problems, they say, people do. Metrics just get in the way, they add. We agree that people are important—leadership is better than dictatorship, and people still have to solve the problems encountered in software development. Metrics, when poorly chosen, inaccurately collected, and unwisely applied, do upset people and impede problem solving. But measures that are well chosen, accurately collected, and wisely applied do intensify the motivation that improves the process within which people solve problems.


Software development, to be effective, needs an appropriate process. A process, to endure successfully in a world of limited resources and time, has to be measured, evaluated on those metrics, and often redirected as a result. Its relationship to scarce resources is gauged through metrics. The common complaint of overstimulated developers—"Just get out of our way"—fails to grasp these fundamentals.

Our intent in this book is to show that just a few key metrics—five, in fact—are enough to meet these needs. These five metrics provide the equivalent of the historic profit metric that energizes business, but they provide it much sooner—during development, instead of after release. Consequently, the metrics have more of a chance at critical points to motivate senior management and senior stakeholders to continue their support. Simultaneously, the metrics encourage acquisition managers, project managers, and the developers themselves to pursue the often frustrating path of process improvement.

Before describing the structure of this book, we pause to describe the experiences from whence this research sprang.

The Evolution of the Metrics

The ideas expressed in this book are the outcome of Larry Putnam's research and work in software development, first in the Army and then with twenty-five years of operating the consulting firm Quantitative Software Management, Inc. (QSM) of McLean, Virginia. Larry's final years in the Army were particularly productive in developing these concepts:

My Army career roughly coincided with the early decades of the computer age. In a series of assignments, between periods with the troops, I became familiar with the ability of the pioneer computers to perform huge, tedious computations. It was this experience that prepared me to apply computer power, later on, to the metrics of software development.

Early on (between 1959 and 1961), in one of the physics courses I took at the Naval Postgraduate School, I had to do some tedious calculations to a precision of twelve decimal places. The only tool available to us for that kind of work was the desktop mechanical calculator, and I had to hire time on it in town. Tedious experiences like that made computers very appealing when I first encountered them.


Computing Nuclear Weapons Effects

My first encounter with nuclear weapons effects was at the Army's Special Weapons Development Division in the Combat Development Command at Fort Bliss, Texas, where I was stationed between 1961 and 1964. The computer we used was a Bendix G15, which was the size of a refrigerator and had about as much power as a programmable calculator has today. One of my jobs was supervising the preparation of the Army's nuclear weapons selection tables, which commanders used to pick the right weapon for a particular tactical operation. As the weapons developers upgraded the weapons, the Army had to recalculate these tables, each of them several hundred pages in length.

At first we programmed the G15 in Assembly language. Later, I had an opportunity to program the machine in a higher-order language, ALGOL. These were very small engineering programs of perhaps fifty-odd lines of code.

A few years later (in 1966), I needed to do some blast calculations to support the course on nuclear weapons effects I was teaching at the Defense Atomic Support Agency in Albuquerque, New Mexico. Next door, Sandia Laboratories had just received a Univac 1108, the largest scientific computer then available. They had not yet fully loaded it, so they offered time to users on the base, providing the applicants did their own programming. They offered a FORTRAN course for this purpose, and I took it.

As you may expect, when the night operators fed my first deck of IBM punch cards into this giant computer, it immediately kicked me off. The FORTRAN course had failed to teach the procedures for job control cards! After about ten tries, I got my program past that hurdle, only to run into syntax errors in my own program. It turned out to be a lengthy period before I got my program to compile, run, and generate the data I wanted. In the back of my head, however, I lodged a firsthand appreciation for the perils of big-time computer programming. Also lodged there, fortunately, was some ability to apply mathematics and statistics to difficult problems. That would come in handy later.


Computer Budgets in the Pentagon

In 1972, after I'd completed a tour in Vietnam and two years commanding troops at Fort Knox, Kentucky, the time came for duty in Washington. The personnel people, in their inscrutable way, divined that I was eminently qualified to take charge of the Army's automatic data processing budget. I was going to deal with the budgetary process for the Army's procurement of computers and funding of software development programs.

The Army was spending close to $100 million per year developing software to automate its business functions—payroll, inventory management, real property management of bases around the world, mobilization plans, force deployment plans, command and control structure. It was virtually everything that had anything to do with business and logistics. The hardware on which these programs were to run cost another couple hundred million dollars.

As I began this tour, I knew little about software beyond the FORTRAN, ALGOL, and Basic programs I had written. Most of the initial work on these Army business functions had been coded in Assembly language. The Army Computer Systems Command was redoing this Assembly code in higher-order languages, principally COBOL. We were in the midst of completing 50 to 70 systems in the range of 50,000 to 400,000 lines of COBOL when I began to hear those ominous words, "overruns" and "slippages."

I really became aware of the Army's problems with software the first time I went to the budget table across from the people in the Office of the Secretary of Defense. We were looking at the next fiscal year and the five years that followed. To take an example, when the Standard Installation Division Personnel System first became operational the year before, its project organization had 118 people. For the next year, we had projected the count to fall to 116, then to 90 for each of the next five years. Those were numbers that had come up from the field.

"What are these ninety people going to do?" the budgetanalyst from the Office of the Secretary of Defense asked, rea-sonably enough. "Isn't the system finished?"

Well, there was a big silence in the room. I was new. I didn't know the answer. I looked to my right, then to my left, at the long-term civilian employees who had come into the Pentagon with the first computers. They were the acknowledged experts, but they were strangely quiet. Finally, the lame answer dribbled out: "maintenance." Nobody on the Army side of the table could satisfactorily explain what that meant.

"Look, this is a ten-million-dollar item," the budget analystfinally said. "Unless I can get an answer, I'll have to delete it.It's getting late, so let's adjourn for today and reconvene atnine A.M."

We scurried off and called the Army Computer Systems Command, which was responsible for the Personnel System. We waited by the phone into the evening. Finally, the response came, again some lame comment about maintenance, but we knew it would not make sense to the budget people. And it didn't. By 9:15 the next morning, we had lost $10 million.

After the budget meeting, as I walked down the halls of the Pentagon with my boss, a major general from the Corps of Engineers, he mused, "You know, Larry, this business of trying to plan the resources and schedule for software projects is very mystifying. It wasn't like that in the Corps of Engineers. Even early on in a project—the big dams, the waterways—we always had some feel for the physical resources we would need: how many dump trucks, the number of cubic yards of concrete, the power shovels, the people. From numbers like these, we could make a crude estimate of schedule and costs."

We reached the elevators, and he fell silent until we got off on our floor.

Then he continued: "Any time I try to get similar answers on software, I get a dialogue on the architecture of the computer itself, or a little explanation of bits and bytes, or some other irrelevancy. Never anything about how long the work is going to take, how many people it is going to require, what it will cost, or how good it will be at delivery. That's the kind of information we need at our level, here in the Pentagon. That's what we need to come to grips with this business of planning and managing software development."

At this point, he turned and went into his office. He didn't seem to expect any immediate answer from me, but his comments set me off on a line of thinking that has lasted to this day. How do software systems projects behave? Can we model this behavior with a few core parameters? Is there a way to get the answers that senior managers want?

A couple of weeks after this budget disaster, I stumbled across a small paperback in the Pentagon bookstore. It had a chapter on managing R&D projects, by Peter Norden of IBM.1

Norden showed a series of curves, such as the one in Figure I-1, that depicted the buildup, peak, and tail-off of the staffing levels required to move a project through research and development and into production. He pointed out that some of these projects were for software, some were hardware-related, and some were composites of both.

What struck me about the function, which Norden identified as a Rayleigh curve, was that it had just two parameters. One was the area under the curve, which was proportional to the effort applied. (In the case of software projects, effort is proportional to cost.) The other was the time parameter, which related to the schedule.

Figure I-1: A Norden-Rayleigh curve showing the number of staff and the amount of effort required by development projects over time.
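The two-parameter form is simple enough to write down directly. The following minimal Python sketch is our own illustration, not from the book; the 120 person-years of effort and the two-year peak are invented numbers. It generates a staffing profile whose area equals the total effort and whose shape is fixed by the time of peak staffing:

    import math

    def rayleigh_staffing(t, total_effort, t_peak):
        # Norden-Rayleigh staffing rate at time t.
        # total_effort is the area under the curve (person-years);
        # t_peak, the time of peak staffing, plays the role of the schedule parameter.
        a = 1.0 / (2.0 * t_peak ** 2)
        return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

    # Example: a hypothetical project of 120 person-years peaking in year 2.
    for year in range(7):
        print(year, round(rayleigh_staffing(year, 120.0, 2.0), 1))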

I found that I could easily adapt these Rayleigh curves to the budgetary data I had on the Army software projects. We had the number of person-years applied to each project in our budget, for each fiscal year of each project. So, I quickly plotted all the software systems that we had in inventory and under development. From the plots, I could establish the parameters of the Rayleigh curves. Then, from these parameters, I could project the curves out to the end of the budgeting cycle. Within a month, I had about fifty large Army development projects under this kind of control. I was able to do credible budget forecasts for those projects—at least for five years into the future.

1 Peter V. Norden, "Useful Tools for Project Management," Operations Research in Research and Development, ed. B.V. Dean (New York: John Wiley & Sons, 1963).

The next time the budget hearings came around, a year later, we were in the downsizing phase at the end of the Vietnam War. Budget cuts were endemic, and we were asked to cut the application of effort on a number of existing systems. The turnaround time was short; we had twenty-four hours to report on the impact such cuts would have.

Now that I had the Rayleigh curve and an understanding of that methodology, I was able, using a pocket calculator programmed with the Norden-Rayleigh function, to quickly make estimates of what would happen if we reduced the projections for several of the projects. It was easy to show that the aggregate of these cuts would wipe out the Army's capability to start any new software projects for the following three years.

We did not lose any money at that budget meeting.

Applying the Rayleigh Concept to New Projects

Naturally, the next important question was,

How do I use the equations that stand behind the Rayleigh curve to generate an estimate for a new project?

More questions followed:

It's nice to pick up those that are already under way, but is there some way I can find the time and effort for a new project? Is there a way to build a budgeting and staffing profile for getting the work done?

I looked into that. Right away the notion arose that somehow we had to relate the size of the Rayleigh curve—the time and effort it represented—with the amount of function the project was to create. To measure the functionality, we had to ask, How do the people building software for the Army—its in-house developers, like the Army Computer Systems Command and its contractors—think about the functionality they are creating?

I found out that they thought about the lines of code they had to write. They talked a lot about the number of files they were creating, the number of reports they were generating, and the number of screens they had to bring up. I saw that those types of entities were clearly related to the amount of functionality a project had to create. I would have to relate these functional entities to the schedule and effort needed to get the job done.

Here, my experience with analyzing nuclear effects data came into play. I knew that I had to

• measure a number of samples of the activity
• find the pattern in these measurements2

The next year-and-a-half to two years, I spent about a third to half of my daily Army schedule analyzing data. The first set of data was a group of about fifteen to twenty systems from the Army Computer Systems Command. I attempted some mathematical curve-fitting, relating the size of those systems in lines of code, files, reports, or screens to the known development schedules and the associated person-months of effort.

The first approach was to use a simple regression analysis of functionality, expressed in lines of code, as the independent variable; person-months of effort served as the dependent variable. I used the same approach with schedule.

Next, I did some multiple regression analysis in which I related effort to combinations of lines of code, files, reports, and screens. The statistical parameters that came out showed that these relationships might be useful for predictions, but they were not extraordinarily good fits. Certainly, more work and investigation were needed before any conclusions could be drawn.
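A sketch of that first, simple-regression approach in modern terms might look like the following Python fragment; the project data here are invented for illustration, not the Army, Mitre, or IBM numbers:

    import numpy as np

    # Hypothetical completed projects: size in lines of code, effort in person-months.
    size = np.array([50_000, 120_000, 200_000, 320_000, 400_000])
    effort = np.array([180, 600, 1_100, 2_300, 3_200])

    # Fit effort = c * size**b by least squares in log-log space,
    # with size as the independent variable and effort as the dependent variable.
    b, log_c = np.polyfit(np.log(size), np.log(effort), 1)
    print(f"effort ~ {np.exp(log_c):.4f} * size^{b:.2f}")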

By this time (1975 to 1976), I had been in contact with other investigators in this area. Judy Clapp from the Mitre Corporation had done some studies on ten to fifteen scientific and engineering systems that were being built for the Electronics Systems Division of the Air Force Systems Command at Hanscom Air Force Base. C.E. Walston and C.P. Felix at IBM Federal Systems Division had published a paper in the IBM Systems Journal,3 having amassed a database of about seventy-five projects that gave a good feel for a range of different parameters related to software. All of this information was very useful toward establishing relationships between lines of code, pages of documentation, time, effort, and staffing.

2Lawrence H. Putnam and Ware Myers, Industrial Strength Software: Effective Management Using Measurement (Los Alamitos, Calif.: IEEE Computer Society, 1997), p. 5.

In trying to do this analytical work, I had to go back about twenty years, to my academic training at the Naval Postgraduate School, to refresh my memory on statistical analyses, working with data, and drawing logical inferences and conclusions. I had to do a lot of relearning to polish skills that had become very rusty from years of neglect.

The Rayleigh Concept Leads to the Software Equation

One promising experiment employed multiple regression analysis to relate the size of the systems in lines of code to the schedule and the person-months of effort applied. I did these curve-fits first with the Army data, then with the Electronics Systems Division data, followed by the IBM data. I was lucky in that I got some very nice fits in about twenty of the Army data systems.

Concurrently, I did some theoretical work on integrating Rayleigh curves. I tried to establish the parameters of integration from a little bit of the historic data from the Army and IBM. I found good consistency in generating the key parameters for the Rayleigh equation:

• the work effort (area under the curve)
• the schedule parameter (distance along the horizontal axis)

These different, independent approaches to getting a parameter-estimating equation were leading me in the same direction and producing similar results. What ultimately fell out is what I now call the software equation. It related the amount of function that had to be created to the time and effort required. It originally looked like this:

Quantity of Function = Constant x Effort x Schedule

3C.E. Walston and C.P. Felix, "A Method of Programming Measurement and Estimation," IBM Systems Journal, Vol. 16, No. 1 (1977), pp. 54-73.


In the process of fitting a curve to the data representing a number of systems, I made three significant observations:

1. A constant was generated.
2. Effort and time were both raised to powers.
3. Effort and time were present in one equation.

Constant. The fact that a constant was generated in the process of curve-fitting is not in itself significant. A constant is always generated in this process. When you generate a line or a curve from data on a number of instances, you will find that you need a constant to balance the ensuing equation. The point of importance here is that this constant is not invented to balance the equation. It is an outcome of the historic data. Consequently, it has some kind of relationship to the data from which it originated.

Parenthetically, we should note that this constant is not the single, unchanging number its name implies. It is actually a parameter. Although in any given instance, such as a particular project, it is a single number, it may be different for each project, depending on the associated facts.

I thought a lot about what this relationship might represent, what the physical meaning of this parameter might be. Somehow, it seemed to be related to the efficiency of the development organization or the level of technology the organization was applying to its development practices. That is, organizations that expert opinion believed to be more advanced fell heir to a larger parameter.

The first name I used to describe this empirically determined parameter was Technology Factor. I used that term in the first papers I published with the IEEE Computer Society, between 1976 and 1977. I have continued to use that parameter to represent the efficiency of the software development organization. Over the years, I have renamed it several times:

• Technology Factor
• Technology Constant
• Productivity Constant
• Process Productivity Parameter


The latter is probably the most descriptive term for the real relationship of this mathematical parameter to the software development process.

Powers. In the process of curve-fitting, I found that exponents were associated with both the time and effort parameters. The presence of these exponents means that the software equation is nonlinear. This, in turn, signifies that the software development process is not simple or linear; it is complex, or nonlinear. Well, in view of the difficulty we have always had with it, that complexity is not a great surprise!

Effort and time together. The curve-fitting brought effort and schedule together in the same equation. In other words, they influence each other. They are not independent entities. This conjunction leads to some possibilities in project planning, such as the "time-effort trade-off" that we will explore at length in Chapter 11. The additional fact that both effort and time carry exponents has further implications as well.

This work was the genesis of my software equation, which we still use today. Though originally derived by statistical analysis, the data I analyzed was itself real. Since then, the QSM software equation has been applied to tens of thousands of projects that were also real. The result: It has proven to be a very good measurement and estimating tool for more than twenty-five years.

The QSM software equation is a macro model that links the amount of function to be created to the management parameters of schedule and the effort required for production. The empirically determined constant, or process productivity parameter, represents the productive capability of the organization doing the work. This equation brings together four of the management-oriented measures that the major general from the Engineer Corps yearned for, now some thirty years ago.

The presence of the process productivity parameter suggests that the software equation provides a very good way to tune an estimating process. If you know the size, time, and effort of your completed projects, you can calculate your process productivity parameter. Then you can use this calculated parameter in making an estimate for a new project. So long as the environment, tools, methods, practices, and skills of the people have not changed dramatically from one project to the next, this process of playing back historic data can serve as a very useful, simple, straightforward calibration tool.
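A minimal calibration sketch in Python follows, assuming the widely published form of the QSM software equation with exponents of one-third on effort and four-thirds on time (and with any additional factors folded into the constant); the project numbers are invented:

    def process_productivity(size_sloc, effort_py, time_yr):
        # Back out the process productivity parameter from a completed project,
        # assuming size = C * effort**(1/3) * time**(4/3).
        return size_sloc / (effort_py ** (1.0 / 3.0) * time_yr ** (4.0 / 3.0))

    # Calibrate on a finished project ...
    C = process_productivity(size_sloc=75_000, effort_py=30.0, time_yr=1.5)

    # ... then reuse the same C to estimate effort for a new project of known
    # size, once a schedule has been chosen.
    new_size, new_time = 120_000, 2.0
    new_effort = (new_size / (C * new_time ** (4.0 / 3.0))) ** 3
    print(f"C = {C:.0f}, estimated effort = {new_effort:.1f} person-years")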


Second Key Relationship: Manpower Buildup Equation

The other relationship that emerged in these studies was the direct one between time and effort. Clearly, these two were parameters in our Rayleigh equation. But was there anything we could learn from the basic data as to the relationship between the two? Again, more curve-fitting. There was a distinct relationship:

Effort is proportional to schedule cubed.

This relationship had been noted earlier by Felix and Walston and several other investigators who had done research in software cost estimating.4

This finding was especially important because it gave me the basis for making estimates. I had two equations and two unknowns (time and effort). The software equation involved size, time, effort, and the process productivity parameter. The second equation linked effort with development time. With two equations, one could solve for the two unknowns.

The second equation, of course, also required some parametric determinations. We found a parameter family that seemed to relate to the staffing style of the organization. Organizations that use large teams of people tend to build up staff rapidly. Rapid buildup produces a high value for the ratio of effort divided by development time cubed.

Organizations that work in small teams generally take longer to complete development. Small teams are typical of engineering companies that tend to solve a sequential set of problems one step at a time. For such companies, I saw that the relationship of effort divided by development time cubed produced a much smaller number for the buildup parameter. This observation told us that organizations adopted different staffing styles. This parameter was actually a measure of the manpower acceleration being applied to a software project.

It became evident in studying these different types of organizations that large-team organizations tend to finish their projects a little bit faster than small-team organizations. The latter took a little bit longer to complete their work, all other things being equal. This finding suggested some sort of trade-off between team size and how long it takes to get the work done.

4Ibid.
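How the two relationships combine into an estimate can be sketched in a few lines of Python; the algebra simply substitutes the buildup equation into the software equation. The exponents are the published QSM values, and the values of C and D are invented for illustration:

    # software equation:  size = C * effort**(1/3) * time**(4/3)
    # buildup equation:   effort = D * time**3
    # Substituting gives size = C * D**(1/3) * time**(7/3), hence:

    def estimate(size, C, D):
        time = (size / (C * D ** (1.0 / 3.0))) ** (3.0 / 7.0)   # years
        effort = D * time ** 3                                  # person-years
        return time, effort

    t, e = estimate(size=120_000, C=10_000, D=15.0)
    print(f"time = {t:.1f} years, effort = {e:.0f} person-years")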


The Rayleigh Curve As a Process Control Vehicle

The other significant idea that I started working on during my Army years was the notion of using the Rayleigh curve as a process control vehicle for projects while they are under way. The curve projected the planned staffing and effort. As the project progressed, actual staffing and effort could be compared to the plan. Managers could observe deviations from the plan and initiate prompt corrective action.

An extension of the control idea was to take real data as it was happening on a project. If this data differed from the original plan, something had to change. The purpose was to calculate the actual process productivity parameter being achieved. Using it, a new Rayleigh curve could be projected to a new completion date. The new curve would let you dynamically predict effort, cost, and schedule to completion.

I did some early curve-fitting investigations but ran into some problems and snags that for a time prevented this idea from being fully realized. Nevertheless, there was enough work and enough positive results to suggest that the process control application should be pursued.

It had become evident, however, that not many people were interested in dynamic control. Most organizations were having so much trouble coming up with the initial forecast that the idea of learning how to control an ongoing project was not high on their priority list. Trying to reach good solutions in dynamic measurement and control was premature. (However, I did eventually work out the project control aspects, as we report in Chapter 13.)

And now, the book itself, reflecting the experience of another quarter of a century, is divided into four parts. Part I, What Software Stakeholders Want, begins on a positive note with the view that some software organizations are doing very well. They are doing well because they have integrated key metrics into their development process. These practices give them predictability.

Part II covers the five core metrics needed for effective control: schedule time, effort, functionality (as expressed in size), reliability, and productivity. Because these five metrics are related to each other, developers can use known values to predict the others. This provides a base for estimating. Then, they can control the progress of their projects against those predictions. Further, with these measures, they can track improvements in the way they develop software.

Part III is given over to the application of metrics at the project level. This set of chapters applies the metric concepts to the estimation and control of a single project. Estimation and control are important because software development does not in reality begin with programmers sitting down to write code. There are such preliminary maneuvers as finding out what to do. That "what" is the essential basis of the time, effort, and other resources needed to eventually write code that implements the "what" successfully.

Part IV extends the employment of metrics to control that resides above the project level. The general idea is that the acquirers of software development capability can employ these metrics to guide their relationship with development organizations. Higher levels of management can use them to manage a portfolio of projects and to guide their employment of reusable components. Organizations can gauge improvement by tracking these metrics.


Chapter 3
Integrate Metrics with Software Development

People are important, and people have to solve the intricate problems encountered in software development. It is equally true that collaborative leadership is better than command management, especially in knowledge work, for people do not solve problems on command—at least not well. It is also true that a software development process, clumsily applied, can get in the way of what people are trying to do. Moreover, we might as well admit—for it is true—that when metrics are poorly chosen, inaccurately collected, and unwisely applied, people get upset. When people are upset, they solve problems poorly. To put it in a nutshell, people solve problems; metrics provide the schedule time and staff allowance within which people solve problems.

Thus, when pertinent metrics are applied effectively, people become more productive. Metrics make the process of software development more reliable and efficient. The challenge is to work metrics into that process. This chapter begins the exploration of the typical phases of development and the ways metrics can be integrated into the process.

Metrics Meter Limited Resources

As we pointed out in the previous chapter, software development, like everything else on this third rock from the sun, operates on a planet of limited resources that measures time by rotations of its globe. Every rotation carries costs because the people doing the work like to eat every day. That is, we mean to say, neither staff power nor schedule time are free goods.

The reliability of the product isn't free either. People work over a period of time to produce software. That software may be of high reliability—with few defects—because the people had sufficient time to avoid or correct the defects. However, software released with many defects merely transfers that working time, usually multiplied one hundred-fold, to the users, the help-line crew, and the maintenance staff.

Staff, schedule time, and the number of defects represent three of the five core metrics. The fourth core metric is the amount of function contained in a software product. It is commonly measured in lines of source code or in function points. The fifth core metric is the productivity level—or process productivity—of the project. But when do we measure the five core metrics: size, productivity, time, effort, and reliability?

In the beginning, the software project is just a gleam in some dreamer's eye, and there is nothing to measure. At the end, we have a product and can count the lines of code, but at that point, the count is of little value. Measurements would have come in handy earlier, when we were trying to estimate the time and effort. It is evident also that we need metrics at many in-between points. That is, between the start and the end of the software development process. We need to have metrics integrated into that process. The work has to be measured and evaluated on those metrics, and often redirected as a result.

Thus, development operates through process. Process, in turn, operates through resources. The key resources, time and effort, are scarce. In consequence, they have to be metered out to projects. That metering is properly the province of a measurement system, or metrics for short.
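In practice, the five core metrics amount to a small record kept for each project. A minimal Python sketch follows; the field names, units, and values are our own illustration, not the book's notation:

    from dataclasses import dataclass

    @dataclass
    class CoreMetrics:
        size: int                    # amount of function, e.g., source lines or function points
        process_productivity: float  # calibrated from completed projects
        time_months: float           # schedule time
        effort_pm: float             # effort in person-months
        defects: int                 # defects found, the reliability measure

    project = CoreMetrics(size=85_000, process_productivity=12_000.0,
                          time_months=14.0, effort_pm=160.0, defects=230)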

That is the nub of the metric side of the argument. Unfortunately, developers in the trenches often see the situation quite differently. They see a rather amorphous project, full of problems that may take a long time to penetrate. They see a schedule imposed by upper management or clients with little grasp of the problems still to be unearthed. They see a staff of inadequate size with junior members inexperienced in the problems to come. To them, that too-short schedule and inexperienced staff are the "metrics" imposed on the project. And, of course, in that situation, these "metrics" are what pass as the outcome of management's stab at the time and effort metrics.


That word "stab" is the key to a happier metric future. Supposethat instead of "stabbing" more or less blindly at the metrics, manage-ment were able to reach these metrics as the outcome of an intelligentmethodology. That is, the time and effort metrics were well suited tothe needs of the developers. The productivity level at which the staffcan function was well established. Under this scenario, developerscould be happy with the metrics offered with the project.

It is evident that developers are unhappy, not with metrics per se, but with lousy metrics. In a world of finite resources, metrics have to be an integral part of an effective software process. But they don't have to be lousy.

What Is the Process?

An early representation of the software development process was the waterfall model, summarized in the first column of Table 3-1. Many understood the model to prescribe a one-way path through the workflows of software development: requirements capture, analysis, design, and so on, as shown in the table. In truth, that understanding was based on the first diagram of workflows in Winston Royce's landmark paper, "Managing the Development of Large Software Systems." If readers of the paper had turned the page, they would have seen that the second diagram shows feedback arrows from each workflow to the preceding one. "As each step progresses and the design is further detailed, there is an iteration with the preceding and succeeding steps," Royce wrote.1 Really, a quite modern attitude! Despite Royce's iterative view, many software people still consider his model to be a sequential, single-pass flow of work—not an iterative process.

The U.S. Department of Defense (DoD) divided the software problem into four phases: feasibility study, high-level design, main build, and operation and maintenance. These phases were convenient for managing contracted parts of the development. With little hard knowledge to go on, the first two phases could be handled as level-of-effort contracts. By the time the high-level design was completed, the knowledge was on hand for a fixed-price contract.

Developed in 1999, the Unified Process brought together the previous processes or methodologies of Ivar Jacobson, Grady Booch, and James Rumbaugh, each already a well-established methodologist. Although divided into four phases like the Department of Defense's version, the phase names are quite different. The four phases are named not in terms of their content, but in terms of their position in the development sequence. Within the Elaboration phase, for example, the Unified Process locates a number of activities involved in getting ready to build the system in the next phase.

1 Winston W. Royce, "Managing the Development of Large Software Systems," Proceedings, IEEE WESCON (August 1970), p. 2.

Table 3-1: The waterfall model of software development, originating in the 1960's, listed what we now call workflow activities. It was followed by the four phases of the Department of Defense process. The Unified Process, pulled together in 1999, redefines the four phases and assigns them new names to make clear that they are something different. In the final column, a survey author, attempting to come up with a generic set of names, confuses workflows and phases.

Waterfall2: Requirements, Analysis, Design, Coding, Testing, Operations

Dept. of Defense3: Feasibility Study, High-Level Design, Main Build, Operation and Maintenance

Unified Process4: Inception, Elaboration, Construction, Transition

Generic5: Initiation, Conceptual, Requirements, Analysis, Design, Construction, Deployment

The generic set, shown in Table 3-1, presents a combination of sequence names, such as Initiation, and workflow names, such as Analysis, that is confusing. The confusion arises from our long immersion in the waterfall model. If you are going to go through the workflow activities (those listed in the first column of the table) only once—in waterfall fashion—process and workflows are the same. However, in the more modern sense of the spiral model or iterative development, which assume repeated passes through workflows instead of a single pass, there is a distinction between process and workflows. For example, if a team decides that it needs to develop a prototype in the first phase, it has to go through an abbreviated version of requirements capture, analysis, design, and coding to get to the prototype. In other words, part or all of the workflows may occur in any phase. There is a distinction between phase and workflow, and the Unified Process grasps this difference.

2Royce, loc. cit., pp. 1-9.
3Lawrence H. Putnam and Ware Myers, Measures for Excellence: Reliable Software on Time, Within Budget (Englewood Cliffs, N.J.: Prentice Hall, 1992).
4Ivar Jacobson, Grady Booch, and James Rumbaugh, The Unified Software Development Process (Reading, Mass.: Addison-Wesley, 1999).
5Ellen Gottesdiener, "OO Methodologies: Process & Product Patterns," Component Strategies (November 1998), pp. 34-44.

In the following sections, though, we describe each phase of the software development process in terms of the Unified Process model, and we note that the DoD model, or some version of it, is widely employed.

Phase One: Inception

It would be nice to know, before spending a few million dollars, that the proposed system is technically feasible and that there is a business case for it. In the very beginning, when all you have is a high-level executive waving his arm broadly and saying, "Wouldn't it be super to have a single system that integrates all our operations?" you can't estimate time and effort. You don't have enough information to make even a rough estimate of the size of the necessary software.

You first have to take three preliminary steps: You have to break the grand vision down into something concrete enough to study; you have to define what is within the system and what is external to it; you have to figure out if your organization can produce this system that you have roughly defined.

Is the system technically feasible at the current state of the art? This question seems to arise in areas such as the Department of Defense, the Federal Aviation Administration, and other organizations where well-meaning but not technically based leaders dream dreams that reach beyond what is immediately possible. This question also arises in new areas, sometimes called "green fields," in which we have not previously done a project. In more mundane fields, feasibility of many applications is more certain. It is enough to realize that the project is a follow-on or that the developers have worked in this application area before. That experience helps us feel certain that the work is at least feasible.

Even if a proposal is technically feasible, though, it may not be appropriate for our particular organization at this time. We may lack certain skills needed by the project. Our organization may not have access to the needed funds. The product may not be consonant with our company's core competence. We explore these issues and others in greater depth in Chapter 9. In the meantime, Figure 3-1 illustrates the position of Phase One at the beginning of preliminary work on the project.

INTEGRATE METRICS WITH SOFTWARE DEVELOPMENT 45

Figure 3-1: The Phase One (Inception or Feasibility) teamdoes enough work to justify the move intoPhase Two.

Several of the key decisions made in the first phase should rest on a background of reliable metrics. Even the rough estimate of system size that is possible in this phase depends upon having readily available knowledge of the size of past systems. Coming up with the very rough estimate of construction time, effort, and cost requires knowing how to turn that size estimate into those numbers.
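
The kind of conversion this requires can be sketched briefly. As a hedged illustration only, not the authors' own procedure, the fragment below assumes the commonly quoted Putnam-style form of the software equation, Size = PP x (Effort/B)^(1/3) x Time^(4/3), and solves it for effort. The size, process-productivity, and schedule figures are hypothetical placeholders, not calibrated data.

    # A minimal sketch, assuming the commonly quoted Putnam-style software
    # equation, Size = PP * (Effort/B)**(1/3) * Time**(4/3), with effort in
    # person-years and time in years. All numbers below are hypothetical.

    def rough_effort(size_sloc, process_productivity, schedule_years, b=1.0):
        """Solve the assumed software equation for effort, in person-years."""
        cube_root_effort = size_sloc / (process_productivity * schedule_years ** (4 / 3))
        return b * cube_root_effort ** 3

    size = 75_000   # rough size estimate in SLOC (hypothetical)
    pp = 10_000     # process productivity calibrated from past systems (hypothetical)
    for years in (1.5, 2.0, 2.5):
        effort = rough_effort(size, pp, years)
        print(f"{years:.1f}-year schedule -> about {effort:.0f} person-years "
              f"({effort * 12:.0f} person-months)")

The steep change in effort as the assumed schedule is stretched or compressed is exactly the kind of trade-off among the core metrics that a rough estimate needs to expose.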

Phase Two: Elaboration

In order to bid on a business basis, you need to extend your knowledge of the project further. You need to know with considerable precision what you are going to do. That is the purpose of the Elaboration phase. This phase has three goals:

• Extend the requirements to the point of blocking out most of the architecture.

• Block out most of the architecture.

• Identify the risks that, if not mitigated, could result in excess costs.

Before you can proceed with the architecture, you have to have a good grasp of the key requirements. It is these requirements on which you base the architecture. You do not have to capture all the details of the requirements in this phase. In familiar areas, it may be sufficient to block out known subsystems. In less familiar areas, you may have to carry the architecture to a lower level of structure.

Are there aspects of the architecture you understand so poorly that you cannot count on implementing them within the time and effort estimates toward which you are aiming? If so, you have to explore these aspects further. For example, you may have to find an algorithm, or even code one, to assure that it can be done. You may have to carry this portion of the project through complete requirements capture, analysis, design, implementation, and test, to a working prototype, to be certain not only that it can be done but that it can be done within business constraints. The latter, of course, are metrics of some sort.

When does your estimate become good enough? During this second phase, as you find out more about what it will take to implement the system, your size estimate becomes more precise. Your corresponding time and effort estimates become more accurate. You carry the architecture, design, and risk reduction to the point at which your estimate meets two criteria. First, it falls within the plus and minus limits normally associated with practice in your field of business. Second, it carries an acceptable probability of successful completion. Note that both of these criteria are based on metrics!

Figure 3-2 summarizes what the Phase Two team needs to accomplish.

Just where, during the progress through Phase Two, the size estimate becomes good enough to base a bid on is not a settled issue. Under pressure from management, customers, and the market, this phase is often ended prematurely, before enough is known to support a valid bid. Of course, for organizations with little in the way of a background in metrics, a lack of knowledge of the system being bid makes little difference. They were just guessing anyway. For the organization using metric methods, the size estimate midway through this phase still has too large a margin of error. Its estimates of time and effort at this point will have a comparably large range of uncertainty. Bidding such numbers leads to all the troubles with which we are familiar in the third phase, Construction—time and effort overruns, poor quality, even cancelled projects.

Figure 3-2: The Phase Two, Elaboration, team carries the exploration of the project to the point of supporting a bid.

As long as bids are based more on executive intuition than on metrics, it makes little difference where you end the second phase. For organizations that are using better estimating techniques and learning how to make more precise estimates, it is more important to carry the second phase to the point of having an accurate size estimate.

Well, that sounds reasonable, you may be thinking. Then, on further thought, you ask, "Who is going to finance this up-front work?" Business units sometimes like to think they are passing costs to some other business unit. Between companies, the one requesting a bid likes to think it is passing the cost of what it ought to absorb to the software contractor. Similarly, within a company, a department needing software sometimes tries to pass the cost of planning that software to the software department. If these little "successes" result in inadequate attention to Phase Two, it is the using company or department that eventually suffers from the poor software that results. Thus, whoever finances Phase Two, the using organization needs to make sure that the financing achieves its goal: providing the schedule and effort funding required to complete Phase Three, Construction, successfully.

Extending Phase Two long enough to reach these goals may delay the beginning of Construction. Psychologically, this deferral can be hard to take. We all like to see that construction is under way. People like to count lines of completed code. However, when construction is started prematurely, developers may try to proceed under an inadequate bid, that is, in too short a time with an inadequate staffing level. That is bad from both a business standpoint and a technical standpoint. In mid-construction, developers realize that the architecture does not entirely satisfy the requirements, that unmitigated risks upset the schedule, and that necessary changes are costing more money than is available.

One way to reduce these problems is to keep the project length within bounds. Don't try to set up a five- or ten-year project that will solve all your problems for all time. Too many factors—requirements, environment, reusable components, hardware, and operating system platform—are certain to change over such a long period of time. Instead, try to set up a flexible architecture within which you can set up a series of short projects. You should probably hold most projects within a two-year time scale. The trick is to design an architecture that is extendable into subsequent releases and generations of the product.

In this second phase, Elaboration, a measurement system is even more critical than in the first phase. It is this phase that, by definition, concludes with a business bid. A bid is the supreme metric that governs the project through the Construction Phase and the Transition Phase. But a bid does not spring into existence from nothing, like lightning from the brow of Zeus. Rather, it is built up from lesser metrics, like size, productivity level, and reliability.

Phase Three: Construction

As we mentioned at the beginning of this chapter, people solve problems; metrics don't. However, metrics can help people solve problems. There have to be enough people over time, and that is a metric known as effort, measured in staff-hours or staff-months. There has to be a sufficient period of time, and that is another metric, schedule time, measured in weeks, months, or years. Moreover, users of software generally want the people to solve the problems adequately—we might hang the sobriquet "quality" on that desire. There are many aspects of quality, some not readily reducible to a number. Reliability is quantifiable, however, and reflects to some extent the other aspects of quality. In the form of defect rate (or its reciprocal, mean time to defect), reliability is the third key metric.
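
To make the reciprocal relationship concrete, here is a tiny sketch with invented figures, not data from the book:

    # Illustration only: defect rate and mean time to defect (MTTD) are reciprocals.
    # Hypothetical figures: 12 defects observed in 480 hours of system operation.
    defects_found = 12
    operating_hours = 480.0
    defect_rate = defects_found / operating_hours   # defects per operating hour
    mttd = 1.0 / defect_rate                        # hours of operation per defect
    print(f"defect rate = {defect_rate:.3f} defects/hour; MTTD = {mttd:.0f} hours")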

Therefore, the time and effort implicit in the business bid provide the problem-solving people with the effort and time they need in the third phase to construct software to an acceptable level of reliability and quality. Ergo, metrics are critically important to the success of the third phase.

In addition, metrics play a second role in the Construction Phase: control. That is an ominous word—nobody likes to be controlled. We would all like to be free spirits! But remember, we live in a world of limited resources, and control is what keeps us safely within those limits.

Given the initial estimated metrics for effort (staff-months), schedule, and defects at the end of the second phase, we can project the rate at which effort will be expended and defects will be discovered during the Construction Phase. We can also project the rate at which function, such as lines of code, will be completed. In addition to projecting the average rate of occurrence of these metrics, we can project bands above and below the average.
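
One hedged sketch of such a projection: effort profiles of this kind follow a Rayleigh-shaped curve (after Norden), so a plan and a band around it might be laid out roughly as below. The total effort, the month of peak staffing, and the flat plus-or-minus 10 percent band are invented for illustration; real control bands would come from an organization's own historical scatter.

    import math

    def rayleigh_rate(month, total_effort, peak_month):
        """Planned effort-expenditure rate (staff level) in a given month,
        assuming a Rayleigh-shaped profile that integrates to total_effort."""
        return (total_effort / peak_month ** 2) * month * math.exp(
            -month ** 2 / (2 * peak_month ** 2))

    total_effort = 300.0   # staff-months from the Phase Two estimate (hypothetical)
    peak_month = 10.0      # month of peak staffing (hypothetical)
    band = 0.10            # plus-or-minus 10 percent band, purely illustrative

    for month in range(1, 19):
        plan = rayleigh_rate(month, total_effort, peak_month)
        print(f"month {month:2d}: plan {plan:5.1f} staff, "
              f"band [{plan * (1 - band):5.1f}, {plan * (1 + band):5.1f}]")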

We have now established the basis for "statistical control" of the key variables in the Construction Phase. Statistical control means that we count up the actual number of staff members, the actual lines of code produced, and the actual number of defects found during each week and see if these actuals fall within the statistical-control band. If they do, the project is proceeding according to plan. If an actual falls outside the control band, it is out of control. Work is not going according to plan. Something is probably wrong. It is a signal to look for the problem.
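
A minimal sketch of that weekly check, assuming simple fixed-percentage bands around the planned values; all of the figures below are made up:

    def check_week(metric, actual, plan, band_fraction=0.10):
        """Report whether a weekly actual falls inside its control band."""
        low, high = plan * (1 - band_fraction), plan * (1 + band_fraction)
        inside = low <= actual <= high
        status = "within plan" if inside else "outside band - look for the problem"
        print(f"{metric:14s} actual {actual:10.1f}  plan {plan:10.1f}  {status}")

    # Week 12 actuals versus plan (hypothetical figures)
    check_week("staff", 23, 25.0)
    check_week("SLOC done", 9_400, 12_000)
    check_week("defects found", 31, 28.0)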

Under statistical control, we discover the problems as they happen, every week. They are fresh. The people who were involved are still around. The traditional alternative is to discover such problems in system test. At that point, there is little project schedule left. Some of the people are gone, while others no longer remember very clearly what they did—perhaps months ago—that now turns out to be wrong.

The Construction Phase is where most of the work of software development is accomplished. It is where most of the money is spent. Metrics are important in this phase for two reasons: to provide adequate resources and time for the task, and to confirm that the resources are being utilized according to plan.

The alternative is to fly by the seat of your pants. That does not work in the air. In a cloud, the seat of your pants is deceptive. Instrument-flight instructors preach, "Depend on your instruments, not your pants." We preach a similar message: "Depend on your metrics, not your hopes and fears."

Phase Four: Transition

The Construction Phase ended with system test conducted in-house, resulting in a system state known as initial operational capability. However, the system has not yet operated in a user or customer environment. That is the province of the Transition Phase. This phase oversees the movement of the product into the user environment, which may differ slightly from the in-house environment.

In a broad sense, there are two types of products. The first, the shrink-wrapped product, goes to many customers. As a start, the vendor may conduct a beta test (usually toward the end of the Construction Phase) with a representative selection of those users. The second type of product goes to one customer, often for installation at one site, sometimes at several sites. The customer usually conducts an acceptance test in its own environment.

In general terms, there will be two types of feedback from the user environment:

• Defects will turn up, either due to the new environment or because they were undetected during the in-house tests. The software organization has to correct them. In some cases, it may be possible to hold them for the next release.

• The software may fail to meet some of the needs the users now identify. Again, the software organization may be able to modify the system to accommodate these needs, at least those that it can readily add within the current structure. In other cases, the modification will be added to the list for consideration in the next release.

The amount of effort and schedule time needed by the Transition Phase depends upon the extent to which users encounter new needs and defects. If requirements were carefully captured in the early phases, if users were consulted during analysis and design, if users tried out operating prototypes, if they viewed operating increments early in the Construction Phase, and if this early feedback resulted in modification of the initial requirements to accommodate needs discovered along the way, then few new needs should turn up in the Transition Phase. At best, however, a software organization will find it difficult to estimate the amount of modification work that will feed back from users. Keeping a record of what happened on previous projects can serve as a guide.

If a project has followed good inspection and testing practices, the system will reach initial operational capability with fewer remaining defects. Even so, some defects will remain, and there are metric methods for estimating how many remain at this point. Then, as the users operate the system in their own environment, they gradually unearth these defects. They or the system builder correct the defects and revise the estimate of the number still remaining. Using this methodology, it is possible to make a rough estimate of the amount of rework to be expected from defect discovery.
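
A small sketch of that bookkeeping, with invented numbers, just to show the shape of the calculation; neither the latent-defect estimate nor the rework cost per defect comes from the book.

    remaining_estimate = 120        # defects estimated still latent at IOC (hypothetical)
    rework_hours_per_defect = 6.0   # average rework cost per defect (hypothetical)

    monthly_finds = [34, 22, 15, 9]  # defects reported by users, month by month
    for month, found in enumerate(monthly_finds, start=1):
        remaining_estimate -= found
        rework_hours = remaining_estimate * rework_hours_per_defect
        print(f"month {month}: {found} found, about {remaining_estimate} still latent, "
              f"roughly {rework_hours:.0f} hours of rework still ahead")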

On the whole, however, the core metrics—size, effort, time, productivity, and defect rate—are less applicable in the Transition Phase than in the earlier phases. The best guide is the experience of navigating previous projects through this phase. On that basis, the software organization can allocate a certain amount of time and effort. It can then make modifications and correct defects within that budget while trying to defer more time-consuming modifications to the next release or to the maintenance budget.
