
RAC is a DoD Information Analysis Center Sponsored by the Defense Technical Information Center

The Journal of the Reliability Analysis Center
Second Quarter - 2004

INSIDE

3   Product Assurance Capability (PAC) Quantified
7   The Reliability Implications of Emerging Human Interface Technologies
14  A Strategy for Simultaneous Evaluation of Multiple Objectives
19  SOLE 2004 - "Future Logistics: The Integrated Enterprise"
19  RAC Product News
21  Future Events
22  From the Editor
22  RMSQ Headlines
23  Upcoming November Training

Not All Lessons Learned Systems Are Created Equal

By: Todd Post, Editor of ASK Magazine

Introduction
NASA's Academy of Program and Project Leadership (APPL) has attempted a unique way of capturing and disseminating lessons learned. ASK Magazine, an APPL product, collects lessons learned in the form of stories told by managers and other project practitioners. Published bimonthly, ASK disseminates the lessons inside and outside of NASA via a print publication and a web site (Reference 1).

The APPL team members working on ASK capture the lessons by inviting project managers, mostly from NASA, but also from other government agencies, industry, and sometimes academia, to tell their stories about what happened on projects, and to reflect on what they have learned while telling the story. Stories usually focus on specific subject matter: how to use prototyping as a tool to communicate better with a customer, how to let go of a popular person on a project without impacting the performance of the teammates and jeopardizing the project, or how to tailor a review so that it can be a learning experience as much as a milestone. And, as these examples show, the stories can be as varied as there are issues to wrestle with on projects.

Each bimonthly issue of ASK contains approximately ten stories, and there are eighteen issues to date (June '04). The print and online versions of the magazine are intended to complement one another. The print edition, an attractive 40-page volume, has a circulation of 6,000 readers, and brings fresh lessons to them every two months.

Figure 1. ASK Magazine Lessons Learned Page
<http://appl.nasa.gov/ask/archives/searchbylesson/index.html>


Often the issues are focused on singular themes, and recent ones have included prototyping, reviews, project handoffs, and software project management. The online edition archives all of the back issues, making ASK lessons available to anyone with an Internet connection.

In November of 2000, the author was hired as the editor of ASK. He was not surprised then to learn that ASK was the first storytelling publication attempting to capture lessons learned. Today, however, he is surprised that it remains the only one to his knowledge. While storytelling now has several enthusiasts in the knowledge management community, there are scarcely more than a few successful examples of how it has been integrated into an organization's culture. The author knows little about the dynamics of other organizations but has thought a lot about storytelling and lessons learned and why NASA, particularly APPL, appreciates the relationship between the two as it relates to project management.

It is APPL's view that a lessons-learned system that prescribes a solution to a project management problem is flawed from the start. There is always nuance. The ASK audience astutely recognizes that no single management problem is identical to others.

What Constitutes a Lesson?
We borrow much of our thinking about lessons learned from Donald Schon, whose books The Reflective Practitioner and Educating the Reflective Practitioner lay the groundwork for our work with storytelling (Reference 2). In much broader terms than we apply to project management, Schon argues, "Reflection-in-action…is central to the art through which practitioners cope with the troublesome divergent situations of practice" (Reference 3).

And this is what stories do. They provide a space for reflection. A project manager who is challenged to bring off a deliverable on time and on budget can listen to one of his peers tell a story about similar challenges. The project manager who reflects on the story he has just heard can compare this to his own experience.

It is worth noting that nowhere in Schon's work does he talk about storytelling as a means of capturing lessons learned. Nevertheless, we do not believe we are skewing his message to suit our paradigm. Our subjects, like Schon's, are practitioners, each of whom "has to see on his own behalf and in his own way the relations between means and methods employed and results achieved. Nobody else can see for him, and he can't see just by being 'told,' although the right kind of telling may guide his seeing and thus help him see what he needs to see" (Reference 4).

In this way, stories are another form of observation. Reading stories of how expert practitioners have solved problems requires even more from the observer. Stories demand we engage with the protagonists. By reflecting on the stories told by other project practitioners, you reframe your experience and think about it against the context of the story being told. That's what makes a story a more gratifying learning experience than many other lessons learned models—because it requires active participation.

Not Any Lesson Will Do
At NASA, lessons packaged as stories are now considered legitimate. This does not reflect, the author would argue, a predisposition towards lessons learned in general. Let us consider the evidence.

In January 2002, the GAO released a report, "NASA: Better Mechanisms Needed for Sharing Lessons Learned," which painted a discouraging picture of knowledge sharing within the agency. "Although NASA requires managers to regularly share important lessons learned from past projects through an agency-wide database," Government Executive reported shortly after the GAO report was released, "only 23 percent of managers surveyed had ever entered information into the system" (Reference 5). NASA managers explained that they neglected the database, known as the Lessons Learned Information System (LLIS), because it was difficult to navigate and it failed to provide them with useful lessons.

Improving the architecture of a database is simple enough. The real problem was that it did not provide useful information. If the system provides value, it's likely to get used regardless of deficiencies in its navigation.

Unfortunately, one inference suggested by the GAO report was that NASA managers don't want to share knowledge. Frankly, the author finds that at odds with what he has seen since he started working with NASA project managers on ASK. Who doesn't recall during the Mars encounters the images inside the control room at the Jet Propulsion Laboratory of people congratulating one another, jumping up and down even, when the Spirit and Opportunity rovers delivered signals of their successful landings—or for that matter, the images of agency members consoling one another as they mourned the loss of the Space Shuttle Columbia crew? Clearly, this is not an organization devoid of camaraderie and shared mission.

The success of ASK—6,000 print issues published bimonthly, another 8,000 people receiving the electronic edition—should not surprise anyone who understands that a lessons learned system that is genuinely useful is a sure winner in any organization. In the case of NASA, they know a good thing when they see it.

The Challenge to Your Organization
There is one thing we should recognize about project work in most organizations: an overwhelming number of people who perform this work are practitioners. Across all levels of project work, people learn quicker, smarter, and far better by reflecting on their workplace experiences than by consulting theory. Either they can do this in the privacy of their own thoughts, or far more effectively, as this author would argue, by reflecting on their experiences with—and listening to the experiences of—their colleagues.

Again, stories stimulate reflection. "How might I address that issue?" asks the practitioner of him or herself upon hearing a colleague tell a story about a workplace challenge. That, in turn, leads to the obvious next step: "How have I handled similar challenges?" This holds true not only at NASA. The author is confident that you have witnessed this, too, in your own organization among all levels of practitioners.

"Because professionalism is still mainly identified with technical expertise," writes Donald Schon in The Reflective Practitioner, "reflection-in-action is not generally accepted—even by those who do it—as a legitimate form of professional knowing" (Reference 6). We should appreciate Schon's insight here, because it may help to explain why it has been so difficult for other storytelling initiatives to get off the ground, even where there are enthusiastic proponents for storytelling within the organization.

There is much to be said for consistency. Our work on ASK Magazine did not come crashing out of the gate with broad acceptance throughout NASA. It was new, it was different, and there was already some cynicism, based on the ineffectiveness of the LLIS, about initiatives aimed at providing lessons to help practitioners to do their jobs better. That we arrive in NASA mailboxes every two months with stories by the "best of the best practitioners" has gone a long way towards winning over skeptics.

Like most fledgling initiatives, we began on a small scale, starting initially as a web-based publication and with a distribution list of a few hundred, mostly NASA project managers who were already happy customers of other APPL products. A print publication followed for marketing purposes, and to address the most common observation about ASK's early issues: "I wish there was a way I could read these stories while I was on the plane."

ASK was fortunate to have a sponsor in APPL Director Dr. Edward Hoffman, whose own work on storytelling with ASK Editor-in-Chief Dr. Alexander Laufer (including a book, Project Management Success Stories) (Reference 7), gave him faith and confidence that with time the magazine would find wide acceptance, and it has. Testimonials about the efficacy of ASK lessons run the gamut from cog engineers to center directors and associate administrators. The push is on throughout the Federal government and across industry to capture knowledge, and find mechanisms like ASK to get that knowledge to the people who need it. And so that's our story. Are you reflecting on yours?

References
1. ASK Magazine can be accessed at <http://appl.nasa.gov/ask>.
2. Schon, D.A., The Reflective Practitioner, Basic Books, New York, 1983; Schon, D.A., Educating the Reflective Practitioner, John Wiley & Sons, Inc., San Francisco, 1987.
3. D.A. Schon, (1983), p. 62.
4. D.A. Schon, (1987), p. 17.
5. <http://www.govexec.com/dailyfed/0202/021102m1.htm>.
6. D.A. Schon, (1983), p. 69.
7. A. Laufer and E. Hoffman, (2000), Project Management Success Stories: Lessons of Project Leaders, John Wiley & Sons, New York.

About the Author
Todd Post is the editor of ASK Magazine, and has published other articles about ASK in Knowledge Management (Dec. '01/Jan. '02), the Knowledge Management Review (March/April '02), and Program Manager (Jan./Feb. '03). He welcomes your comments on this article and about ASK Magazine at <[email protected]>.

Product Assurance Capability (PAC) Quantified

By: Ananda Perera, Honeywell Engines Systems & Services

Introduction
The Product Assurance (Reliability, Maintainability, and Quality Assurance (RM&QA)) programs are an integral part of the contractor (supplier) operations and, as such, are planned and developed in conjunction with other activities to attain the following goals:

a. Recognize RM&QA aspects of all programs and provide an organized approach to achieve them.

b. Ensure RM&QA requirements are implemented and completed throughout all program phases of design, development, processing, assembly, test and checkout, and use activities.

c. Provide for the detection, documentation, and analysis of actual and potential discrepancies, system(s) incompatibility, marginal reliability, maintainability and quality, and trends that may result in unsatisfactory conditions.

The RM&QA program provides for participation, by RM&QA personnel, in all phases of the design, development, and manufacturing process. This effort should include reviews and assessments of: human factors, design, hazard analyses, failure mode and effect analyses, test plans, and procedures. Quantifying a single QA metric is difficult; however, R&M together can be quantified, and the combined metric is called Product Assurance Capability (PAC).

Product Assurance Capability is defined as the combined probability that an item will perform its required functions for the duration of a specified mission profile and that the repair action under given conditions of use is carried out within a stated time interval.

Many times, reliability is represented by MTBF and maintainability is represented by MTTR. These metrics are used to calculate Inherent Availability (Ai), which shows the capability of the end-unit for service. Inherent Availability is the probability that the system/equipment is operating satisfactorily at any point in time when used under stated conditions, where the time considered is operating time and active repair time.

Ai = MTBF / (MTBF + MTTR)


Ai becomes a useful term to describe combined reliability and maintainability characteristics. Since this definition of availability is easily measured, it is frequently used as a contract-specified requirement; however, it is not a good Product Assurance Capability metric.
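As a quick numeric check of the Ai definition, the following Python snippet evaluates the formula for the MTBF and MTTR used in the first row of Table 1 later in this article (20 minutes is converted to hours so the units agree):

```python
# Inherent availability for MTBF = 30,000 hours and MTTR = 20 minutes.
mtbf_hours = 30_000
mttr_hours = 20 / 60   # convert minutes to hours

ai = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"Ai = {ai:.6f}")   # 0.999989, matching Table 1
```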

Reliability and Maintainability (R&M) Design Philosophy
Reliable equipment has a high probability of performing its required function without failure for a stated period of time when subjected to specified operational conditions of use and environment. The operational use and environment, therefore, need to be taken into account at the outset of the design process. The design should also be robust to expected variations in production processes and quality of materials and components.

The ease with which the equipment can be returned to usable condition after failure and the time needed for preventive maintenance are important design criteria. Those items which need to be removed, adjusted, or inspected most often, for whatever reason, should have the easiest accessibility, so maintainability design is significantly reliability-driven but not reliability-dependent.

R&M, then, are related activities that need to be fully integrated into all other project activities. Treating R&M subsequent to design can lead to a situation where unreliability and inferior supportability are discovered at the end of development, with the consequent remedial action causing expense and delay.

Reliability and maintainability drive the logistics support aspects and hence have a significant effect on the life cycle cost of the equipment/system.

R&M General Considerations
R&M design philosophy should be applied at all stages of the project life cycle, from initial conceptual studies through to the In-Service phase. R&M directly affect both operational effectiveness and life cycle cost and merit equal consideration with other parameters such as performance, acquisition cost, and project time scale. This requires that the contractor integrate R&M aspects into each stage of the design activity.

At the conceptual stage, the R&M requirements should be considered at the same time as the performance parameters. They should be justified in terms of operational need (e.g., probability of mission success, available maintenance manpower), so that they will receive due consideration in any subsequent trade-off. As the operational concept develops, the R&M requirements should be reviewed.

The design procedure should:

a. Ensure that an analysis is conducted of the operating and environmental conditions, and also ensure that system and sub-system design specifications incorporate the results.

b. Embody R&M design criteria, and evolve a system that is no more complex than is adequate to satisfy its performance requirements.

c. Ensure that the mechanisms of failure and their effects are thoroughly analyzed and understood, that critical features are identified, and that the design process aims to reduce the effects of failure modes where possible.

d. Utilize materials and components that are procured to approved quality standards and ensure that, in application, they are subject to stresses that are well within their strength/rating capabilities.

e. Take producibility into account, ensuring that, as far as possible, the design is insensitive to the expected variability of the materials, components, and production processes.

f. Generate a system that is easy to test, for which failures are accurately diagnosed and isolated, with a configuration that facilitates easy maintenance and repair under field conditions, including the appropriate level of integrated diagnostic capability (Built-in-Test (BIT)).

The Objective of Quality Assurance (QA)
The objective of Quality Assurance is to provide adequate confidence to the customer that the end product or service satisfies the requirements.

The Quality Assurance policy is to ensure, in conjunction with other integrated project and Product Assurance functions, that required quality is specified, designed-in, and will be incorporated, verified, and maintained in the relevant hardware, software, and associated documentation throughout all project phases, by applying a program where:

• Assurance is provided that all requirements are adequately specified.

• Design rules and methods are consistent with the project requirements.

• Each applicable requirement is verified through a verification program that includes one or more of the following methods: analysis, inspection, test, review of design, and audits.

• Design and performance requirements, including the specified margin, are demonstrated through a qualification process.

• Assurance is provided that the design is producible and repeatable, and that the specification of the resulting product can be verified and operated within the required operating limits.

• Adequate controls are established for the procurement of components, materials, software and hardware items, and services.

• Fabrication, integration, test, and maintenance are conducted in a controlled manner such that the end item conforms to the applicable baseline.

• A nonconformance control system is established and maintained to track nonconformances systematically and to prevent recurrence.


• Quality records are maintained and analyzed to report and detect trends in a timely manner to support preventive and corrective maintenance actions.

• Inspection, measuring, and test equipment and tools are controlled to be accurate for their application.

• Procedures and instructions are established that provide for the identification, segregation, handling, packaging, preservation, storage, and transportation of all items.

• Assurance is provided that operations, including post-mission and disposal, are carried out in a controlled way and in accordance with the relevant requirements.

R&M Engineering Functions and Tasks
An essential task for an R&M Engineer is estimating the precision of an estimate (say, MTBF or MTTR). This is an important task leading to the use of Confidence Intervals.

When we use two-sided confidence bounds (or intervals), we are looking at a closed interval within which a certain percentage of the population is likely to lie. For example, when dealing with 90% two-sided confidence bounds of (X, Y), we are saying that 90% of the population lies between X and Y.

One-sided confidence bounds are essentially an open-ended version of two-sided bounds. A one-sided bound defines the point above or below which a certain percentage of the population lies. Most of the time, one-sided confidence bounds are used for MTBF and MTTR estimates. MTBF is calculated at the lower one-sided limit (why? usually the upper boundary is not known, and if the true MTBF is greater than the "lower" limit, the customer will be "happy"), and MTTR is calculated at the upper one-sided limit (why? usually the lower boundary is not known, and if the true MTTR is less than the "upper" limit, the customer will be "happy"). The Chi-Square (χ2) Distribution can be used to find the confidence intervals of the MTBF or MTTR. When there are no failures in a time period, the Chi-Square Distribution is used to find the MTBF at the lower bound.
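As an illustration of the chi-square approach described above (a sketch, not a procedure prescribed by the article), the following Python snippet uses SciPy to compute a one-sided lower confidence bound on MTBF for a time-terminated test; it also works when zero failures have been observed:

```python
from scipy import stats

def mtbf_lower_bound(total_test_hours, failures, confidence=0.90):
    """One-sided lower confidence bound on MTBF (time-terminated test,
    chi-square method). Valid even when failures == 0."""
    dof = 2 * failures + 2
    chi2_value = stats.chi2.ppf(confidence, dof)
    return 2.0 * total_test_hours / chi2_value

# Hypothetical example: 5,000 hours of test time with 2 failures.
print(f"{mtbf_lower_bound(5000, 2, 0.90):.0f} hours")   # ~939 hours at 90% confidence
```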

1. Reliability prediction is a process of mathematically combining the parts and elements of a system to obtain a single numerical figure that represents the system's probability of success. In reliability prediction, we usually assume that all components are required for successful system operation, resulting in the use of a series reliability model for prediction of system reliability. Since we're using a series model, we can predict such parameters as MTBF (MTTF), but the model should not be used for operational reliability parameters such as MTBCF, unless the effect of redundant components is included in the calculation. The goal should be to try to predict system behavior at least to the extent necessary to identify possible risk areas or areas where the system reliability needs to improve to meet requirements.

One method of reliability prediction still popular in the defense contractor community (popular because a lot of contractors are experienced with using it, not necessarily because it's particularly good) is the Parts Count and Parts Stress method of MIL-HDBK-217. In this method, each generic type of component is assigned a basic failure rate that depends on component type and operational environment. The basic failure rate can be adjusted by multiplying it by π factors that account for presumed component quality, manufacturing learning curve, etc. RAC findings have shown that failures also stem from non-component causes, namely design deficiencies, manufacturing defects, poor system management techniques, etc. The RAC PRISM methodology determines an initial base failure rate based on PRISM component models. This failure rate is then modified with system level process assessment factors to give a truer failure rate prediction.
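To make the series-model idea concrete, here is a minimal parts-count-style roll-up in Python. The part types, failure rates, and π factors are made-up illustrative numbers, not values taken from MIL-HDBK-217 or PRISM:

```python
# Hypothetical parts-count style series prediction (illustrative numbers only).
parts = [
    # (quantity, base failure rate per 1e6 hours, quality pi factor)
    (120, 0.0015, 1.0),   # e.g., chip resistors
    (60,  0.0050, 1.0),   # e.g., ceramic capacitors
    (4,   0.0800, 2.0),   # e.g., microcircuits
]

# Series model: the system failure rate is the sum of the part failure rates.
lambda_system = sum(qty * lam * pi_q for qty, lam, pi_q in parts)  # per 1e6 h
mtbf_hours = 1e6 / lambda_system

print(f"Predicted failure rate: {lambda_system:.3f} failures per million hours")
print(f"Predicted MTBF: {mtbf_hours:,.0f} hours")
```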

2. The mean time to repair (MTTR) is perhaps the most common and most useful measure of maintainability. It is often included in system or product specifications because it's easier to visualize an average than a probability distribution, and the mean is also easier to include in calculations than a distribution function would be. In general, the MTTR of a system is an estimated average elapsed time required to perform corrective maintenance, which consists of fault isolation and correction.

For analysis purposes, fault correction is divided into disassembly, interchange, re-assembly, alignment, and checkout tasks. MTTR is a useful parameter that should be used early in the planning and designing stages of a system. The parameter is used in assessing the accessibility and locations of system components, and it highlights those areas of a system that exhibit poor maintainability in order to justify improvement, modifications, or a change of design. The assessed or estimated MTTR (estimating methods are available in MIL-HDBK-472) helps in calculating the life cycle cost of a system, which includes the cost of the average time technicians spend on a repair task.
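One common way to roll a system-level MTTR up from item-level estimates is a failure-rate-weighted average of item repair times; the sketch below uses made-up numbers purely for illustration and is not a reproduction of a MIL-HDBK-472 procedure:

```python
# Hypothetical system MTTR as a failure-rate-weighted average of item repair times.
items = [
    # (failure rate per 1e6 hours, mean repair time in minutes)
    (50.0, 15.0),    # e.g., power supply module
    (20.0, 45.0),    # e.g., control board
    (5.0, 120.0),    # e.g., chassis-mounted sensor
]

total_rate = sum(lam for lam, _ in items)
mttr_minutes = sum(lam * repair for lam, repair in items) / total_rate

print(f"System MTTR estimate: {mttr_minutes:.1f} minutes")   # 30.0 minutes
```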

3. Testability is a measure of the ability to detect system faults and to isolate them at the lowest replaceable component(s). The speed with which faults are diagnosed can greatly influence downtime and maintenance costs. As technology advances continue to increase the capability and complexity of systems, use of automatic diagnostics as a means of Fault Detection Isolation and Recovery (FDIR) substantially reduces the need for highly trained maintenance personnel and can decrease maintenance costs by reducing the erroneous replacement of non-faulty equipment. FDIR systems include both internal diagnostic systems, referred to as built-in-test or built-in-test-equipment (BITE), and external diagnostic systems, referred to as automatic test equipment (ATE). BIT Effectiveness (BITEFF) is the probability of obtaining the correct operational status of the system using BIT. It is a function of: the total system failure rate (λ), the Fault Detection Capability (FDC), the False Alarm Probability (FAP), and the operating time (T) required to conduct BIT.

BITEFF is expressed by the following mathematical function:

BITEFF = [FDC / (1 + FAP)] · [1 − e^(−λ·T·(1 + FAP))] + e^(−λ·T·(1 + FAP))

In the worst case, as T → ∞, this reduces to the minimum value BITEFF = FDC / (1 + FAP).


If BITEFF is high, repair times and MTTR will be reduced. Of potential concern is the fact that false alarms and removals create a lack of confidence in the BIT system, to the point where maintenance or operations personnel may ignore fault detection indications.
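The short Python function below evaluates the BITEFF expression as reconstructed above (the reading of the original equation is an assumption, and the parameter values are made-up illustrations):

```python
import math

def bit_effectiveness(fdc, fap, failure_rate_per_hour, bit_time_hours):
    """BITEFF per the expression above: starts near 1 for short BIT times and
    decays toward the worst-case floor FDC / (1 + FAP) as T grows."""
    decay = math.exp(-failure_rate_per_hour * bit_time_hours * (1.0 + fap))
    return (fdc / (1.0 + fap)) * (1.0 - decay) + decay

# Illustrative values: 95% fault detection, 5% false alarm probability,
# 100 failures per million hours, 0.1-hour BIT run.
print(bit_effectiveness(0.95, 0.05, 100e-6, 0.1))
print(0.95 / (1 + 0.05))   # worst-case floor for long operating times
```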

Product Assurance Capability (PAC) Model Description
The PAC metric is a combination of Reliability and Maintainability functions based on the Weibull distribution. The R&M equations are shown in Figures 1 and 2. In these equations, Γ(1/β + 1) is the Gamma function evaluated at the value of (1/β + 1). The Mathcad software is used to calculate the R&M values and the PAC values shown in the figures and summarized in Table 1.

Figure 1. Calculation of Reliability Metrics

Figure 2. Calculation of Maintainability Metrics and Product Assurance Capability

The Weibull distribution has gained popularity as a time-to-failure distribution. The Weibull distribution is characterized by two parameters, a scale parameter, the characteristic life, η, and a shape parameter, β. The characteristic life, η, is the same as the mean time to failure when β = 1. Often η is replaced for computational convenience by its inverse, λ = 1/η, which can be defined as the failure rate. The two-parameter Weibull distribution is given by f(t) = (β/η)(t/η)^(β−1) exp[−(t/η)^β], t ≥ 0. The reliability function is R(t) = exp[−(t/η)^β].

One reason for the popularity of the Weibull distribution is that times to failure are better described by the Weibull distribution than the exponential. For physics of failure approaches to reliability, the Weibull distribution is preferred. An advantage of the Weibull distribution is that it represents a whole family of curves, which, depending on the choice of β, can represent many other distributions. For example, if β = 1, the Weibull distribution is exactly the one-parameter exponential distribution. A β of approximately 3.3 gives a curve that is very close to the normal distribution. The infant mortality and wear-out portions of the bathtub curve can often be represented by the proper Weibull distribution. In the three-parameter Weibull distribution, a location parameter, γ, is used to account for an initial failure-free operating period or prior use (e.g., burn-in).

In the R&M functions given in Figures 1 and 2, β = 1 and β = 3.3 are selected, and can be assumed for new equipment. For equipment already in use, Weibull analysis of the failure and repair data needs to be performed to obtain true β values. The calculations of the reliability metrics and maintainability metrics are shown in Figures 1 and 2, respectively. Table 1 summarizes the Ai and PAC metrics.

Table 1. Comparison of Ai and PAC Metrics Using Different Combinations of MTBF, MTTR, t, and T

MTBF (1)    MTTR (2)    Ai             PAC             t (2)    T (1)
30,000      20          0.999989       0.998906 (3)    40       2
10,000      20          0.999967       0.930108        30       1.5
5,000 (4)   20 (4)      0.999933 (4)   0.502627 (4)    20 (4)   1 (4)
30,000      15          0.999992       0.999933        40       2
30,000      10          0.999994       0.999950        30       1.5
30,000      5           0.999997       0.999966        20       1

Notes:
1. In hours.
2. In minutes.
3. This means that in the given operational environment, 9,989 out of 10,000 systems are available for service at any time in the useful life period.
4. For this combination of MTBF, MTTR, t, and T, the difference between Inherent Availability and Product Assurance Capability is high.

Summary
To achieve high operational effectiveness with low life cycle cost, the RM&QA of systems should be given full consideration at all stages of the procurement cycle. This process should begin at the concept stage of the project and be continued in a disciplined manner as an integral part of the design, development, production, and testing process and subsequently into service.

[Figure 1 worksheet (Mathcad): T := 2 (specified mission time, hours); MTBF := 10,000, 12,000 .. 30,000 (mean time between failure range, hours); β := 1 (Weibull shape parameter, exponential case). Reliability function: R(MTBF) = exp[−(T·Γ(1/β + 1)/MTBF)^β]. The worksheet plots and tabulates R over the MTBF range, from R(10,000) = 0.99980002 up to R(30,000) = 0.99993334.]

[Figure 2 worksheet (Mathcad): t := 40 (required restoration time, minutes); MTTR := 14, 16 .. 32 (mean time to repair range, minutes); β := 3.3 (Weibull shape parameter, approximating the normal case). Maintainability function: M(MTTR) = 1 − exp[−(t·Γ(1/β + 1)/MTTR)^β]. The worksheet plots and tabulates M over the MTTR range, from M(14) ≈ 1.0 down to M(32) = 0.76752148. Product Assurance Capability with MTBF = 30,000 hours and MTTR = 20 minutes: P = R(30,000) · M(20) = 0.99890608.]
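For readers without Mathcad, the sketch below reproduces the Figure 1 and 2 calculations in Python (assuming SciPy is available for the Gamma function); the rounded result agrees with the PAC value quoted above:

```python
import math
from scipy.special import gamma

def reliability(mtbf_hours, mission_time_hours, beta=1.0):
    """Weibull reliability with characteristic life eta = MTBF / Gamma(1/beta + 1)."""
    eta = mtbf_hours / gamma(1.0 / beta + 1.0)
    return math.exp(-((mission_time_hours / eta) ** beta))

def maintainability(mttr_minutes, restoration_time_minutes, beta=3.3):
    """Weibull maintainability: probability the repair completes within t."""
    eta = mttr_minutes / gamma(1.0 / beta + 1.0)
    return 1.0 - math.exp(-((restoration_time_minutes / eta) ** beta))

# Figure 1/2 example: MTBF = 30,000 h, T = 2 h, MTTR = 20 min, t = 40 min.
r = reliability(30_000, 2, beta=1.0)
m = maintainability(20, 40, beta=3.3)
print(f"R = {r:.6f}, M = {m:.6f}, PAC = {r * m:.4f}")   # PAC ~= 0.9989
```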


Operational (Mission & Restoration) Success
R&M parameters relate to the probability of failures occurring during a Mission Time that would cause an interruption of that Mission, and to the probability of correcting these failures during the required Restoration Time. The PAC metric represents the overall Operational Success and can be calculated using predicted and/or estimated MTBF and MTTR data. If there is an Inherent Availability requirement, it is recommended that the PAC metric be used for accuracy and good customer satisfaction.

Glossary of Terms
Reliability is the probability that an item can perform its function under stated conditions for a given amount of time without failure.

Maintainability is the probability that an item can be retained in, or restored to, a specified condition when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair. The term is also used to denote the discipline of studying and improving the maintainability of products (e.g., by reducing the amount of time required to diagnose and repair failures).

MTTF stands for Mean Time To Failure and is represented by the mean life value for a failure distribution of non-repairable units.

MTBF stands for Mean Time Between Failure and is represented by the mean life value for a failure distribution of repairable units.

MTBCF stands for Mean Time Between Critical Failure, and is the average time between failures which cause a loss of a system function defined as "critical" by the user.

MTTR stands for Mean Time To Repair and is represented by the mean life value for a distribution of repair times.

Availability is a performance criterion for repairable systems that accounts for both the reliability and maintainability properties of a component or system. It is defined as the probability that a system is not failed or undergoing a repair action when it needs to be used.

Mission Time is the portion of the up time required to perform a specified mission profile.

Restoration Time is the time taken to restore the delivery of service, when the repair is carried out by an adequately skilled

(Continued on page 23)

The Reliability Implications of Emerging Human Interface Technologies

By: Kenneth P. LaSala, Ph.D., KPL Systems

Introduction
This article discusses the reliability aspects of several emerging types of human-machine interfaces. These new interfaces are substantially different from the now common interfaces of keyboards, mice, touch pads, and touch screens and the less common voice-driven interfaces. Readers who desire to acquaint or re-acquaint themselves with the fundamentals of the current common types of interfaces are encouraged to consult the RAC guide entitled "A Practical Guide to Developing Reliable Human-Machine Systems and Processes" (Order No. RAC-HDBK-1190, HUMAN). Those who desire a more interactive discussion of the fundamentals and a more extensive discussion of the new technologies should consider the RAC Human Factors (reliability-oriented) short course. Information on the referenced guide and course can be found at the RAC web site at <http://rac.alionscience.com>.

EEG-Based Computer Control
One of the most exciting developments in human-machine interfaces is implementing the control of computers by human thought. Based on the fact that the brain prepares to move a limb a full half-second before the limb actually is moved, computer scientists at the Fraunhofer Institute for Computer Architecture and Software Technology and the Benjamin Franklin University Clinic, both in Berlin, and the University of British Columbia (Reference 1), among others, have been investigating controlling computers by thought alone. The long-term objective of this research is to create a multi-position, brain-controlled switch that is activated by signals measured directly from an individual's brain. By fitting subjects with an electroencephalograph (EEG) and training the students for approximately 200 hours, the scientists have been able to get the students to move simple objects on a computer screen. The scientists recognize that the interface must be able to determine the intention of the human in a single reading of brain waves. This requires filtering out noise produced by both the brain and the EEG equipment. Two current disadvantages of the current EEG approach are that the EEG equipment is still too expensive for commercial use and that a conductive gel is required to ensure a good electrical interface. Figure 1 illustrates the configuration for EEG-based control of computers.

Figure 1. EEG-Based Control of Computers

(Continued on page 10)


Consider... Do you have a solution?

Your product is having major problems at a key customer site and your customer is losing faith.

Your warranty costs doubled last month and your VP calls to ask you why.

Your customer is asking for reliability and availability numbers and your reliability expert just left the company.

www.relexsoftware.com


Building Solutions One Customer At A Time
At Relex Software we realize that every customer has a unique vision to promote reliability. By working together to analyze your needs, Relex forges a partnership to build a complete, manageable solution for your reliability requirements. By taking advantage of our world-class tools, services, and support, you can rely on Relex to help you achieve your reliability management goals.

● Thousands of Satisfied Customers Since 1986
● Comprehensive Services Offerings
● World-Class Predictive Analytics
● Commitment to Quality

The Quality Choice Professionals Rely On
Relex Software, a trusted and proven market leader, is used by thousands of professionals worldwide to improve product reliability and quality. Whether your needs span a small team solution or a large-scale, enterprise-wide deployment, Relex Software is your partner for success.

Comprehensive Software Tools
● PREDICTIVE TOOLS: MIL-HDBK-217, Telcordia/Bellcore, RDF 2000, HRD5, 299B
● PRISM: Fully Integrated
● CORRECTIVE ACTION: FRACAS
● MODELING TOOLS: RBD, Markov, Simulation, Optimization
● RISK ASSESSMENT: Fault Tree, FMEA/FMECA
● R&M SUPPORT: Weibull, Maintainability, Life Cycle Cost

Professional Services
● RELIABILITY CONSULTING
● TRAINING
● IMPLEMENTATION SERVICES
● TECHNICAL SUPPORT

Quality Assurance
● ISO 9001:2000 CERTIFICATION
● TickIT 2000 STANDARD
● ASQ CERTIFIED RELIABILITY ENGINEERS ON STAFF

Relex® - The Solution Provider For Global Reliability Management
Call today to speak with one of our solution consultants.
www.relexsoftware.com
724.836.8800


The Reliability Implications of . . . (Continued from page 7)

To understand the reliability implications of this new technology, one should examine the cognitive model of the human. The cognitive model is the most convenient model to use for evaluating the reliability of the human. Other forms, such as a servo-controller model, could be used, but they tend to measure generalized model parameters without providing the insight into the mental process that the cognitive model provides. Figure 2 illustrates the basic cognitive model plus some other influences.

The basic cognitive model consists of long- and short-term memory, a sensing function, an information-processing function, a decision function, and a response function. With the exception of memory, the skill level of the human affects all of the functions in the cognitive model. Time and environmental factors also affect the overall task performance response. Three aspects of the cognitive model are worth noting:

• Data can be obtained in some form for the sensory and motor elements of the cognitive model

• There may be information processing and decision models that can be applied

• The cognitive model "plugs" very nicely into reliability block diagrams and fault trees (as bottom events)

The sensing function can be modeled in two forms according to the sensory modes selected for the task. Note that the most commonly used modes are visual, auditory, and tactile. The first form is what a reliability engineer would consider a "parallel" mode. In this form, one or more of the sensory modes is used in an "or" manner. The other form is when all of the selected sensory modes are required for the task, a serial mode that represents an "and" condition. There are two mechanisms for sensing: similarity matching and frequency gambling. Similarity matching is sensing based on the similarity of the sensed subject to a previously sensed item; e.g., sensing a traffic "stop" sign at an intersection. Frequency gambling is a heightened awareness because a stimulus is likely to happen again; i.e., repeated occurrence of a stimulus. Figure 3 illustrates how the approach to sensory inputs can be modeled.

Information processing and decision-making are both based on rule-based action and comparison-based action. In rule-based action, the human has been given a set of processing or decision rules for application. In comparison-based action, information is processed and decisions are made based on comparisons with previous experience. In this approach, the reliability of the information processing and decision-making functions can be related to the skill level, education, training, and experience of human system components.

Figure 2. Traditional Cognitive Model Plus Other Influences
(Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997)

[Figure 2 diagram: the traditional cognitive model (Sensing → Information Processing → Decision → Response, supported by Memory), with Skill and environmental factors (duration, temperature, pressure, illumination, ambient noise, oxygen, humidity, and vibration) shown as additional influences on task performance.]

Figure 3. Models of the Sensing Function
(Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997)

[Figure 3 diagram: two block-diagram models of the sensing function with visual, auditory, and tactile modes. When detection can occur through any sensory mode (parallel, "or" arrangement), R = 1 − Π(i = 1..3)(1 − Ri). When detection requires a combination of sensory modes, e.g., all of them (serial, "and" arrangement), R = Π(i = 1..3) Ri.]
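The two Figure 3 models translate directly into a few lines of Python; the per-mode reliabilities below are illustrative placeholders, not values from the RAC handbook:

```python
# Sensing-function reliability for the two models in Figure 3.
modes = {"visual": 0.98, "auditory": 0.95, "tactile": 0.90}   # illustrative

# Parallel ("or") model: detection succeeds if any sensory mode succeeds.
unreliability = 1.0
for r in modes.values():
    unreliability *= (1.0 - r)
r_parallel = 1.0 - unreliability

# Serial ("and") model: detection requires every selected mode to succeed.
r_serial = 1.0
for r in modes.values():
    r_serial *= r

print(f"Parallel (any mode): {r_parallel:.4f}")   # 0.9999
print(f"Serial (all modes):  {r_serial:.4f}")     # 0.8379
```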


Responses generally take one of three forms: a speech response, a motor response that involves some visual activity to locate a target and some motor action associated with the target, or a combined speech-motor response in which both motor action and verbal confirmation are required. The reliability of motor activities is driven by the complexity of the action and the skill of the performer. Figure 4 illustrates the most common options for responses.

This article addresses only the human-computer interface. For most common situations, this only involves the middle line of Figure 4 – a motor-visual response. Although the figure does not show it, there also is a feedback loop that returns back to the sensory (visual, in this case) input part of the cognitive model. For simplicity, this article will neglect the feedback loop. With a focus on the response portion of the cognitive model, the EEG-based computer control provides the upper response path shown in Figure 5. This path is derived from information in the referenced University of British Columbia site (Reference 1). The lower path in that figure shows the traditional response path.

There is an interesting contrast in the two paths in terms of their respective reliabilities. One can obtain an estimate of the reliability of the upper path by applying appropriate electronic hardware prediction methods such as PRISM, if the failure rate data for all of the components is accessible, and by estimating the reliability of the sophisticated software. The software estimation is not likely to be easy because the capability of the software is still evolving. It is not clear that the EEG path software has been subjected yet to the type of testing that would support the use of test-based software reliability prediction methods. If the EEG-based path is to reach commercialization, then certainly there must be a software reliability assessment effort. On the other hand, for the lower path, the reliabilities of the hardware and the software are well known. The intricate part of assessing the reliability of this path is estimating the reliabilities of the motor and visual elements. The reliabilities of these elements are driven by factors discussed in the above referenced RAC handbook and other human factors engineering resources.

However, with the exception of the REHMS-D software developed by the author, methods for estimating the reliabilities of these elements based on physical design factors are not readily available. More information about REHMS-D can be found in Section 5.2.8 of the referenced RAC handbook. Perhaps the simplest approach to a comparative reliability evaluation of the two paths would be an empirical one. The reader should note that the above discussion assumes that the human mental functions proceed correctly. Certainly, human misdirection or other forms of incorrect human mental function would affect either path adversely.
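To illustrate the kind of comparative estimate being described (not data from the article), the sketch below rolls up each Figure 5 path as a simple series reliability model; every element reliability is a made-up placeholder, not a PRISM or REHMS-D result:

```python
# Illustrative series roll-up of the two response paths in Figure 5
# (all element reliabilities are hypothetical placeholders).
eeg_path = {
    "electrode cap and electronics": 0.995,
    "amplifier": 0.999,
    "A/D-D/A interface": 0.999,
    "signal-processing software": 0.980,
    "interface and CPU": 0.999,
}
traditional_path = {
    "visual element": 0.998,
    "motor element": 0.995,
    "input device": 0.9995,
    "interface and CPU": 0.999,
}

def series_reliability(elements):
    """Series model: every element must succeed for the path to succeed."""
    r = 1.0
    for value in elements.values():
        r *= value
    return r

print(f"EEG-based path:   {series_reliability(eeg_path):.4f}")
print(f"Traditional path: {series_reliability(traditional_path):.4f}")
```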

Figure 4. Common Response Options
(Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997)

[Figure 4 diagram: the three common response options shown as block paths - a speech response, a motor response with associated visual activity, and a combined speech-motor response involving visual, speech, and motor elements.]

Figure 5. A Comparison of EEG-Based and Traditional Computer Control

[Figure 5 diagram: two paths from Information Processing and Decision to a moved object on the screen. The EEG-based interface path runs through an electrode cap and electronics, an amplifier, an A/D-D/A interface, software, and the interface and CPU, with electronic hardware and software reliability factors noted. The standard visual/motor interface path runs through visual and motor elements and an input device to the interface and CPU, with visual, motor, and input device reliability factors noted.]

Functional Magnetic Resonance Imaging
The concept of reading the mind for a variety of purposes is not particularly new, but current technology greatly enhances the ability of performance monitors and researchers to do so. In particular, Magnetic Resonance Imaging (MRI) of the brain is extensively recognized for its excellent spatial resolution that allows neurological anatomic structures to be viewed in sharp detail. MRI is a technique for determining which parts of the brain are activated by different types of physical sensation or activity, such as sight, sound, or the movement of a subject's fingers. MRI is driven by nuclear magnetic resonance.

The term "functional MRI," fMRI, usually refers to techniques that image a complete brain slice in 20 ms. This "brain mapping" is achieved by setting up an advanced MRI scanner in a special way so that the increased blood flow to the activated areas of the brain shows up on the fMRI scans (see Figure 6). There are several types of fMRI (Reference 2), of which the BOLD (Blood Oxygen Level Dependent) form is the form most commonly used. Researchers at the University of Pennsylvania (Reference 3) are using fMRI to scrutinize the brains of subjects during question-and-answer periods for purposes of lie-detection. These studies require subjects to lie very still in the scanner while performing cognitive tasks.

For those interested in reliable human performance, it is a simple conceptual step to extend this to the in-situ performance and condition monitoring of operators. Not only can performance be recorded and analyzed subsequently, but also operator fitness or condition can be monitored in a manner that allows a fatigued operator to be replaced before he or she makes incorrect decisions or takes incorrect actions. Section 6.5 of the above referenced guide provides additional information about how time-on-station and several biological rhythms can affect human performance reliability.

There are several major issues that must be accommodated if condition monitoring is to be considered:

• For each individual, an fMRI "normal" baseline must be established.

• Current equipment requirements preclude in situ monitoring of operators via fMRI.

• If automated human condition monitoring is desired, then very sophisticated diagnostic software must be developed.

The first issue is not as direct as it may look. First, one must develop fMRIs for the normally functioning brain under specified circumstances. One would expect to develop a range of baseline fMRI profiles, not just a single one, to account for the expected range of normal conditions and tasks. Furthermore, each human would require his or her own set of profiles. The second issue is a consequence of the current state of fMRI technology. One cannot just attach a head set to an in situ operator at the present time. However, one near-term approach could be to profile operators before they arrive on station and then to profile them again after a while, such as on a break, to determine their condition. Of course, one could develop suitable instrumentation for in-situ monitoring, but this is more of a very long-range target. The third issue requires the development of reliable diagnostic software that could read fMRI scans and determine whether or not abnormalities exist. This would be somewhat akin to the software that is emerging for the diagnosis of mammograms. Examining these issues shows that much work is required before fMRI could be used in a practical way for human condition monitoring.

Automatic Speech Recognition
Automatic speech recognition (ASR) is an evolving technology that is finding its way into applications such as:

• Dictation systems
• Voice-based communications such as telebanking, voice-mail, database-query systems, information retrieval systems
• System control - automotive, aircraft
• Information systems
• Security systems - speaker verification

Research in automatic speech recognition aims to develop methods and techniques that enable computer systems to accept speech input and to transcribe the recognized utterances into normal orthographic writing (Reference 4). Four basic approaches to attain this goal have been followed and tested over the years.

• Template-based approaches, where the incoming speech is compared with stored units in an effort to find the best match

• Knowledge-based approaches that attempt to emulate the human expert ability to recognize speech

• Stochastic approaches, which exploit the inherent statistical properties of the occurrence and co-occurrence of individual speech sounds

• Approaches which use networks of a large number of simple, interconnected nodes which are trained to recognize speech

All of these approaches require the following elements:

• Determining optimal speech parameters
• Various types of analytical models
• Learning and testing algorithms
• Hardware/software systems

Figure 6. Example Functional Magnetic Resonance Imaging Scans
(From <http://www.musc.edu/psychiatry/fnrd/primer_fmri.htm>)

There also are speech comprehension considerations such as speaker identification and speaker verification.

As ASR moves toward commercialization, the reliability of each of the approach elements and speech comprehension considerations becomes important. Incorrect interpretation in ASR in telebanking or system control can have extremely grave consequences. While estimating the reliability of the hardware elements of an ASR system can be accomplished by standard prediction techniques, estimating the reliability of the complex software and the embedded models and algorithms that the software represents is a very complex problem. Also, the potential uses of ASR suggest that there should be standards and specifications for ASR systems and that these documents include requisite levels of reliability and compliance verification requirements.

Other Human Interface Technology Developments
While brain-computer interaction research is opening a new dimension for human-computer interfaces, most interface research and development is focusing on advanced uses of the more commonly recognized visual, auditory, and tactile sensory modes. A convenient reference for some of the research in these other interface technologies is Proceedings of the IEEE, September 2003, Vol. 91, No. 9.

A popular area of research appears to be animated interfaces, in which an operator converses with lifelike computer characters that speak, emote, and gesture. An interesting concept in human-computer interfaces is the use of animated agents to communicate with humans (see Figure 7). These agents are life-like characters that speak, emote, and gesture. According to Ronald Cole of the University of Colorado (Reference 5) and his IEEE Proceedings co-authors, while technology supports the development of these agents, there is a lack of experimental evidence that animated agents improve human-computer interaction. Since face-to-face communication is the most effective, according to Cole, the objective is to create interfaces with animated agents that act and look like humans. Speakers' facial expressions and gestures enhance the spoken message because audio and visual information are presented in parallel. This is a multi-dimensional interface. Much of the work by Cole and others focuses on animated agents that can carry on tutorial, task-oriented dialogs.

Although most research on such dialogs has focused on verbal communication, nonverbal communication can play many important roles as well, as is suggested in the speech-gesture research mentioned above. The "flip-side" of this research area is having a computer "read" the voice, facial expressions, and gestures of the operator for both control and condition monitoring purposes. The referenced RAC handbook provides an introduction to the advantages and disadvantages of multi-dimensional interfaces.

Figure 7. University of Colorado Animated Interfaces
(<http://mailweb.udlap.mx/~ingrid/caminoreal/Cole.ppt>)

Haptic (touch-based) feedback now is being explored especially in modern medicine, in which visual-haptic activities play a major role. Sensor-haptic interfaces are playing a significant role in teleoperation systems. Teleoperation systems are an important tool for performing tasks that require the sensor-motor coordination of an operator but where it is physically impossible for an operator to undertake such tasks in situ. The vast majority of these devices supply the operator with both visual and haptic sensory feedback in order that the operator can perform the task at hand as naturally and fluently as possible and as though physically present at the remote site. Closely related to haptic feedback is haptic holography (Reference 6), a combination of computational modeling and multimodal spatial display. Haptic holography combines various holographic displays with a force feedback device to image freestanding material surfaces with programmatically prescribed behavior.

Conclusions
Interfaces that are based on reading the human brain have a long way to go before they are ready for commercialization. EEG-based computer control appears to be an approach that will mature sooner than other methods. The fMRI technology has potential condition monitoring applications, but it requires much work to bring it to the point of practical use. ASR is evolving rapidly but still requires significant work before it can realize its full potential. Animated interfaces can be constructed now with current computer graphics technologies, but it remains to be seen whether or not they offer a significant advantage over simpler interfaces.

The potential uses of all of the above described emerging human interface technologies demand very high levels of reliability. There are many opportunities – indeed requisite work – for the conduct of reliability analyses, reliability testing, and the writing of standards and specifications with reliability requirements.

References
1. <http://www.ece.ubc.ca/~garyb/BCI.htm>
2. <http://www.musc.edu/psychiatry/fnrd/primer_fmri.htm>
3. <http://amishi.com/lab/facilities/>


4. <http://www.hltcentral.org/page-827.0.shtml>
5. <http://www.is.cs.cmu.edu/SpeechSeminar/Slides/RonCole-September2003.abstract>
6. W. Plesniak et al., <http://web.media.mit.edu/~wjp/pubs/thesisAbstract.pdf>

About the Author
Kenneth LaSala currently is the Director of KPL Systems, an engineering consulting firm that focuses on reliability, maintainability, systems engineering, human factors, information technology, and process improvement. Dr. LaSala has over 33 years of technical and management experience in engineering. He has managed engineering groups and served as a senior technical staff member in systems engineering, reliability and maintainability (R&M), and product assurance for the Air Force, the Navy, the Army, the Defense Mapping Agency, and NOAA.

Dr. LaSala was the President of the IEEE Reliability Society during 1999-2000 and is the chairman of the IEEE Reliability Society Human Interface Technology Committee. He also currently participates in the DoD Human Factors Engineering Technical Advisory Group and the DoD Advisory Group on Electron Devices. His publications include several papers on R&M, systems requirements analysis, and other engineering topics. He also is the author of a chapter on human-machine reliability in the McGraw-Hill Handbook of Reliability Engineering and Management, a co-author of the IEEE video tutorial on human reliability, and the author of a MIL-HDBK-338 section on the same topic. His research interests include techniques for designing human-machine systems and progressive system engineering approaches. He received a B.S. degree in Physics from Rensselaer Polytechnic Institute, an M.S. in Physics from Brown University, and a Ph.D. in Reliability Engineering from the University of Maryland.

A Strategy for Simultaneous Evaluation of Multiple Objectives
By: Ranjit K. Roy, Ph.D., P.E.

Introduction
Proper measurement and evaluation of performance is the key to comparing the performance of products and processes. When there is only one objective, a carefully defined quantitative evaluation most often serves the purpose. However, when the product or process under study is to satisfy multiple objectives, the performances of the subject samples can be scientifically compared only when the individual criteria of evaluation are combined into a single number. This report describes a method in which multiple objectives are evaluated by combining them into an Overall Evaluation Criteria (OEC).

In engineering and scientific applications, measurement and evaluation of performance are everyday affairs. Although there are situations where measured performances are expressed in terms of attributes such as Good, Poor, Acceptable, Deficient, etc., most evaluations can be expressed in terms of numerical quantities (instead of Good and Bad, use 10 and 0). When these performance evaluations are expressed in numbers, they can be conveniently compared to select the preferred candidate. The task of selecting the best product, a better machine, a taller building, a champion athlete, etc. is much simpler when there is only one objective (performance) which is measured in terms of a single number. Consider a product such as a 9-volt transistor battery whose functional life, expressed in hours, is the only characteristic of concern. Given two batteries, Brand A (20 hours) and Brand B (22.5 hours), it is easy to determine which one is preferable. Now suppose that you are concerned not only about the functional life but also about the unit costs, which are $1.25 for Brand A and $1.45 for Brand B. The decision about which brand of battery is better is no longer straightforward.

Multiple performance objectives (or goals) are quite frequent in the industrial arena. A rational means of combining various performances evaluated in different units of measurement is essential for comparing one product performance or process output with another. In experimental studies like the Design of Experiments (DOE) technique, the performances of a set of planned experiments are compared to determine the influence of the factors and the combination of factor levels that produces the most desirable performance. In this case the presence of multiple objectives poses a challenge for the analysis of results. The inability to treat multiple criteria of evaluation (measures of multiple performance objectives) often renders some planned experiments ineffective.

Combining multiple criteria of evaluation into a single number is quite common practice in academic institutions and sporting events. Consider the method of expressing a Grade Point Average (GPA, a single number) as an indicator of a student's academic performance. The GPA is determined simply by averaging the grades the student achieves in all courses (such as Math, Physics, or Chemistry – the individual criteria of evaluation). Another example is a sporting event like a figure skating competition, where all performers are rated on a scale of 0 to 6. The performer who receives 5.92 wins over another whose score is 5.89. How do the judges come up with these scores? People judging such events follow and evaluate each performer against an agreed-upon list of items (criteria of evaluation) such as style, music, height of jump, stability of landing, etc. Perhaps each item is scored on a scale of 0 - 6, and then the scores of all judges are averaged to arrive at the final scores.

If academic performances and athletic abilities can be evaluated by multiple criteria and expressed in terms of a single number, then why isn't it commonly done in engineering and science? There are no good reasons why it should not be. For a slight extra effort in data reduction, multiple criteria can easily be incorporated into most experimental data analysis schemes.

To understand the extra work necessary, let us examine how scientific evaluation differs from the evaluation of student achievement or of an athletic event. In academic as well as athletic performances, all individual evaluations are compiled in the same way, say 0 - 4 (in the case of a student's grades, there are no units). They also carry the same Quality Characteristic (QC), or sense of desirability (the higher the score the better), and the same Relative Weight (level of importance). Individual evaluations (like the grades in individual courses) can simply be added as long as their (a) units of measurement, (b) sense of desirability, and (c) relative weight (importance) are the same for all courses (criteria). Unfortunately, in most engineering and scientific evaluations, the individual criteria are likely to have different units of measurement, Quality Characteristics, and relative weights. Therefore, methods specific to the application, which overcome the difficulties posed by differences in the criteria of evaluation, must be devised.

Units of Measurement
Unlike GPA or figure skating, the criteria of evaluation in engineering and science generally have different units of measurement. For example, in an effort to select a better automobile, the selection criteria may consist of: fuel efficiency measured in Miles/Gallon, engine output measured in Horsepower, reliability measured as Defects/1000, etc. When the units of measurement for the criteria are different, they cannot be combined easily. To better understand these difficulties, consider a situation where we are to evaluate two industrial pumps of comparable performance (see Table 1). Based on a 60% priority on higher discharge pressure and 40% on lower operating noise, which pump would we select?

Table 1. Performance of Two Brands of Industrial Pumps

Evaluation Criteria   Relative Weight   Pump A        Pump B
Discharge Pressure    60%               160 psi       140 psi
Operating Noise       40%               90 Decibels   85 Decibels
Totals:                                 250 (?)       225 (?)

Pump A delivers more pressure but is noisier. Pump B has a little lower pressure but is quieter. What can we do with the evaluation numbers? Could we add them? If we were to add them, what units would the resulting number have? Would the totals be of any use? Is Pump A, with a total of 250, better than Pump B?

Obviously, the addition of numbers (evaluations) with different units of measurement is not permissible. If such numbers are added, the total serves no useful purpose, as we have no units to assign to it, nor do we know whether a bigger or smaller value is better. If the evaluations are to be added, they must first be made dimensionless (normalized). This can easily be done by dividing all evaluations of a criterion (such as 160 psi and 140 psi) by a fixed number (such as 200 psi), such that the resulting number is a unitless fraction.
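To make the normalization step concrete, here is a minimal Python sketch (not from the original article) that reduces the Table 1 evaluations to unitless fractions; the reference values of 200 psi and 100 decibels are arbitrary illustrative choices.

```python
# Minimal sketch: making evaluations with different units dimensionless.
# The reference values (200 psi, 100 dB) are arbitrary and chosen only for illustration.
pressure_psi = {"Pump A": 160.0, "Pump B": 140.0}
noise_db = {"Pump A": 90.0, "Pump B": 85.0}

REF_PRESSURE_PSI = 200.0  # assumed fixed divisor for discharge pressure
REF_NOISE_DB = 100.0      # assumed fixed divisor for operating noise

for pump in ("Pump A", "Pump B"):
    pressure_fraction = pressure_psi[pump] / REF_PRESSURE_PSI  # unitless
    noise_fraction = noise_db[pump] / REF_NOISE_DB             # unitless
    print(pump, pressure_fraction, noise_fraction)
```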

Quality Characteristic (QC)
Just because two numbers have the same units, or no units, they may not necessarily be meaningfully added. Consider the following two players' scores (see Table 2) and attempt to determine which player is better.

Table 2. Golf and Basketball Scores of Two Players

Criteria         Relative Weight   Player 1   Player 2   QC
Golf (9 holes)   50%               42         52         Smaller is better
Basketball       50%               28         18         Bigger is better
Total Score                        70         70

Observe that the total of the scores for Player 1 (42 + 28) is 70, and for Player 2 (52 + 18) it is also 70. Are these two players of equal caliber? Are the additions of the scores meaningful and logical? Unfortunately, the totals of the scores do not reflect the degree to which Player 1 is superior to Player 2 (a score of 42 is better than 52 in Golf, and a score of 28 is better than 18 in Basketball). The total scores are meaningful only when the QC's of both criteria are made the same before they are added together.

One way to combine the two scores is to first change the QC of the Golf score by subtracting it from a fixed number, say 100, and then add the result to the Basketball score. The new overall scores then are:

Overall score for Player 1 = 28 x 0.50 + (100 - 42) x 0.50 = 43.0
Overall score for Player 2 = 18 x 0.50 + (100 - 52) x 0.50 = 33.0

The overall scores indicate the relative merit of the players. Player 1, with a score of 43.0, is the better sportsman compared to Player 2, who has a score of 33.0.

Relative Weight
In formulating the GPA, the grades of all courses for the student are weighted the same. This approach is generally not valid in scientific studies. For the two players in the earlier example, their skills in Golf and Basketball were weighted equally. Thus, the relative weight did not influence the judgment about their skills in the games. If the relative weights are not the same for all objectives, the contribution from each individual criterion of evaluation must be multiplied by its respective relative weight. For example, if Golf had a relative weight of 40% and Basketball had 60%, the computation of the overall scores must reflect the influence of the relative weights as follows:

Overall score for Player 1 = 28 x 0.60 + (100 - 42) x 0.40 = 40.0
Overall score for Player 2 = 18 x 0.60 + (100 - 52) x 0.40 = 30.0

The Relative Weight is a subjective number assigned to each individual criterion of evaluation. Generally it is determined by the team during the experiment planning session and is assigned such that the total of all weights is 100 (set arbitrarily).
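As a quick illustration (a sketch, not part of the original article), the following Python fragment aligns the Golf scores from Table 2 to a "bigger is better" direction by subtracting from 100 and then applies the 40%/60% relative weights discussed above; the variable names are illustrative.

```python
# Sketch: aligning quality characteristics and applying relative weights (Table 2 scores).
players = {"Player 1": {"golf": 42, "basketball": 28},
           "Player 2": {"golf": 52, "basketball": 18}}
weights = {"golf": 0.40, "basketball": 0.60}  # relative weights, expressed as fractions of 100

for name, score in players.items():
    golf_aligned = 100 - score["golf"]  # Golf is "smaller is better"; flip it to "bigger is better"
    overall = score["basketball"] * weights["basketball"] + golf_aligned * weights["golf"]
    print(name, overall)  # Player 1 -> 40.0, Player 2 -> 30.0
```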

Thus, when the preceding general concerns are addressed, the criteria of evaluation for any product or process performance can be combined into a single number, as demonstrated in the following application example.

An Example Application



A group of process engineers and researchers involved in manufacturing baked food products planned an experiment to determine the "best" recipe for one of their current brands of cake. Surveys showed that the "best" cake is judged on taste, moistness, and smoothness as rated by customers. The traditional approach has been to decide the recipe based on one criterion (say taste) at a time. Experience, however, has shown that when the recipe is optimized based on one criterion, subsequent analyses using other criteria do not necessarily produce the same recipe. When the ingredients differ, optimizing the final recipe becomes a difficult task. Arbitrary or subjectively optimized recipes have not brought the desired customer satisfaction. The group therefore decided to follow a path of consensus decision making and to carefully devise a scientific scheme to incorporate all criteria of evaluation simultaneously into the analysis process.

In the planning session convened for the Cake Baking Experiment, and from subsequent reviews of experimental data, the applicable Evaluation Criteria and their characteristics, as shown in Table 3, were identified. Taste, being a subjective criterion, was to be evaluated using a number between 0 and 12, with 12 being assigned to the best-tasting cake. Moistness was to be measured by weighing a standard-size cake and noting its weight in grams. It was the consensus that a weight of about 40 grams represents the most desirable moistness, which indicates that its Quality Characteristic is "nominal." In this evaluation, results above and below the nominal are considered equally undesirable. Smoothness was measured by counting the number of voids in the cake, which made this a "smaller is better" (QC) evaluation. The relative weights were assigned such that the total was 100. The notations X1, X2, and X3, as shown in Table 3, are used to represent the evaluations of any arbitrary sample cake.

Table 3. Evaluation Criteria for Cake Baking Experiments

Criteria Description   Worst Evaluation   Best Evaluation   Quality Characteristic (QC)   Relative Weighting
Taste (x1)             0                  12                Bigger is better              55
Moistness (x2)         25                 40                Nominal                       20
Smoothness (x3)        8                  2                 Smaller is better             25

Two sample cakes were baked following the two separate recipes under study. The performance evaluations for the two samples are shown in Table 4. Note that each sample is evaluated by all three criteria of evaluation (taste, moistness, and smoothness). The OEC for each sample is created by combining the individual evaluations into a single number (OEC = 66 for sample #1), which represents the performance of the sample cake and can be compared for relative merit. In this case, cake sample #1, with an OEC of 66, is slightly better than sample #2, with an OEC of 64.

Table 4. Trial #1 Evaluations

Criteria     Sample #1   Sample #2
Taste        9.00        8.00
Moistness    34.19       33.00
Smoothness   5.00        4.00
OEC          66.00       64.00

To examine how the OEC of the cake samples is formulated, note that the individual sample evaluations were combined by "appropriate normalization." The term normalization refers to the act of reducing the individual evaluations to dimensionless quantities, aligning their quality characteristics to conform to a common direction (commonly bigger), and allowing each criterion to contribute in proportion to its relative weight. The OEC equation appropriate for the cake baking project is:

OEC = [(x1 - 0)/(12 - 0)] x 55 + [1 - |x2 - 40|/(40 - 25)] x 20 + [1 - (x3 - 2)/(8 - 2)] x 25

The contribution of each criterion is first turned into a fraction (a dimensionless quantity) by dividing the evaluation by a fixed number, such as the difference between the best and the worst among all the respective sample evaluations (12 - 0 for Taste, see Table 3). The numerator represents the evaluation reduced by the smaller in magnitude of the Worst or Best evaluations in the case of bigger and smaller QC's, and by the Nominal value in the case of a Nominal QC. The contributions of the individual criteria are then multiplied by their respective Relative Weights (55, 20, etc.). The Relative Weights, which are used as fractions of 100, assure that the OEC values fall within 0 - 100.

Since Criterion 1 (Taste), which has a Bigger QC, carries the highest Relative Weight, all other criteria are aligned to have a Bigger QC. In the case of a Nominal QC, as is the case for Moistness (the second term in the equation above), the evaluation is first reduced to the deviation from the nominal value (X2 - nominal value). The evaluation reduced to a deviation naturally becomes a Smaller QC. The contributions from Smoothness and Moistness, both of which now have Smaller QC's, are aligned with the Bigger QC by subtracting the normalized fraction from 1. An example calculation of the OEC using the evaluations of cake sample #1 (see Table 4) follows.

Sample calculations:

Trial 1, Sample 1 (x1 = 9, x2 = 34.19, x3 = 5)

OEC = 9 x 55/12 + (1 - (40 - 34.19)/15) x 20 + (1 - (5 - 2)/6) x 25

= 41.25 + 12.25 + 12.5 = 66 (shown in Table 4)

Similarly, the OEC for the second sample is calculated to be 64. The OEC values are treated as the "Results" for the purposes of analyzing the results of designed experiments.
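For readers who want to reproduce the Table 4 results, the following is a short Python sketch of the OEC calculation built from the Table 3 definitions; the function and variable names are illustrative assumptions, not part of the original article.

```python
# Sketch of the Overall Evaluation Criteria (OEC) calculation for the cake example.
# Each criterion is normalized to a unitless fraction, aligned to "bigger is better,"
# and weighted; the weights total 100, so the OEC falls between 0 and 100.

def oec(sample, criteria):
    total = 0.0
    for name, x in sample.items():
        worst, best, qc, weight = criteria[name]
        span = abs(best - worst)
        if qc == "bigger":
            fraction = (x - worst) / span
        elif qc == "smaller":
            fraction = 1 - (x - best) / span
        else:  # "nominal": use the deviation from the nominal (Best) value, then align to bigger
            fraction = 1 - abs(x - best) / span
        total += fraction * weight
    return total

# Table 3 definitions: (Worst, Best, QC, Relative Weight)
criteria = {"Taste": (0, 12, "bigger", 55),
            "Moistness": (25, 40, "nominal", 20),
            "Smoothness": (8, 2, "smaller", 25)}

sample_1 = {"Taste": 9.00, "Moistness": 34.19, "Smoothness": 5.00}
sample_2 = {"Taste": 8.00, "Moistness": 33.00, "Smoothness": 4.00}
print(round(oec(sample_1, criteria)), round(oec(sample_2, criteria)))  # 66 64
```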

The OEC concept was first published by the author in the reference text in 1989. Since then it has been successfully utilized in numerous industrial experiments, particularly those that followed the Taguchi approach to experimental design. The OEC scheme has been found to work well for all kinds of experimental studies, regardless of whether they utilize designed experiments.






PUTTING THE PIECES OF RELIABILITY, AVAILABILITY, MAINTAINABILITY, SAFETY AND QUALITY ASSURANCE TOGETHER

YOU ASKED, AND WE LISTENED

Item Software (USA) Inc.
2190 Towne Centre Place, Suite 314, Anaheim, CA 92806
Tel: 714-935-2900 - Fax: 714-935-2911
URL: www.itemsoft.com
E-Mail: [email protected]

Item Software (UK) Limited
1 Manor Court, Barnes Wallis Road, Fareham, Hampshire PO15 15TH, U.K.
Tel: +44 (0) 1489 885085 - Fax: +44 (0) 1489 885065
E-Mail: [email protected]

Visit our Web site at www.itemsoft.com, or call us today for a free demo CD and product catalog.

New Fault Tree Analysis Engine! Binary Decision Diagram (BDD) AVAILABLE NOW

ITEM QA MODULES
■ Design FMEA
■ Process FMEA
■ Control Plan
■ Document Control and Audit (DCA)
■ Calibration Analysis
■ Concern and Corrective Action Management (CCAR)
■ Statistical Process Control (SPC)

ITEM TOOLKIT MODULES
■ MIL-217 Reliability Prediction
■ Bellcore/Telcordia Reliability Prediction
■ 299B Reliability Prediction
■ RDF Reliability Prediction
■ NSWC Mechanical Reliability Prediction
■ Maintainability Analysis
■ Failure Mode, Effects and Criticality Analysis
■ Reliability Block Diagram
■ Fault Tree Analysis
■ Markov Analysis
■ Spare Cost Analysis


RAC Product News

One of the means by which the RAC carries out its responsibility to disseminate information on reliability, maintainability, supportability, and quality (RMSQ) is by developing and selling a variety of products. One look at our catalog (go to <http://rac.alionscience.com/rac/jsp/webproducts/products.jsp> and click on "Download the entire catalog of RAC's Products and Services in PDF format") reveals over 60 products, most of them downloadable, for the RMSQ practitioner and manager. Our newest addition is Jackknife, a PDA reliability application for decision makers on the move. Jackknife contains 8 tools and 5 databases that support split-second, reliability-based decision making, whether in a design review or during concept development. By the time this issue of the Journal is published, our products will include two handbooks on systems engineering and another on integrated supply chain management. Also, sometime this summer, a new toolkit on supportability will be available.

We are always trying to ferret out the most pressing needs of the RMSQ community and to identify products to help meet those needs. Do you have ideas for a new RAC product? Let us know by taking the 30-second product survey. Simply go to <http://rac.alionscience.com/productsurvey> and complete the short, on-line questionnaire. If you have any questions or comments that you would prefer to share person-to-person, please contact the RAC Product Development Manager, Ned H. Criscimagna, by E-mail at <[email protected]>, or call him at (301) 918-1526.


A Strategy for Simultaneous ... (Continued from page 17)

References
1. Roy, Ranjit K., A Primer on the Taguchi Method, Society of Manufacturing Engineers, P.O. Box 6028, Dearborn, Michigan, USA 48121, ISBN: 087263468X.
2. Roy, Ranjit, Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement, John Wiley & Sons (January 2001), ISBN: 0471361011.
3. Qualitek-4 Software for Automatic Design and Analysis of Taguchi Experiments, <http://rkroy.com/wp-q4w.html>.

About the Author
Ranjit K. Roy, Ph.D., P.E. (M.E.), Nutek, Inc., is a trainer and consultant specializing in the Taguchi approach to quality improvement. He is the author of Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement and A Primer on the Taguchi Method, and of the Qualitek-4 software for design and analysis of Taguchi experiments. He can be contacted by E-mail at <[email protected]>, and additional information is available at <www.rkroy.com/wp-rkr.html>.

SOLE 2004 - "Future Logistics: The Integrated Enterprise"

SOLE – The International Society of Logistics ("SOLE" or "the Society") will hold its 39th Annual International Conference and Exhibition from 29 August through 2 September 2004 in Norfolk, Virginia. This year's conference theme is "Future Logistics: The Integrated Enterprise." BG Scott G. West, Quartermaster General of the United States Army and Commandant of the U.S. Army Quartermaster Center, will serve as both the Defense Chair and the conference host. Joining him as the Industry Chair is Clayton (Clay) M. Jones, Chairman, President, and Chief Executive Officer of Rockwell Collins, selected in January 2004 by Forbes magazine as the "best managed aerospace and defense company in America." Senior leaders from the defense, industry, academic, and business communities will participate throughout the conference, not just in sharing their vision and experiences but also in interactive dialogue and participation with the attendees.

Over the three days the symposium will explore the integration, expansion, and connection of the logistics enterprise, both intra-logistics and inter-functional; from tactical to strategic; and from present to future - all supported by the themes of logistics support requirements driving technology development (instead of logistics having to adjust to/for design efficiencies/shortcomings/inadequacies that impact support delivery) and best practices/process improvements that significantly reduce the logistics "tail/footprint" (i.e., make an impact on the bottom line). Plenary sessions, panels, best practice paper presentations, and the development of white papers reflecting the positions of the defense, industry, academic, and commercial global attendees will address and integrate six major focus areas: Organization for Optimization; Defense/Industry/Commercial/Academic Alliances & Integration; Life Cycle Systems Design; Life Cycle Systems Support; Logistics Chain Management; and Logistics Enterprise Resource Optimization.

The week's offerings include a number of pre-conference workshops on Sunday and Monday, August 29th and 30th; the symposium's technical program on Tuesday through Thursday (August 31st through September 2nd); an exhibition of companies and government agencies, from the opening Exhibitor's Reception on Monday evening, August 30th, through the close of the Exhibit Hall on Wednesday, September 1st; and the Society's Annual Awards Program and Banquet on Thursday evening, September 2nd. In addition, the Tuesday evening reception will be held at Nauticus (The National Maritime Center), conveniently within walking distance of the Norfolk Marriott Waterside (the conference venue).

SOLE – The International Society of Logistics is a non-profit international professional society composed of individuals organized to enhance the art and science of logistics technology, education, and management. The Society is in no way sponsored by any group, company, or other association. SOLE was founded in 1966 as the Society of Logistics Engineers "to engage in educational, scientific, and literary endeavors to advance the art of logistics technology and management." For more information, visit SOLE's web site at <www.sole.org/conference.asp>, or contact SOLE Headquarters at <[email protected]> or (301) 459-8446.



Future Events in Reliability, Maintainability, Supportability & Quality

Also visit our Calendar web page at <http://rac.alionscience.com/calendar>.

2004 International Military & Aerospace/Avionics COTS Conference
August 3-5, 2004, Seattle, WA
Contact: Edward B. Hakim, The C3I Inc., 2412 Emerson Avenue, Spring Lake, NJ 07762
Tel: (732) 449-4729; Fax: (775) 655-0897
E-mail: <[email protected]>
On the Web: <http://nppp.jpl.nasa.gov/docs/cots2004_cfp.pdf>

22nd International System Safety Conference
August 2-6, 2004, Providence, RI
Contact: Dave O'Keeffe, Raytheon
E-mail: <[email protected]>
On the Web: <http://www.system-safety.org/~22ndissc/>

SOLE 2004 39th Annual International Logistics Conference and Exhibition
August 29 - September 2, 2004, Norfolk, VA
Contact: Sarah R. James, Executive Director, SOLE - The International Society of Logistics, 8100 Professional Place, Suite 111, Hyattsville, MD 20785
Tel: (301) 459-8446; Fax: (301) 459-1522
E-mail: <[email protected]>
On the Web: <http://www.sole.org/conference.asp>

ASTR 2003: Workshop on Accelerated Stress Testing & Reliability
October 1-3, 2004, Seattle, WA
Contact: Mark Gibbel, NASA/JPL
Tel: (818) 542-6979; Fax: N/A
E-mail: <[email protected]>
On the Web: <http://www.ewh.ieee.org/soc/cpmt/tc7/ast2003/>

Special Symposia on Contact Phenomena in MEMs
October 24-27, 2004, Long Beach, CA
Contact: Dr. Lior Kogut, University of California at Berkeley, Department of Mechanical Engineering, 5119 Etcheverry Hall, Berkeley, CA 94720-1740
Tel: (510) 642-3270; Fax: (510) 643-5599
E-mail: <[email protected]>
On the Web: <http://www.asmeconferences.org/IJTC04/> (click on Special Symposia on Contact Mechanics)

7th Annual Systems Engineering Conference
October 25-28, 2004, Dallas, TX
Contact: Dania Khan, NDIA, 2111 Wilson Blvd., Suite 400, Arlington, VA 22201
Tel: (703) 247-2587; Fax: (703) 522-1885
E-mail: <[email protected]>
On the Web: <http://register.ndia.org/interview/register.ndia?PID=Brochure&SID=_1430NWW>

DoD Maintenance Symposium & Exhibition
October 25-28, 2004, Houston, TX
Contact: Customer Service, SAE World Headquarters, 400 Commonwealth Drive, Warrendale, PA 15096-0001
Tel: (877) 606-7323; Fax: (724) 776-0790
E-mail: <[email protected]>
On the Web: <http://www.sae.org/calendar/dod/>

World Aviation Congress
November 2-4, 2004, Reno, NV
Contact: Chris Durante, SAE World Headquarters, 400 Commonwealth Drive, Warrendale, PA 15096-0001
Tel: (520) 621-6120; Fax: (520) 621-8191
E-mail: <[email protected]>
On the Web: <http://www.sae.org/events/wac/>

30th International Symposium for Testing & Failure Analysis
November 14-18, 2004, Worcester, MA
Contact: Matthew Thayer, Advanced Micro Devices, Austin, TX
Tel: (512) 602-5603
E-mail: <[email protected]>
On the Web: <http://www.edfas.org/istfa>

CMMI Technology Conference
November 15-18, 2004, Denver, CO
Contact: Dania Khan, NDIA, 2111 Wilson Blvd., Suite 400, Arlington, VA 22201-3061
Tel: (703) 247-2587; Fax: (703) 522-1885
E-mail: <[email protected]>
On the Web: <http://www.sei.cmu.edu/cmmi/events/commi-techconf.html>

19th International Maintenance Conference
December 5-8, 2004, Naples Coast, FL
Contact: Reliabilityweb.com, P.O. Box 07070, Fort Myers, FL 33919
Tel: (239) 985-0317; Fax: (309) 423-7234
E-mail: <[email protected]>
On the Web: <http://www.maintenanceconference.com/>

Aging Aircraft 2005
January 31 - February 3, 2005, Palm Springs, CA
Contact: Ric Loeslein, NAVAIR, Aging Aircraft Program, Patuxent River, MD 20670-1161
Tel: (301) 342-2179; Fax: (301) 342-2248
E-mail: <[email protected]>
On the Web: <http://www.agingaircraft.utcdayton.com/>


The appearance of advertising in this publication does not constitute endorsement by the Department of Defense or RAC of the products or services advertised.


From the Editor

Predictions Remain a Controversial Issue
Given recent E-mails and discussions in our reliability courses, the subject of reliability prediction clearly remains a controversial and, all too often, emotional subject. Although "a rose by any other name ...," I often wonder if the controversy would be as intense if we used "assessment" rather than "prediction." Alas, I doubt that we can ever escape our past, and prediction is likely to remain in the reliability vocabulary.

Even if we were to use another term, however, the controversy might remain unless and until some fundamentals become, well, fundamental! What are these fundamentals?

1. Reliability prediction (RP) is any method used to assess the level of reliability that is potentially achievable, or being achieved, at a point in time.

2. RP is a process, not a one-time activity. It begins in early development and continues throughout the life of the system, with different methods used at varying times.

3. No one method of RP is right for every item at all times. The "right" method is the one for which the requisite data are available and that is appropriate for the intended use of the RP (e.g., comparison, sparing, contractual verification, part characterization, system field performance, etc.).

4. An RP stated at a confidence level is more meaningful than a point estimate.

5. The results of any method used to make an RP must be tempered by an understanding of the method itself, the maturity of the design, and the fidelity of the data.

These fundamentals are not new; to old-time reliability engineers, they are common sense and familiar to the point of being second nature. I am unsure whether that same level of understanding is enjoyed by others in the acquisition and logistics communities. The arguments I hear against one method or another, and the search for that one magic method, lead me to surmise that this understanding of prediction and its uses remains elusive.

I do not claim to have a monopoly on understanding predictions. I certainly do not have all, or even most, of the answers to making good predictions. As an engineer, however, I do know that we must have some way to quantify reliability as we move through the acquisition process. Furthermore, as a one-time aircraft maintenance officer and acquisition logistics officer, I know that we must have estimates of the number of failures expected for the equipment in our systems to determine the sparing and other logistics resources needed to support our systems. In that light, I think our time is better spent on using the "right" prediction method correctly (see item 3) rather than on arguing that one method is "better" than another.

Ned H. Criscimagna


RMSQ Headlines

Learning from Columbia, QUALITY PROGRESS, published by the American Society for Quality, March 2004, page 38. A year after the space shuttle Columbia disintegrated on reentry into the earth's atmosphere, killing all seven astronauts aboard, lessons learned are emerging from the investigation into the disaster. Rather than attempting to cast blame, the investigation is seeking to improve the processes used in the shuttle program to prepare for and carry out shuttle missions.

Virtual Maintenance, NATIONAL DEFENSE, published by the National Defense Industries Association, March 2004, page 21. As part of its continuing effort to reduce cost of ownership and increase availability, the US Navy is looking to "net-centric" maintenance, also called distance support. By automating maintenance and repair tasks, the Navy hopes to reduce Operating & Support costs by 60 percent.

Changes on the Way for Army Logistics Ops, NATIONAL DEFENSE, published by the National Defense Industries Association, April 2004, page 24. To realize its goal of becoming more of an "expeditionary" force, the US Army is considering making sweeping changes in logistics and support operations. According to Lt. General Claude V. Christianson, Army Deputy Chief of Staff for Logistics, expeditionary means being able to "open up the theater and set up a sustainment base" quickly. He says that the inability to do that is a fundamental shortfall the Army must solve.

Case Study: Designing for Quality, QUALITY DIGEST, published by QCI International, April 2004, page 29. The article presents a case study of how one company reduced cycle time and improved first-time yield by implementing standardized product qualification processes in collaboration with suppliers.

Commanders Ponder How Best to Mend Battlefield Logistics, NATIONAL DEFENSE, published by the National Defense Industries Association, May 2004, page 12. Various reports and studies have documented shortcomings in the logistics systems used to support military operations in Iraq. DoD organizations are working to address the problems.

Weapon System Evaluators Must Change, or Risk Irrelevance, Warns Christie, NATIONAL DEFENSE, published by the National Defense Industries Association, May 2004, page 22. Operational testing is facing challenges in several areas. The Director of OT&E discusses the need for change.


Upcoming November Training

Electronic Design Reliability
This intensive course is structured for all key participants in the reliability engineering process. Included are systems and circuit design engineers, quality engineers, and members of related disciplines having little or no previous reliability training. The course deals with both theoretical and practical applications of reliability and all considerations related to the design process, including parts selection and control, circuit analysis, reliability analysis, reliability test and evaluation, equipment production and usage, reliability-oriented trade-offs, and reliability improvement techniques.

Reliability Engineering Statistics
The Reliability Statistics Training Course is a three-day, applications-oriented course on statistical methods. Designed for the practitioner, this course covers the main statistical methods used in reliability and life data analysis. The course starts with an overview of the main results of probability and reliability theory. Then, the main discrete and continuous distributions used in reliability data analysis are reviewed. This review of reliability principles prepares the participants to address the main problems of estimating, testing, and modeling system reliability data. Course materials include the course manual and RAC's publication "Practical Statistical Tools for the Reliability Engineer."

Weibull Analysis
This three-day, hands-on workshop starts with an overview of best-practice Weibull analysis techniques plus a quick illustrative video of three case studies. The entire New Weibull Handbook© by Dr. Abernethy, the workbook provided for the class, is covered, beginning with how to make a Weibull plot, plus interpretation guidelines for "good" Weibulls and "bad" Weibulls. Included are failure prediction with or without renewals, test planning, regression plus maximum likelihood solutions such as WeiBayes, and confidence calculations. All students will receive WinSMITH™ and VisualSMITH™ Weibull software and will get experience using the software on case study problems from industry. Computers are provided for the class. Related techniques, such as Duane/AMSAA Reliability Growth, Log-Normal, Kaplan-Meier, and others, will be covered. This class will prepare the novice or update the veteran analyst to perform the latest probability plotting methods, such as warranty data analysis. It is produced and presented by world-recognized leaders in Weibull research.

For more information <http://rac.alionscience.com/training>.

Date: November 2-4, 2004
Location: Orlando, FL


Product Assurance ... (Continued from page 7)

repairman who has the necessary tools, equipment, spare parts, etc. Restoration Time is denoted as the active repair time.

FDC is the ratio of the "BIT Detectable System Failure Rate" to the "Total System Failure Rate."

FAP is the ratio of the "BIT False Alarm Rate" to the "Total System Failure Rate" excluding the "Failure Rate of BIT Circuitry."
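As a minimal sketch of these two ratios (assuming all failure rates are expressed in the same units, e.g., failures per million hours; the numerical values and variable names are illustrative, not from the article):

```python
# Sketch: FDC and FAP computed as ratios of failure rates (consistent units assumed).
lambda_total = 250.0           # total system failure rate (illustrative value)
lambda_bit_detectable = 230.0  # portion of the system failure rate detectable by BIT
lambda_false_alarm = 10.0      # BIT false alarm rate
lambda_bit_circuitry = 5.0     # failure rate of the BIT circuitry itself

fdc = lambda_bit_detectable / lambda_total
fap = lambda_false_alarm / (lambda_total - lambda_bit_circuitry)
print(round(fdc, 3), round(fap, 3))  # 0.92 0.041
```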

For Further Study
1. Def Stan 00-40 (Part 4), "Reliability and Maintainability Part 4: Guidance for Writing NATO R&M Requirements Documents" (Issue 2, Publication Date 13 June 2003).
2. Def Stan 00-41, "Reliability and Maintainability MOD Guide to Practices and Procedures" (Issue 3, Publication Date 25 June 1993).
3. SSP 50182, "NASA/ASI Bilateral Safety and Product Assurance Requirements" (Publication Date 2 May 1996).
4. ECSS-Q-00A, "Space Product Assurance: Policy and Principles" (Publication Date 19 April 1996).
5. NASA PLLS Database – Lesson 0827: Quantitative Reliability Requirements Used as Performance-Based Requirements for Space Systems.
6. NASA PLLS Database – Lesson 0831: Maintainability Program Management Considerations.
7. NASA PLLS Database – Lesson 0835: Benefits of Implementing Maintainability on NASA Programs.
8. NASA PLLS Database – Lesson 0837: False Alarm Mitigation Techniques.
9. NASA PLLS Database – Lesson 0841: Availability Prediction and Analysis.

About the Author
Ananda Perera has 25 years of North American experience in Reliability/Maintainability/Safety Engineering. He is presently employed at Honeywell Engines Systems & Services, Ontario, Canada, where he has worked as a Reliability/Maintainability Engineer for 21 years.

Mr. Perera has a Bachelor of Science in Production Engineering (1972) from the University of Aston, Birmingham, England. He is a Professional Engineer (1976 to present) and a member of the Association of Professional Engineers of Ontario. He is also a Certified Reliability Engineer (American Society for Quality) (1983 to present). He is Honeywell Six Sigma Plus Green Belt Certified (2001) and Design For Six Sigma Certified (2003).

His published papers are: Adaptive Environmental Stress Screening, Reliability of Mechanical Parts, and Optimum Cost Maintenance.



Reliability Analysis Center
201 Mill Street
Rome, NY 13440-6916


Reliability Analysis Center

(315) 337-0900 General Information

(888) RAC-USER General Information

(315) 337-9932 Facsimile

(315) 337-9933 Technical Inquiries

[email protected] via E-mail

http://rac.alionscience.com Visit RAC on the Web
