
Robust Quality



Design products not to fail in the field; you will simultaneously reduce defectives in the factory.

Robust Quality
by Genichi Taguchi and Don Clausing

When a product fails, you must replace it or fix it. In either case, you must track it, transport it, and apologize for it. Losses will be much greater than the costs of manufacture, and none of this expense will necessarily recoup the loss to your reputation. Taiichi Ohno, the renowned former executive vice president of Toyota Motor Corporation, put it this way: Whatever an executive thinks the losses of poor quality are, they are actually six times greater.

How can manufacturing companies minimize them? If U.S. managers learn only one new principle from the collection now known as Taguchi Methods, let it be this: Quality is a virtue of design. The "robustness" of products is more a function of good

In 1989, Genichi Taguchi received MITI's Purple Ribbon Award from the emperor of Japan for his contribution to Japanese industrial standards. He is executive director of the American Supplier Institute and director of the Japan Industrial Technology Transfer Association. He is also the author of The System of Experimental Design (ASI Press, 1987). Don Clausing, formerly of Xerox, is the Bernard M. Gordon Adjunct Professor of Engineering Innovation and Practice at MIT. He has edited Mr. Taguchi's works in English and is a leading exponent of his views. He is the author (with John R. Hauser) of "The House of Quality" (HBR May-June 1988).

design than of on-line control, however stringent, of manufacturing processes. Indeed - though not nearly so obvious - an inherent lack of robustness in product design is the primary driver of superfluous manufacturing expenses. But managers will have to learn more than one principle to understand why.

Zero Defects, Imperfect Products

For a generation, U.S. managers and engineers have reckoned quality losses as equivalent to the costs absorbed by the factory when it builds defective products - the squandered value of products that cannot be shipped, the added costs of rework, and so on. Most managers think losses are low when the factory ships pretty much what it builds; such is the message of statistical quality control and other traditional quality control programs that we'll subsume under the seductive term "Zero Defects."

Of course, customers do not give a hang about a factory's record of staying "in spec" or minimizing scrap. For customers, the proof of a product's quality is in its performance when rapped, overloaded, dropped, and splashed. Then, too many products display temperamental behavior and annoying or even dangerous performance degradations. We all prefer copiers whose copies are clear under low power; we all prefer cars designed to steer safely and predictably, even on roads that are wet or bumpy, in crosswinds, or with tires that are slightly under- or overinflated. We say these products are robust. They gain steadfast customer loyalty.

HARVARD BUSINESS REVIEW January-February 1990

Taguchi's Quality Imperatives

• Quality losses result mainly from product failure after sale; product "robustness" is more a function of product design than of on-line control, however stringent, of manufacturing processes.

• Robust products deliver a strong "signal" regardless of external "noise" and with a minimum of internal "noise." Any strengthening of a design, that is, any marked increase in the signal-to-noise ratios of component parts, will simultaneously improve the robustness of the product as a whole.

• To set targets at maximum signal-to-noise ratios, develop a system of trials that allows you to analyze change in overall system performance according to the average effect of change in component parts, that is, when you subject parts to varying values, stresses, and experimental conditions. In new products, average effects may be most efficiently discerned by means of "orthogonal arrays."

• To build robust products, set ideal target values for components and then minimize the average of the square of deviations for combined components, averaged over the various customer-use conditions.

• Before products go on to manufacturing, tolerances are set. Overall quality loss then increases by the square of deviation from the target value, that is, by the quadratic formula L = D²C, where the constant, C, is determined by the cost of the countermeasure that might be employed in the factory. This is the "Quality Loss Function."

• You gain virtually nothing in shipping a product that just barely satisfies the corporate standard over a product that just fails. Get on target, don't just try to stay in-spec.

• Work relentlessly to achieve designs that can be produced consistently; demand consistency from the factory. Catastrophic stack-up is more likely from scattered deviation within specifications than from consistent deviation outside. Where deviation from target is consistent, adjustment to the target is possible.

• A concerted effort to reduce product failure in the field will simultaneously reduce the number of defectives in the factory. Strive to reduce variances in the components of the product and you will reduce variances in the production system as a whole.

• Competing proposals for capital equipment or competing proposals for on-line interventions may be compared by adding the cost of each proposal to the average quality loss, that is, the deviations expected from it.

Design engineers take for granted environmental forces degrading performance (rain, low voltage, and the like). They try to counter these effects in product design - insulating wires, adjusting tire treads, sealing joints. But some performance degradations come from the interaction of parts themselves, not from anything external happening to them. In an ideal product - in an ideal anything - parts work in perfect

Ambient variations in the factory are rarely as damaging as variations in customer use.

harmony. Most real products, unfortunately, contain perturbations of one kind or another, usually the result of a faulty meshing of one component with corresponding components. A drive shaft vibrates and wears out a universal joint prematurely; a fan's motor generates too much heat for a sensitive microprocessor.

Such performance degradations may result either from something going wrong in the factory or from an inherent failure in the design. A drive shaft may vibrate too much because of a misaligned lathe or a misconceived shape; a motor may prove too hot because it was put together improperly or yanked into the design impetuously. Another way of saying this is that work-in-process may be subjected to wide variations in factory process and ambience, and products may be subjected to wide variations in the conditions of customer use.

Why do we insist that most degradations result from the latter kind of failure, design failures, and not from variations in the factory? Because the ambient or process variations that work-in-process may be subjected to in the factory are not nearly as dramatic as the variations that products are subjected to in a customer's hands - obvious when you think about it, but how many exponents of Zero Defects do? Zero Defects says: The effort to reduce process failure in the factory will simultaneously reduce instances of product failure in the field. We say: The effort to reduce product failure in the field will simultaneously reduce the number of defectives in the factory.


Still, we can learn something interesting about the roots of robustness and the failures of traditional quality control by confronting Zero Defects on its own ground. It is in opposition to Zero Defects that Taguchi Methods emerged.

Robustness as Consistency

According to Zero Defects, designs are essentially fixed before the quality program makes itself felt; serious performance degradations result from the failure of parts to mate and interface just so. When manufacturing processes are out of control, that is, when there are serious variations in the manufacture of parts, products cannot be expected to perform well in the field. Faulty parts make faulty connections. A whole product is the sum of its connections.

Of course, no two drive shafts can be made perfectly alike. Engineers working within the logic of Zero Defects presuppose a certain amount of variance in the production of any part. They specify a target for a part's size and dimension, then tolerances that they presume will allow for trivial deviations from this target. What's wrong with a drive shaft that should be 10 centimeters in diameter actually coming in at 9.998?

Nothing. The problem - and it is widespread - comes when managers of Zero Defects programs make a virtue of this necessity. They grow accustomed to thinking about product quality in terms of acceptable deviation from targets - instead of the consistent effort to hit them. Worse, managers may specify tolerances that are much too wide because they assume it would cost too much for the factory to narrow them.

Consider the case of Ford vs. Mazda (then known as Toyo Kogyo), which unfolded just a few years ago. Ford owns about 25% of Mazda and asked the Japanese company to build transmissions for a car it was selling in the United States. Both Ford and Mazda were supposed to build to identical specifications; Ford adopted Zero Defects as its standard. Yet after the cars had been on the road for a while, it became clear that Ford's transmissions were generating far higher warranty costs and many more customer complaints about noise.

To its credit, Ford disassembled and carefully measured samples of transmissions made by both companies. At first, Ford engineers thought their gauges were malfunctioning. Ford parts were all in-spec, but Mazda gearboxes betrayed no variability at all from targets. Could that be why Mazda incurred lower production, scrap, rework, and warranty costs?¹

Who's the Better Shot?

[Exhibit: Sam's and John's shooting targets.] Sam is. His shooting is consistent and predictable. A small adjustment in his sights will give him many perfect bull's-eyes in the next round.

That was precisely the reason. Imagine that in some Ford transmissions, many components near the outer limits of specified tolerances - that is, fine by the definitions of Zero Defects - were randomly assembled together. Then, many trivial deviations from the target tended to "stack up." An otherwise trivial variation in one part exacerbated a variation in another. Because of deviations, parts interacted with greater friction than they could withstand individually or with greater vibration than customers were prepared to endure.

Mazda managers worked consistently to bring parts in on target. Intuitively, they took a much more imaginative approach to on-line quality control than Ford managers did; they certainly grasped factory conformance in a way that superseded the pass/fail, in-spec/out-of-spec style of thinking associated with Zero Defects. Mazda managers worked on the assumption that robustness begins from meeting exact targets consistently - not from always staying within tolerances. They may not have realized this at the time, but they would have been even better off missing the target with perfect consistency than hitting it haphazardly - a point that is illuminated by this simple analogy:

Sam and John are at the range for target practice. After firing ten shots, they examine their targets. Sam has ten shots in a tight cluster just outside the bull's-eye circle. John, on the other hand, has five shots in the circle, but they are scattered all over it - as many near the perimeter as near dead center - and the rest of his shots are similarly dispersed around it (see the "Who's the Better Shot?" diagram).

Zero Defects theorists would say that John is the superior shooter because his performance betrays no

1. See Lance A. Ealey's admirable account of this case in Quality by Design: Taguchi Methods and U.S. Industry (Dearborn, Mich.: ASI Press, 1988), pp. 61-62.


failures. But who would you really rather hire on as a bodyguard?

Sam's shooting is consistent and virtually predictable. He probably knows why he missed the circle completely. An adjustment to his sights will give many perfect bull's-eyes during the next round. John has a much more difficult problem. To reduce the dispersion of his shots, he must expose virtually all the factors under his control and find a way to change them in some felicitous combination. He may decide to change the position of his arms, the tightness of his sling, or the sequence of his firing: breathe, aim, slack, and squeeze. He will have little confidence that he will get all his shots in the bull's-eye circle next time around.

When extrapolated to the factory, a Sam-like performance promises greater product robustness. Once consistency is established - no mean feat, the product of relentless attention to the details of design and process both - adjusting performance to target is a simple matter: stack-up can be entirely obviated. If every drive shaft is .005 centimeters out, operators can adjust the position of the cutting tool. In the absence of consistent performance, getting more nearly on target can be terribly time-consuming.

But there is another side to this. There is a much higher probability of catastrophic stack-up from random deviations than from deviations that show consistency. Assuming that no part is grossly defective, a product made from parts that are all off target in exactly the same way is more likely to be robust than a product made from parts whose deviations are in-spec but unpredictable. We have statistical proofs of this, but a moment's reflection should be enough. If all parts are made consistently, the product will perform in a uniform way for customers and will be more easily perfected in the next version. If all parts are made erratically, some products will be perfect, and some will fall apart.
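The contrast between scattered and consistent deviation can be sketched in a short simulation. The tolerance, part count, and 1.5× offset below are invented for illustration, and stack-up is modeled naively as the sum of part deviations; the point is only that a consistent error adjusts away with one correction while random in-spec scatter does not.

```python
import random
import statistics

random.seed(42)
TOLERANCE = 0.005   # spec: target ± 0.005 cm (an illustrative figure)
PARTS = 10          # parts whose deviations combine in one assembly
RUNS = 5_000

def stackup(deviations):
    # Stack-up modeled (simplistically) as the sum of part deviations.
    return sum(deviations)

# "Zero Defects" factory: every part in-spec, but random within spec.
scattered = [stackup([random.uniform(-TOLERANCE, TOLERANCE)
                      for _ in range(PARTS)]) for _ in range(RUNS)]

# Consistent factory: every part off target the same way - even 50%
# outside spec - like drive shafts all cut by one misadjusted tool.
consistent = [stackup([1.5 * TOLERANCE] * PARTS) for _ in range(RUNS)]

def residual_spread(samples):
    """Worst-case error left after a single adjustment to target
    (subtracting the mean deviation, e.g. repositioning the cutter)."""
    mean = statistics.mean(samples)
    return max(abs(s - mean) for s in samples)

# The consistent factory's error adjusts away almost entirely; the
# scattered factory's stack-up cannot be tuned out by any one correction.
```

Running this, `residual_spread(consistent)` is essentially zero, while `residual_spread(scattered)` remains a substantial fraction of the stack-up range.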

So the case against Zero Defects begins with this: Robustness derives from consistency. Where deviation is consistent, adjustment to the target is possible; catastrophic stack-up is more likely from scattered deviation within specifications than from consistent deviation outside. This regard for consistency, for being on target, has a fascinating and practical application.

The Quality Loss Function

Analysis of Ford's overall losses as compared with Mazda's suggests that when companies deviate from targets, they run an increasingly costly risk of loss.

Overall loss is quality loss plus factory loss. The more a manufacturer deviates from targets, the greater its losses.

From our experience, quality loss - the loss that comes after products are shipped - increases at a geometric rate. It can be roughly quantified as the Quality Loss Function (QLF), based on a simple quadratic formula. Loss increases by the square of deviation from the target value, L = D²C, where the constant is determined by the cost of the countermeasure that the factory might use to get on target.

If you know what to do to get on target, then you know what this action costs per unit. If you balk at spending the money, then with every standard deviation from the target, you risk spending more and more. The greater the deviation from targets, the greater the compounded costs.

Let's say a car manufacturer chooses not to spend, say, $20 per transmission to get a gear exactly on target. QLF suggests that the manufacturer would wind up spending (when customers got mad) $80 for two standard deviations from the target ($20 multiplied by the square of two), $180 for three, $320 for four, and so forth.
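The arithmetic above is simple enough to sketch directly. The function name is ours; the $20 countermeasure cost and the deviation-in-sigmas framing come from the transmission example:

```python
def quality_loss(deviation_in_sigmas, countermeasure_cost=20):
    """Quality Loss Function, L = D^2 * C: D is the deviation from
    target in standard-deviation units, and C is the per-unit cost of
    the factory countermeasure (the $20 per transmission above)."""
    return deviation_in_sigmas ** 2 * countermeasure_cost

# Reproduces the transmission arithmetic: skip the $20 fix, and the
# expected loss compounds with each standard deviation off target.
losses = {d: quality_loss(d) for d in range(1, 5)}
# losses == {1: 20, 2: 80, 3: 180, 4: 320}
```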

This is a simple approximation, to be sure, not a law of nature. Actual field data cannot be expected to vindicate QLF precisely, and if your corporation has a more exacting way of tracking the costs of product failure, use it. But the tremendous value of QLF, apart from its bow to common sense, is that it translates the engineer's notion of deviation from targets into a simple cost estimate managers can use. QLF is especially helpful in the important early stages of new product development, when tolerances are set and quality targets are established.

Sony Televisions: Tokyo vs. San Diego

The compelling logic of QLF is best illustrated by the performance of Sony televisions in the late 1970s. The case demonstrates how engineering data and economic data can (and should) be seen in tandem.

Sony product engineers had ascertained that customers preferred pictures with a particular color density, let's call it a nominal density of 10. As color density deviated from 10, viewers became increasingly dissatisfied, so Sony set specification limits at no less than 7 and no more than 13.

Sony manufactured TV sets in two cities, San Diego and Tokyo. Sets shipped from San Diego were uniformly distributed within specs, which meant that a customer was as likely to buy a set with a color density of 12.6 as one with a density of 9.2. At the


same time, a San Diego set was as likely to be near the corporate specification limits of 13 or 7 as near the customer satisfaction target of 10. Meanwhile, shipments from Tokyo tended to cluster near the target of 10, though at that time, about 3 out of every 1,000 sets actually fell outside of corporate standards.

You gain nothing in shipping a product that barely satisfies corporate standards over one that just fails.

Akio Morita, the chairman of Sony, reflected on the discrepancy this way: "When we tell one of our Japanese employees that the measurement of a certain part must be within a tolerance of plus or minus five, for example, he will automatically strive to get that part as close to zero tolerance as possible. When we started our plant in the United States, we found that the workers would follow instructions perfectly. But if we said make it between plus or minus five, they would get it somewhere near plus or minus five all right, but rarely as close to zero as the Japanese workers did."

If Morita were to assign grades to the two factories' performances, he might say that Tokyo had many more As than San Diego, even if it did get a D now and then; 68% of Tokyo's production was in the A range, 28% in the B range, 4% in the C range, and 0.3% in the D range. Of course, San Diego made some out-of-spec sets; but it didn't ship its Fs. Tokyo shipped everything it built without bothering to check them. Should Morita have preferred Tokyo to San Diego?

The answer, remember, must be boiled down to dollars and cents, which is why the conventions of Zero Defects are of no use here. Suppose you bought a TV with a color density of 12.9, while your neighbor bought one with a density of 13.1. If you watch a program on his set, will you be able to detect any color difference between yours and his? Of course not. The color quality does not present a striking problem at the specification limit of 13. Things do not suddenly get more expensive for the San Diego plant if a set goes out at 13.1.

The losses start mounting when customers see sets at the target value of 10. Then, anything much away from 10 will seem unsatisfactory, and customers will demand visits from repairpeople or will demand replacement sets. Instead of spending a few

dollars per set to adjust them close to targets, Sony would have to spend much more to make good on the sets - about two-thirds of the San Diego sets - that were actually displeasing customers. (Dissatisfaction certainly increases more between 11.5 and 13 than between 10 and 11.5.)

What Sony discovered is that you gain virtually nothing in shipping a product that just barely satisfies the corporate standard over a product that just fails. San Diego shipped marginal sets "without defects," but their marginal quality proved costly.

Using QLF, Sony might have come up with even more striking figures. Say the company estimated that the cost of the countermeasure required to put every set right - an assembly line countermeasure that puts every set at a virtual 10 - was $9. But for every San Diego set with a color density of 13 (three standard deviations from the target), Sony spent not $9 but $81. Total quality loss at San Diego should have been expected to be three times the total quality loss at the Tokyo factory.
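The three-to-one ratio falls out of a small simulation of the two distributions described above. Our modeling assumptions: San Diego ships uniformly across the 7-to-13 spec window, and Tokyo clusters at the target with a standard deviation of 1 density unit, which is roughly consistent with the 3-per-1,000 out-of-spec figure cited earlier.

```python
import random

random.seed(1)
TARGET = 10.0       # preferred color density
COST = 9.0          # $9 assembly-line countermeasure per set
N = 100_000         # simulated sets per factory

def quality_loss(density):
    """QLF with sigma = 1 density unit, so a set at 13 (three standard
    deviations out) costs 9 * 3**2 = $81, as in the example above."""
    return COST * (density - TARGET) ** 2

# San Diego: uniformly distributed anywhere within specs [7, 13].
san_diego = sum(quality_loss(random.uniform(7.0, 13.0))
                for _ in range(N)) / N

# Tokyo: clustered at the target with sigma = 1, which leaves roughly
# 3 sets per 1,000 outside the 7-to-13 corporate standard.
tokyo = sum(quality_loss(random.gauss(TARGET, 1.0))
            for _ in range(N)) / N

ratio = san_diego / tokyo   # comes out near 3, the factor cited above
```

The average loss per Tokyo set lands near the $9 countermeasure cost, while San Diego's averages about three times that.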

Deviation: Signai to Noise

If Zero Defects doesn't work, what does? We have said that quality is mainly designed in, not controlled from without. In development work, engineers must discipline their decisions at virtually every step by comparing expected quality loss with known manufacturing cost. On the other hand, the reliability of QLF calculations is pretty obviously restricted by the accuracy of more preliminary measures. It is impossible to discern any loss function properly without first setting targets properly.

How should design engineers and manufacturing managers set targets? Let us proceed slowly, reconsidering what engineers do when they test components and subassemblies and how they establish what a particular part "wants to be" in the context of things that get in its way.

When Sony engineers designed their televisions, they assumed that discriminating customers would like a design that retained a good picture or "signal" far from the station, in a lightning storm, when the food processor was in use, and even when the power company was providing low voltage. Customers would be dismayed if the picture degraded every time they turned up the volume. They would reject a TV that developed snow and other annoying "noises" when afflicted by nasty operating conditions, which are themselves considered noises.

In our view, this metaphorical language - signal as compared with noise - can be used to speak of all


Orthogonal Arrays: Setting the Right Targets for Design

U.S. product engineers typically proceed by the "one factor at a time" method. A group of automotive-steering engineers - having identified 13 critical variables governing steering performance - would begin probing for design improvement by holding all variables at their current values and recording the result. In the second experiment, they would change just one of the variables - spring stiffness, say - to a lower or higher value, and if the result is an improvement (skidding at 40 mph, not 35), they will adopt that value as a design constant. Then comes the next experiment in which they'll change a different variable but not reconsider spring stiffness. They'll continue in this manner until they have nudged the steering system as close as possible to some ideal performance target (skidding at 50 mph, not 40).

The obvious trouble with such results is that they fail to take account of potentially critical interactions among variables - let alone the real variations in external conditions. While a certain spring stiffness provides ample performance when tire pressure is correct, how well will this stiffness work when tire pressure is too low or too high?

What these engineers need, therefore, is some efficient method to compare performance levels of all steering factors under test - and in a way that separates the average effect of spring stiffness at its high, low, and current settings on the various possible steering systems. The engineers could then select the spring-stiffness setting that consistently has the strongest positive effect on the best combination of variables.

If a particular spring-stiffness setting performs well in conjunction with each setting of all 12 other

suspension factors, it stands a very good chance of reproducing positive results in the real world. This is an important advantage over the "one factor at a time" approach.

The orthogonal array can be thought of as a distillation mechanism through which the engineer's experimentation passes. Its great power lies in its ability to separate the effect each factor has on the average and the dispersion of the experiment as a whole. By exploiting this ability to sort out individual effects, engineers may track large numbers of factors simultaneously in each experimental run without confusion, thereby obviating the need to perform all possible combinations or to wait for the results of one experiment before proceeding with the next one.

Consider the orthogonal array for steering. Each of the rows in the array shown here constitutes one experiment, and each vertical column represents a single test factor. Column 1, for example, could represent spring stiffness. Engineers test each of the 13 steering factors at three different settings. (For spring stiffness, then, these would be the current setting, a stiffer setting, and a softer setting, notated 1, 2, and 3 in the array.)

Engineers perform 27 experiments on 13 variables, A through M. They run 27 experiments because they want to expose each performance level (1, 2, and 3) to each other performance level an equal number of times (or 3 x 3 x 3). The engineer must perform all 27 experiments, adhering to the arrangement of the factor levels shown here and drawing lots in order to introduce an element of randomness to the experimentation.
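An array of the kind the sidebar describes can be generated and checked mechanically. The sketch below uses a standard GF(3) construction for an L27 (3¹³) array; the particular column ordering is ours, not necessarily the one in the exhibit.

```python
from itertools import product

# Build an L27 orthogonal array: 27 runs, 13 factors (A through M),
# each at levels 1, 2, and 3. Columns are the 13 distinct (normalized)
# linear forms in three GF(3) digits a, b, c - a standard construction.
COEFFS = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 2, 0),
          (1, 0, 1), (1, 0, 2), (0, 1, 1), (0, 1, 2), (1, 1, 1),
          (1, 1, 2), (1, 2, 1), (1, 2, 2)]

runs = [[(ca * a + cb * b + cc * c) % 3 + 1   # map levels 0..2 -> 1..3
         for ca, cb, cc in COEFFS]
        for a, b, c in product(range(3), repeat=3)]

def is_orthogonal(array):
    """Check the defining property: for any two columns, each of the
    nine possible level pairs occurs equally often (27 / 9 = 3 times)."""
    cols = list(zip(*array))
    return all(
        all(list(zip(cols[i], cols[j])).count(p) == 3
            for p in product((1, 2, 3), repeat=2))
        for i in range(len(cols)) for j in range(i + 1, len(cols))
    )
```

Each row of `runs` is one experiment; down any column, each level appears nine times in the 27 runs, exactly as the exhibit's caption notes.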

products, not just televisions. The signal is what the product (or component or subassembly) is trying to deliver. Noises are the interferences that degrade signal, some of them coming from outside, some from complementary systems within the product. They are very much like the factors we spoke of as accounting for variations in product performance - environmental disturbances as well as disturbances engendered by the parts themselves.

And so it seems reasonable to define robustness as the virtue of a product with a high signal-to-noise ratio. Customers resent being told, "You were not expected to use our product in high humidity or in below-freezing temperatures." They want good performance under actual operating conditions - which are often less than perfect. We all assume that a product that performs better under adverse conditions will be that much more durable under normal conditions.

Signal-to-noise ratios are designed into products before the factory ramps up. The strength of a product's signal - hence, its robustness - is primarily the responsibility of the product designers. Good factories are faithful to the intention of the design. But mediocre designs will always result in mediocre products.

Choosing Targets: Orthogonal Arrays

How, then, do product designers maximize signal-to-noise ratios? World-class companies use a three-step decision-making process:


[Exhibit: The L27 orthogonal array for the steering experiment - 27 rows (experiments 1 through 27) by 13 columns (factors A through M), each cell a factor level of 1, 2, or 3. Callouts mark the factor levels for the first experiment and for the 27th. Note that the three levels of factor A (1, 2, and 3) each are exposed to the three levels of the other 12 factors an equal number of times. This is true for all factors, A through M. Though not immediately obvious, the three levels of factor H, for example, are exposed equally, each appearing nine times for a total of 27.]

It is important to note that each performance value is not exposed to every other possible combination of performance values. In this case, for instance, there is no experiment in which spring stiffness at 2 is exposed to all the other variables at 2; there is no row that shows 2 straight across. But if you look down each column, you will see that each performance level of each variable is exposed to each other variable an equal number of times. Before the experiment is completed, all three levels of spring stiffness will be exposed equally to all three levels of tire pressure, steering geometry, and so forth.

The engineer who uses an orthogonal array isn't primarily interested in the specific values yielded by the 27 experiments. By themselves, these values may not produce large improvements over existing performance. Instead, he or she is interested in distilling the effect that each of the three spring-stiffness settings has on the system as a whole. The "essence" of this approach is extracted along the array's vertical rather than horizontal axis.

The array allows the engineer to document results: those factor levels that will reduce performance variation as well as those that will guide the product back to its performance target once consistency has been achieved. The array also may be used to find variables that have little effect on robustness or target values: these can and should be set to their least costly levels.

- Lance A. Ealey

Lance A. Ealey, a member of the consulting staff of McKinsey & Co., is the author of Quality by Design: Taguchi Methods and U.S. Industry (ASI Press, 1988).

1. They define the specific objective, selecting or developing the most appropriate signal and estimating the concomitant noise.

2. They define feasible options for the critical design values, such as dimensions and electrical characteristics.

3. They select the option that provides the greatest robustness or the greatest signal-to-noise ratio.

This is easier said than done, of course, which is why so many companies in Japan, and now in the United States, have moved to some form of simultaneous engineering. To define and select the correct signals and targets is no mean feat and requires the expertise of all product specialists. Product design, manufacturing, field support, and marketing: all of these should be worked out concurrently by an interfunctional team.

Product designers who have developed a "feel" for the engineering of particular products should take the lead in such teams. They can get away with only a few, limited experiments, where new people would have to perform many more. Progressive companies make an effort to keep their product specialists working on new versions rather than bump them up to management positions. Their compensation schemes reward people for doing what they do best.

But the virtues of teamwork beg the larger question of how to develop an efficient experimental strategy that won't drain corporate resources as you work to bring prototypes up to customer satisfaction. Intuition is not really an answer. Neither is interfunctionality or a theory of organization. Product designers need a scientific way to get at robustness. They have depended too long on art.

HARVARD BUSINESS REVIEW January-February 1990


The most practical way to go about setting signal-to-noise ratios builds on the work of Sir Ronald Fisher, a British statistician whose brilliant contributions to agriculture are not much studied today. Most important is his strategy for systematic experimentation, including the astonishingly sensible plan known as the "orthogonal array."

Consider the complexity of improving a car's steering. Customers want it to respond consistently. Most engineers know that steering responsiveness depends on many critical design parameters (spring stiffness, shock absorber stiffness, dimensions of the steering and suspension mechanisms, and so on), all of which might be optimized to achieve the greatest possible signal-to-noise ratio.

It makes sense, moreover, to compare the initial design value to both a larger and a smaller value. If spring stiffness currently has a nominal value of 7, engineers may want to try the steering at 9 and at 5. One car engineer we've worked with established that there are actually 13 design variables for steering. If engineers were to compare standard, low, and high values for each critical variable, they would have 1,594,323 design options.
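The arithmetic behind that figure is simply a full-factorial count: 13 design variables, each compared at three values.

```python
# 13 critical steering variables, each tried at low, standard, and high.
design_variables = 13
values_per_variable = 3

# A full factorial would require every combination to be built and tested.
design_options = values_per_variable ** design_variables
assert design_options == 1_594_323
```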

Proceed with intuition? Over a million possible permutations highlight the challenge, that of a blind search for a needle in a haystack, and steering is only one subsystem of the car. In Japan, managers say that engineers "like to fish with one rod"; engineers are optimistic that "the next cast will bring in the big fish," that one more experiment will hit the ideal design. Naturally, repeated failure leads to more casts. The new product, still not robust to customers' conditions, is eventually forced into the marketplace by the pressures of time, money, and diminishing market share.

To complete the optimization of robustness most quickly, the search strategy must derive the maximum amount of information from a few trials. We won't go through the algebra here, but the key is to develop a system of trials that allows product engineers to analyze the average effect of change in factor levels under different sets of experimental conditions.

And this is precisely the virtue of the orthogonal array (see the insert, "Orthogonal Arrays: Setting the Right Targets for Design"). It balances the levels of performance demanded by customers against the many variables, or noises, affecting performance. An orthogonal array for 3 steering performance levels (low, medium, and high) can reduce the experimental possibilities to 27. Engineers might subject each of the 27 steering designs to some combination of noises, such as high/low tire pressure, rough/smooth road, high/low temperature. After all of the trials are completed, signal-to-noise values may be used to select the best levels for each design variable.

If, for example, the average value for the first nine trials on spring stiffness is 32.4, then that could characterize level one of spring stiffness. If the average value for the second group of trials is 26.7, and the average for the third group 28.9, then we would select level one as the best value for spring stiffness. This averaging process is repeated to find the best level for each of the 13 design variables.
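The averaging step can be sketched as follows. This is a minimal illustration, not the authors' procedure: `best_level` is a hypothetical helper, and in practice the design array and the 27 trial results would come from the engineer's own experiments.

```python
def best_level(array, results, factor):
    """Average the measured response over the trials run at each of the
    three levels of one factor, and pick the level with the highest mean
    (for a signal-to-noise ratio, bigger is better)."""
    means = []
    for level in range(3):
        vals = [r for row, r in zip(array, results) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means, max(range(3), key=lambda i: means[i])
```

With the article's figures (level averages of 32.4, 26.7, and 28.9), this procedure selects level one for spring stiffness; repeating it column by column yields a best level for each of the 13 variables.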

The orthogonal array is actually a sophisticated "switching system" into which many different design variables and levels of change can be plugged. This system was conceived to let the relatively inexperienced designer extract the average effect of each factor on the experimental results, so he or she can reach reliable conclusions despite the large number of changing variables.

Of course, once a product's characteristics are established so that a designer can say with certainty that design values (that is, optimized signal-to-noise ratios) do not interact at all, then orthogonal arrays are superfluous. The designer can instead proceed to test each design variable more or less independently, without concern for creating noise in other parts or subassemblies.

System Verification Test: The Moment of Truth

After they've maximized signal-to-noise ratios and optimized design values, engineers build prototypes. The robustness of the complete product is now verified in the System Verification Test (SVT), perhaps the most critical event during product development.

In the SVT, the first prototypes are compared with the current benchmark product. Engineers subject both the prototype and the benchmark to the same extreme conditions they may encounter in actual use. Engineers also measure the same critical signal-to-noise ratios for all contenders. It is very important for the new product to surpass the robustness of the benchmark product. If the ideal nominal voltage is 115 volts, we want televisions that will have a signal of 10 even when voltage slips to a noisy 100 or surges to an equally noisy 130. Any deviations from the perfect signal must be considered in terms of QLF, that is, as a serious financial risk.

The robust product, therefore, is the one that minimizes the average of the square of the deviation from the target, averaged over the different customer-use conditions. Suppose you wish to buy a power supply and learn that you can buy one with a standard devia-


On-Line Quality: Shigeo Shingo's Shop Floor

Does tightening tolerances necessarily raise the specter of significantly higher production costs? Not according to Shigeo Shingo, the man who taught production engineering to a generation of Toyota managers. Now over 80, Shingo still actively promotes "Zero Quality Control," by which he aims to eliminate costly inspection processes or reliance on statistical quality control at the shop-floor level.

Shingo advocates the use of low-cost, in-process quality control mechanisms and routines that, in effect, incorporate 100% inspection at the source of quality problems. He argues for checking the causes rather than the effects of operator errors and machine abnormalities. This is achieved not through expensive automated control systems but through foolproofing methods such as poka-yoke.

Poka-yoke actually means "mistake proofing"; Shingo resists the idea that employees make errors because they are foolishly incompetent. Shingo believes all human beings have lapses in attention. The system, not the operator, is at fault when such defects occur. The poka-yoke method essentially builds the function of a checklist into an operation so we can never "forget what we have forgotten."

For example, if an operator needs to insert a total of nine screws into a subassembly, modify the container so it releases nine at a time. If a screw remains, the operator knows the operation is not complete. Many poka-yoke ideas are based on familiar foolproofing concepts; the mechanisms are also related in principle to jidohka, or autonomation, the concept of low-cost "intelligent machines" that stop automatically when processing is completed or when an abnormality occurs.

Shingo recommends four principles for implementing poka-yoke:

1. Control upstream, as close to the source of the potential defect as possible. For example, modify the form of a symmetrical workpiece just slightly to assure correct positioning with a jig or sensor keyed to an asymmetrical characteristic. Also, attach a monitoring device that will detect a material abnormality or an abnormal machine condition and will trigger shutdown before a defect is generated and passed on to the next process.

2. Establish controls in relation to the severity of the problem. A simple signal or alarm may be sufficient to check an error that is easily corrected by the operator, but preventing further progress until the error is corrected is even better. For example, a control-board counter counts the number of spot welds performed and operates jig clamps; if one is omitted, the workpiece cannot be removed from the jig until the error is corrected.

3. Think smart and small. Strive for the simplest, most efficient, and most economical intervention. Don't overcontrol; if operator errors result from a lack of operations, improve methods before attempting to control the results. Similarly, if the cost of equipment adjustment is high, improve equipment reliability and consider how to simplify adjustment operations before implementing a costly automated-inspection system.

4. Don't delay improvement by overanalyzing. Poka-yoke solutions are usually the product of decisiveness and quick action on the shop floor. While design improvements can reduce manufacturing defects in the long run, you can implement many poka-yoke ideas at very low cost within hours of their conception, effectively closing the quality gap until you develop a more robust design.
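The control-board counter in principle 2 can be sketched in software terms. This is our own illustration, not a mechanism from the sidebar; the class name and the nine-weld count (borrowed from the screw example) are hypothetical.

```python
class WeldCounter:
    """Poka-yoke sketch: count operations and refuse to release the jig
    clamps until the full count is reached, so an omission cannot pass on."""

    def __init__(self, required_welds: int):
        self.required = required_welds
        self.count = 0

    def record_weld(self) -> None:
        self.count += 1

    def release_clamps(self) -> bool:
        # The workpiece cannot be removed until every weld is present.
        return self.count >= self.required

jig = WeldCounter(required_welds=9)
for _ in range(8):
    jig.record_weld()
assert not jig.release_clamps()   # one weld omitted: the jig stays clamped
jig.record_weld()
assert jig.release_clamps()       # all nine done: the part can be removed
```

The check is on the cause (the count of operations performed), not on the effect (a defective part found later), which is the point of the method.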

Developed cooperatively by operators, production engineers, and machine-shop personnel, poka-yoke methods are employed extensively in processing and assembly operations in Japan and represent one of the creative pinnacles of continuous shop-floor improvement. In its most developed state, such improvement activity can support off-line quality engineering efforts by feeding back a continuous flow of data about real, in-process quality problems.

-Connie Dyer

Connie Dyer is senior editor at Productivity Press. She worked with Shigeo Shingo on, among other books, Zero Quality Control: Source Inspection and the Poka-Yoke System (Productivity Press, 1986).

DRAWINGS BY JULIA M. TALCOTT

tion of one volt. Should you take it? If the mean value of output voltage is 1,000 volts, most people would think that, on average, an error of only one volt is very good. However, if the average output were 24 volts, then a standard deviation of one seems very large. We must always consider the ratio of the mean value to the standard deviation.
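The power-supply comparison can be put in code: the same one-volt standard deviation is negligible against a 1,000-volt mean but serious against a 24-volt mean, which is why the ratio matters, not the absolute error. This is a minimal sketch; the voltages are the ones quoted above.

```python
def mean_to_std_ratio(mean_volts: float, std_volts: float) -> float:
    """The comparison the text calls for: mean divided by standard deviation."""
    return mean_volts / std_volts

# The same one-volt spread, judged against two different means:
assert mean_to_std_ratio(1000, 1) == 1000.0   # excellent
assert mean_to_std_ratio(24, 1) == 24.0       # far worse, same absolute error
```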

The SVT gives a very strong indication, long before production begins, of whether customers will perceive the new product as having world-class quality and performance. After the new design is verified to have superior robustness, engineers may proceed to solve routine problems, fully confident that the product will steadily increase customer loyalty.

Back to the Factory

The relationship of field to factory proves to be a subtle one, the converse of what one might expect. We know that if you control for variance in the factory, you reduce failure in the field. But as we said at the outset, a concerted effort to reduce product failure in the field will simultaneously reduce the number of defectives in the factory.

Strive to reduce variances in the components of the product and you will reduce variances in the production system as a whole. Any strengthening of a design, that is, any marked increase of a product's signal-to-noise ratio, will simultaneously reduce a factory's quality losses.

Why should this be so? For many reasons, most importantly the symmetries between design for robustness and design for manufacture. Think of how much more robust products have become since the introduction of molded plastics and solid-state circuitry. Instead of serving up many interconnected wires and tubes and switches, any one of which can fail, engineers can now imprint a million transistors on a virtually indestructible chip. Instead of joining many components together with screws and fasteners, we can now consolidate parts into subassemblies and mount them on molded frames that snap together.

All of these improvements greatly reduce opportunities for noise interfering with signal; they were developed to make products robust. Yet they also have made products infinitely more manufacturable. The principles of designing for robustness are often indistinguishable from the principles of designing for manufacture: reduce the number of parts, consolidate subsystems, integrate the electronics.

A robust product can tolerate greater variations in the production system. Please the customer and you will please the manufacturing manager. Prepare for variances in the field and you will pave the way for reducing variations on the shop floor. None of this means the manufacturing manager should stop trying to reduce process variations or to achieve the same variations with faster, cheaper processes. And there are obvious exceptions proving the rule (chip production, for example, where factory controls are ever more stringent), though it is hard to think of exceptions in products such as cars and consumer electronics.

The factory is a place where workers must try to meet, not deviate from, the nominal targets set for products. It is time to think of the factory as a product with targets of its own. Like a product, the factory may be said to give off an implicit signal (the consistent production of robust products) and to be subject to the disruptions of noise: variable temperatures, degraded machines, dust, and so forth. Using QLF, choices in the factory, like choices for the product, can be reduced to the cost of deviation from targets.

Consider, for instance, that a cylindrical grinder creates a cylindrical shape more consistently than a lathe. Product designers have argued for such dedicated machines; they want the greatest possible precision. Manufacturing engineers have traditionally favored the less precise lathe because it is more flexible and it reduces production cost. Should management favor the more precise cylindrical grinder? How do you compare each group's choice with respect to quality loss?

In the absence of QLF's calculation, the most common method for establishing manufacturing tolerances is to have a concurrence meeting. Design engineers sit on one side of the conference room, manufacturing engineers on the opposite side. The product design engineers start chanting "Tighter Tolerance, Tighter Tolerance, Tighter Tolerance," and the manufacturing engineers respond with "Looser Tolerance, Looser Tolerance, Looser Tolerance." Presumably, the factory would opt for a lathe if manufacturing chanted louder and longer. But why follow such an irrational process when product design people and manufacturing people can put a dollar value on quality precision?

Management should choose the precision level that minimizes the total cost, production cost plus quality loss: the basics of QLF. Managers can compare the costs of competing factory processes by adding the manufacturing cost and the average quality loss (from expected deviations) of each process. They gain economical precision by evaluating feasible alternative production processes, such as the lathe and cylindrical grinder. What would be the quality loss if the factory used the lathe? Are the savings worth the future losses?
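The comparison can be sketched numerically. The quality loss function is commonly written L(y) = k(y - m)^2; averaged over production it becomes k times (variance plus squared bias). Every number below is hypothetical, since the article gives no figures for the lathe or the grinder; the point is only the shape of the calculation.

```python
def total_cost(unit_cost: float, k: float, variance: float,
               bias: float = 0.0) -> float:
    """Production cost per part plus the average quality loss,
    where average loss = k * (variance + bias**2)."""
    return unit_cost + k * (variance + bias ** 2)

k = 50.0   # hypothetical: dollars of loss per squared unit of deviation

lathe   = total_cost(unit_cost=2.00, k=k, variance=0.08)  # cheap, less precise
grinder = total_cost(unit_cost=3.50, k=k, variance=0.01)  # dearer, more precise

# With these (made-up) numbers the grinder wins: $4.00 total vs. $6.00,
# because its smaller variance more than repays its higher unit cost.
assert grinder < lathe
```

Change k or the variances and the answer can flip, which is exactly why the chanting match should give way to the calculation.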


Similar principles may be applied to larger systems. In what may be called "process parameter design," manufacturers can optimize production parameters (spindle speed, depth of cut, feed rate, pressure, temperature) according to an orthogonal array, much like the spring stiffness in a steering mechanism. Each row of the orthogonal array may define a different production trial. In each trial, engineers produce and measure several parts and then use the data to calculate the signal-to-noise ratio for that trial. In a final step, they establish the best value for each production parameter.

The result? A robust process: one that produces improved uniformity of parts and often enables managers to simultaneously speed up production and reduce cycle time.

How Much Intervention?

Finally there is the question of how much to intervene during production.

Take the most common kind of intervention, on-line checking and adjusting of machinery and process. In the absence of any operator monitoring, parts tend to deviate progressively from the target. Without guidance, different operators have widely varying notions of (1) how often they should check their machines and (2) how big the discrepancy must be before they adjust the process to bring the part value back near the target.

By applying QLF, you can standardize intervention. The cost of checking and adjusting has always been easy to determine; you simply have to figure the cost of downtime. With QLF, managers can also figure the cost of not intervening, that is, the dollar value to the company of reduced parts variation.

Let's go back to drive shafts. The checking interval is three, and the best adjustment target is 1/1,000th of an inch. If the measured discrepancy from the target is less than 1/1,000th of an inch, production continues. If the measured discrepancy exceeds this, the process is adjusted back to the target. Does this really enable operators to keep the products near the target in a way that minimizes total cost?

It might be argued that measuring every third shaft is too expensive. Why not every tenth? There is a way to figure this out. Say the cost of intervention is 30 cents, and shafts almost certainly deviate from the target value every fifth or sixth operation. Then, out of every ten produced, at least four bad shafts will go out, and quality losses will mount. If the seventh shaft comes out at two standard deviations, the cost will be $1.20; if the tenth comes out at three standard deviations, the cost will be $2.70; and so on. Perhaps the best interval to check is every fourth shaft or every fifth, not every third. If the fourth shaft is only one standard deviation from the target value, intervention is probably not worth the cost.
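The dollar figures follow QLF's quadratic form: the loss grows with the square of the deviation. Reading the example as a base loss of 30 cents at one standard deviation (our inference; the text does not state the base outright), the quoted costs fall out directly. Working in cents keeps the arithmetic exact.

```python
def quality_loss_cents(deviation_sigmas: int, base_cents: int = 30) -> int:
    """Quadratic quality loss: base loss times the square of the deviation,
    measured in standard deviations from the target."""
    return base_cents * deviation_sigmas ** 2

assert quality_loss_cents(2) == 120   # the article's $1.20 at two sigma
assert quality_loss_cents(3) == 270   # and $2.70 at three sigma
```

Comparing these losses against the 30-cent cost of checking is what lets you settle rationally on every third, fourth, or fifth shaft.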

The point, again, is that these things can and should be calculated. There isn't any reason to be fanatical about quality if you cannot justify your fanaticism by QLF. Near the target, production should continue without adjustment; the quality loss is small. Outside the limit, the process should be adjusted before production continues.

This basic approach to intervention can also be applied to preventive maintenance. Excessive preventive maintenance costs too much. Inadequate preventive maintenance will increase quality loss excessively. Optimized preventive maintenance will minimize total cost.

In Japan, it is said that a manager who trades away quality to save a little manufacturing expense is "worse than a thief": a little harsh, perhaps, but plausible. When a thief steals $200 from your company, there is no net loss in wealth between the two of you, just an exchange of assets. Decisions that create huge quality losses throw away social productivity, the wealth of society.

QLF's disciplined, quantitative approach to quality builds on and enhances employee involvement activities to improve quality and productivity. Certainly, factory-focused improvement activities do not by and large increase the robustness of a product. They can help realize it, however, by reducing the noise generated by the complex interaction of shop-floor quality factors: operators, operating methods, equipment, and material.

Employees committed to hitting the bull's-eye consistently cast a sharper eye on every feature of the factory environment. When their ingenuity and cost-consciousness are engaged, conditions change dramatically, teams prosper, and valuable data proliferate to support better product and process design. An early, companywide emphasis on robust product design can even reduce development time and smooth the transition to full-scale production.

Too often managers think that quality is the responsibility of only a few quality control people off in a factory corner. It should be evident by now that quality is for everyone, most of all the business's strategists. It is only through the efforts of every employee, from the CEO on down, that quality will become second nature. The most elusive edge in the new global competition is the galvanizing pride of excellence.

Reprint 90114
