Introduction to Tolerance Analysis (Part 1 / 13)

Written by: Karl - CB Expert, 8/10/2010 3:57:00 PM

http://www.crystalballservices.com/Resources/ConsultantsCornerBlog/EntryId/29/Introduction-to-Tolerance-Analysis-Part-1-13.aspx

Tolerance Analysis is the set of activities, the up-front design planning and coordination between many parties (suppliers & customers), that ensure manufactured physical parts fit together the way they are meant to. Knowing that dimensional variation is the enemy, design engineers need to perform Tolerance Analysis before any drill bit is brought to raw metal, before any pellets are dropped in the hopper to mold the first part. Or, as the old carpenter's adage goes: Measure twice, cut once. 'Cause once all the parts are made, it would be unpleasant to find they don't go together. Not a good thing.

That is the obvious reason to perform Dimensional Tolerance Analysis: to avoid the "no-build" scenario. There are many other reasons. When parts fit together looser than intended, products feel sloppy. Perhaps they squeak & rattle when used, like that old office chair you inherited during a budget crunch. When parts fit together tighter than intended, it takes more time and effort for manufacturing to assemble (which costs money). The customer notices that it takes more effort or force to operate the moving parts, like that office chair whose height adjuster you could never move. Either of these conditions will most likely lead to customer dissatisfaction. Even more disconcerting, there could be safety concerns where overloaded part features suddenly break when they were not designed to bear loads.

Here is another way to look at Tolerance Analysis: one group designs the part; another one makes it (whether internal manufacturing or external suppliers). The group that makes things needs to know what values the critical nominal dimensions of the manufactured parts should be and how much variation can be allowed around those values. Some parts go together in fixed positions; some allow relative motion between the parts. For the latter, there is a need to specify the smoothness or finish of the mating surfaces (so that they move effortlessly as intended). The communicated nominal dimensions and their allowable variation set up a reference framework by which all parties can identify the critical dimensions and perhaps even set up regularly scheduled measurements to ensure that one month, ten months, even five years down the road, those part dimensions conform to the design intentions.

To the trained CAD operator (those people who draw shapes on computer screens in dimly-lit cubicles), these activities describe Geometric Dimensioning & Tolerancing (GD&T). It is generally considered to be the godfather of modern Tolerance Analysis. It provides discipline and rigor to the management (identifying and communicating) of allowable dimensional variation via standardized symbols. Depending on the part feature of interest, there are quite a few different kinds of symbols that apply to feature types, such as flat surfaces, curved surfaces, holes and mating features like notches or tabs.

Examples of GD&T terminology & associated symbols, as shown in Figure 1-1, are:


• Dimensional Call-outs, or numerical values with arrows indicating the dimension of interest
• Feature Frames, or boxes that point to a particular feature with a myriad of mystical symbols
• Datum Features, which indicate a part feature that contacts a Datum: a theoretically exact plane, point or axis from which dimensions are measured
• Datum Reference Frames (DRF), which fulfill the need to define a reference frame via a set of three mutually perpendicular datum planes (features)
• Critical Characteristics (CC), a subset of dimensions whose production control is absolutely critical to maintain for safety concerns. (That will lead us further down the path of Statistical Process Control (SPC).)

But how does a designer or engineer know how much dimensional variation can be allowed in a part so that the GD&T guys can define those mystical symbols? I intend to answer that question in the following series of posts summarizing the primary approaches a design engineer should consider when performing Tolerance Analysis. I will expand on and compare three approaches to illuminate their strengths and weaknesses:

• Worst Case Analysis
• Root Sum Squares
• Monte Carlo Simulation

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 3-11.

Sleeper, Andrew D., Design for Six Sigma Statistics (2006); McGraw-Hill, pp. 765-769.

Tolerance Analysis using Worst Case Approach (Part 2 / 13)

Written by: Karl - CB Expert, 8/12/2010 3:53:00 PM

As stated in my last post, there are three common approaches to performing Tolerance Analysis. Let us describe the simplest of the three, the Worst Case Analysis (WCA) approach. An engineering-centric term in the Tolerance Analysis world would be Tolerance Stacks, usually meaning in a one-dimensional sense. The explanation begins with probably the most overworked example found in dusty tomes (my apologies in advance).

(I would like to acknowledge James Ministrelli, DFSS Master Black Belt and GD&T Guru Extraordinaire, for his help & advice in these posts. Thanks, Jim!)

Examine Figure 2-1. There are four parts in this assembly. There is one part with a squared-off, U-shaped cavity in the middle. In that cavity will sit three blocks of the same width (at least they are designed that way). Those blocks were designed to fit side-by-side in contact with each other and within the cavity of the larger part. The intent is that they will "stack up" against the side of the cavity, thus leaving a designed gap on the other side. This designed gap has functional requirements (being part of a greater assembly) and, for proper operation, the gap size must be such-and-such-a-value with an allowable variation of such-and-such-a-value.


How would the designer determine what variation can be allowed on the widths (blocks and cavity) to ensure the gap variation meets the requirements? In WCA, a one-dimensional stack analysis would look at the extreme values allowed on the individual dimensions that would then "stack up" to be either the smallest possible gap or the largest possible gap. If those WCA values are within the desired system gap limits, then we are good to go. If not, what normally follows is a series of "what-ifs" where the individual tolerance values, perhaps even the nominal values they are controlling, are tinkered with until a winning combination is discovered that places the WCA values within the gap dimension requirements.

The biggest gap possible occurs when each of the individual block widths is at its smallest possible value and the width of the cavity is at its largest. For the block and cavity widths, this corresponds to the Least Material Condition. (LMC: the condition in which a feature of size contains the least amount of material everywhere within the stated limits of size.) Under LMC, all external features like widths are at their smallest limit and all internal features like hole diameters are at their largest limit.

The smallest gap possible occurs when these same dimensions are at the opposite limits, the Maximum Material Condition (MMC). LMC and MMC are important conditions when calculating dimensional WCA values. Below is a summary of the LMC and MMC dimension calculations and the resulting WCA values for the gap:

• Individual block widths
  o LMC → 1.95" (= 2.00" − 0.05")
  o MMC → 2.05" (= 2.00" + 0.05")
• Cavity width
  o LMC → 7.60" (= 7.50" + 0.10")
  o MMC → 7.40" (= 7.50" − 0.10")
• WCA extreme values for gap
  o Minimum → 1.25" (= 7.40" − 3 × 2.05")
  o Maximum → 1.75" (= 7.60" − 3 × 1.95")
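These stack-up numbers are easy to sanity-check in a few lines of code. Below is a minimal Python sketch of the WCA gap calculation; the variable names are mine, chosen for illustration.

```python
# Worst Case Analysis for the stacked-block gap (illustrative sketch).
block_nom, block_tol = 2.00, 0.05    # each block width: 2.00" +/- 0.05"
cavity_nom, cavity_tol = 7.50, 0.10  # cavity width: 7.50" +/- 0.10"
n_blocks = 3

# LMC/MMC limits: external features (blocks) are smallest at LMC,
# internal features (the cavity) are largest at LMC.
block_lmc, block_mmc = block_nom - block_tol, block_nom + block_tol
cavity_lmc, cavity_mmc = cavity_nom + cavity_tol, cavity_nom - cavity_tol

gap_max = cavity_lmc - n_blocks * block_lmc  # everything at LMC -> biggest gap
gap_min = cavity_mmc - n_blocks * block_mmc  # everything at MMC -> smallest gap
print(f"WCA gap range: {gap_min:.2f} in to {gap_max:.2f} in")  # 1.25 in to 1.75 in
```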

The process of tinkering with individual part tolerances, based on what the system variation goal is, is termed Tolerance Allocation. (Another appropriate term is "Roll-Down" analysis, in that variance rolls down from the system output and assigns allowable variation to the inputs.) There is an overall amount of variation allowed at the system-level requirement (the gap) which is split into contributions at the individual part dimensions. Think of the overall variation as a pie that must be split into edible slices. Each slice then represents a portion of the pie that is allocated to an individual dimension so that the overall system-level variation is maintained within allowable limits.


In my next post, I will expound more on the Worst Case Analysis approach and introduce the concept of a transfer function. We will use transfer functions to calculate WCA extreme values, as well as for the other approaches still on the radar.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 105-111.

Sleeper, Andrew D., Design for Six Sigma Statistics (2006); McGraw-Hill, pp. 690-692, 704-707.

Tolerance Analysis using Worst Case Approach, continued (Part 3 / 13)

Written by: Karl - CB Expert, 8/16/2010 3:50:00 PM

In my last couple of posts, I provided an introduction to the topic of Tolerance Analysis, relaying its importance in doing upfront homework before making physical products. I demonstrated the WCA method for calculating extreme gap value possibilities. Implicit in the underlying calculations was a transfer function (or mathematical relationship) between the system inputs and the output, between the independent variables and the dependent variable. In order to describe the other two methods of allocating tolerances, it is necessary to define and understand the underlying transfer functions.

For the stacked block scenario (as in Figure 3-1), the system output of interest is the gap width between the right end of the stacked blocks and the right edge of the U-shaped cavity. The inputs are the individual widths of each block and the overall cavity width. Simple addition and subtraction of the inputs results in the calculated output. Such is the case with all one-dimensional stack equations, as can be seen with this transfer function:

Gap = W_cavity − (W_block1 + W_block2 + W_block3)

Do all dimensional transfer functions look like this? It depends. Certainly, the one-dimensional stacks do. But let us also see how this might work with dimensional calculations of a two-dimensional nature.

Consider the case of the overrunning or freewheel one-way clutch, a mechanism that allows a shaft or rotating component to rotate freely in one direction but not the other. Fishing reels use this mechanism to allow fish line to spool out freely in one direction but also allow the fish to be reeled in when rotating in the opposite direction. A typical cross-section of the one-way clutch is depicted in Figure 3-2. Primary components of the system are the hub and outer race (cylinder), connected by a common axis. They rotate freely with respect to each other, as wheels revolve around an axle. They also have internal contact via four cylindrical bearings that roll against both the outer surface of the hub and the inner surface of the race. The bearing contacts are kept in place with a series of springs that push the bearings into both the hub and race (the springs are also in contact with the hub). The end result is that the race rotates freely in one direction (counter-clockwise) but not the other (clockwise); thus the name.

The two system outputs of interest are the angle at which the bearings contact the outer race and the gap where the springs reside. Why are these two outputs important? To reduce shocks and jerks when contact is made during clockwise rotation, the design engineers must know where the rotation will stop and what potential variation exists around that stopping point. For the springs to have some nominal compression and maintain bearing position, a desired gap with some allowable variation needs to be defined. Neither of the two system outputs can be defined by a simple one-dimensional stack equation, in that there are common input variables affecting both outputs simultaneously. The approach in defining the transfer functions, however, is the same. It is done by following a "closed loop" of physical locations (like contact points) through the system back to the original point, as a bunch of "stacked" vectors. Those vectors are broken down into their Cartesian components, which are the equivalent of two one-dimensional stacks. A more notable difference between the one-way clutch transfer functions and those of the gap stack is their nonlinearities. Nonlinearities can introduce their unique influence to a transfer function. (But I digress.)

After some mathematical wrangling of the equations, the transfer functions for stop angle and spring gap are found. Both are nonlinear functions of the bearing diameters (dB1, dB2), the hub height (hHUB) and the cage inner diameter (DCAGE); the stop angle involves an inverse cosine.


Let us apply the WCA approach and determine the extreme range of the outputs. (Figure 3-4 displays nominal and tolerance values for the input dimensions in our transfer functions.) After some pondering of the mechanical realities (or perhaps tinkering with high/low input values in the transfer functions), it can be seen that when the bearing diameters (dB1, dB2), the hub height (hHUB) and the cage inner diameter (DCAGE) are at LMC limits, the contact angle is at its extreme maximum value. Vice versa, if those same component dimensions are at MMC limits, the contact angle is at its extreme minimum value. The same exercise on spring gap brings similar results. Based on the information we know, the WCA approach results in these predicted extreme values for the two outputs:

OUTPUT          | WCA Minimum Value | WCA Maximum Value
Stop Angle      | 27.380°           | 28.371°
Spring Gap (mm) | 6.631             | 7.312

What is the allowable variation of the stop angle and spring gap? And do these minimum and maximum values fit within the customer-driven requirements for allowable variation?

For design engineers, these requirements come on holy grails that glow in the dark; Specification Limits are inscribed on their marbled surfaces (cue background thunder). They are much like the vertical metal posts of a hockey goal crease. Any shots outside the limits do not ultimately satisfy the customer; they do not score a goal. These values will now be referred to as the Upper Specification Limit (USL) and the Lower Specification Limit (LSL). (Some outputs require only a USL or only an LSL; not so for these outputs.) This table provides the specification limit values:

OUTPUT          | LSL    | USL
Stop Angle      | 27.50° | 28.50°
Spring Gap (mm) | 6.50   | 7.50

Comparing the WCA outcomes against the LSL/USL definitions, it appears we are in trouble with the stop angle. The extreme minimum value for stop angle falls below the LSL. What can be done? The power of the transfer functions is that they allow the design engineer to play "what-ifs" with input values and ensure the extreme WCA values fall within the LSL/USL values. If done sufficiently early in the design phase (before the design "freezes"), the engineer has the choice of tinkering with either nominal values or their tolerances. Perhaps purchasing decisions to use off-the-shelf parts have locked in the nominal values to be considered but there is still leeway in changing the tolerances, in which case the tinkering is done only on the input tolerances. The opportunities for tinkering get fewer and fewer as product launch approaches, so strike while the iron is hot.

How does this approach compare to Root Sum Squares (RSS)? Before we explain RSS, it would be helpful to understand the basics of probability distributions, the properties of the normal distribution, and the nature of transfer functions and response surfaces (both linear and non-linear). So forgive me if I go off on a tangent in my next two posts. I promise I will come back to RSS after some brief digressions.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 111-117.

Probability Distributions in Tolerance Analysis (Part 4 / 13)

Written by: Karl - CB Expert, 8/19/2010 4:11:00 PM

With uncertainty and risk lurking around every corner, it is incumbent on us to account for them in our forward business projections, whether those predictions are financially-based or engineering-centric. The design engineer may express dimensional variance in terms of a tolerance around his nominal dimensions. But what does this mean? Does a simple range between upper and lower values accurately describe the variation?

It does not. The lower and upper bounds of a variable say nothing about the specific probabilities of occurrence within that range. To put it another way, there is no understanding of where those dimensional values will be more likely to occur within that range. That can have far-reaching consequences when trying to control the variation of a critical output. Therefore, world-class quality organizations capture variable behavior with probability distributions.

What is a probability distribution? It is a mathematical representation of the likelihood for potential variable values to occur randomly over other possible values. More commonly, it is represented by the Probability Density Function (PDF), a mathematical function that models the probability density in histogram format. Displayed in Figure 4-1 is a rounded, hump-like curve over a range of possible variable values (along the x-axis). The height of the curve at a given x-value indicates a greater probability density for that particular value. It provides insight as to what values are more likely to occur. All possibilities should occur sometime if we sampled values to infinity and beyond, but some will occur more frequently than others. By using PDFs to represent the variation, as opposed to simple lower and upper bounds, the design engineer has made a great leap forward in understanding how to control variation of critical outputs.

There are many probability distributions that can be selected to model an input's variation. Frequently, there is not enough data or subject-matter expertise to select the appropriate distribution before Tolerance Analysis is performed. In the manufacturing world (as well as in many natural processes), the Normal distribution is king, so baseline assumptions of normal behavior are typically made. (This can be dangerous in some circles but not so bad for dimensional ones.)

The Normal distribution is defined by two PDF parameters, the mean and the standard deviation, and is symmetrical (i.e., non-skewed). The mean represents the central value; the engineer strives to center the mean of his observed production input values on the design-intent nominal dimensions. This places half of the possible dimension values above the mean and half below it (symmetry). The other engineering goal would be to reduce (or control) the variation of the input by reducing the standard deviation, which is a "width" along the x-axis indicating the strength of the variance. Another cool property of the Normal distribution is that potential ranges along the x-axis have a cumulative probability associated with them, given how wide that range is in multiples of standard deviations (see Figure 4-2).

With this percentage-of-cumulative-probability concept associated with a range of variable values, we can estimate any range's cumulative probability using the two parameters' values, mean and standard deviation. (The key word here is "estimate." If the variable's behavior does not imitate normality, then we get further and further from the truth when using this estimation.) One particular range of interest is that which contains 99.73% of the variable's cumulative probability. That is the range contained within three standard deviations (or three "sigmas") on either side of the mean. If a supplier can control dimensional variation such that the 3-sigma range is within his spec limits (meaning 3 sigmas on both sides of the mean fall within the LSL & USL), he is at least 3-sigma "capable." As a baseline, we assume most suppliers are at least 3-sigma capable: they can keep their product performing within specs at least 99.73% of the time. (I will also use the 99.73%, 3-sigma range to estimate extreme values for RSS results, in order to have a decent "apples-to-apples" comparison against the WCA results.)
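As a quick check of that 99.73% figure, the cumulative probability within k sigmas of the mean can be read straight off the normal CDF. A small Python sketch (scipy is my choice of tool here, not something the original posts use):

```python
from scipy.stats import norm

# Cumulative probability of a normal variable falling within +/- k sigmas.
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print(f"within +/-{k} sigma: {p:.4%}")
# within +/-1 sigma: 68.2689%
# within +/-2 sigma: 95.4500%
# within +/-3 sigma: 99.7300%
```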

The magical world of RSS beckons. Yet we would be wise to understand the particulars of transfer functions and response surfaces and how they play into RSS and Robust Design. (Permit me another side-bar in the next post.)

Breyfogle, Forrest W., Implementing Six Sigma: Smarter Solutions Using Statistical Methods (2nd ed.) (2003); John Wiley & Sons, pp. 148-152.

Sleeper, Andrew D., Six Sigma Distribution Modeling (2006); McGraw-Hill, pp. 1-7.

Transfer Functions & Response Surfaces in Tolerance Analysis (Part 5 / 13)

Written by: Karl - CB Expert, 8/23/2010 3:36:00 PM


Transfer Functions (or Response Equations) are useful to understand the "wherefores" of your system outputs. The danger with a good many is that they are not accurate. ("All models are wrong, some are useful.") Thankfully, the very nature of Tolerance Analysis variables (dimensions) makes the models considered here concrete and accurate enough. We can tinker with their input values (both nominals and variance) and determine what quality levels may be achieved with our system when judged against spec limits. That is some powerful stuff!

But beyond tinkering with input values, they are useful for understanding the underlying nature of the engineered system and guiding future design decisions in a highly effective fashion. When the nature of the transfer function is understood, along with its design-peculiar nonlinearities or curvature, it opens or closes more avenues for design exploration to the engineer. Always a good thing to learn this up front.

Examine the transfer functions depicted as graphical curves in Figure 5-1. All of them represent one output (Y) on the vertical axis while the horizontal axis represents one input (x). The first two are the well-known linear and quadratic polynomial equations (line and parabola) while the last is an exponential. Many texts would label the second curve "non-linear," and since it does not resemble a line, there is a certain truth to that. But I much prefer calling the quadratic a linear combination of higher-order terms (squared, cubed, etc.) of the baseline linear terms: output proportional to input, without any exponentiation. Non-linear (when I use the word here without quotes) refers to just about everything else except the polynomials (linear combinations of first- and higher-order terms), like the exponential curve or the inverse cosine (such as the stop angle in the one-way clutch).

Let us assume that each of these curves represents a different design solution where a particular output response (Y0) is desired (see Figure 5-2). The plots are shown at the same scales for both inputs and outputs. Also, for simplicity, we will assume there is no variation in the design input value selected (even though we know that assumption is false). If the middle value of the input along the x-axis is selected for all three curves (x0), and that results in a corresponding output value that is the same for all three, which design is better than the rest?


Assuming there is no variation in the input, there is no difference in this output across all three designs. But if there is variation in the input, the answer is very different. The distinguishing aspect of the three curves is their slopes at the design point of interest. The slope of a curve can be expressed as the first derivative of the output with respect to the input of concern. The greater this value is at the design point of interest, the greater the slope and thus the greater the sensitivity to that input variable. For our design to be robust to the inherent variation in any input variable, we desire a lesser sensitivity with respect to that variable. This will become apparent with the RSS and Monte Carlo methodologies.
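To make the slope-to-sensitivity link concrete, here is a small Python sketch (my own illustration, not from the original post) that pushes the same normally distributed input through three hypothetical linear transfer functions of increasing slope:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.normal(loc=10.0, scale=0.1, size=100_000)  # same input variation throughout

# Three hypothetical designs, identical except for slope (sensitivity).
for slope in (0.5, 1.0, 2.0):
    y = slope * x  # linear transfer function Y = slope * x
    print(f"slope {slope}: output stdev ~ {y.std():.3f}")
# Output stdev scales with |slope| (~0.050, ~0.100, ~0.200):
# the flattest curve is the most robust to input variation.
```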

Now let us examine a transfer function with two inputs to a singular output (Figure 5-3). Instead of representing the function with a line or curve on two axes (input to output), it is now represented as a surface in three-dimensional space, undulating at various heights over a two-dimensional plane. The flat, level surface at the bottom represents the two inputs whose values must be "frozen" in the design phase. Because of this representation, these are usually termed Response Surfaces. Note that if a design decision has been made to "freeze" the nominal value of one input variable (in effect, "slicing" the surface while holding that input constant), the resulting slice of the Response Surface is back to a one-output-to-one-input line or curve. Thus, we could take "slices" of the Response Surface and plot those as lines or curves. Figure 5-4 shows the results when we hold the first input (x1) constant at three different values and when we hold the second input (x2) constant at three different values. This produces six curves. Note that by holding x1 constant, all three lines produced have the same slope over the entire range of x2 values of concern. This is true despite the heights of the curves being different (greater nominal response when x1 is held constant at greater values). The sensitivity of the output to x2 is the same no matter what values of x1 or x2 are chosen.

The same cannot be said of the sensitivity of the output to x1. When x2 is held constant, the "slices" produced are quadratic, second-order polynomials. So our output is very sensitive to the design decision made on the first input, x1. With no other considerations in mind (and keep in mind there probably are others), the design engineer may be inclined to select the x1 input value that results in zero slope (as at the top of the parabolas shown). This minimizes sensitivity with respect to that variable. Doing this across all variables should result in a Robust Design (robust as much as possible to the input variations).


I have laid some groundwork for understanding the Root Sum Squares approach to performing Tolerance Analysis. Stay tuned over the next couple of posts as we explore the magical world of RSS.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 149-159.

Myers, R.H. and Montgomery, D.C., Response Surface Methodology: Process and Product Optimization Using Designed Experiments (1995); John Wiley & Sons, pp. 208-214.

Tolerance Analysis using Root Sum Squares Approach (Part 6 / 13)

Written by: Karl - CB Expert, 8/30/2010 5:00:00 PM

The Root Sum Squares (RSS) approach to Tolerance Analysis has a solid foundation in capturing the effects of variation. In the days of the golden abacus, there were no super-fast processors willing to calculate the multiple output possibilities in a matter of seconds (as can be done with Monte Carlo simulators on our laptops). It has its merits and faults but is generally a good approach to predicting output variation when the responses are fairly linear and input variation approaches normality. That is the case for plenty of Tolerance Analysis dimensional responses, so we will utilize this method on our non-linear case of the one-way clutch.

The outputs of the RSS calculations are a predicted mean and a predicted standard deviation. The method requires a mathematical expression of the response:

Y = f(x_1, x_2, ..., x_n)

Using Taylor Series-based approximations of the underlying response equation, it can be shown that the response variance is equal to the sum product of the individual input variances (σ_xi²) and their sensitivities (1st derivatives) squared:

σ_Y² = Σ_i (∂Y/∂x_i)² · σ_xi²

Taking the square root yields the first of the RSS equations, which predicts output variation in the form of a standard deviation:

σ_Y = √( Σ_i (∂Y/∂x_i)² · σ_xi² )


The other RSS equation predicts the mean value of the response. Intuitively, one would expect that plugging the means of the inputs into the transfer function yields the output mean. Close, but with the exception of an additional term dependent on the 2nd derivatives, as follows:

μ_Y ≈ f(μ_x1, ..., μ_xn) + ½ · Σ_i (∂²Y/∂x_i²) · σ_xi²

(An important side note: the RSS derivations are made with many simplifying assumptions in which higher-order terms are discarded, as is done with most Taylor Series work. This can get you in trouble if the transfer function displays non-linearities or curvature, but the method is generally well respected.)
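Once the transfer function is in code, both RSS equations can be mechanized without hand calculus by approximating the derivatives numerically. Below is a hedged Python sketch, assuming 3-sigma-capable inputs (stdev = tolerance / 3) as these posts do; the function and all names are illustrative, with the stacked-block gap as the example:

```python
import math

def rss_mean_std(f, nominals, tolerances, sigma_level=3.0, h=1e-4):
    """RSS prediction of output mean and stdev for Y = f([x1, ..., xn]).

    The 1st derivatives (sensitivities) and 2nd derivatives are
    approximated with central finite differences of step h.
    """
    sigmas = [t / sigma_level for t in tolerances]  # 3-sigma-capable suppliers
    var_y, mean_shift = 0.0, 0.0
    for i, (mu, s) in enumerate(zip(nominals, sigmas)):
        up = list(nominals); up[i] = mu + h
        dn = list(nominals); dn[i] = mu - h
        d1 = (f(up) - f(dn)) / (2 * h)                    # dY/dxi
        d2 = (f(up) - 2 * f(nominals) + f(dn)) / (h * h)  # d2Y/dxi2
        var_y += (d1 ** 2) * (s ** 2)
        mean_shift += 0.5 * d2 * (s ** 2)
    return f(nominals) + mean_shift, math.sqrt(var_y)

# Example: the stacked-block gap, cavity width minus three block widths.
gap = lambda x: x[0] - (x[1] + x[2] + x[3])
mean, std = rss_mean_std(gap, [7.50, 2.00, 2.00, 2.00], [0.10, 0.05, 0.05, 0.05])
print(f"RSS gap: mean = {mean:.3f} in, stdev = {std:.4f} in")  # ~1.500, ~0.0441
```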

To calculate these quantities by hand, the engineer needs to assume values for the inputs (both means and standard deviations) and to perform differential calculus on the underlying response equations. Not a task for the uninitiated. For this reason alone, many engineers are apt to throw their hands up in frustration. Persist and ye shall be rewarded, I say. Let us apply these RSS equations to the two responses of the one-way clutch.

Recall the original non-linear transfer functions for stop angle and spring gap from the one-way clutch (introduced in Part 3).

Up front, we recognize that, in the closed-form solution approach of RSS, the variation contributions of both bearing diameters (all being bought from the same supplier) will be equal. For simplification purposes, to reduce the number of inputs (independent variables), we will assume them to be one and the same variable. (This will not be the case in the Monte Carlo analysis.) After this simplification and some nasty handiwork, we end up with the first-derivative (sensitivity) equations for each output.


Have your eyes sufficiently glazed over? Those were only the first derivatives of this three-inputs-to-two-outputs system. I spare you the sight of the second derivatives. (They will be available within the Excel file attached to my next post.)

The WCA approach used the desired nominal values to center the input extreme ranges, so it makes sense to use the desired nominals as the input means. But what about the input standard deviations also required to complete the RSS calculations? Assuming there is no data to define the appropriate variation, we resort to the tried-and-true assumption that your suppliers must be at least 3-sigma capable. Thus, one side of the tolerance would represent three standard deviations. Three on both sides of the mean capture 99.73% of the cumulative possibilities. We therefore assume the standard deviation is equal to one-third of the tolerance on either side of the mean.

In my next post, I finish the RSS application to the one-way clutch and compare the results to the WCA results. Please stay tuned!

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 126-147.

Hahn, G.J. and Shapiro, S.S., Statistical Models in Engineering (1994); John Wiley & Sons, pp. 252-255.

Sleeper, Andrew D., Design for Six Sigma Statistics (2006); McGraw-Hill, pp. 716-730.


Tolerance Analysis using Root Sum Squares Approach, continued (Part 7 / 13)

Written by: Karl - CB Expert, 9/1/2010 6:07:00 PM

As stated before, the first derivative of the transfer function with respect to a particular input quantifies how sensitive the output is to that input. However, it is important to recognize that Sensitivity does not equal Sensitivity Contribution. To assign a percentage variation contribution to any one input, one must look towards the RSS output variance (σ_Y²) equation:

σ_Y² = Σ_i (∂Y/∂x_i)² · σ_xi²

Note that the variance is the sum product of the individual input variances (σ_xi²) times their Sensitivities (1st derivatives) squared. Those summed terms represent the entirety of each input's variation contribution. Therefore, it makes sense to divide each individual term (the product of variance and sensitivity squared) by the overall variance; running the calculation this way ensures the total always adds up to 100%. (By the way: the results are included in an Excel spreadsheet titled "One-Way Clutch with RSS" in our file archives, available for download to registered users who have logged in.)
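In code, the contribution split falls straight out of the variance equation. A small illustrative Python sketch (the names are mine; it reuses the finite-difference idea from the RSS sketch above):

```python
def sensitivity_contributions(f, nominals, sigmas, h=1e-4):
    """Percent variation contribution of each input under the RSS model."""
    terms = []
    for i, mu in enumerate(nominals):
        up = list(nominals); up[i] = mu + h
        dn = list(nominals); dn[i] = mu - h
        d1 = (f(up) - f(dn)) / (2 * h)          # sensitivity dY/dxi
        terms.append((d1 ** 2) * (sigmas[i] ** 2))
    total = sum(terms)                           # this is the RSS variance
    return [t / total for t in terms]            # shares always sum to 100%

# Stacked-block gap again; the cavity dominates because its tolerance is largest.
gap = lambda x: x[0] - (x[1] + x[2] + x[3])
shares = sensitivity_contributions(gap, [7.50, 2.00, 2.00, 2.00],
                                   [0.10 / 3, 0.05 / 3, 0.05 / 3, 0.05 / 3])
print([f"{s:.1%}" for s in shares])  # -> ['57.1%', '14.3%', '14.3%', '14.3%']
```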

After the dust settles, the end results for the two output means and standard deviations are:

OUTPUT          | RSS Mean | RSS Standard Deviation
Stop Angle      | 27.881°  | 0.0019°
Spring Gap (mm) | 6.977    | 0.0750

We can use these mean and standard deviation values to estimate the 99.73% range and use these extreme range values as an "apples-to-apples" comparison against the WCA results. Those values, standing at three standard deviations on either side of the predicted mean, are:

OUTPUT RSS "Minimum Value" RSS "Maximum Value"Stop Angle 27.380 ˚ 28.371 ˚Spring Gap (mm) 6.631 7.312


How do these extreme results compare to the WCA values? And how do each of them compare to the LSL & USL, respectively? Figure 7-1 plots the three pairs of values to make visual comparisons.

RSS predicts both extreme values for stop angle to be within the spec limits; WCA does not. Not only does WCA place both of its predicted extreme values further out than RSS does, one of them is lower than the LSL (which is a bad thing). Why? The reason is that RSS accurately accounts for the joint probability of two or more input values occurring simultaneously, while WCA does not. In the WCA world, any input value has equal merit to any other, as long as it is within the bounds of the tolerance. It is as if a uniform probability distribution has been used to describe the probability of occurrence. RSS says "not so." It accounts for the joint probability that extreme combinations of values are much less likely to occur than the central-tendency combinations. Thus, RSS is much less conservative than WCA and also more accurate.

Another big distinction between the two approaches is that RSS provides sensitivity and sensitivity contribution values for each input while WCA does not. Sensitivities and contributions allow the engineer to quantify which input variables are variation drivers (and which ones are not). Thus, a game plan can be devised to reduce or control variation on the drivers that matter (and eliminate those that do not from any long-winded control plan). The sensitivity information is highly valuable in directing design activities focused on variation to the areas that need it. It makes design engineering that much more efficient.

A summary of pros and cons when comparing WCA against RSS:

APPROACH: Worst Case Analysis

PROS:
• Lickety-split calculations based on two sets of extreme input values
• Easy to understand
• Accounts for variation extremes

CONS:
• Very unlikely the WCA values will occur in reality
• Very conservative in nature
• "What-if" experiments may take more time to find acceptable design solutions

APPROACH: Root Sum Squares

PROS:
• Provides estimation of mean and standard deviation
• More accurate and less conservative in predicting variation
• Provides Sensitivities and % Contributions to enable efficient design direction

CONS:
• Not easy to understand
• Requires math & calculus skills
• Relies on approximations that are violated when either:
  o Input probabilities are non-normal and/or skewed
  o Transfer function is non-linear

Perhaps RSS is the clear winner? Not if you lack a penchant for doing calculus. Not if your transfer function is highly non-linear. Before we progress to Monte Carlo analysis, let us step back and develop a firm understanding of the RSS equations. In my next post, I will illustrate the RSS properties in graphical format, because pictures are worth a thousand words.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 126-147.

Hahn, G.J. and Shapiro, S.S., Statistical Models in Engineering (1994); John Wiley & Sons, pp. 252-255.

Sleeper, Andrew D., Design for Six Sigma Statistics (2006); McGraw-Hill, pp. 716-730.


Root Sum Squares Explained Graphically (Part 8 / 13)

Written by: Karl - CB Expert, 9/21/2010 2:42:00 PM

A few posts ago, I explained the nature of transfer functions and response surfaces and how they impact variational studies when non-linearities are concerned. Now that we have the context of the RSS equations in hand, let us examine the behavior of transfer functions more thoroughly.

Sensitivities are simply the slope values (1st derivatives) of the transfer functions. If I take "slices" of a two-inputs-to-one-output response surface, I can view the sensitivities (slopes) along those slices. But why are steeper slopes considered more sensitive?

Examine Figure 8-1. It shows three mathematical representations of a one-input-to-one-output transfer function. These curves (all straight lines) represent three different design solutions under consideration. When the designer selects the mid-point of the horizontal axis as his design input value (x0), all three transfer functions provide the same output (Y0) response value along the vertical axis. What makes one better than the others?

Now examine Figure 8-2. We have added a variational component to the input values in the form of a normal curve along the horizontal axes. Remember that the height of the PDF shown indicates some values are more likely to occur than others. The variability shown in the input can be transferred to the output variation through the transfer function curve. For the first line, this results in a normal curve on the response with a known standard deviation (as shown on the vertical axis). For the second line, this also results in a normal curve, but one that has a wider range of variation (a larger standard deviation). For the third and steepest line, we get another normal curve with an even greater range of variation. The first line (or design solution under consideration) produces less variation (less noise) on the output when the input variation is the same for all three scenarios. It is a more Robust Design than the other two (less sensitive to noise in the input variations).


What if there are multiple inputs to our one output? Does one input's variation wreak more havoc than another input's variation? This is the question that Sensitivity Contribution attempts to answer. Consider the response surface in Figure 8-3, a completely flat surface that is angled to the horizontal plane (not level). Figure 8-4 displays the slices we cut while holding x1 constant at different values, and the same for x2. Note that the slopes (which are constant and do not change over the input range of interest) are of a lesser magnitude when x1 is held constant versus when x2 is held constant. That means the response is less sensitive to variation in the second input (x2) than the first input (x1). (As a side note: the steeper slices, when x2 is held constant, have a negative sign indicating a downward slope.)

If we apply the same variation to both x1 and x2 (see Figure 8-5), it is obvious that the first input causes greater output variation. Therefore, it has a greater sensitivity contribution than that of the second input.

We can flip the tables, however. What if the variation of x2 were much greater than that of x1? (See Figure 8-6.) It is possible that, after the variation of x1 and x2 has been "transferred" through the slopes, the corresponding output variation due to x2 is greater than that from x1. Now the second input (x2) has a greater sensitivity contribution than the first input (x1), even though the first input has the greater sensitivity. By examining the RSS equation for output variance, it can be seen why this is the case:

σ_Y² = Σ_i (∂Y/∂x_i)² · σ_xi²


If either the slope (1st derivative) seen by an input or the variation applied through that slope (the input standard deviation) is increased, that input's sensitivity contribution goes up while all the others go down. The pie must still total 100%.

So we now know how much pie there is to allocate to the individual input tolerances, based on the sensitivities and input variations. Let us stop eating pie for the moment and look at the other RSS equation, that of the predicted output mean. (RSS is really easy to understand if you look at it graphically.)

Root Sum Squares Explained Graphically, continued (Part 9 / 13)

Written by: Karl - CB Expert, 9/24/2010 10:00:00 AM

The other RSS equation, that of the predicted output mean, has a term dependent on the 2nd derivatives that is initially non-intuitive:

μ_Y ≈ f(μ_x1, ..., μ_xn) + ½ · Σ_i (∂²Y/∂x_i²) · σ_xi²

Why is that second term there?

Examine the two curves in Figure 9-1. The first is a line and the second an exponential. The 2nd derivative of a curve captures the changing-slope behavior of the curve as one moves left-to-right. In the first curve, the slope does not change across the range of input values displayed. Its slope (1st derivative) remains constant, so its 2nd derivative is zero. But the slope of the exponential does change across the input value range. The slope initially starts as a large value (steep) and then gradually levels out (not so steep). Its 2nd derivative is a negative value, as the slope goes down (as opposed to going up) across the value range.

Applying the same normal input variation to both curves results in different output distributions. The exponential distorts the normal curve when projecting output variation (see Figure 9-2). Half of the input variation is above the normal curve's center (mean) while the other half is below. When we project and transfer this variation through the line, note that the slope above and below the center point of transfer is equal. This will transfer half of the variation through the slope above the point of interest and half below. Thus the straight line projects a normal curve around the expected nominal response. The output mean is not shifted.

When we project and transfer the normal input variation through the exponential curve, something different happens. Since the slope changes around the point of interest (the slope is greater over the lower half of the normal input variation and lesser over the upper half), it has the effect of distorting the "normal" output curve. The lower half of the input variation drives a wider range of output variation than the upper half does; it has a longer tail. This has the effect of skewing the distribution and "shifting" the mean output response downwards. By placing the 2nd derivative value (which is negative) in the RSS output mean equation, the same effect is captured mathematically. A negative 2nd derivative value "shifts" the mean downward, as the exponential curve does to our output variation.

Now with visual concepts and understanding around RSS planted in our minds, let us turn the spotlight onthe topic of Monte Carlo Analysis.


Introduction to Monte Carlo Analysis (Part 10 / 13)

Written by: Karl - CB Expert, 10/25/2010 6:26:00 PM

In past blogs, I have waxed eloquent about two traditional methods of performing Tolerance Analysis, Worst Case Analysis and Root Sum Squares. With the advent of ever-more-powerful processors and the increasing importance engineering organizations place on transfer functions, the next logical step is to use these resources and predict system variation with Monte Carlo Analysis.

The name Monte Carlo is no accident. The methodology uses the random behavior associated with underlying probability distributions to sample from those PDFs. Thus, it resembles gambling, where players predict outcomes in games of random chance. At the time of its "invention," Monte Carlo in Monaco was considered the gambling capital of the world, thus the name. (Perhaps if it had been "invented" later, it would've been called Las Vegas or Macau?)

The methodology is simple in nature but has a far-reaching impact in the prediction world. In the RSS methodology, the 1st and 2nd derivatives of the transfer functions have primary influence over the system output means and standard deviations (when variation is applied to the inputs). But there are no mathematical approximations to capture system variation in Monte Carlo analysis. Instead, it uses random sampling methods on defined input PDFs and applies those sampled values to the transfer functions. (Figure 10-1 displays a potential sampling from an input's Normal distribution, in histogram format, overlaid on the normal curve itself.) The "math" is much simpler, and we let our laptops crunch numbers and predict outcomes.

This sampling is done many times over, usually thousands of times. (Each combination of samples is considered a trial. All trials run together constitute one simulation.) Every time a sampling is done from all the input PDFs, those single-point values are applied to the transfer function and the thousands of possible output values are recorded. They are typically displayed as histograms themselves.
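In a spreadsheet tool this loop is hidden behind the simulation engine, but in code it is only a few lines. A hedged Python sketch of the trial/simulation idea, using numpy and the stacked-block gap as the transfer function (my choice of example, not the post's spreadsheet model):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_trials = 100_000  # each trial samples every input PDF once

# 3-sigma-capable inputs: stdev = tolerance / 3.
cavity = rng.normal(7.50, 0.10 / 3, n_trials)
blocks = rng.normal(2.00, 0.05 / 3, (3, n_trials))  # three independent block widths

gap = cavity - blocks.sum(axis=0)  # transfer function applied to each trial

print(f"mean = {gap.mean():.3f}, stdev = {gap.std():.4f}")
# Percentiles bracketing 99.73% of trials (the 3-sigma-equivalent range):
print("99.73% range:", np.percentile(gap, [0.135, 99.865]).round(3))
```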

Returning to the RSS graphical concepts covered in the last couple of posts, let us examine this approach using an exponential transfer function (see Figure 10-2). We apply a normal PDF to the input (horizontal) axis, which is then sampled many times over (many trials displayed as a histogram of sampled values). Applying these numerous sampled values to the exponential curve, the corresponding output values are calculated and then displayed, also in histogram format, along the output (vertical) axis. These thousands (or millions) of trials allow us to capture the output variation visually, perhaps even approximate its behavior with a PDF that "fits" the data. We can also perform statistical analysis on the output values and predict the quality of our output variation. Perhaps some values fall outside the LSL & USL limits? Their probability of occurrence quantifies the quality of the system output. Voila! No need to scratch out 1st and 2nd derivatives as in RSS. All methods require transfer functions, but with Monte Carlo analysis, thorny calculus falls by the wayside.

Now let us apply this methodology to the one-way clutch example. Thanks for staying tuned!

Hambleton, Lynne, Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices (2008); Prentice Hall, pp. 431-432.

Sleeper, Andrew D., Six Sigma Distribution Modeling (2006); McGraw-Hill, pp. 115-118.


Tolerance Analysis using Monte Carlo (Part 11 / 13)

Written by: Karl - CB Expert, 10/28/2010 11:02:00 AM

How do Monte Carlo analysis results differ from those derived via the WCA or RSS methodologies? Let us return to the one-way clutch example and provide a practical comparison in terms of a non-linear response. From the previous posts, we recall that there are two system outputs of interest: stop angle and spring gap. These outputs are described mathematically with response equations, as transfer functions of the inputs.

What options are there to run Monte Carlo (MC) analysis on our laptops? There are quite a few (a topic that invites a separate post). For the everyday user, it makes good sense to utilize one that works with existing spreadsheet logic, as in an Excel file. There are three movers and shakers in the Monte Carlo spreadsheet world that one might consider: Crystal Ball, ModelRisk and @Risk. (If you examine our offerings, www.crystalballservices.com, you will find we sell the first two.) For the purposes of this series, we will use Crystal Ball (CB) but will return to ModelRisk in the near future.

As with the RSS approach, it makes sense to enter the transfer function logic into an Excel spreadsheet where equations define the system outputs as linked to the inputs located in other cells. (Please refer to the "One-Way Clutch with Monte Carlo" spreadsheet model in our model archives. Figure 11-1 contains a snapshot of this spreadsheet.) The inputs and outputs are arranged in vertical order. Each of the inputs has a nominal and a tolerance value associated with it, along with a desired Sigma level. As stated in previous posts, it will be assumed that the suppliers of these inputs are 3-sigma capable and that their behaviors are normally distributed. Going forward, this means the standard deviations are assumed to be one-third of the tolerance value.

The noticeable difference between this spreadsheet and the previous one (focused only on WCA and RSS results) is the coloring of the cells in the column titled "Values." The input "values" are colored green while the output "values" are colored blue. In the Crystal Ball (CB) world, these colors indicate that a probabilistic behavior has been assigned to these variables. If we click on one of the green cells and then the "Define Assumption" button in the CB ribbon bar, a dialog box opens to reveal which distribution was defined for that input. The user will find that they are all normal distributions and that the parameters (mean and stdev) are cell-referenced to the adjacent "Nominal," "Tolerance" and "Sigma" cells within the same row. (Note the standard deviation is calculated as the tolerance divided by the sigma. If this is not visible in your CB windows, hit CTRL-~.)

We can run the simulation as is. But before we do, an important MC modeling distinction should be made. In the RSS derivations, we assumed that the contributive natures of both bearings (bought in batches from the same supplier) were equal, and we therefore modeled both diameters as one singular variable (dB1) in the output transfer functions. However, this simplification should not be used in the transfer functions for the Monte Carlo analysis.

Re-examine the case of the classic stack-up tolerance example from my first few posts (see Figure 11-2). Note that for the transfer function, I have distinguished each block width as a separate input variable. Had I assumed the same input variable for all three (so that the output gap transfer function is the cavity width minus three times a singular block width), I would have invited disaster. Implicit in the incorrect transfer function is an assumption that all three block widths vary in synchronicity. If the block width is randomly chosen as greater than the nominal by the MC simulation, all three block widths would be artificially larger than their nominals. Obviously, this is not what would happen in reality. The simulation results would overpredict variation. Therefore, the MC modeler must assign different variables to all block widths, even though the probability distributions that define their variation are one and the same.
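The overprediction is easy to demonstrate numerically. Continuing the illustrative numpy sketch from Part 10, compare sampling one shared block width against three independent widths:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000
cavity = rng.normal(7.50, 0.10 / 3, n)

# Incorrect: one sampled width reused for all three blocks (perfect correlation).
shared = rng.normal(2.00, 0.05 / 3, n)
gap_wrong = cavity - 3 * shared

# Correct: three independently sampled block widths.
blocks = rng.normal(2.00, 0.05 / 3, (3, n))
gap_right = cavity - blocks.sum(axis=0)

print(f"stdev with shared width:       {gap_wrong.std():.4f}")  # ~0.0601, inflated
print(f"stdev with independent widths: {gap_right.std():.4f}")  # ~0.0441
```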

To run the simulation, the user simply clicks the "Start" button in the CB ribbon bar. Immediately obvious should be the CB Forecast windows that pop up in the forefront to display the simulation as it runs. The visual result is output histograms that show the variation with respect to the spec limits (LSL & USL), as seen in Figure 11-3. CB can also place particular output result values (such as mean, stdev, capability metrics and percentiles) into defined cells (Cells I12:AK13) via Forecast options.

Here is the moment of truth. How did MC analysis compare to the WCA and RSS results? For an even better "apples-to-apples" comparison, the analyst may decide to extract the percentiles associated with the 99.73% range (3 sigmas to either side), rather than calculate them from the mean and standard deviation. If there is any non-normal behavior associated with the output, MC analysis can extract the values corresponding to the 3-sigma range, providing more accuracy on the extreme range despite the non-normality. The same cannot be said of either WCA or RSS. ("Min Values" and "Max Values" for all three methods are displayed in Columns J & K.)

Figure 11-4 summarizes the range comparisons on both system outputs for the three methods. Surprisingly enough, MC provides a wider range of stop angle extreme values than RSS, but less wide than the conservative WCA approach. (RSS and MC agree pretty much on the spring gap range, while both are still less than the WCA range.) The reason for the difference in stop angle extreme ranges is a difference in predicted standard deviations. The MC method predicts an output standard deviation that is two orders of magnitude greater than the RSS output standard deviation. (Your results may vary based on the number of trials and random seed values but should be approximately the same.) The approximations upon which RSS is based can sometimes be too relaxed, especially if linearity and normality assumptions are violated. They can be too liberal and paint a rosy picture for dimensional quality predictions.

Is that the only reason to prefer MC analysis over RSS? Follow my next post as we revisit the topic of sensitivities and sensitivity contributions as they apply in the MC world.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 163-167.


Tolerance Analysis using Monte Carlo, continued (Part 12 / 13)

Written by: Karl - CB Expert, 11/1/2010 12:33:00 PM

In the case of the one-way clutch example, the current MC quality predictions for the system outputs provide us with approximately 3- and 6-sigma capabilities (Z-scores). What if a sigma score of three is not good enough? What does the design engineer do to the input standard deviations to comply with a 6-sigma directive?
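As a reminder of what those sigma scores mean: a capability Z-score is the distance from the predicted output mean to the nearest spec limit, measured in output standard deviations. A small Python sketch using the RSS spring gap numbers quoted in Part 7 (the helper name is mine):

```python
def z_score(mean, std, lsl, usl):
    """Capability Z-score: sigmas from the mean to the nearest spec limit."""
    return min(usl - mean, mean - lsl) / std

# Spring gap: RSS-predicted mean/stdev against its spec limits.
print(f"Z = {z_score(mean=6.977, std=0.0750, lsl=6.50, usl=7.50):.2f}")
# -> Z = 6.36, roughly the 6-sigma capability mentioned above
```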

Both RSS and MC analysis provide information on Sensitivity Contributions. (WCA does not.) These contributions are reported as percentages whose total from all input contributions must equal unity (100%). If an engineer knows that some inputs contribute more (or even way more) to the output variation than others, would that be of use? Absolutely. A plan can then be formulated to attack those inputs driving quality (if possible) and pay less scrutiny to the others.

Figure 12-1 displays the Sensitivity Charts produced from a CB simulation. From these charts we can visually gauge the percentage contributions. Note that the sum of the absolute values of the contributions equals 100%. At the same time, some input contributions are displayed as negative while others are positive. This does not mean the percentages are negative; rather, they carry the sign (plus or minus) of the associated input sensitivity (also known as the slope or 1st derivative). This relays other design information to the engineer: namely, how the output mean would shift (up or down, negative or positive) if an input mean shifted. These values are to be compared with the RSS sensitivity contribution results. Are MC and RSS in agreement on the predicted input contributions?

For the one-way clutch example, the two methods agree pretty well, as shown in the following two tables:

STOP ANGLE CONTRIBUTIONS   Hub Critical Height   Bearing Diameter   Cage Inner Diameter
RSS % Prediction           55.37%                1.36%              43.26%
MC % Prediction            56.1%                 0.6%               43.2%


SPRING GAP CONTRIBUTIONS   Hub Critical Height   Bearing Diameter   Cage Inner Diameter
RSS % Prediction           42.94%                2.11%              54.95%
MC % Prediction            43.9%                 1.0%               55.9%

(As an aside, note that I have listed only one total contribution associated with Bearing Diameter, whereas the Sensitivity Charts indicate there are two, one for each separate bearing. In order to compare with our previous RSS results, I have summed the two MC bearing contributions. As always, your results may vary depending on the number of trials and the seed value.)
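For the curious, here is one way to approximate such contribution percentages from raw simulation data. This is a hedged sketch, not CB's exact internals: square each input's rank correlation with the output, normalize so the absolute values sum to 100%, and attach the correlation's sign.

```python
import numpy as np
from scipy.stats import spearmanr

# Approximate contribution-to-variance in the spirit of CB's Sensitivity
# Charts: squared rank correlations, normalized to 100%, sign preserved.
def contributions(inputs: dict, output: np.ndarray) -> dict:
    rho = {name: spearmanr(x, output).correlation for name, x in inputs.items()}
    total = sum(r**2 for r in rho.values())
    return {name: np.sign(r) * 100 * r**2 / total for name, r in rho.items()}

# Usage with the block-stack samples from the earlier snippet:
# print(contributions({"cavity": cavity, "b1": b1, "b2": b2, "b3": b3}, gap_correct))
```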

Now the design engineer's mission is clear. The Hub Critical Height wins out over Cage Inner Diameter in contribution percentage. What can be done to control its variation more tightly than currently estimated? What if the machining process that makes the hub cannot provide any tighter variation? In that case, the engineer focuses on what can be done about Cage Inner Diameter, which is a purchased part. But placing any more effort on tightening Bearing Diameter variation would be a waste of time. Sensitivity Analysis in a Tolerance context has great power: it shows the way forward to addressing a quality problem.

Let us summarize the three methods and their pros and cons in the post to follow.

Creveling, Clyde M., Tolerance Design: A Handbook for Developing Optimal Specifications (1997); Addison Wesley Longman, pp. 172-175.
Sleeper, Andrew D., Six Sigma Distribution Modeling (2006); McGraw-Hill, pp. 122-124.


Tolerance Analysis Summary (Part 13 / 13)

Written by:Karl - CB Expert 11/4/2010 2:01:00 PM

Tolerance Analysis focuses on the dimensional aspects of manufactured physical products and the process of determining appropriate tolerances (read: allowable variations) so that things fit together and work the way they are supposed to. When done properly in conjunction with known manufacturing capabilities, products feel neither sloppy nor inappropriately "tight" (i.e., requiring higher operating efforts) to the customer. The manufacturer also minimizes the no-build scenario and spends less time (and money) in assembly, where workers would otherwise be trying to force sloppy parts together. Defects are less frequent. There is a wealth of benefits too numerous to list but obvious nonetheless. Let us measure twice and cut once.

The three primary methodologies to perform Tolerance Analysis as described in these posts are:

• Worst Case Analysis (WCA)
• Root Sum Squares (RSS)
• Monte Carlo (MC) Analysis

WCA relies on calculating worst-case or extreme possibilities, given a range of potential individual values (defined by nominals and tolerances). It is straightforward in its approach by asking the question: "What is the worst that can happen?" It is simple to understand in mathematical terms. However, the extreme values it calculates have no associated probability. Because of its conservative nature, there is a good likelihood that those extremes will rarely occur. (In many cases, very rarely.)
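As a minimal sketch of that arithmetic (reusing the hypothetical cavity-and-blocks stack from the earlier snippets, not the article's clutch values), the WCA extremes come from pushing every input to its worst-case limit:

```python
# Worst Case Analysis for gap = cavity - (b1 + b2 + b3), hypothetical values.
cavity_nom, cavity_tol = 7.000, 0.030
block_nom, block_tol = 2.000, 0.010

# Smallest gap: smallest cavity with all three blocks at their largest.
gap_min = (cavity_nom - cavity_tol) - 3 * (block_nom + block_tol)
# Largest gap: largest cavity with all three blocks at their smallest.
gap_max = (cavity_nom + cavity_tol) - 3 * (block_nom - block_tol)
print(gap_min, gap_max)  # 0.940, 1.060
```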

RSS relies on mathematical approximations that are generally good for Dimensional Tolerance Analysis, given the usually linear nature of the transfer functions. Unlike WCA, it provides information on predicted output means and standard deviations, and on variation contributions from isolated inputs. Both of these are invaluable to the design engineer. Now we have probabilities associated with certain output values occurring, and we know which input variations to attack if our product has not attained the desired quality standards.
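For a linear stack, the RSS math reduces to a one-liner: the output standard deviation is the root sum of squares of each sensitivity times its input standard deviation. A sketch on the same hypothetical stack:

```python
import math

# RSS for gap = cavity - (b1 + b2 + b3): the sensitivities (1st derivatives)
# are +1 for the cavity and -1 for each block; tolerances taken as 3-sigma.
sigma_cavity = 0.030 / 3
sigma_block = 0.010 / 3

sensitivities = [1.0, -1.0, -1.0, -1.0]
sigmas = [sigma_cavity, sigma_block, sigma_block, sigma_block]

sigma_gap = math.sqrt(sum((a * s) ** 2 for a, s in zip(sensitivities, sigmas)))
print(sigma_gap)                                      # ~0.0115
print(1.000 - 3 * sigma_gap, 1.000 + 3 * sigma_gap)   # 3-sigma range about nominal
```

Note that the resulting 3-sigma range (~0.965 to ~1.035 on these made-up numbers) is noticeably narrower than the WCA extremes above, which is the "less conservative" behavior described here.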

MC Simulation relies on a defined transfer function (as does RSS). However, instead of using nasty calculus to approximate means and standard deviations, it simulates the response variation by sampling values from the input distributions and applying them to the transfer function many times. The result is also an approximation of the true variation behavior (dependent on seed value and number of trials), but it is a better approximation than RSS. Better in the sense that it does not care whether there is curvature or non-linearity in the transfer function, and it does not care whether the input variations are non-normal. RSS provides less accurate predictions when those conditions occur.
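To see why curvature matters, consider this deliberately non-linear toy case (my own illustration, not from the article): a first-order linearization of Y = X² misses the mean shift that MC sampling picks up automatically.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(1.0, 0.5, 1_000_000)   # a deliberately wide input distribution
y = x**2                               # a non-linear transfer function

# First-order (RSS-style) linearization about the input mean:
mean_lin = 1.0**2                      # predicts mean 1.0
sigma_lin = abs(2 * 1.0) * 0.5         # predicts sigma 1.0

print(y.mean(), y.std())   # ~1.25 and ~1.06: MC captures the mean shift
print(mean_lin, sigma_lin) # 1.0 and 1.0: the linearization misses it
```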

Here is a table summarizing Pros and Cons of the three approaches:

APPROACH: Worst Case Analysis (WCA)

PROS:
• Lickety-split calculations based on two sets of extreme input values
• Easy to understand
• Accounts for variation extremes

CONS:
• Very unlikely variation extremes will occur in reality
• Very conservative in nature
• "What-if" experiments may take more time to find acceptable design solutions

APPROACH: Root Sum Squares (RSS)

PROS:
• Provides estimation of mean and standard deviation
• More accurate and less conservative than WCA in predicting variation
• Provides Sensitivities and % Contributions to enable efficient design direction

CONS:
• Difficult to understand & explain (may be important for design change buy-in)
• Requires math & calculus skills
• Relies on approximations that are violated when either:
  o Input probabilities are non-normal and/or skewed
  o Transfer function is non-linear

APPROACH: Monte Carlo (MC) Analysis

PROS:
• Easy to understand
• Most accurate variation estimation (with appropriate # of trials)
• Provides Sensitivities and % Contributions to enable efficient design direction
• Accounts for non-normal input behavior and non-linear transfer functions

CONS:
• Accuracy depends on:
  o Number of trials
  o Input probability definition
• Complex models may run "slow"

I hope you have enjoyed this little journey through the Tolerance Analysis world as much as I have enjoyed putting my thoughts on internet paper. Please stay tuned for more posts in the realm of analysis and simulation.

Sleeper, Andrew D., Design for Six Sigma Statistics (2006); McGraw-Hill, pp. 703, 731.

Copyright ©2010 Karl - CB Expert