MODULE NO -10 Introduction to Control charts
Statistical process control
• Statistical process control is a collection of tools that when used together can result in process stability and variability reduction.
• A stable process is a process that exhibits only common variation, or variation resulting from inherent system limitations.
• A stable process is a basic requirement for process improvement efforts.

Advantages of a stable process
• Management knows the process capability and can predict performance, costs, and quality levels.
• Productivity will be at a maximum, and costs will be minimized.
• Management will be able to measure the effects of changes in the system with greater speed and reliability.
• If management wants to alter specification limits, it will have the data to back up its decision.

Categories of variation in piece-part production
• Within-piece variation
• Piece-to-piece variation
• Time-to-time variation
Sources of variation
Variation is present in every process due to a combination of the equipment, materials, environment, and operator.
The first source of variation is the equipment. This source includes tool wear, machine vibration, work-holding-device positioning, and hydraulic and electrical fluctuations. When all these variations are put together, there is a certain capability or precision within which the equipment operates.
The second source of variation is the material. Since variation occurs in the finished product, it must also occur in the raw material (which was someone else's finished product). Such quality characteristics as tensile strength, ductility, thickness, porosity, and moisture content can be expected to contribute to the overall variation in the final product.
A third source of variation is the environment. Temperature, light, radiation, electrostatic discharge, particle size, pressure, and humidity can all contribute to variation in the product. In order to control this source, products are sometimes manufactured in white rooms. Experiments are conducted in outer space to learn more about the effect of the environment on product variation.
A fourth source is the operator. This source of variation includes the method by which the operator performs the operation. The operator's physical and emotional well-being also contribute to the variation. A cut finger, a twisted ankle, a personal problem, or a headache can make an operator's quality performance vary. An operator's lack of understanding of equipment and material variations, due to lack of training, may lead to frequent machine adjustments, thereby compounding the variability.
The above four sources account for the true variation. There is also a reported variation, which is due to the inspection activity. Faulty inspection equipment, the incorrect application of a quality standard, or too heavy a pressure on a micrometer can cause incorrect reporting of variation. In general, variation due to inspection should be one-tenth of the other four sources of variation. It should be noted that three of these sources are present in the inspection activity: an inspector, inspection equipment, and the environment.

Chance and Assignable Causes of Quality Variation
As long as these sources of variation fluctuate in a natural or expected manner, a stable pattern of many chance causes (random causes) of variation develops. Chance causes of variation are inevitable. Because they are numerous and individually of relatively small importance, they are difficult to detect or identify. When only chance causes are present in a process, the process is considered to be in a state of statistical control. It is stable and predictable. However, when an assignable cause of variation is also present, the variation will be excessive, and the process is classified as out of control or beyond the expected natural variation.
• A process that is operating with only chance causes of variation present is said to be in statistical control.
• A process that is operating in the presence of assignable causes is said to be out of control.
• The eventual goal of SPC is reduction or elimination of variability in the process by identification of assignable causes.
Control chart
• Control charts were developed to recognize constant patterns of variation.
• When the observed variation fails to satisfy criteria for controlled patterns, the chart indicates this.
• Control charts allow us to distinguish between controlled and uncontrolled processes.
Statistical Basis of the Control Chart

Basic Principles
A typical control chart has control limits set at values such that if the process is in control, nearly all points will lie between the upper control limit (UCL) and the lower control limit (LCL).

Definition: A control chart is a statistical tool used to detect the presence of assignable causes in any manufacturing system, so that the process can be brought to a state where it is influenced by the pure system of chance causes only.

Control charts are of two types: variable control charts and attribute control charts.

Variable control charts: A variable control chart is one in which it is possible to measure the quality characteristic of a product. The variable control charts are:
(i) X̄ chart
(ii) R chart
(iii) σ chart

Attribute control charts: An attribute control chart is one in which it is not possible to measure the quality characteristic of a product, i.e. it is based on visual inspection only, such as good or bad, success or failure, accepted or rejected. The attribute control charts are:
(i) p chart
(ii) np chart
(iii) c chart
(iv) u chart

Objectives of control charts
• Control charts are used as one source of information to help decide whether an item or items should be released to the customer.
• Control charts are used to decide when to act: when a normal pattern of variation occurs, the process should be left alone; when an unstable pattern of variation occurs, indicating the presence of assignable causes, action is required to eliminate them.
• Control charts can be used to establish the product specification.
• To provide a method of instructing operating and supervisory personnel (employees) in the technique of quality control.
Notations:
X̄ : mean of the sample
s : standard deviation of the sample
X̄′ : mean of the population or universe
σ′ : standard deviation of the population

Central Limit Theorem
Irrespective of the shape of the distribution of the universe, the average value of samples of size n (X̄1, X̄2, X̄3, ...) drawn from the population will tend towards a normal distribution as n tends to infinity.

Relation between R̄ and σ′
σ′ = R̄ / d2
where R̄ = mean range and d2 = a constant that depends upon the sample size (taken from tables).
Control Limits for R chart
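The limit formulas themselves are not reproduced in these extracted notes; restating the standard 3σ expressions that the worked problems below rely on (R̄ = mean range, X̿ = grand average of the subgroup means, and A2, D3, D4 taken from tables for the chosen subgroup size):

UCL_R = D4 R̄,  CL_R = R̄,  LCL_R = D3 R̄
UCL_X̄ = X̿ + A2 R̄,  CL_X̄ = X̿,  LCL_X̄ = X̿ − A2 R̄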
Interpretation of Control Charts
After plotting the points on the X̄ and R charts, the charts show two possible states of control. They are:
1. State of statistical control, and
2. State of lack of control.

State of Statistical Control
A manufacturing process is said to be in a state of statistical control whenever it is operated upon by a pure system of chance causes. The points on the X̄ chart and R chart will be distributed evenly and randomly around the centre line, and all the points should fall between the UCL and LCL.

[Figure: control charts in control, chance variation only]

State of Lack of Control
A process is said to be in a state of lack of control whenever the state of statistical control does not hold. In such a state we infer the presence of assignable causes; the reasons for lack of control are:
• Points violating the control limits
• Run
• Trend
• Clustering
• Cycle pattern
Control Charts Interpretation
• Special: any point above the UCL or below the LCL
• Run: more than 7 consecutive points above or below the centerline
• 1-in-20: more than 1 point in 20 consecutive points close to the UCL or LCL
• Trend: 5 to 7 consecutive points in one direction (up or down)
Control Charts - Lack of Variability
Control Charts shifts in Process Levels
Control Charts Recurring Cycles
Control Charts points near or outside limits
MODULE NO -11
Applications of X̄ – R Chart with Real-life Data

Problem 3. The following are the X̄ and R values of 20 subgroups of 5 readings each:

S.G. No    X̄      R
1          34.0    4
2          31.6    2
3          30.8    3
4          33.8    5
5          31.6    2
6          33.0    5
7          28.2   13
8          33.8   19
9          37.8    6
10         35.8    4
11         38.4    4
12         34.0   14
13         35.0    4
14         33.8    7
15         31.6    5
16         33.0    7
17         32.6    3
18         31.8    9
19         35.6    6
20         33.0    4

(a) Determine the control limits for the X̄ and R charts. (b) Construct the X̄ and R charts and interpret the result. (c) What is the process capability? (d) Does it appear that the process is capable of meeting the specification limits? (e) Determine the percentage of rejection, if any. The specification limits are 33 ± 5.
Solution. ΣX̄ = 669.2, ΣR = 126, K = 20 subgroups, n = 5.

X̿ = ΣX̄/K = 669.2/20 = 33.46
R̄ = ΣR/K = 126/20 = 6.3

For a subgroup size of 5, from tables: A2 = 0.58, d2 = 2.326, D3 = 0.0, D4 = 2.11.

(a) Control limits for the R chart:
UCL = D4 R̄ = 2.11 × 6.3 = 13.29
LCL = D3 R̄ = 0.0
CL = R̄ = 6.3
Two subgroups (no. 8 with R = 19 and no. 12 with R = 14) cross the UCL, which indicates the presence of assignable causes, so homogenization is necessary.

Revised: R̄1 = (126 − 19 − 14)/(20 − 2) = 93/18 = 5.17
UCL = D4 R̄1 = 2.11 × 5.17 = 10.91
LCL = D3 R̄1 = 0.0
CL = R̄1 = 5.17
One more subgroup (no. 7 with R = 13) crosses the UCL.

Revised again: R̄2 = (126 − 19 − 14 − 13)/(20 − 3) = 80/17 = 4.7
UCL = D4 R̄2 = 2.11 × 4.7 = 9.917
LCL = D3 R̄2 = 0 × 4.7 = 0.0
CL = R̄2 = 4.7
Now all the points fall within the control limits. The final values are UCL = 9.917, LCL = 0.0, CL = 4.7.

Control limits for the X̄ chart:
UCL = X̿ + A2 R̄2 = 33.46 + 0.58 × 4.7 = 36.186
LCL = X̿ − A2 R̄2 = 33.46 − 0.58 × 4.7 = 30.734
CL = X̿ = 33.46
Three subgroups (X̄ = 28.2, 37.8 and 38.4) cross the control limits, which indicates the presence of assignable causes, so homogenization is necessary.

Revised grand average: X̿1 = (669.2 − 37.8 − 38.4 − 28.2)/(20 − 3) = 564.8/17 = 33.22
UCL = X̿1 + A2 R̄2 = 33.22 + 0.58 × 4.7 = 35.946
LCL = X̿1 − A2 R̄2 = 33.22 − 0.58 × 4.7 = 30.494
CL = X̿1 = 33.22
Now all the points fall within the control limits. The final values are UCL = 35.946, LCL = 30.494, CL = 33.22. The charts are plotted with these final values.

[Figure: X̄ chart for subgroups 1 to 20 with UCL = 35.946, CL = 33.22, LCL = 30.494]
[Figure: R chart for subgroups 1 to 20 with UCL = 9.917, CL = 4.7, LCL = 0.0]

(b) Interpretation: the R chart is not in control (some points cross the UCL) and the X̄ chart is not in control (points cross the control limits), so the process is not in a state of statistical control.

(c) σ1 = R̄2/d2 = 4.7/2.326 = 2.02. The process capability = 6σ1 = 6 × 2.02 = 12.12.

(d) USL − LSL = 38 − 28 = 10. Since 6σ1 > (USL − LSL), the process is not capable of meeting the specification limits.

(e) UNTL = X̿1 + 3σ1 = 33.22 + 3 × 2.02 = 39.28
LNTL = X̿1 − 3σ1 = 33.22 − 3 × 2.02 = 27.16
CL = X̿1 = 33.22, USL = 38, LSL = 28

Below the LSL: Z = (28 − 33.22)/2.02 = −2.58, probability = 0.0052 = 0.52%
Above the USL: Z = (38 − 33.22)/2.02 = 2.36, probability = 0.9909 = 99.09%, so 100 − 99.09 = 0.91%
Total rejection = 0.52 + 0.91 = 1.43%
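The homogenization procedure above is easy to automate. The following minimal Python sketch (not part of the original notes) reproduces the Problem 3 calculation; A2, d2, D3 and D4 are the standard constants for a subgroup size of 5, and the rejection percentage uses the exact normal CDF, so it differs marginally from the normal-table values quoted above.

```python
from statistics import NormalDist

xbar = [34.0, 31.6, 30.8, 33.8, 31.6, 33.0, 28.2, 33.8, 37.8, 35.8,
        38.4, 34.0, 35.0, 33.8, 31.6, 33.0, 32.6, 31.8, 35.6, 33.0]
rng  = [4, 2, 3, 5, 2, 5, 13, 19, 6, 4, 4, 14, 4, 7, 5, 7, 3, 9, 6, 4]
A2, d2, D3, D4 = 0.58, 2.326, 0.0, 2.11        # constants for subgroup size n = 5

# R chart: discard out-of-control ranges and recompute R-bar until stable.
r = rng[:]
while True:
    rbar = sum(r) / len(r)
    uclr, lclr = D4 * rbar, D3 * rbar
    kept = [ri for ri in r if lclr <= ri <= uclr]
    if len(kept) == len(r):
        break
    r = kept
rbar = sum(r) / len(r)

# X-bar chart: discard out-of-control subgroup means (R-bar held at its final value).
x = xbar[:]
while True:
    xbb = sum(x) / len(x)
    uclx, lclx = xbb + A2 * rbar, xbb - A2 * rbar
    kept = [xi for xi in x if lclx <= xi <= uclx]
    if len(kept) == len(x):
        break
    x = kept
xbb = sum(x) / len(x)

sigma1 = rbar / d2
usl, lsl = 38.0, 28.0
nd = NormalDist(xbb, sigma1)
rejection = nd.cdf(lsl) + 1.0 - nd.cdf(usl)
print(f"R  chart: UCL={D4*rbar:.3f}  CL={rbar:.3f}  LCL={D3*rbar:.3f}")
print(f"Xb chart: UCL={xbb + A2*rbar:.3f}  CL={xbb:.3f}  LCL={xbb - A2*rbar:.3f}")
print(f"sigma1={sigma1:.3f}  6*sigma1={6*sigma1:.2f}  rejection={rejection:.2%}")
```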
Problem 4. A control chart has been used to monitor a certain characteristic. The process is sampled in subgroups of size 4 at an interval of 2 hours. The X̄ chart has 3σ control limits of 121 and 129 with a target value of X̿ = 125.
(a) If the product is sold to a user who has a specification of 127 ± 8, what percentage of the product will not meet the specification, assuming normally distributed output?
(b) If the target value of the process can be shifted without effect on the process standard deviation, what target value would minimise the amount of product falling outside the specifications?
(c) At this new target value, what percentage of the product will not meet the specification requirements?

Solution. Subgroup size n = 4, UCL = 129, LCL = 121, CL = X̿ = 125.
Specification limits = 127 ± 8, so USL = 135 and LSL = 119.
From tables, for a subgroup size of 4: A2 = 0.73, d2 = 2.059, D3 = 0.0, D4 = 2.28.

UCL = X̿ + A2 R̄, so R̄ = (UCL − X̿)/A2 = (129 − 125)/0.73 = 5.48
σ1 = R̄/d2 = 5.48/2.059 = 2.66
Process capability = 6σ1 = 6 × 2.66 = 15.96
USL − LSL = 135 − 119 = 16
Since 6σ1 < (USL − LSL), the process is capable of meeting the specification limits.

UNTL = X̿ + 3σ1 = 125 + 3 × 2.66 = 132.98
LNTL = X̿ − 3σ1 = 125 − 3 × 2.66 = 117.02
USL = 135, LSL = 119

[Figure: normal curve with LSL = 119, LNTL = 117.02, CL = 125, UNTL = 132.98, USL = 135]

(a) Percentage of rejection (below the LSL only, since UNTL lies below the USL):
Z = (LSL − CL)/σ1 = (119 − 125)/2.66 = −2.25
Probability from the tables = 0.0122 = 1.22%

(b) To minimise the percentage of rejection, shift the process target from 125 to the specification mean, 127. Then
Z = (119 − 127)/2.66 = −3.00
Probability from the tables = 0.00135, so the percentage of rejection below the LSL is 0.135%.
(c) Since the distribution is now symmetric about the specification mean, the total percentage of rejection = 0.135 × 2 = 0.27%.

Problem 5. For a certain characteristic of a product, with sample size 2, after 25 subgroups: ΣR = 0.81 and ΣX̄ = 27.635. The specification limits are 1.12 ± 0.087.
(a) Is the process harmonised to the specifications?
(b) What are the rejection percentages, if any?
(c) Is the process capable of meeting the specifications?
(d) Harmonise the process to the specifications and obtain the control limits for the X̄-R chart after harmonising the process to the specification.

Solution. n = 2, K = 25, ΣR = 0.81, ΣX̄ = 27.635.
Specification limits = 1.12 ± 0.087, so USL = 1.207 and LSL = 1.033.
From tables, for a subgroup size of 2: d2 = 1.128, A2 = 1.88, D3 = 0.0, D4 = 3.27.

X̿ = ΣX̄/K = 27.635/25 = 1.1054
R̄ = ΣR/K = 0.81/25 = 0.0324
σ1 = R̄/d2 = 0.0324/1.128 = 0.0287

UNTL = X̿ + 3σ1 = 1.1054 + 3 × 0.0287 = 1.1915
LNTL = X̿ − 3σ1 = 1.1054 − 3 × 0.0287 = 1.0193
CL = X̿ = 1.1054

[Figure: normal curve with LSL = 1.033, LNTL = 1.0193, CL = 1.1054, UNTL = 1.1915, USL = 1.207]

(a) It is clear from the figure that the process is not harmonised with the specifications (the LNTL is below the LSL). For a process to be harmonised, the LNTL and UNTL must fall well within, or at most be just equal to, the LSL and USL.

(b) The percentage of rejections (below the LSL):
Z = (LSL − X̿)/σ1 = (1.033 − 1.1054)/0.0287 = −2.5
Probability = 0.0059, so the percentage of rejection = 0.59%.

(c) USL − LSL = 1.207 − 1.033 = 0.174
6σ1 = 6 × 0.0287 = 0.1722
Since 6σ1 < (USL − LSL), i.e. 0.1722 < 0.174, the process is capable of meeting the specification limits.

(d) To harmonise the process to the specifications, change the process centre to the specification mean, i.e. X̿ = 1.12.
Control limits for the X̄ chart:
UCL = X̿ + A2 R̄ = 1.12 + 1.88 × 0.0324 = 1.1809
LCL = X̿ − A2 R̄ = 1.12 − 1.88 × 0.0324 = 1.059
CL = X̿ = 1.12
Control limits for the R chart:
UCL = D4 R̄ = 3.27 × 0.0324 = 0.1059
LCL = D3 R̄ = 0.0 × 0.0324 = 0.0
CL = R̄ = 0.0324
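Problems 4 and 5 both end by converting a Z value into a rejection percentage. The small sketch below (assumed helper names, not from the notes) does the same conversion with the exact normal CDF, so the numbers differ slightly from the rounded table look-ups above.

```python
from statistics import NormalDist

def rejection_percent(center: float, sigma1: float, lsl: float, usl: float) -> float:
    """Fraction of output outside [lsl, usl] for N(center, sigma1), in percent."""
    nd = NormalDist(center, sigma1)
    return 100.0 * (nd.cdf(lsl) + (1.0 - nd.cdf(usl)))

# Problem 4: sigma1 = 5.48 / 2.059 ~= 2.66, specification 127 +/- 8
print(rejection_percent(125.0, 2.66, 119.0, 135.0))    # ~1.2%, below the LSL dominates
print(rejection_percent(127.0, 2.66, 119.0, 135.0))    # ~0.27% after re-centering

# Problem 5: sigma1 = 0.0324 / 1.128 ~= 0.0287, specification 1.12 +/- 0.087
print(rejection_percent(1.1054, 0.0287, 1.033, 1.207)) # ~0.6%, mostly below the LSL
```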
MODULE NO -12
Development and use of X bar – R Chart
In order to establish a pair of control charts for the average ( X bar ) and the range (R), it is desirable to follow a set procedure. The steps in this procedure are as follows:
1. Select the quality characteristic.
2. Choose the rational subgroup.
3. Collect the data (20 to 25 samples).
4. Calculate the mean (X̄) and R for each sample.
5. Determine the trial control limits.
6. Establish the revised control limits.
7. Construction of the X̄ - R chart.
8. Interpretation of the results.
Equations for computing 3-sigma limits on Shewhart control charts for variables
Problem 1. Control charts for X̄ and R are maintained on a certain dimension of a manufactured part, which is specified as 2.05 ± 0.02 cm. The subgroup size is 4, and the values of X̄ and R are computed for each subgroup. After 20 subgroups, ΣX̄ = 41.283 and ΣR = 0.280. If a dimension falls above the USL, rework is required; if it falls below the LSL, the part must be scrapped. Assuming the process is in statistical control and normally distributed:
(a) Determine the 3σ control limits for the X̄ and R charts.
(b) What is the process capability?
(c) What can you conclude regarding its ability to meet specifications?
(d) Determine the percentage of scrap and rework.
(e) What are your suggestions for improvement?

Solution. ΣX̄ = 41.283, ΣR = 0.280, sample size n = 4, number of subgroups K = 20.
The specification limits are 2.05 ± 0.02, so USL = 2.07 cm and LSL = 2.03 cm.
From the tables, for a subgroup size of 4: A2 = 0.73, d2 = 2.059, D3 = 0.0, D4 = 2.28.
X̿ = ΣX̄/K = 41.283/20 = 2.06415, R̄ = ΣR/K = 0.280/20 = 0.014, σ1 = R̄/d2 = 0.014/2.059 = 0.0068, so 6σ1 = 0.0407.

(c) USL − LSL = 2.07 − 2.03 = 0.04. Since 6σ1 is greater than USL − LSL (0.0407 > 0.04), the process is not capable of meeting the specification limits.

Note:
1. If 6σ1 is less than (USL − LSL), the process is capable of meeting the specification and there should not be any rejection. If rejection occurs, we can conclude that the process is not centred properly.
2. If 6σ1 is equal to (USL − LSL), the process is exactly capable of meeting the specification limits, but the tolerances are tight; a skilled operator should be preferred for operating the machine.
3. If 6σ1 is greater than (USL − LSL), the process is not capable of meeting the specification limits and rejections are inevitable.

(e) Since the percentage of rework is 19.49%, the possible ways to minimise it are:
(i) Change the process centre to the specification mean, i.e. from 2.06415 to 2.05. The calculation is:
Z = (USL − new centre)/σ1 = (2.07 − 2.05)/0.0068 = 2.94
Probability from the normal tables is 0.9984, so rework = 1 − 0.9984 = 0.0016, i.e. 0.16%.
The percentage of rework is 0.16%. Since the distribution is symmetric, the percentage of scrap is also 0.16%.
(ii) Widen the specification limits; for this we have to consult the design engineer on whether the product would still perform its function satisfactorily.
(iii) Decrease the dispersion; for this we need a skilled operator, very good raw material and a new machine, which is difficult in practice.
(iv) Leave the process alone and do 100% inspection.
(v) Calculate the cost of scrap and rework; whichever is costlier, make it zero and change the process centre accordingly.

Problem 2. Subgroups of 5 items each are taken from a manufacturing process at regular intervals. A certain quality characteristic is measured and X̄, R values are computed for each subgroup. After 25 subgroups, ΣX̄ = 357.5 and ΣR = 8.8. Assume that all the points are within the control limits on both charts. The specifications are 14.4 ± 0.4.
(a) Compute the control limits for the X̄ and R charts.
(b) What is the process capability?
(c) Determine the percentage of rejections, if any.
(d) What can you conclude regarding its ability to meet the specifications?
(e) Suggest possible steps for improving the situation.
(Note: for n = 5, from tables, A2 = 0.58, d2 = 2.326, D3 = 0, D4 = 2.11.)
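As a starting point for Problem 2, the sketch below (an assumed helper, not part of the notes) computes part (a) from the subgroup sums with the constants given in the note; the remaining parts follow the same steps as Problems 1 and 3.

```python
def xbar_r_limits(sum_xbar, sum_r, k, a2, d3, d4):
    """Return dicts of X-bar chart and R chart 3-sigma limits from subgroup sums."""
    xbb, rbar = sum_xbar / k, sum_r / k
    return ({"UCL": xbb + a2 * rbar, "CL": xbb, "LCL": xbb - a2 * rbar},
            {"UCL": d4 * rbar, "CL": rbar, "LCL": d3 * rbar})

xchart, rchart = xbar_r_limits(357.5, 8.8, k=25, a2=0.58, d3=0.0, d4=2.11)
print(xchart)   # X-bar chart: CL = 14.3, UCL/LCL = 14.3 +/- 0.58 * 0.352
print(rchart)   # R chart: CL = 0.352, UCL = 2.11 * 0.352, LCL = 0
```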
MODULE NO -13
Development and use of X bar– S Chart With Real life data
Note: Although X̄ and R charts are widely used, it is occasionally desirable to estimate the process standard deviation directly instead of indirectly through the use of the range R. This leads to control charts for X̄ and S, where S is the sample standard deviation. Generally, X̄ and S charts are preferable to their more familiar counterparts, the X̄ and R charts, when either:
1. the sample size n is moderately large, say n > 10 or 12, or
2. the sample size n is variable.
Problem 6. The following data present the inside diameter measurements on piston rings, to illustrate the construction and operation of the X̄ and S charts. The subgroup size is five.

Sample no    X̄i        Si
1 74.010 0.0148
2 74.001 0.0075
3 74.008 0.0147
4 74.003 0.0091
5 74.003 0.0122
6 73.996 0.0087
7 74.00 0.0055
8 73.997 0.0123
9 74.004 0.0055
10 73.998 0.0063
11 73.994 0.0029
12 74.001 0.0042
13 73.998 0.0105
14 73.990 0.0153
15 74.006 0.0073
16 73.997 0.0078
17 74.001 0.0106
18 74.007 0.0070
19 73.998 0.0085
20 74.009 0.0080
21 74.000 0.0122
22 74.002 0.0074
23 74.002 0.0119
24 74.005 0.0087
25 73.998 0.0162
Note: For these data, the control limits for the X̄ chart based on S̄ come out nearly identical to the X̄ chart control limits based on R̄. They will not always be the same; in general, the X̄ chart control limits based on S̄ will be slightly different from the limits based on R̄.
We can estimate the process standard deviation using the fact that S̄/c4 is an unbiased estimate of σ. Since c4 = 0.9400 for samples of size five, our estimate of the process standard deviation is σ̂ = S̄/c4 = 0.0094/0.9400 = 0.01. This estimate is very similar to that of σ obtained via the range method.
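A short Python sketch (not from the notes) that computes the X̄–S chart limits and the σ estimate for the piston-ring data above. The value c4 = 0.9400 matches the text; the constants A3 = 1.427, B3 = 0 and B4 = 2.089 for n = 5 are standard values assumed here, since the notes do not list them.

```python
xbars = [74.010, 74.001, 74.008, 74.003, 74.003, 73.996, 74.000, 73.997, 74.004,
         73.998, 73.994, 74.001, 73.998, 73.990, 74.006, 73.997, 74.001, 74.007,
         73.998, 74.009, 74.000, 74.002, 74.002, 74.005, 73.998]
s_vals = [0.0148, 0.0075, 0.0147, 0.0091, 0.0122, 0.0087, 0.0055, 0.0123, 0.0055,
          0.0063, 0.0029, 0.0042, 0.0105, 0.0153, 0.0073, 0.0078, 0.0106, 0.0070,
          0.0085, 0.0080, 0.0122, 0.0074, 0.0119, 0.0087, 0.0162]
c4, A3, B3, B4 = 0.9400, 1.427, 0.0, 2.089   # assumed standard constants for n = 5

xbb = sum(xbars) / len(xbars)      # grand average
sbar = sum(s_vals) / len(s_vals)   # average sample standard deviation
print(f"X-bar chart: UCL={xbb + A3*sbar:.4f}  CL={xbb:.4f}  LCL={xbb - A3*sbar:.4f}")
print(f"S chart    : UCL={B4*sbar:.4f}  CL={sbar:.4f}  LCL={B3*sbar:.4f}")
print(f"sigma-hat = S-bar/c4 = {sbar/c4:.4f}")   # ~0.01, as in the text
```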
Problem 7. A certain product has a specification of 120 ± 5. At present the estimated process average is 120 and σ1 = 1.5.
(a) Compute the 3σ limits for the X̄ and R charts based on a subgroup size of 4.
(b) If there is a shift in the process average by 2%, what percentage of product will fail to meet the specification?
(c) What is the probability of detecting the shift on the X̄ chart?

Solution. For a subgroup size of 4: A2 = 0.73, d2 = 2.059, D3 = 0.0, D4 = 2.28.
R̄ = d2 σ1 = 2.059 × 1.5 = 3.0885

(a) X̄ chart: UCL = X̿ + A2 R̄ = 120 + 0.73 × 3.0885 = 122.26
LCL = X̿ − A2 R̄ = 120 − 0.73 × 3.0885 = 117.74
CL = X̿ = 120
R chart: UCL = D4 R̄ = 2.28 × 3.0885 = 7.04, LCL = D3 R̄ = 0.0, CL = R̄ = 3.0885

(b) A 2% shift moves the process average to 120 × 1.02 = 122.4 (or, for a downward shift, to 117.6). With USL = 125 and LSL = 115:
Below: Z = (115 − 117.6)/1.5 = −1.73, probability = 0.0418 = 4.18%
Above: Z = (125 − 122.4)/1.5 = 1.73, probability = 0.9582 = 95.82%, i.e. 100 − 95.82 = 4.18%
So about 4.18% of the product will fail to meet the specification.

[Figure: normal curve with LSL = 115, 117.6, CL = 120, 122.4, USL = 125]

(c) With respect to the X̄ chart, σx̄ = σ1/√n = 1.5/√4 = 0.75. Taking the upward shift to 122.4 and the X̄ chart UCL of 122.26:
Z = (122.26 − 122.4)/0.75 = −0.1866
Probability of a subgroup average falling above the UCL = 0.5714 = 57.14% (i.e. 100 − 42.86 = 57.14%)
So the probability of detecting the shift on the first subgroup after it occurs is about 57.14%.

[Figure: X̄ chart limits LCL = 117.74 and UCL = 122.26 with the shifted distribution centred at 122.4]

Problem 8. Subgroups of 4 items each are taken from a manufacturing process at regular intervals. A certain quality characteristic is measured and X̄, R values are computed for each subgroup. After 25 subgroups, ΣX̄ = 15350 and ΣR = 411.1.
(a) Compute the control limits for the X̄ and R charts.
(b) Assume all the points fall within the control limits on both charts. The specification limits are 610 ± 15. If the quality characteristic is normally distributed, what percentage of product would fail to meet the specifications?
(c) Any product that falls below the LSL will be scrapped and any product above the USL must be reworked. It is suggested that the process be centred at a level such that not more than 0.1% of the product will be scrapped. What should be the aimed-at value of X̿ to make the scrap exactly 0.1%?
(d) What percentage of rework can be expected with this centering?
Solution. From tables, for a subgroup size of 4: A2 = 0.73, d2 = 2.059, D3 = 0.0, D4 = 2.28.
X̿ = ΣX̄/K = 15350/25 = 614
R̄ = ΣR/K = 16.456

(a) Control limits for the X̄ chart:
UCL = X̿ + A2 R̄ = 614 + 0.73 × 16.456 = 626.012
LCL = X̿ − A2 R̄ = 614 − 0.73 × 16.456 = 601.988
CL = X̿ = 614
Control limits for the R chart:
UCL = D4 R̄ = 2.28 × 16.456 = 37.52
LCL = D3 R̄ = 0 × 16.456 = 0.0
CL = R̄ = 16.456

(b) The specification limits are 610 ± 15, so USL = 625 and LSL = 595.
σ1 = R̄/d2 = 16.456/2.059 = 7.99
UNTL = X̿ + 3σ1 = 614 + 3 × 7.99 = 637.97
LNTL = X̿ − 3σ1 = 614 − 3 × 7.99 = 590.03

[Figure: normal curve with LSL = 595, LNTL = 590.03, CL = 614, USL = 625, UNTL = 637.97]

Percentage of scrap (below the LSL): Z = (595 − 614)/7.99 = −2.38
Probability from the tables = 0.0089 = 0.89%
Percentage of rework (above the USL): Z = (USL − X̿)/σ1 = (625 − 614)/7.99 = 1.37
Probability from the tables = 0.9147, i.e. 91.47%, so rework = 100 − 91.47 = 8.53%
Total product failing to meet the specifications = 0.89 + 8.53 = 9.42%

(c) For a probability of 0.001, the Z value from the normal table is −3. Therefore
(LSL − X̿new)/σ1 = −3, i.e. (595 − X̿new)/7.99 = −3
X̿new = 595 + 3 × 7.99 = 618.97

(d) The percentage of rework with this centering:
Z = (625 − 618.97)/7.99 = 0.75
For Z = 0.75, the probability from the normal table is 0.7734, i.e. 77.34%
Percentage of rework = 100 − 77.34 = 22.66%
MODULE NO -14
Introduction to Six Sigma Concepts

What is Six Sigma?
Six Sigma is several things. First, it is a statistical measurement. It tells us how good our products, services and processes really are. The Six Sigma method allows us to draw comparisons to other similar or dissimilar products, services and processes. In this manner, we can see how far ahead or behind we are. Most importantly, we see where we need to go and what we must do to get there. In other words, Six Sigma helps us to establish our course and gauge our pace in the race for total customer satisfaction.
For example, when we say a process is 6 sigma, we are saying it is best-in-class. Such a level of capability will only yield about 3 instances of nonconformance out of every million opportunities for nonconformance. On the other hand, when we say that some other process is 4 sigma, we are saying it is average. This translates to about 6,200 nonconformities per million opportunities for nonconformance. In this sense, the sigma scale of measure provides us with a "goodness micrometer" for gauging the adequacy of our products, services and processes.
Six Sigma as a business strategy can greatly help us to gain a competitive edge. The reason for this is very simple: as you improve the sigma rating of a process, the product quality improves and costs go down. Naturally, the customer becomes more satisfied as a result. Let us remember there is no economics of quality; it is always cheaper to do "Right Things, Right First Time".

What does "METRICS" stand for?
M Measure, E Everything, T That, R Results, I In, C Customer, S Satisfaction

Applicability of Six Sigma
The first step toward improving the sigma capability of a process is defining what the customer expectations are. Next, you "map" the process by which you get the work done to meet those expectations. This means that you create a box diagram of the process flow, i.e. identifying the steps within the process. With this done, you can now affix success criteria to each of the steps. Next, you would want to record the number of times each of the given success criteria is not met and calculate the total defects per opportunity (TDPO). Following this, the TDPO information is converted to defects per opportunity (DPO), which in turn is translated into a sigma value (σ). Now you are ready to make direct comparisons, even apples and oranges if you want.
Three Sigma vs Six Sigma
Three sigma would be equivalent to one misspelled word per 15 pages of text. Six sigma would be equivalent to one misspelled word in 300,000 pages, quite a difference indeed. Now, let's put this in real-world terms. Some corporations are already running at Six Sigma. It is self-evident that they are going to perform better over the long haul. For example, several prestigious Japanese companies (which are doing so well in the world marketplace) are currently running at or near the 6 sigma level.

SIGMA (σ)
Sigma is a letter in the Greek alphabet. The term "sigma" is used to designate the distribution or spread about the mean (average) of any process or procedure. The sigma rating indicates how often defects are likely to occur. The higher the sigma rating, the less likely a process will produce defects. As the sigma rating increases, costs go down, cycle time goes down and customer satisfaction goes up.

QUALITY IMPROVEMENT = PRODUCTIVITY IMPROVEMENT = COST REDUCTION
RIGHT FIRST TIME AND EVERY TIME

What is a defect?
A defect is any variation of a required characteristic of the product (or its parts) or service which is far enough from its target value to prevent the product from fulfilling the physical and functional requirements of the customer, as viewed through the eyes of your customer. A defect is also anything that causes the processor or the customer to make adjustments.
Anything that dissatisfies your customer.

The Common Metric: Defects per Unit (DPU)
DPU is the best measure of the overall quality of the process.
• DPU is the independent variable.
• Process yields are dependent upon DPU.
Example: We checked 500 purchase orders (POs) and the POs had 10 defects. Then d.p.u = d/u = 10/500 = 0.02.
In a PO we check for the following: (a) supplier address/approval, (b) quantity as per the indent, (c) specifications as per the indent, (d) delivery requirements, (e) commercial requirements. So there are 5 opportunities for defects to occur per PO, and the total number of opportunities = m·u = 5 × 500 = 2500.
Defects per opportunity, d.p.o = d/(m·u) = 10/2500 = 0.004.
Expressed as d.p.m.o (defects per million opportunities): d.p.m.o = d.p.o × 10^6 = 4000 PPM.
From the d.p.o we go to the normal distribution tables and calculate ZLT, then correct it to ZST by adjusting for the 1.5σ shift: ZLT = 2.65 and ZST = 2.65 + 1.5 = 4.15.
Number of opportunities = number of points checked. If you don't check some points, they become passive opportunities; only active opportunities should be taken into the calculation of d.p.o and the sigma level.

Cost / Quality
Six Sigma has shown that the highest-quality producer is the lowest-cost producer.

Process capability: process potential index (Cp)
The greater the design margin, the lower the DPU. Design margin is measured by the capability index (Cp):
Cp = maximum allowable range of the characteristic / normal variation of the process
The numerator is controlled by design engineering; the denominator is controlled by process engineering.
Ford says Cp should be more than 1.33 for regular production, and
Cp should be more than or equal to 1.67 for new jobs.
Motorola says, Cp should be more than 2.0 for all jobs.
That implies (U − L)/6σ = 2.0, or (U − L) = 12σ, i.e. the specification limits sit at ±6σ from the process centre; hence the name Six Sigma. A ±3σ process capability means 0.27%, i.e. 2,700 PPM, will be out of specification. A ±6σ process capability means only about 2 parts per billion outside the specification limits (for a perfectly centred process). The Six Sigma methodology is a five-phase improvement cycle, employed in a project-oriented fashion through the phases:
1. Define 2. Measure 3. Analyze 4. Improve 5. Control
Step 1 : Define : Define the customer, critical-to-quality (CTQ) issues, and the core business process involved. Define who the customers are, what their requirements are and what their expectations are. Define the project boundaries, i.e. the start and stop of the process. Define the process to be improved by mapping the process flow.

Step 2 : Measure : Develop a data collection plan for the process. Collect data to determine types of defects and metrics. Measure the current performance of the core business process involved.

Step 3 : Analyze : Analyze the data collected to determine the root causes of defects and opportunities for improvement. Identify the gaps between current performance and goal performance. Prioritize opportunities to improve. Identify sources of variation.

Step 4 : Improve : Improve target solutions by designing creative solutions to fix and prevent problems. Create innovative solutions using technology and discipline. Develop and deploy an implementation plan.

Step 5 : Control : Control the improvements to keep the process on the new course. Prevent reverting back to the "old way". Control the development, documentation and implementation of an ongoing monitoring plan.

Step 1 : Define : The problem definition has five major elements: the business case; identifying the customers of the project, their needs and requirements; the problem statement; the project scope; goals and objectives.

Step 2 : Measure : Calculating the sigma value for discrete data. The data being collected for this project is discrete; to calculate sigma using the discrete method, three items are measured:
1. Unit : the item produced.
2. Defect : any event that does not meet the customer's requirement.
3. Opportunity : a chance for a defect to occur.

The formula to calculate DPO:
DPO = Number of defects / (Number of opportunities × Number of units produced)
DPMO = DPO × 1,000,000 (defects per million opportunities)

Performance measures for the month of December:
Total number of rings produced = 86,702
Number of defective rings = 47
Number of opportunities = 4
Defects per opportunity = 47 / (86,702 × 4) = 1.355 × 10⁻⁴
Defects per million opportunities = 1.355 × 10⁻⁴ × 10⁶ = 135.55 DPMO
Converting DPMO to a sigma value gives 5.1σ.
Performance Measure

Month           Rings produced   Defective rings   DPO         DPMO     Sigma value
December 2000   86,702           47                0.0001355   135.55   5.1
January 2001    1,13,345         100               0.0002145   214.5    5.0
February 2001   1,14,368         123               0.0002688   268.88   4.9
March 2001      1,14,404         451               0.0009855   984.54   4.5
The Practical Meaning of Six Sigma as a Goal (distribution shifted by ±1.5σ)

Sigma level   Defects in PPM   Yield in %
2σ            308,538          69.1462
3σ            66,807           93.3193
4σ            6,210            99.3790
5σ            233              99.9767
6σ            3.4              99.99966
Legends:
m : number of opportunities
N : number of parts
d : number of defects
dpu : defects per unit
dpo : defects per opportunity
Yft : first-time yield
Yrt : rolled-throughput yield
dpmo : defects per million opportunities
TDPU : total defects per unit
Zlt : long-term sigma level
Zst : short-term sigma level

Formulae:
dpu = d / N
dpo = dpu / m
Yft = e^(−dpu)
dpmo = dpo × 10⁶
TDPU = sum of dpu (over all process steps)
Yrt = e^(−TDPU)
Ypo = Yrt^(1/m) = e^(−dpo)
dpo of the overall process = 1 − Ypo
Cpk = Zlt / 3
Z = (USL − X̄)/σ
Cp = Zst / 3
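A minimal Python sketch (not from the notes) implementing the formulae above, applied to the purchase-order example from earlier in this module (500 POs, 10 defects, 5 checks per PO); the short-term sigma level uses the conventional 1.5σ shift.

```python
import math
from statistics import NormalDist

def six_sigma_metrics(defects: int, units: int, opportunities_per_unit: int):
    dpu = defects / units
    dpo = dpu / opportunities_per_unit
    dpmo = dpo * 1_000_000
    yft = math.exp(-dpu)                       # first-time yield
    zlt = NormalDist().inv_cdf(1.0 - dpo)      # long-term sigma level
    zst = zlt + 1.5                            # short-term sigma level (1.5-sigma shift)
    return {"dpu": dpu, "dpo": dpo, "dpmo": dpmo, "Yft": yft, "Zlt": zlt, "Zst": zst}

print(six_sigma_metrics(defects=10, units=500, opportunities_per_unit=5))
# dpu = 0.02, dpo = 0.004, dpmo = 4000, Zlt ~ 2.65, Zst ~ 4.15
```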
Process Capability
It is a measure of the inherent uniformity of the process. Before examining the sources and causes of variation and their reduction, we must measure the variation.
Yardsticks of process control:
Cp - measures the capability of a process
Cpk - capability of the process, but corrected for non-centering
Process capability indices:
Cp is a measure of spread: Cp = Specification width (S) / Process width (P)
Cpk is a measure of the centering of the process as well as its spread: Cpk is the minimum of Cpu = (USL − µ)/3σ and Cpl = (µ − LSL)/3σ
The relationship between Cp and Cpk is Cpk = (1 − k) Cp, where k is the correction factor, k = |µ − T| / (S/2), T being the target (the centre of the specification).
Reducing Variation is the key to Reducing Defects
Cp - Measure of variation
Process Capability
CASE STUDY OF A REAL-LIFE PROBLEM
In a company manufacturing and assembling printed circuit boards (PCBs), the rejection rate was found to be very high. Upon study it was noticed that there are 16 stages in the assembly process of printed wiring boards (PWBs), out of which the rejection rate was higher during the wave soldering process than at the other stages. This wave soldering stage is a critical stage of assembly.
Hence the wave soldering process stage was selected for the study, in order to reduce the process variability and to minimize the rejection rate. PCBs are classified as single-layered, bi-layered and multi-layered boards. In the assembly section of this company, two types of PCBs are being assembled.
On-line inspection data for the wave soldering process were collected and the attribute control charts (p and c) were plotted which showed that the process was not in a state of statistical control.
The fraction rejected was found to be 0.2 (i.e. 20%), and the average defects per unit were 1.67 for multi-layered boards and 0.5 for bi-layered boards respectively. The on-line inspection data for the wave soldering process with the existing process parameter values were collected and sigma (σ) was calculated. For bi-layered boards the sigma level was 3.39 and for multi-layered boards the sigma was 3.33, as given below.
Table 1. Product Type: Bi-layered Boards
Standard Process Parameters Baking Temperature 75˚C
Preheat Temperature 300˚C
Hot-air Temperature 320˚C
Solder Temperature 245˚C
Solder wave height 11mm
On-line data for bi-layered boards: calculation
Number of defects = 71
Total no. of soldering points = 233287
Defects per opportunity = (total no. of defects)/(total no. of soldering points) = 71/233287 = 0.000304 ….(1)
From the normal tables, the value of sigma is 3.39.

Table 2. Product Type: Multi-layered Boards
Standard Process Parameters Baking Temperature 75˚C
Preheat Temperature 320˚C
Hot-air Temperature 340˚C
Solder Temperature 255˚C
Solder wave height 12.5mm
On-line data for multi-layered boards: calculation
Number of defects = 38
Total no. of soldering points = 106828
Defects per opportunity = (total no. of defects)/(total no. of soldering points) = 38/106828 = 0.00036 ….(2)
From the normal tables, the value of sigma is 3.33.

After conducting a brainstorming session with the operators, foreman and the manager, the causes for the rejection of the PWAs were traced. During the inspection of the wave-soldered PWAs, it was found that the rejections were due to the following causes.
Measles, blow holes, solder bridges, solder splash and icicles.
The necessary calculations were made for both bi-layered and multi-layered boards, and the collected data were analyzed with the help of a Pareto diagram, which showed that blow holes and solder bridges accounted for the majority of the rejections. After discussions with the operators, foreman and manager, the causes for the blow holes and solder bridges were identified. Cause-and-effect diagrams were drawn and the critical process parameters (control factors) that influence the wave soldering process were identified as: baking temperature, pre-heat temperature, hot-air temperature, solder temperature and solder wave height. Two noise factors, viz. ambient temperature and humidity, each with two levels, were considered for the experimentation. In order to optimize the above wave soldering process parameters, the orthogonal array approach of DOE was applied. Three levels were fixed for each of the above five critical factors, which are shown in Tables 3 and 4 for bi-layered and multi-layered boards respectively. With the application of linear graphs, the number of experiments to be conducted is 27 for these factors. The OA table and physical layout for the bi-layered and multi-layered boards were prepared, and 27 experiments were carried out for the bi-layered and multi-layered PCBs separately with a sample size of two each.

Table 3. Factors and Levels for Bi-layered Board
FACTORS LEVEL 1 LEVEL 2 LEVEL 3
Baking Temperature 75˚C 80˚C 85˚C Preheat Temperature 300˚C 305˚C 310˚C Hot air Temperature 320˚C 325˚C 330˚C Solder Temperature 240˚C 245˚C 250˚C Solder wave height 10mm 11mm 12mm
Table 4. Factors and Levels for Multi-layered Board
FACTORS LEVEL 1 LEVEL 2 LEVEL 3
Baking Temperature 75˚C 80˚C 85˚C Preheat Temperature 320˚C 325˚C 330˚C Hot air Temperature 340˚C 345˚C 350˚C Solder Temperature 250˚C 255˚C 260˚C Solder wave height 11.5mm 12.5mm 13.5mm
Analysis of Data and Results
The experimental results were analyzed to establish the optimum process parameter values for baking temperature, pre-heat temperature, hot-air temperature, solder temperature and solder wave height. The responses were calculated for each of the experiments for both bi-layered and multi-layered PCBs. From the response matrices, the signal-to-noise (S/N) ratios were calculated using the formula
η = 10 log10 ((1/p) − 1) ……(3)
where p = 1 − (fraction good) …….(4)
For example, p = 1 − (25/100) = 0.75, so η = 10 log10 ((1/0.75) − 1) = −4.7712.
The S/N ratios for each of the experiments were calculated for bi-layered and multi-layered boards. The analysis of variance (ANOVA) was carried out for both board types. The optimal levels of the parameters were established based on the highest values of the S/N ratios. The optimized factor levels for the wave soldering process for both bi-layered and multi-layered PCBs are given below.

FACTORS               Bi-layered        Multi-layered
Baking Temperature    Level-3 (85˚C)    Level-3 (85˚C)
Preheat Temperature   Level-3 (310˚C)   Level-3 (330˚C)
Hot air Temperature   Level-3 (330˚C)   345˚C
Solder Temperature    Level-3 (250˚C)   Level-3 (260˚C)
Solder wave height    Level-2 (11mm)    Level-3 (13.5mm)
Confirmation Run
Further experiments were carried out with the optimized levels of the above parameters for both bi-layered and multi-layered PCBs, taking a sample size of 8 each, to check the validity of the levels of the optimized parameters.
The sigma levels were calculated again for the data collected and were found to be 4.1 for bi-layered and 4.125 for multi-layered PCBs, which shows that the process variability has decreased and the process capability (Cp and Cpk) has increased. The percentage of rejections was calculated again and has reduced from 20% to 0.2%.

Conclusions
In this case study, both on-line quality control techniques (control charts, Pareto diagram and cause-and-effect diagram) and off-line quality control techniques are applied before the manufacture of the product to control the process. The sigma level for the bi-layered PCB was improved from 3.39σ to 4.1σ and that of the multi-layered PCB from 3.33σ to 4.125σ. Since the sigma levels were increased considerably using the orthogonal array approach of DOE, it is evident that the application of the DOE technique (during the early stages itself) is very effective in improving the quality of any process or product by optimizing the parameters, in order to yield a product which can be produced with minimum cost and with minimum variation. The optimal levels for the factors obtained using the OA approach and the levels of sigma for both bi-layered and multi-layered PCBs are summarized below.

Table 5: Comparison of the levels of the five parameters for bi-layered and multi-layered PWAs

FACTORS                Bi-layered Present   Bi-layered Optimum   Multi-layered Present   Multi-layered Optimum
Baking Temperature     75˚C                 85˚C                 75˚C                    85˚C
Pre-heat Temperature   300˚C                310˚C                320˚C                   330˚C
Hot-air Temperature    320˚C                330˚C                340˚C                   345˚C
Solder Temperature     245˚C                250˚C                255˚C                   260˚C
Solder Wave height     11mm                 11mm                 12.5mm                  13.5mm
Table 6. Comparison of the Sigma Levels

Type of Printed Wiring Assembly   Present Level of Sigma   Improved Level of Sigma
Bi-layered                        3.39                     4.1
Multi-layered                     3.33                     4.125
With lesser number of experiments in Orthogonal Array approach of DOE, it is possible to achieve the same effective results as compared to other techniques of DOE like Full Factorial, Fractional Factorial, Randomized Block Design etc.
Orthogonal Array Table
Experiment No.   Baking Temp.   Preheat Temp.   Hot-air Temp.   Solder Temp.   Solder Wave Height
1 1 1 1 1 1
2 1 1 1 1 2
3 1 1 1 1 3
4 1 2 2 2 1
5 1 2 2 2 2
6 1 2 2 2 3
7 1 3 3 3 1
8 1 3 3 3 2
9 1 3 3 3 3
10 2 1 2 3 1
11 2 1 2 3 2
12 2 1 2 3 3
13 2 2 3 1 1
14 2 2 3 1 2
15 2 2 3 1 3
16 2 3 1 2 1
17 2 3 1 2 2
18 2 3 1 2 3
19 3 1 3 2 1
20 3 1 3 2 2
21 3 1 3 2 3
22 3 2 1 1 1
23 3 2 1 1 2
24 3 2 1 1 3
25 3 3 2 3 1
26 3 3 2 3 2
27 3 3 2 3 3
Physical Layout for Bi-layered Boards

Experiment No.   Baking Temp.   Preheat Temp.   Hot-air Temp.   Solder Temp.   Solder Wave Height
1 75˚C 300˚C 320˚C 240˚C 10mm
2 75˚C 300˚C 320˚C 240˚C 11mm
3 75˚C 300˚C 320˚C 240˚C 12mm
4 75˚C 305˚C 325˚C 245˚C 10mm
5 75˚C 305˚C 325˚C 245˚C 11mm
6 75˚C 305˚C 325˚C 245˚C 12mm
7 75˚C 310˚C 330˚C 250˚C 10mm
8 75˚C 310˚C 330˚C 250˚C 11mm
9 75˚C 310˚C 330˚C 250˚C 12mm
10 80˚C 300˚C 325˚C 250˚C 10mm
11 80˚C 300˚C 325˚C 250˚C 11mm
12 80˚C 300˚C 325˚C 250˚C 12mm
13 80˚C 305˚C 330˚C 240˚C 10mm
14 80˚C 305˚C 330˚C 240˚C 11mm
15 80˚C 305˚C 330˚C 245˚C 12mm
16 80˚C 310˚C 320˚C 245˚C 10mm
17 80˚C 310˚C 320˚C 245˚C 11mm
18 85˚C 310˚C 320˚C 245˚C 12mm
19 85˚C 300˚C 330˚C 245˚C 10mm
20 85˚C 300˚C 330˚C 245˚C 11mm
21 85˚C 300˚C 330˚C 245˚C 12mm
22 85˚C 305˚C 320˚C 250˚C 10mm
23 85˚C 305˚C 320˚C 250˚C 11mm
24 85˚C 305˚C 320˚C 250˚C 12mm
25 85˚C 310˚C 325˚C 240˚C 10mm
26 85˚C 310˚C 325˚C 240˚C 11mm
27 85˚C 310˚C 325˚C 240˚C 12mm
Conclusion In most of the Indian industries, the acceptance criterion is only on the basis of specification limits specified by the designer. If any characteristic of a product / process falls between the specified limits, it is taken for granted that the product is uniformly good. But as per Taguchi’ s QLF, as the functional characteristic of a product deviates from the target value, it causes loss to the society. The more the deviation, the more is the loss, even if it is within the specified limits. Robust engineering methods are recommended at the early stages of product design to achieve the higher sigma levels. Robust engineering also reduces the time to market with the help of two step optimization. The results obtained from small scale laboratory experiments can be repeated under the large scale manufacturing conditions if the output characteristics are selected appropriately using S / N ratio.
Introduction to Six Sigma: Problem 1
A press brake is set up to produce a formed part to a dimension of 3″ ± 0.005″. A process study reveals that the process limits are at 3.002″ ± 0.006″, i.e. at a minimum of 2.996″ and a maximum of 3.008″. After corrective action, the process limits are brought under control to 3.001″ ± 0.002″.
Question 1. Calculate the Cp and Cpk of the old process.
Question 2. Calculate the Cp and Cpk of the corrected process.

Answers:
Question 1. Specification width (S) = 0.010″; process width (P) = 0.012″.
So Cp = S/P = 0.010/0.012 = 0.833
X̄ = 3.002″; design centre (D) = 3.000″
k = (X̄ − D)/(S/2) = 0.002/0.005 = 0.4
Therefore Cpk = (1 − k) Cp = (1 − 0.4) × 0.833 = 0.5

Question 2. Specification width (S) = 0.010″; process width (P) = 0.004″.
So Cp = S/P = 0.010/0.004 = 2.5
X̄ = 3.001″; design centre (D) = 3.000″
k = (X̄ − D)/(S/2) = 0.001/0.005 = 0.2
Therefore Cpk = (1 − k) Cp = (1 − 0.2) × 2.5 = 2.0

Using the simpler, alternative formula for Cpk (distance from the process mean to the nearer specification limit, divided by half the process width):
Question 1: Cpk = (3.005 − 3.002)/0.006 = 0.5
Question 2: Cpk = (3.005 − 3.001)/0.002 = 2.0
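The same Cp and Cpk calculation expressed as a small Python sketch (an assumed helper, not from the notes), checked against the press-brake example above:

```python
def cp_cpk(usl: float, lsl: float, mean: float, sigma: float):
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mean) / (3.0 * sigma), (mean - lsl) / (3.0 * sigma))
    return cp, cpk

# Old process: centred at 3.002", process width 0.012" -> sigma = 0.012 / 6
print(cp_cpk(3.005, 2.995, 3.002, 0.012 / 6))   # (0.833, 0.5)
# Corrected process: centred at 3.001", process width 0.004" -> sigma = 0.004 / 6
print(cp_cpk(3.005, 2.995, 3.001, 0.004 / 6))   # (2.5, 2.0)
```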
MODULE NO -15 Introduction to Robust Design and its applications
Out line
• Introduction. • Quality Control • Quality Engineering • Taxonomy of Quality • Evaluation of Quality loss • Tools used in Robust Design • Process Capability • Conclusions.
Introduction
Dr. G. Taguchi: Dr. Taguchi was born in 1924. He started his career at the Naval Institute of Japan between 1942 and 1945, then worked with the Ministry of Public Health and Welfare and the Ministry of Education, and subsequently moved to Nippon Telegraph and Telephone (NTT) in Japan. Dr. Taguchi is the inventor of the famous orthogonal array (OA) technique for the design of experiments; he published his first book on OAs in 1951. Taguchi also visited the Indian Statistical Institute between 1954 and 1955 and wrote a book on design of experiments.
Dr. Taguchi's philosophy is robust engineering design. He blended statistics with engineering applications and pioneered work in industrial experimentation. He is also the innovator of the quality loss function concept and promoted robust design; related to this, he propagated the signal-to-noise ratio phenomenon in SPC. He developed a three-stage off-line QC method, viz. system design, parameter design and tolerance design.
Genichi Taguchi, a Japanese statistician, is at the forefront of the pioneers of quality control. His major contribution is the concept of robust design, which is acclaimed as the most significant one throughout the world. His concepts have revolutionized the very idea of quality control, and these techniques are widely and successfully applied by manufacturing and service industries in advanced countries like Japan, the US and the UK. In the early 1970s, Taguchi developed the concept of the Quality Loss Function.
Quality
• Quality is defined as "fitness for use".
• As per G. Taguchi, the quality of a product is the (minimum) loss imparted by the product to the society from the time the product is shipped.
• Quality plays a vital role in all walks of life, starting from the household to big engineering and service industries.
Quality Control
• It is an activity of ensuring the manufacturing of good quality products which satisfy the customers’ needs.
• Quality control techniques are broadly classified into on-line and off-line.
• The on-line quality control techniques are applied to monitor a manufacturing process, to verify the levels of quality of goods already produced.
• The off-line quality control techniques are applied to improve the quality of a product/process in the design stage itself, i.e. before the products are manufactured and made available to customers.
For any company to compete in the world class market scenario, its leaders must understand, digest, disseminate and guide the implementation of simple and powerful tools that go well beyond the traditional quality control techniques. They are
• Design of Experiments (DOE) • Multiple Environment Over Stress Test (MEOST) • Quality Function Deployment (QFD) • Total Productive Maintenance (TPM) • Benchmarking • Poka-Yoke • Next Operation As a Customer (NOAC) • Supply Chain Management (SCM) • Failure Mode and Effect Analysis (FMEA) and • Cycle Time Reduction
The Design of Experiments (DOE) is one of the most powerful techniques that helps to achieve world-class quality.

Quality Engineering
Quality engineering consists of the activities directed at reducing variability and thereby reducing loss. The fundamental principle of robust design is to improve the quality of a product/process by minimizing the effect of the causes of variation without eliminating the causes. This is achieved by optimizing the product/process design to make the performance minimally sensitive to the causes of variation. The robust design process encompasses three stages, namely:
System Design – It is the process of applying scientific and engineering knowledge to produce a basic functional prototype design.
• Development of a system to function under an initial set of nominal conditions. • Requires technical knowledge from science and engineering. • Originality / Invention / Marketing strategy.
Parameter Design - It is the process of investigation towards identifying the settings of design parameters that optimizes the performance characteristics and reduces the sensitivity of engineering design to the source of variation (noise factors).
• Determination of control factor levels so that the system is least sensitive to noise.
• Involves use of orthogonal arrays and signal – to Noise Ratio. • Improves quality at minimal cost.
Tolerance Design – It is the process of determining the tolerances around the nominal settings identified in the parameter design process.
• Specification of allowable ranges for deviations in parameter values. • Involves cause detection and removal of causes. • Typically increases product cost. However, cost may be minimized by
experimenting to find tolerances that can be relaxed without adversely affecting quality.
Taxonomy of Quality
There are three fundamental issues regarding quality:
• To evaluate the quality
• To improve quality cost-effectively, and
• To monitor and maintain quality cost-effectively.

Quality characteristics are classified into two:
• Variable characteristics, and
• Attribute characteristics.

Variable characteristics can be classified into three types:
• Nominal-the-Best : a characteristic with a specific target value. Examples: dimension, clearance, viscosity, etc.
• Smaller-the-Better : here the ideal target value is zero. Examples: wear, shrinkage, deterioration, etc.
• Larger-the-Better : the ideal target value is infinity. Examples: strength, life, fuel efficiency, etc.

Attribute characteristics are based on visual inspection. Examples: appearance, taste, good/bad, etc.
Evaluation Of Quality Loss
• Traditional interpretation of quality loss • Taguchi’ s interpretation of quality loss.
Traditional interpretation of quality loss Step Function
Taguchi emphasizes that the loss incurred by a product which falls just inside the LSL and one which falls just below the LSL is almost the same. The problem with most of the traditional measures of quality (rework rate, scrap rate, Cp, Cpk, etc.) is that by the time we get these figures, the product is already either in production or in the hands of the customer.
Taguchi’s interpretation of Quality Loss Function (TQLF)
• The objective of TQLF is the quantitative evaluation of loss resulting from the functional variation of the output quality characteristic from the target value.
• The two important points to be considered to establish Taguchi’ s QLF are * Consumer tolerance and * the customer loss
Taguchi’s Quadratic Representation of the QLF
There are three cases of TQLF namely :
• TQLF for Nominal-The-Best • TQLF for Smaller-The-Better and • TQLF for Larger-The-Better.
To establish the characteristics of Taguchi's QLF, the two important aspects to be considered are consumer tolerance and consumer loss.

TQLF for Nominal-the-Best (NTB): Taguchi's QLF for the case of NTB is given by
L(y) = K (y − T)²
where L(y) = loss in rupees per unit of product for the output characteristic y, T = target value of y, and K = constant of proportionality that depends upon the financial importance of the output characteristic. Taguchi recognized the loss as a continuous function; it does not occur suddenly. In the quadratic representation L(y) = K (y − T)², the loss L(y) is minimum at y = T and increases as y deviates from the target value T.
K = A0 / Δ0²
where Δ0 is the consumer tolerance and A0 is the consumer loss, as shown in the figure below.
TQLF for Nominal-The-Best
TQLF For Smaller-The-Better (STB) When the out-put characteristic is to be a minimum value, the loss function is characterized as “ Smaller-The-Better” . The examples for STB are shrinkage, pollution, radiation leakage etc.
The ideal value for this is zero. The Loss function is slightly different but the procedure is same as the Nominal–The–Best. For STB, the loss function is given by L(y) = K * y 2 and K = A0 / y0 2
TQLF For Larger-The-Better (LTB) The loss function for LTB is the reciprocal of the Smaller-The-Better case and is given by L (y) = K * (1 / y2) and K = Ao * yo2 . This is shown in figure below. Some examples of LTB are strength of a permanent adhesive, strength of a welded joint, fuel efficiency, corrosion resistance etc. The ideal value for LTB is infinity.
TQLF : Larger-The-Better
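A minimal Python sketch (not from the notes) of the three loss functions defined above; A0 is the consumer loss at the consumer tolerance (Δ0 or y0), and the numbers in the example call are purely illustrative.

```python
def loss_ntb(y: float, target: float, A0: float, delta0: float) -> float:
    """Nominal-the-best: L(y) = (A0 / delta0^2) * (y - target)^2."""
    return (A0 / delta0 ** 2) * (y - target) ** 2

def loss_stb(y: float, A0: float, y0: float) -> float:
    """Smaller-the-better: L(y) = (A0 / y0^2) * y^2."""
    return (A0 / y0 ** 2) * y ** 2

def loss_ltb(y: float, A0: float, y0: float) -> float:
    """Larger-the-better: L(y) = A0 * y0^2 / y^2."""
    return A0 * y0 ** 2 / y ** 2

# e.g. a dimension with target 10.0, consumer tolerance +/-0.5, consumer loss Rs 200:
print(loss_ntb(10.3, target=10.0, A0=200.0, delta0=0.5))   # loss grows quadratically
```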
y = Output Characteristic (nominal-the-Best) m = target Value
• Same product, same specifications • All are 100% inspected • Cost to you is the same from all four sources.
Which factory would you choose to be your vendor? Why?
[Figure: output distributions from four factories (1 to 4) relative to the LSL, USL and the target m]
It is, therefore, very much essential to analyze and quantify the losses to the society using TQLF. This will help in identifying the level of one’ s own quality in comparison with the competitors’ quality to take remedial actions for improvements, if necessary, to compete in the present day global competition. The TQLF can be used to determine the optimum tolerances for the levels of optimized parameters determined by the parameter design technique.
Tools Used in Robust Design
• Signal- To - Noise Ratio (S/N) - which measures quality
• Orthogonal Arrays - which are used to study many design parameters
simultaneously
[Figure: the same four factories' distributions between the LSL and USL, with their Taguchi quality losses per piece of $0.15, $0.27, $0.73 and $1.23]
S/N ratios for Static Problems

S/N ratio for Smaller-the-Better
When the output characteristic can be classified as smaller-the-better, with n observations y1, y2, ..., yn:
L(y) = K·y², so the average loss is K·(MSD), where K = A0/y0² and
MSD = (y1² + y2² + ... + yn²)/n
S/N (dB) = −10 log10 (MSD)

S/N ratio for Nominal-the-Best
When the output characteristic is classified as nominal-the-best, with n observations y1, y2, ..., yn:
ȳ = (1/n) Σ yi and s² = (1/(n−1)) Σ (yi − ȳ)²
S/N (dB) = 10 log10 (ȳ²/s²)

S/N ratio for Larger-the-Better
When the output characteristic can be classified as larger-the-better:
L(y) = K·(1/y²), so the average loss is K·(MSD), where K = A0·y0² and
MSD = (1/n) Σ (1/yi²)
S/N (dB) = −10 log10 (MSD)
S/N ratio for Fraction Defective
The quality characteristic is denoted by p, a fraction assuming values between 0 and 1. When the fraction defective is p, on average we have to manufacture 1/(1−p) pieces to produce one good piece. For every good piece produced there is a waste, and hence a loss, equivalent to the cost of processing {1/(1−p) − 1} pieces. Thus the quality loss Q is given by
Q = K · p/(1−p)
where K is the cost of processing one piece. Ignoring K, we obtain the objective function to be maximized, in the decibel scale, as
η = −10 log10 [p/(1−p)]

Ideal Function vs Reality
[Figure: a design's ideal function, y = βM, a straight line through the origin, versus reality, in which the observed y scatters about that line; y = output response, M = input energy/signal]
The most common way of expressing a design's ideal function is y = βM, where y is the output response and M is the input signal.
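A minimal Python sketch (not from the notes) of the static S/N ratios defined above; the sample readings are illustrative only.

```python
import math
from statistics import mean, stdev

def sn_smaller_the_better(y):
    msd = sum(v * v for v in y) / len(y)
    return -10.0 * math.log10(msd)

def sn_larger_the_better(y):
    msd = sum(1.0 / (v * v) for v in y) / len(y)
    return -10.0 * math.log10(msd)

def sn_nominal_the_best(y):
    return 10.0 * math.log10(mean(y) ** 2 / stdev(y) ** 2)

def sn_fraction_defective(p):
    return -10.0 * math.log10(p / (1.0 - p))

print(sn_smaller_the_better([0.12, 0.09, 0.15]))    # e.g. shrinkage readings
print(sn_larger_the_better([410.0, 395.0, 402.0]))  # e.g. weld strength readings
print(sn_nominal_the_best([9.9, 10.1, 10.0, 9.8]))  # e.g. a dimension near its target
print(sn_fraction_defective(0.02))                  # 2% defective -> about 16.9 dB
```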
Classification of Parameters
Signal factors – these are the parameters set by the user of the product to express the intended value for the response of the product. The signal factors are selected by the design engineer based on the engineering knowledge of the product being developed. Control factors – these are any design parameters of a system that engineers can specify by nominal values and maintain cost effectively. Noise factors – these are the variables that affect the system function and are either uncontrollable or too expensive to control. The Engineered System consists of four (4) components:
Block Diagram of a Product/Process

[Figure: engineered-system block diagram. The input signal M (the intent, from the voice of the customer) and the control factors feed the system/subsystem; noise factors also act on it; the system produces the output response Y (the result).]
The Relationship Between Loss and Noise Factors
The relationship between control factors and noise factors is represented graphically below.

[Figure: across the product development cycle (R&D/engineering, advanced planning, product design, process design, manufacturing, users/recycling), the number of control factors available decreases while the number of noise factors increases.]

Noise factors cause deviations of the functional characteristics from their target values, and these deviations produce loss to the society. Noise factors fall into three groups:
• Inner noise: deterioration
• Outer noise: variation in operating environments, human errors
• Between-product noise: manufacturing imperfections
[Flowchart: recommended robust-design experiment cycle]
1. Put a lot of thought into what you are going to measure as data (selection of the output characteristic).
2. Put a lot of thought into selecting control factors and levels.
3. Put a lot of thought into how you are going to treat noise factors.
4. Design the experiment with an orthogonal array, without assigning interactions among control factors; use L12, L18 or L36 as much as possible.
5. Conduct the experiment.
6. Analyze the data using the signal-to-noise ratio response table.
7. Make predictions and conduct a confirmation run.
8. If the prediction is confirmed, proceed; if not, return to the selection of the output characteristic, control factors and noise strategy.
Six Sigma Process Capability
Process capability is a measure of the inherent uniformity of the process. Before examining the sources and causes of variation and their reduction, we must measure the variation.
Yardsticks of process control:
• Cp - measures the capability of a process
• Cpk - capability of the process, corrected for non-centering
Process Capability Indices:
Cp is a measure of spread: Cp = Specification width (S) / Process width (P) = (USL − LSL)/(6σ)
Cpk is a measure of centering as well as spread: Cpk is the minimum of Cpu = (USL − µ)/3σ and Cpl = (µ − LSL)/3σ
The relationship between Cp and Cpk is Cpk = (1 − k) Cp, where the correction factor k = |T − µ| / (S/2) and T is the target (midpoint of the specification).
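A minimal Python sketch of these capability indices is given below; the sample data are simulated for illustration only.

import numpy as np

def capability_indices(data, lsl, usl):
    # Cp compares the specification width to the 6-sigma process width;
    # Cpk additionally penalizes an off-center process.
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpu = (usl - mu) / (3.0 * sigma)
    cpl = (mu - lsl) / (3.0 * sigma)
    return cp, min(cpu, cpl)

# Illustration with simulated thickness data against a 0.6 - 1.0 mm specification
rng = np.random.default_rng(1)
sample = rng.normal(loc=0.82, scale=0.05, size=100)
cp, cpk = capability_indices(sample, 0.6, 1.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")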
Raw Data from the Experiment
S/N Analysis
• Smaller-the-Better: η(dB) = −10 log10 [(1/n) Σ yi²]
• Larger-the-Better: η(dB) = −10 log10 [(1/n) Σ (1/yi²)]
• Nominal-the-Best (Type I): η(dB) = 10 log10 (ȳ²/s²)
• Nominal-the-Best (Type II): η(dB) = −10 log10 (s²)
Response Table & Graphs
•Control factors only
Characteristic Type
Nominal-the-Best Analysis
Smaller-the-Better Analysis
Larger-the-Better Analysis
Cp - Measure of variation
Process Capability
Sigma Levels and the Associated CP & Defects
• ± 2σ : Cp = 0.67, 308,537 PPM (1970s)
• ± 3σ : Cp = 1.00, 66,807 PPM (1980s)
• ± 4σ : Cp = 1.33, 6,210 PPM (early 1990s)
• ± 5σ : Cp = 1.66, 233 PPM (mid- to late 1990s)
• ± 6σ : Cp = 2.00, 3.4 PPM (2000s, world class)
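The figures in the list above can be reproduced numerically. The sketch below assumes the conventional 1.5 sigma long-term mean shift and, as is usual for these published values, counts only the near tail of the distribution.

from scipy.stats import norm

def defect_ppm(sigma_level, shift=1.5):
    # Defects per million with a 1.5-sigma shifted mean, near tail only
    return norm.cdf(-(sigma_level - shift)) * 1e6

def cp_of(sigma_level):
    # Cp = (USL - LSL) / (6 sigma) with spec limits at +/- sigma_level * sigma
    return sigma_level / 3.0

for k in (2, 3, 4, 5, 6):
    print(f"+/-{k} sigma: Cp = {cp_of(k):.2f}, about {defect_ppm(k):,.1f} PPM")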
Flow Chart Showing Stages of Quality Engineering:
• Quality Engineering
– Off-line quality engineering: engineering optimization using design of experiments, applied during product design and process design through system design (innovation), parameter design (optimization), and tolerance design (optimization)
– On-line quality engineering: process control
The Basic Steps In Parameter Design 1. Define project scope / objectives
– Define project objectives – Identify the system or subsystems – Select team leader and members – Establish overall strategies
2. Identify Ideal Function / Response to be Measured
– Establish intent, desired results – Define input signal and output response – Define ideal function. – Determine measurement feasibility
3. Develop Signal and Noise Factor Strategies
– Define Signal levels and ranges. – Identify all noise factors – Select critical noise factors and set levels – Determine Noise Strategies.
4. Establish control factors and levels – Identify all control factors – Select critical control factors and set levels – Select orthogonal array – Assign control factors to orthogonal array
5. Conduct experiments
– Plan / prepare for experiment – Conduct experiment – Collect data
6. Conduct data analysis – Calculate S/N ratios and β's – Complete/interpret response tables and graphs – Perform two-step optimization – Make predictions.
7. Conduct confirmation run
8. Implement and document results
Contributions of DOE to Business Excellence: A Spider Chart
Applications of Computer based Robust Engineering
• Reduction of R&D cycle time using Simulation based Robust Engineering. • Software testing and Algorithm Optimization. • Design of Information Systems for Pattern Analysis.
[Spider chart: contributions of DOE to business excellence, with scores of roughly 30% to 90% across dimensions including chronic problem solving, profit/ROI improvement, customer loyalty, cycle-time reduction, employee morale, design improvement, cost reduction, reliability improvement, space reduction and supplier improvement.]
MODULE NO -25
Measurement Systems Analysis and Gage R&R
Measurement System Analysis (MSA)
Introduction
The manufacturing environment, by its very nature, relies on two types of measurements to verify quality and to quantify performance: (1) measurement of its products, and (2) measurement of its processes. Therefore, product evaluation and process improvement require accurate and precise measurement techniques. Because all measurements contain error, in keeping with the basic mathematical expression Observed value = True value + Measurement error, understanding and managing measurement error, generally called Measurement Systems Analysis (MSA), is an extremely important function in process improvement (Montgomery, 2005). MSA is a comprehensive set of tools for the measurement, acceptance, and analysis of data and errors, and includes such topics as statistical process control, capability analysis, and gage repeatability and reproducibility, among others (Besterfield, 2004). MSA recognizes that measurements are made on both simple and complex products, using both physical devices and visual inspection methods that rely heavily on human judgment of product attributes. The purpose of MSA is to statistically verify that current measurement systems provide:
– Representative values of the characteristic being measured – Unbiased results – Minimal variability
Organizational Uses of MSA are:
• Mandatory requirement for QS 9000 certification. • Identify potential source of process variation. • Minimize defects. • Increase product quality.
All measurement processes contain some amount of variation. The variation can come from two sources: 1) differences between the parts made by the process, and 2) imperfections in the method of obtaining the measurements. Measurement system errors are of two types, as follows:
• Accuracy: difference between the observed measurement and the actual measurement.
• Precision: variation that occurs when measuring the same part with the same
instrument.
Any measurement system can have any of these problems. One could have a measurement device that measures parts with very little variation but is not accurate. One could also have an instrument where the average of the measurements is very close to the actual value but the variance is large (not precise). Finally, one could have a device that is neither accurate nor precise. Measurements are said to be accurate if their tendency is to center around the actual value of the entity being measured. Measurements are precise if they differ from one another by a small amount. Measured value = ƒ(TV + Ac + Rep + Rpr), where TV = true value, Ac = gage accuracy, Rep = gage repeatability, Rpr = gage reproducibility. Measurement system components:
• Equipment or gage – Type of gage
• Attribute: go-no go, Vision systems (part present or not present) • Variable: calipers, probe, coordinate measurement machines
– Unit of measurement - usually at least 1/10 of tolerance • Operator and operating instructions
σ²_observed = σ²_product + σ²_gage
Measurement error is considered to be the difference between a value measured and the true value. Types of gage variation:
• Systematic variation – Accuracy - improper calibration – Reproducibility - different persons using same equipment with different
techniques • Periodic variation
– Stability - wear, deterioration, environment • Random variation
– Repeatability (unable to locate part to be measured) Types of measurement variation
• Accuracy • Stability • Reproducibility • Repeatability
Accuracy : Difference between the true average and the observed average. (True average may be obtained by using a more precise measuring tool)
Stability: The difference in the average of at least 2 sets of measurements obtained with a gage over time.
Reproducibility:
Variation in average of measurements made by different operators using the same gage measuring the same part.
Repeatability: The random variation in measurements when one operator uses the same gage to measure the same part several times.
How do we improve gage capability?
• Reproducibility – operator training, or – more clearly define measurement scale available to the operator
• Repeatability – gage maintenance – gage redesign to better fit application
Accuracy of Measurement
• Broken down into three components:
1. Stability: the consistency of measurements over time. 2. Accuracy: a measure of the amount of bias in the system. 3. Linearity: a measure of the bias values through the expected range of
measurements. Precision of Measurement
• Precision, Measurement Variation, can be broken down into two components:
1. Repeatability (Equipment variation): variation in measurements under exact conditions.
2. Reproducibility (Appraiser variation): variation in the average of measurements when different operators measure the same part.
MSA Process Flow
1. Preparation for study 2. Evaluate stability 3. Evaluate resolution 4. Determine accuracy 5. Calibration 6. Evaluate linearity 7. Determine repeatability and reproducibility
Preparation for Study
• Objective: establish process parameter for the study. • Process: 1. Determine which measurement system will be studied. 2. Establish test procedure. 3. Establish the number of sample parts, the number of repeated readings, and the
number of operators that will be used. 4. Choose operators and sample parts
Evaluate Stability
• Objective: evaluate measurement system to determine if the system is in statistical control.
• Procedure: 1. Choose sample standards. 2. Measure sample standards three to five times. 3. Plot data on a x-bar and R chart.
• Analysis: 1. Determine if process is in control. 2. If process is unstable determine and correct the cause
Evaluate Resolution
• Objective: determine if the measurement system can identify and differentiate between small changes in the given characteristic.
• Process: 1. Choose a sample standard. 2. Measure the sample standard three to five times. 3. Repeat the process 10 to 25 times. 4. Plot data on a R chart. • Analysis 1. The resolution is inadequate if:
- There are only one, two, or three possible values for the range, or - There are only four possible values for the range when n >= 3. Determine Accuracy
• Objective: determine the variation between the observed measurement and the actual measurement of a part.
• Process: 1. Choose sample standards. 2. Measure sample standards 15 to 25 times using the same measuring device, the same operator, and the same setup. 3. Calculate x-bar 4. Calculate bias - Bias = Average – Reference Value 5. Calculate the upper and lower 95% confidence limit (CL).
• Analysis
1. If reference value is within the 95% CL then the bias is insignificant. 2. If reference value is outside the 95% CL then the bias is significant and measurement system must be recalibrated.
Calibration
• Objective: to ensure the instrument is accurate, and measurement bias is minimized.
• Process: calibrate instrument . Evaluate Linearity
• Objective: determine the difference between the obtained value and a reference value using the same instrument over the entire measurement space.
• Process:
1. Choose three to five sample standards that cover the measurement space. 2. Measure sample standards 15 to 25 times. 3. Calculate the average of the readings. 4. Calculate bias. 5. Plot reference values on x-y graph. 6. Calculate slope of the linear regression line. 7. Calculate linearity and percent linearity. 8. Calculate R2. • Analysis 1. The closer the slope is to zero, the better the instrument. 2. R2 gives indication of how well the “ best-fit” line accounts for variability in the
x-y graph.
Determine Repeatability and Reproducibility • Objective: determine variation in a set of measurement using a single instrument
that can be credited to the instrument itself, and to the entire measurement system. • Process 1. Generate random order for operators and parts to complete the run. 2. Repeat process for subsequent runs. 3. Have operators take measurements.
• Analysis: 1. Plot data 2. Run ANOVA (analysis of variance) on data. 3. Calculate total variance. 4. Calculate % Contribution and determine if acceptable. 5. Calculate % Contribution (R&R) 6. Calculate Process to Tolerance ratio (P/T) for repeatability. 7. Determine if P/T is acceptable.
Real World Application - Gauge R&R study of automobile radiator manufacturer. - After studying four characteristics of radiator components the following results were obtained:
• Any system having greater than 30% gauge R&R is considered inadequate. As seen in Table 1, all four characteristics’ %R&R is inadequate.
• Investigation of the measurement system led to a subsequent reduction of %R&R in three of the four characteristics to between 12% and 23%.
• Further investigation of the fourth characteristic, inlet hole diameter, led the examiners to a manufacturing problem. The team discovered high ovality in the inlet hole, which was caused by the cutting tool. The tool was modified to reduce ovality.
• Benefits of the study.
1. Reduced measurement variation. 2. Increased operator confidence regarding their aptitude for conducting gauge R&R
studies. 3. Paved the way for further studies within the firm.
An Exercise – Calculating EV, AV, R&R, and TV
• Given: EV = 5.15(s0) , AV = 5.25(s1)
R&R = √ (EV2 + AV2)
TV = √ (EV2 + AV2 + PV2)
• Where: s0 = gauge standard deviation = 0.05
s1 = true appraiser standard deviation = 0.1
PV = part-to-part variation = 0.02
• Calculate R&R and TV
• Is the calculated R&R acceptable?
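A short Python sketch of the exercise above follows; note that the 5.25 multiplier for AV is taken exactly as given in the handout (the usual AIAG constant is 5.15, so 5.25 may be a typographical variant).

import math

s0, s1, pv = 0.05, 0.1, 0.02        # gauge sd, appraiser sd, part-to-part variation
ev = 5.15 * s0                       # equipment variation (repeatability)
av = 5.25 * s1                       # appraiser variation (reproducibility), as stated
rr = math.sqrt(ev**2 + av**2)        # combined repeatability & reproducibility
tv = math.sqrt(ev**2 + av**2 + pv**2)
print(f"R&R = {rr:.3f}, TV = {tv:.3f}, %R&R = {100 * rr / tv:.1f}%")
# %R&R is far above 30%, so by the usual criterion this measurement system
# would not be considered acceptable.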
Gage R&R Analysis – Analysis of Repeatability and
Reproducibility
This is a technique to measure the precision of gages and other measurement systems. The name of this technique originated from the operation of a gage by different operators for measuring a collection of parts. The precision of the measurements using this gage involves at least two major components: the systematic difference among operators and the differences among parts. The Gage R&R analysis is a technique to quantify each component of the variation so that we will be able to determine what proportion of the variability is due to the operators and what is due to the parts. A typical gage R&R study is conducted as follows. A quality characteristic of an object of interest (parts, or any well defined experimental units for the study) is selected for the study. A gage or a certain instrument is chosen as the measuring device. J operators are randomly selected. I parts are randomly chosen and prepared for the study. Each of the J operators is asked to measure the characteristic of each of the I parts r times (repeatedly measuring the same part r times). The variation among the r replications of a given part measured by the same operator is the repeatability of the gage. The variability among operators is the reproducibility. Gage repeatability and reproducibility studies determine how much of your observed process variation is due to measurement system variation. The overall variation is broken down into three categories: part-to-part, repeatability, and reproducibility. The reproducibility component can be further broken down into its operator and operator-by-part components.
Gage R&R Studies
Gage repeatability and reproducibility (R&R) studies involve breaking the total gage variability into two portions: repeatability, which is the basic inherent precision of the gage, and reproducibility, which is the variability due to different operators using the gage.
• Gage variability can be broken down as
σ²_measurement error = σ²_gage = σ²_repeatability + σ²_reproducibility
• More than one operator (or different conditions) would be needed to conduct the gage R&R study.
Statistics for Gage R&R Studies (the Tabular Method)
• Say there are p operators in the study.
• The standard deviation due to repeatability can be found as
σ_repeatability = R̄ / d2
where R̄ = (R̄1 + R̄2 + ... + R̄p)/p is the average of the operators' average ranges, and d2 is based on the number of observations per part per operator.
• The standard deviation for reproducibility is given as
σ_reproducibility = R_x̄ / d2
where R_x̄ = x̄_max − x̄_min, with x̄_max = max(x̄1, x̄2, ..., x̄p) and x̄_min = min(x̄1, x̄2, ..., x̄p), and d2 is based on the number of operators, p.
Basic Terms
• EV= Equipment Variation (Repeatability)
• AV= Appraiser Variation (Reproducibility)
• R&R= Repeatability & Reproducibility
• PV= Part Variation
• TV= Total Variation of R&R and PV
• K1-Trial, K2-Operator, & K3-Part Constants
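The tabular (range-based) calculation described above can be sketched in Python as follows; the d2 constants are taken from standard control-chart tables, and the small data set is only a subset of the gasket study shown later, used for illustration.

import numpy as np

# d2 constants from standard control-chart tables for sample sizes 2-5 (assumed here)
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def gage_rr_tabular(per_operator):
    # per_operator: list of (parts x trials) arrays, one per operator
    rbar = [np.mean(m.max(axis=1) - m.min(axis=1)) for m in per_operator]
    xbar = [m.mean() for m in per_operator]
    trials = per_operator[0].shape[1]
    p = len(per_operator)
    sigma_repeatability = np.mean(rbar) / D2[trials]          # average range / d2
    sigma_reproducibility = (max(xbar) - min(xbar)) / D2[p]   # R_xbar / d2
    return sigma_repeatability, sigma_reproducibility

op_a = np.array([[0.65, 0.60], [1.00, 1.00], [0.85, 0.80]])
op_b = np.array([[0.55, 0.55], [1.05, 0.95], [0.80, 0.75]])
print(gage_rr_tabular([op_a, op_b]))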
• Generally two or three operators • Generally 10 units to measure • Each unit is measured 2-3 times by each operator • Determine if reproducibility is an issue. If it is, select the number of operators to
participate. • Operators selected should normally use the measurement system. • Select samples that represent the entire operating range. • Gage must have graduations that allow at least one-tenth of the expected process
variation. • Insure defined gaging procedures are followed. • Measurements should be made in random order. • Study must be observed by someone who recognizes the importance of conducting
a reliable study. Procedure for Performing R&R Study
• Calibrate the gage, or assure that it has been calibrated. • Have the first operator measure all the samples once in random order. • Have the second operator measure all the samples once in random order. • Continue until all operators have measured the samples once (this is Trial 1). • Repeat above steps for the required number of trials. • Use GR&R form to determine the statistics of the study.
– Repeatability, Reproducibility & %GR&R – Standard deviations of each of the above – % Tolerance analysis
• Analyze results and determine action, if any.
Variable Gage R&R acceptance guidelines (%R&R):
• < 5% - no issues
• ≤ 10% - gage is OK
• 10% - 30% - may be acceptable based upon importance of application and cost factors
• Over 30% - gage system needs improvement/corrective action
Gasket Thickness Study (readings by operator/trial for parts PT1 to PT10):
A1: 0.65 1.00 0.85 0.85 0.55 1.00 0.95 0.85 1.00 0.60
A2: 0.60 1.00 0.80 0.95 0.45 1.00 0.95 0.80 1.00 0.70
B1: 0.55 1.05 0.80 0.80 0.40 1.00 0.95 0.75 1.00 0.55
B2: 0.55 0.95 0.75 0.75 0.40 1.05 0.90 0.70 0.95 0.50
C1: 0.50 1.05 0.80 0.80 0.45 1.00 0.95 0.80 1.05 0.85
C2: 0.55 1.00 0.80 0.80 0.50 1.05 0.95 0.80 1.05 0.80
X̄ & R Minitab Example using the Aiag49.mtw data file. Specification: 0.6 - 1.0 mm. Process variation: 1.6 mm. Reference: QS Measurement System Analysis Manual.
Gage R&R Study for Thickness – XBar/R Method
Source              Variance   StdDev     5.15*Sigma
Total Gage R&R      2.08E-03   0.045650   0.235099
Repeatability       1.15E-03   0.033983   0.175015
Reproducibility     9.29E-04   0.030481   0.156975
Part-to-Part        3.08E-02   0.175577   0.904219
Total Variation     3.29E-02   0.181414   0.934282

Source              %Contribution   %Study Var   %Tol     %Process
Total Gage R&R      6.332           25.164       58.77    14.69
Repeatability       3.509           18.733       43.75    10.94
Reproducibility     2.823           16.802       39.24     9.81
Part-to-Part        93.668          96.782       226.05   56.51
Total Variation     100.000         100.000      233.57   58.39
[Minitab Gage R&R (Xbar/R) graphical output for Thickness: components of variation, R chart by operator (R̄ = 0.03833, UCL = 0.1252), X̄ chart by operator (mean = 0.8075, UCL = 0.8796, LCL = 0.7354), response by gasket, response by operator, and operator*gasket interaction plots.]
Number of distinct categories = 5
Calculation Explanation
• 5.15 Sigma = 5.15 × the factor's standard deviation. The multiplier 5.15 was developed empirically to approximate the spread of the gage population distribution.
• % Contribution = percent contribution of each factor based upon its variance. For example, repeatability = 100 × (repeatability variance / total variation variance).
• % Study Variation = 100 × (5.15 × the factor's standard deviation) / (5.15 × the total variation standard deviation).
• % Tolerance = 100 × (5.15 × the factor's standard deviation) / tolerance.
• % Process Variation = 100 × (5.15 × the factor's standard deviation) / process variation.
• Number of Distinct Categories = (part standard deviation / total gage R&R standard deviation) × 1.41.
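The calculation rules listed above can be checked against the Minitab table with a few lines of Python; the variance components, tolerance (1.0 − 0.6 = 0.4 mm) and process variation (1.6 mm) are taken from the study above.

import math

def gage_rr_percentages(components, tolerance, process_variation, k=5.15):
    # components: dict of variance components (totals excluded)
    total_var = sum(components.values())
    total_sd = math.sqrt(total_var)
    for name, var in components.items():
        sd = math.sqrt(var)
        print(f"{name:15s} %Contribution={100*var/total_var:6.2f} "
              f"%StudyVar={100*sd/total_sd:6.2f} "
              f"%Tol={100*k*sd/tolerance:6.2f} "
              f"%Process={100*k*sd/process_variation:6.2f}")

gage_rr_percentages(
    {"Repeatability": 1.15e-3, "Reproducibility": 9.29e-4, "Part-to-Part": 3.08e-2},
    tolerance=0.4, process_variation=1.6)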
MODULE NO -26
Statistical Theory of Tolerances
Purpose of Specification. A specification is a definition of a design. The design remains a concept in the mind of the designer until he defines it through a verbal description, sample, drawing, writing, etc. It defines in advance what the manufacturer expects to make, and it defines what the consumer can expect to get. The specification serves as an agreement between manufacturer and consumer on the nature of the characteristics of the product. It is helpful to recognize the distinction between a design specification and an inspection specification. The design specification deals with what is desired in a manufactured article, i.e. it deals with the specification function. In contrast, the inspection specification deals with the means of judging whether what is desired is actually attained; in other words it deals with the inspection function (quality of conformance).
Specifying the tolerance. It is practically impossible to manufacture one article exactly like another, or one batch like another. Variability is one of the fundamental concepts of modern quality control. Therefore, the range of permissible difference in a dimension has been standardized under the name limits. The limits of size for a dimension of a part are the two extreme permissible sizes for that dimension (high limit and low limit). Design engineers have a tendency to specify tight tolerances for the following reasons:
1. Lack of time. Tolerances have to be set on many dimensions; therefore, the designers may not have sufficient time to give much attention to the tolerances on all dimensions. To be on the safer side, the designers are tempted to specify much closer tolerances.
2. The concept of factor of safety. Designers have been taught to allow for the unexpected or the unusual, i.e. overloading of the machine, use for an unintended purpose, change in the conditions of use. The designers may assume a larger factor of safety to anticipate failure of conformance by the shop.
3. Setting tolerances assuming ideal conditions. Design engineers tend to specify tolerances with reference to somewhat ideal conditions, assuming good machines, well trained operators, skilled supervision and good working conditions, or they use reference tables which may tacitly assume such factors. In actual practice, nearly ideal conditions may be obtained during some part of the process, but almost never for an extended period of time.
4. Lack of knowledge of the production process. The designers may not have sufficient knowledge about the production process. Therefore, they may design the product with little or no critical consideration of the various production problems involved in meeting the tolerance.
5. Lack of information about the process capability. In some cases the designers do not have information regarding the production facilities available in the plant, their condition and process capability.
6. Lack of awareness of the quantitative effect of tolerance decisions on factory economy.
7. Tendency of shop personnel to loosen the tolerances. The designers may be conscious of the difference between the blueprint tolerances and those which are actually enforced. Therefore, in order to get what they think they need, they tend to specify closer tolerances than they believe necessary.
Definition: It is not possible to manufacture each and every item identically, so it is customary to allow a certain variability in the measured quality characteristic, called the tolerance. Generally, in any industry the design section specifies what is to be produced and sets the dimensions and tolerances of the characteristic. The responsibility of the manufacturing department is to manufacture the items according to the specification laid down by the design department. The inspection department checks whether the product meets the specification given by the design department; unless there is proper co-ordination it is difficult to manufacture the item exactly. While establishing the specification limits, the following points must be considered:
1. Functional utility of the product.
2. Capability of the product and process.
3. Inspection procedures.
Tolerance spread: T = (U − L). It is set by the engineering design section to define the minimum and maximum values allowable for the product to work properly.
Theorems in Statistical Tolerance
• Addition theorem. When the components are added together linearly,
mean of the assembly: µ_assembly = µ_A + µ_B
standard deviation of the assembly: σ_assembly = √(σ_A² + σ_B²)
If n components are assembled together linearly, then the mean of the whole assembly is
µ_assembly = µ_A + µ_B + ... + µ_n
and the standard deviation of the assembly is
σ_assembly = √(σ_A² + σ_B² + ... + σ_n²)
• Difference theorem. When the components mate together, e.g. a shaft and a bearing,
mean of the difference (clearance): µ_clearance = µ_A − µ_B
standard deviation of the clearance: σ_clearance = √(σ_A² + σ_B²)
(the variances still add even though the dimensions are subtracted)
Assumptions to be made for the above formulae in statistical tolerancing:
1. The component dimensions are independent of each other and the components are assembled randomly.
2. The component dimensions must follow a normal distribution.
3. A control chart has to be maintained for each of the component dimensions / characteristics.
4. The actual average of each component is equal to the nominal value stated in the specification.
Conventional tolerancing versus statistical tolerancing:
Conventional tolerancing
1. 100% interchangeability of the components is possible for assembly.
2. The tolerances on the interacting dimensions are smaller than necessary.
3. No assumptions are necessary.
4. No special process control procedures are required.
Statistical tolerancing
1. A small % of the assemblies will fall outside the tolerance limit, but this can be corrected with selective assembly.
2. This method permits larger tolerances on the interacting dimensions.
3. The interacting dimensions must be independent of each other and each characteristic must follow a normal distribution.
4. The process average of each component must be maintained at the nominal dimension (target value).
Statistical Theory of Tolerances
Statistical Tolerance
Use of the statistical method of tolerancing can lead to economic production when we are dealing with interacting dimensions. Interacting dimensions are those which mate or merge with other dimensions to create a final result. A dimension of an assembled product may be the sum of the dimensions of several parts; an electrical resistance may be the sum of several electrical resistances of parts; or a weight may be the sum of a number of weights of parts. In such situations it is necessary to determine the relationship of the component tolerances to the tolerance of the sum. The statistical theory of tolerances results in larger component tolerances with no change in the manufacturing process and no change in the assembly tolerance. Larger tolerances increase the production output, minimize waste of material and productive effort, and are generally responsible for a reduction in manufacturing costs. This is the effect of the statistical approach. If an overall tolerance is fixed but not being met, then the problem is which component tolerances should be reduced to meet the overall tolerance. The statistical theorems can help to determine which of the component tolerances have the greatest effect on the overall tolerance. This information, when coupled with economic considerations on achieving a smaller tolerance, can form the basis for a decision.
A risk involved in the use of the statistical theorems is that it is possible that an assembly will result which falls outside of the assembly tolerance. However, the chance can be calculated and a judgment made on whether or not to accept the risk. The probability that the assembly length will fall outside the tolerance limits can be found by analyzing the area under the normal curve for assembly lengths. The assumptions are: 1. The component dimensions are independent and the components are assembled
randomly. 2. Each component dimension should be normally distributed. Some departure from this
assumption is permissible. 3. The actual average for each component is equal to the nominal value stated in the
specification.
Problem-1. Manufacturer A produces a metal piece whose dimension is normally distributed with X̄_A = 8.5 cm and R̄_A = 0.004 cm, based on a subgroup size of 4. Manufacturer B produces a second metal piece which is also normally distributed, with X̄_B = 6.5 cm and R̄_B = 0.005 cm, based on a subgroup size of 9. Company C purchases these two parts and assembles them together to obtain a combined dimension of 15 cm. What % of the combined assemblies would you expect to have a dimension in excess of 15.006 cm?
Solution: From the tables, for a subgroup size of 4, d2 = 2.059; for a subgroup size of 9, d2 = 2.97.
σ_A = R̄_A / d2 = 0.004/2.059 = 0.00194 cm
σ_B = R̄_B / d2 = 0.005/2.97 = 0.00168 cm
σ_C = √(σ_A² + σ_B²) = 0.002568 cm
The % of assembled items which have dimensions in excess of 15.006 cm is found from
Z = (x − µ_C)/σ_C = (15.006 − 15)/0.002568 = 2.33
From the tables, for Z = 2.33 the probability is 0.9901, i.e. 99.01%; hence 100 − 99.01 = 0.99% of assembled items will have dimensions greater than 15.006 cm.
Problem-2. Two parts A & B are received in an assembly operation, where each part is permanently attached to the other. When the combined width of the parts does not meet the specification of 10 ± 0.02, the assembled product must be scrapped. The width of part A is normally distributed with X̄_A = 3.5 and σ_A = 0.008, and the width of part B is normally distributed with X̄_B = 6.5 and σ_B = 0.012. The assembly is at random. Determine the % of the assembled product that has to be scrapped.
Solution:
µ_assembly = 3.5 + 6.5 = 10
σ_assembly = √(0.008² + 0.012²) = 0.0144
Z = (10.02 − 10)/0.0144 = 1.39, giving a tail probability of 0.0823 on each side.
Since the distribution is symmetric about the specification, the % of items that are scrapped or reworked is 2 × 0.0823 = 0.1646, i.e. 16.46%.
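Problems 1 and 2 above can be checked numerically with a short Python sketch (scipy's normal distribution is used in place of the printed tables):

from math import sqrt
from scipy.stats import norm

# Problem 1: estimate part sigmas from R-bar / d2, then combine for the assembly
d2 = {4: 2.059, 9: 2.970}
sigma_a = 0.004 / d2[4]
sigma_b = 0.005 / d2[9]
sigma_c = sqrt(sigma_a**2 + sigma_b**2)
print(1 - norm.cdf(15.006, loc=8.5 + 6.5, scale=sigma_c))   # about 0.0099, i.e. ~0.99%

# Problem 2: fraction of assemblies outside the 10 +/- 0.02 specification
sigma = sqrt(0.008**2 + 0.012**2)
scrap = norm.cdf(9.98, 10.0, sigma) + (1 - norm.cdf(10.02, 10.0, sigma))
print(scrap)                                                 # about 0.165, i.e. ~16.5%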
MODULE NO -27
Statistical Theory of Tolerances
Hanna-Varnum Diagram
A Hanna-Varnum diagram is used to determine the probability of interference when two normal distributions overlap. The diagram plots the ratio of the difference between the two means to the smaller standard deviation versus the ratio of the standard deviations.
Steps involved:
Step 1: Divide the larger standard deviation by the smaller standard deviation.
Step 2: Locate this value on the lower scale (x-axis).
Step 3: Subtract the averages, i.e. find the difference between the two means.
Step 4: Divide this difference by the smaller standard deviation.
Step 5: Locate this value on the vertical scale.
Step 6: Find the point which is above the lower-scale value and to the right of the vertical-scale value.
Step 7: Read the interference risk from the percentage curve passing nearest to this point on the graph.
Interference Tolerance
Interference is defined as a negative clearance. Interference exists when the shaft diameter is greater than the bearing diameter. If a negative clearance occurs, the clearance is taken as zero.
Problem-3. Two mating parts X & Y have an average clearance of 0.015 mm. Control chart analysis indicates that the standard deviations of X & Y are 0.025 mm and 0.075 mm respectively. Find the probability of interference between the two distributions and also the probability of the clearance being greater than 0.0175 mm. Assume normal distributions and random assembly.
Solution:
σ_clearance = √(0.025² + 0.075²) = 0.079 mm
Interference means a negative clearance, i.e. clearance < 0:
Z = (0 − 0.015)/0.079 ≈ −0.18
From the normal tables, the probability of interference is 0.4286, i.e. 42.86% of assembled items are expected to show interference between the two distributions.
Problem-4. The dimensions of two mating parts E and F are normally distributed with averages of 251.0 mm and 250.0 mm and standard deviations of 0.1 mm and 0.3 mm respectively. If the parts are assembled randomly, what percent of the assemblies will have (a) clearance greater than 1.2 mm, and (b) no defective parts, if the specifications of E and F are 251.0 ± 0.2 mm and 250.0 ± 0.5 mm respectively?
Solution:
Given X̄_E = 251.0 mm, σ_E = 0.1 mm and X̄_F = 250.0 mm, σ_F = 0.3 mm.
(a) Average clearance = X̄_E − X̄_F = 1 mm
σ_assembly = √(σ_E² + σ_F²) = √(0.1² + 0.3²) = 0.3162 mm
Z = (1.2 − 1.0)/0.3162 = 0.63; the corresponding area from the table = 0.7357.
Therefore, the fraction of assemblies with clearance greater than 1.2 mm = 1 − 0.7357 = 0.2643, i.e. 26.43%.
(b) For part E: Xmax = 251.2 mm, Xmin = 250.8 mm, X̄_E = 251 mm and σ_E = 0.1 mm.
Factor 1 = (251.2 − 251)/0.1 = +2, corresponding area = 0.9773
Factor 2 = (250.8 − 251)/0.1 = −2, corresponding area = 0.0228
Therefore the shaded area = 0.9773 − 0.0228 = 0.9545, which is the probability that part E will be non-defective.
For part F: Xmax = 250.5 mm, Xmin = 249.5 mm, X̄_F = 250 mm and σ_F = 0.3 mm.
Factor 1 = (250.5 − 250)/0.3 = +1.67, corresponding area = 0.9522
Factor 2 = (249.5 − 250)/0.3 = −1.67, corresponding area = 0.0478
Therefore the shaded area = 0.9522 − 0.0478 = 0.9044, which is the probability that part F will be non-defective.
The probability that an assembly of parts E and F will be non-defective = P_E × P_F = 0.9545 × 0.9044 = 0.86324, i.e. 86.324% of assemblies will have no defective part.
Problem-5. Control chart analysis indicates that the standard deviations of two mating parts C & D are 0.0016 cm and 0.004 cm respectively. It is desired that the probability of a clearance smaller than 0.004 cm should be 0.005. What separation between the average dimensions of C & D should be specified by the designer? Assume the data follow a normal distribution and random assembly. With this separation specified, what is the probability that parts assembled at random will have a clearance greater than 0.024 cm?
Solution:
σ_C = 0.0016 cm, σ_D = 0.004 cm
σ_clearance = √(0.0016² + 0.004²) = 0.0043 cm
(a) For a probability of 0.005, the Z value from the normal tables is −2.57.
−2.57 = (0.004 − µ_clearance)/0.0043, hence µ_clearance = 0.004 + 2.57 × 0.0043 = 0.015 cm
(b) % of items that have clearance > 0.024 cm:
Z = (0.024 − 0.015)/0.0043 = 2.09
From the tables, the probability of a clearance less than 0.024 cm is 0.9817. Therefore the % of assembled items having clearance > 0.024 cm is 1 − 0.9817 = 0.0183, i.e. 1.83%.
Problem-6. Control chart analysis indicates that the standard deviations of two mating parts have identical values of 0.0013 cm. It is desired that the probability of a clearance less than 0.003 cm should be 0.002. What separation between the average values of these dimensions should be specified by the designer? Assume normal distributions and random assembly. With this separation specified, what is the probability that the assembled items will have a clearance > 0.009 cm?
Solution:
σ1 = σ2 = 0.0013 cm
σ_comb = √(σ1² + σ2²) = √(2 × 0.0013²) = 0.00184 cm
(a) For a probability of 0.02, the Z value from the normal tables is −2.05.
−2.05 = (0.003 − µ_clearance)/0.00184, hence µ_clearance = 0.003 + 2.05 × 0.00184 = 0.006772 cm
(b) % of assembled items having clearance greater than 0.009 cm:
Z = (0.009 − 0.006772)/0.00184 = 1.21
From the normal tables, for Z = 1.21 the probability is 0.8869, i.e. 88.69%.
Therefore, the % of items having clearance > 0.009 cm is 100 − 88.69 = 11.31%.
Problem-7. Two mating parts A & B have dimensions of 2.610 cm and 2.615 cm respectively. Control chart analysis indicates that the standard deviations of A & B are 0.0012 cm and 0.0015 cm respectively. If the distributions of A & B are normal and centered about the specified dimensions, and the parts are assembled at random, find the probability of interference between the two distributions and also the probability of a clearance > 0.01 cm.
Solution:
(a) Average clearance = 2.615 − 2.610 = 0.005 cm
σ_clearance = √(0.0012² + 0.0015²) = 0.00192 cm
Interference corresponds to a negative clearance, i.e. clearance < 0:
Z = (0 − 0.005)/0.00192 = −2.60
From the tables, the probability of interference = 0.0047, i.e. 0.47% of assemblies are expected to have interference.
(b) Probability of clearance > 0.01 cm:
Z = (0.01 − 0.005)/0.00192 = 2.60
From the tables, the probability of a clearance greater than 0.01 cm is 0.0047, i.e. 0.47% of assembled items will have a clearance greater than 0.01 cm.
Problem-8. Control chart analysis indicates that the standard deviations of two mating parts C & D are 0.008 cm and 0.02 cm respectively. It is desired that the probability of a clearance smaller than 0.002 cm should be 0.005. What separation between the average dimensions of C & D should be specified by the designer? With this separation specified, what is the probability that two parts assembled at random will have a clearance greater than 0.12 cm?
Solution:
(a) σ_C = 0.008 cm, σ_D = 0.02 cm
σ_clearance = √(0.008² + 0.02²) = 0.0215 cm
For a probability of 0.005, the value of Z from the normal tables is −2.57.
−2.57 = (0.002 − µ_clearance)/0.0215, hence µ_clearance = 0.002 + 2.57 × 0.0215 = 0.0573 cm
(b) % of items having clearance > 0.12 cm:
Z = (0.12 − 0.0573)/0.0215 = 2.92
From the tables, the probability is 0.9982. Therefore the % of items having clearance > 0.12 cm is 1 − 0.9982 = 0.0018, i.e. 0.18%.
Problem-9. The gross weight of cement bags (bag plus cement) at the terminal dispatch stage of a cement factory is known to follow a normal distribution with mean = 51 kg and SD = 400 g. The weight of the empty bags before filling is known to follow a normal distribution with mean = 500 g and SD = 20 g. If the specified net weight is 50 kg minimum, can it be assumed that all the bags have the minimum net weight? If not, what % of the bags are underweight? What should be the minimum mean gross weight required in order to have no deficit in net weight?
Solution:
Gross weight = net weight + empty bag weight, so
µ_g = µ_N + µ_E and σ_g² = σ_N² + σ_E²
Given µ_g = 51 kg, σ_g = 0.4 kg, µ_E = 0.5 kg, σ_E = 0.02 kg, and minimum net weight = 50 kg.
µ_N = µ_g − µ_E = 51 − 0.5 = 50.5 kg
σ_N² = σ_g² − σ_E² = 0.4² − 0.02², so σ_N = 0.399 kg
(a) % of cement bags falling below 50 kg net weight:
Z = (50 − 50.5)/0.399 = −1.25
From the normal tables, for Z = −1.25 the probability is 0.1056, i.e. 10.56% of the bags fall below 50 kg; hence it cannot be assumed that all bags have the minimum net weight.
(b) There are no rejections when the lower specification limit coincides with the lower natural tolerance limit, LSL = LNTL = µ_N − 3σ_N:
50 = µ_N − 3 × 0.399, hence µ_N = 50 + 3 × 0.399 = 51.197 kg
The minimum mean gross weight required in order to have no deficit in net weight is 51.197 + 0.5 = 51.697 kg.
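Problems 5 to 8 all follow the same pattern: fix the mean clearance so that the probability of a small clearance is acceptably low, then evaluate the probability of a large clearance. A minimal Python sketch (using scipy rather than printed tables) is:

from math import sqrt
from scipy.stats import norm

def design_mean_clearance(sigmas, min_clearance, prob_below):
    # Mean clearance such that P(clearance < min_clearance) = prob_below
    sigma = sqrt(sum(s**2 for s in sigmas))
    mu = min_clearance - norm.ppf(prob_below) * sigma
    return mu, sigma

# Problem 5: sigma_C = 0.0016 cm, sigma_D = 0.004 cm
mu, sigma = design_mean_clearance([0.0016, 0.004], 0.004, 0.005)
print(mu)                                   # about 0.015 cm
print(1 - norm.cdf(0.024, mu, sigma))       # roughly 2%, close to the 1.83% found above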
MODULE NO -28
Statistical Theory of Tolerances
Application of STT in Other Areas
Setting Specification Limits on Discrete Components
It is often necessary to use information from a process capability study to set specifications on discrete parts or components that interact with other components to form the final product. This is particularly important in complex assemblies, or to prevent tolerance stack-up where there are many interacting dimensions. This section discusses some aspects of setting specifications on components to ensure that the final product meets specifications.
Linear Combinations: In many cases, the dimension of an item is a linear combination of the dimensions of the component parts. That is, if the dimensions of the components are x1, x2, ..., xn, then the dimension of the final assembly is
y = a1·x1 + a2·x2 + ... + an·xn
If the xi are normally and independently distributed with mean µi and variance σi², then y is normally distributed with mean µ_y = Σ ai·µi and variance σ_y² = Σ ai²·σi². Therefore, if µi and σi² are known for each component, the fraction of assembled items falling outside the specifications can be determined.
Problem-1. A linkage consists of four components assembled end to end (figure: a linkage assembly with four components x1, x2, x3, x4 and overall length y). The lengths of x1, x2, x3 and x4 are known to be x1 ~ N(2.0, 0.0004), x2 ~ N(4.5, 0.0009), x3 ~ N(3.0, 0.0004), and x4 ~ N(2.5, 0.0001). The lengths of the components can be assumed independent, because they are produced on different machines. All lengths are in inches. The design specifications on the length of the assembled linkage are 12.00 ± 0.10.
To find the fraction of linkages that fall within these specification limits, note that y is normally distributed with mean
µ_y = 2.0 + 4.5 + 3.0 + 2.5 = 12.0
and variance
σ_y² = 0.0004 + 0.0009 + 0.0004 + 0.0001 = 0.0018
To find the fraction of linkages that are within specification, we must evaluate
P{11.90 ≤ y ≤ 12.10} = P{y ≤ 12.10} − P{y ≤ 11.90}
= Φ((12.10 − 12.00)/√0.0018) − Φ((11.90 − 12.00)/√0.0018)
= Φ(2.36) − Φ(−2.36) = 0.99086 − 0.00914 = 0.98172
Therefore, we conclude that 98.172% of the assembled linkages will fall within the specification limits.
Problem-2. Consider an assembly of three components x1, x2 and x3 placed end to end, with overall length y. Suppose that the specifications on this assembly are 6.00 ± 0.06 in. Let each component x1, x2, and x3 be normally and independently distributed with means µ1 = 1.00 in., µ2 = 3.00 in., and µ3 = 2.00 in., respectively. Suppose that we want the specification limits to fall inside the natural tolerance limits of the process for the final assembly so that Cp = 1.50, approximately, for the final assembly; this implies that about 7 ppm defective is allowable.
The length of the final assembly is normally distributed. Furthermore, if the allowable number of assemblies nonconforming to specifications is 7 ppm, this implies that the natural tolerance limits must be located at µ_y ± 4.49 σ_y. Now µ_y = µ1 + µ2 + µ3 = 1.00 + 3.00 + 2.00 = 6.00, so the process is centered at the nominal value. Therefore, the maximum possible value of σ_y that would yield the desired value of Cp is
σ_y = 0.06 / 4.49 = 0.0134
That is, if σ_y ≤ 0.0134, then the number of nonconforming assemblies produced will be less than or equal to 7 ppm. Now let us see how this affects the specifications on the individual components. The variance of the length of the final assembly is
σ_y² = σ1² + σ2² + σ3² ≤ (0.0134)² = 0.00018
Suppose that the variances of the component lengths are all equal; that is, σ1² = σ2² = σ3² = σ² (say). Then σ_y² = 3σ², and the maximum possible value for the variance of the length of any component is
σ² = σ_y²/3 = 0.00018/3 = 0.00006
Effectively, if σ² ≤ 0.00006 for each component, then the natural tolerance limits for the final assembly will be inside the specification limits such that Cp = 1.50. This can be translated into specification limits on the individual components. If we assume that the natural tolerance limits and the specification limits for the components are to coincide exactly, then the specification limits for each component are as follows:
x1: 1.00 ± 3.00√0.00006 = 1.00 ± 0.0232
x2: 3.00 ± 3.00√0.00006 = 3.00 ± 0.0232
x3: 2.00 ± 3.00√0.00006 = 2.00 ± 0.0232
Problem-3. A shaft is to be assembled into a bearing (figure: assembly of a shaft and a bearing). The internal diameter of the bearing is a normal random variable, say x1, with mean µ1 = 1.500 in. and standard deviation σ1 = 0.0020 in. The external diameter of the shaft, say x2, is normally distributed with mean µ2 = 1.480 in. and standard deviation σ2 = 0.0040 in. When the two parts are assembled, interference will occur if the shaft diameter is larger than the bearing diameter, that is, if
y = x1 − x2 < 0
Note that the distribution of y is normal with mean
µ_y = µ1 − µ2 = 1.500 − 1.480 = 0.020
and variance
σ_y² = σ1² + σ2² = (0.0020)² + (0.0040)² = 0.00002
Therefore, the probability of interference is
P{interference} = P{y < 0} = Φ((0 − 0.020)/√0.00002) = Φ(−4.47) = 0.000004 (about 4 ppm)
which indicates that very few assemblies will have interference. In problems of this type, we occasionally define a minimum clearance, say C, such that
P{clearance < C} = α
Thus, C becomes the natural tolerance for the assembly and can be compared with the design specification. In our example, if we establish α = 0.0001 (i.e., only 1 out of 10,000 assemblies or 100 ppm will have clearance less than or equal to C), then
(C − 0.020)/√0.00002 = −3.71
which implies that C = 0.020 − (3.71)√0.00002 = 0.0034. That is, only 1 out of 10,000 assemblies will have clearance less than 0.0034 in.
Problem-4. An assembly consists of four components x1, x2, x3 and x4 placed end to end, with overall length y. The lengths are known to be x1 ~ N(2.5, 0.03²), x2 ~ N(2.4, 0.02²), x3 ~ N(2.4, 0.04²), and x4 ~ N(3.0, 0.01²). The lengths of the components can be assumed independent, because they are produced on different machines. All lengths are in cm. The design specifications are 10.25 ± 0.15.
To find the fraction of assemblies that fall within these specification limits, note that y is normally distributed with mean
µ_y = 2.5 + 2.4 + 2.4 + 3.0 = 10.3 cm
and variance
σ_y² = 0.03² + 0.02² + 0.04² + 0.01² = 0.003 cm², so σ_y ≈ 0.055 cm
To find the fraction of assemblies that are within specification, we must evaluate
P{10.10 ≤ y ≤ 10.40} = P{y ≤ 10.40} − P{y ≤ 10.10}
= Φ((10.40 − 10.30)/0.055) − Φ((10.10 − 10.30)/0.055)
= Φ(1.818) − Φ(−3.636) = 0.9655 − 0.0001 = 0.9654
Therefore, we conclude that 96.54% of the assembled components will fall within the specification limits.
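The two ideas in this module, evaluating an assembly built from a linear combination of components and allocating a tolerance back to the components, can both be sketched in a few lines of Python (values taken from Problems 1 and 2 above):

from math import sqrt
from scipy.stats import norm

# Problem 1: fraction of linkages inside 12.00 +/- 0.10
means = [2.0, 4.5, 3.0, 2.5]
variances = [0.0004, 0.0009, 0.0004, 0.0001]
mu_y, sigma_y = sum(means), sqrt(sum(variances))
print(norm.cdf(12.10, mu_y, sigma_y) - norm.cdf(11.90, mu_y, sigma_y))  # about 0.982

# Problem 2: split the assembly tolerance equally among three components (Cp = 1.5 target)
sigma_y_max = 0.06 / 4.49                    # natural tolerance at +/- 4.49 sigma (~7 ppm)
component_sigma = sqrt(sigma_y_max**2 / 3)   # equal component variances assumed
print(3 * component_sigma)                   # half-width of each component spec, about 0.023 in.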
MODULE NO -29 Reliability
Introduction: The concept of reliability has been known for a number of years, but it has assumed greater significance ,and importance during the past decade, particularly due to impact of automation, development in complex missile and space programmers. The manufacture of highly complex equipment has served to focus greater attention on reliability. The complex products, equipments are made up of hundreds or thousands of components whose individual reliability determines the reliability of the entire equipment. Using various types of materials and fabricating operations, the industry has to build reliable performance into equipment and the products manufactured. As regards to the Indian industry, the reliability concept is yet to find a footing. The solutions to many of the problems of quality and economy remain handicapped because of inadequate appreciation of the reliability principles and techniques. However, reliability is only one of the tools of the management which must be supplemented by other tools like quality control and design of experiments for the solution of problems of quality and cost. Quality Control and Reliability Quality control maintains the consistency of the product and thus affects reliability. But it is entirely a separate function. Reliability is associated with quality over the long term whereas quality control is associated with the relatively short period of time, required for manufacture of the products. The task of reliability is to see that in a product design, full account has been taken of every contingency which may cause a breakdown in use and to forecast the components or assemblies that are likely to become defective in service. However, the equipment is designed, still it may be unreliable, if some component has not been fully evaluated under all service conditions, even if the production standards have been maintained by quality control during manufacture. Need for a reliable product The reliability of a system, equipment or product is very important aspect of quality for its consistent performance over its expected life span. In fact, Uninterrupted service and hazard free operation is the essential requirement of large complex systems like electric power generation and distribution plants or communication network such as railways, aero plane, automobile vehicles etc. In these cases a sudden failure of even a single component, assembly or system results in a health hazard, accident, or interruption in continuity of service. Thermal power plants provide electric power for domestic, commercial, industrial and agricultural use. Reliability problems may cause shut down or reduced generation of power resulting in load shedding and many other problems including loss of productive activities. Failure of anyone system of an air-craft may result in forced landing or an accident. Sudden stoppage of suburban railway train due to fault in the single system faulty carriage, interruption in the power supply or faulty track, sets up a chain of events
leading to disruption of service or accidents. Similarly, sudden failure of a car break system while it is running may cause severe accident. Unpredicted failure of a single critical component may be cause of anyone of the above. What is true of power plants, air-crafts, railways etc. is also true for other products like washing machine, mixer grinder, T.V. sets, Refrigerators etc. though failure of such products may cause inconvenience on a smaller scale. The problem of assuring and maintaining has many responsible factors, including original equipment design, control of quality during manufacture, acceptance inspection, field trials, life testing and design modifications. Therefore, deficiencies in design and manufacture of products which go to build such complex systems needs to be detected by elaborate testing at the development stage and later corrected by a planned programme of maintenance. Definitions of Reliability Reliability is ordinarily associated with the performance of the product. However, there would be little point in having an electric lamp which may light at the time of purchase but which may burn off after 200 hours of use. Reliability is the probability that a device will perform its intended function satisfactorily without failure for the stated period of time under the specified operating conditions. In the above definition there are 4 factors which are essential to the concept of reliability. Whenever the customer purchases a product he expects that it should give satisfactory performance over a reasonably long period. Hence, what is important is that a product should function and continue to function for a reasonable time. In practice, in majority of the cases, it may not be possible to test each and every product for its life or other performance requirements. Nevertheless, it is a well known experience that each individual unit of product varies from the other units; some may have relatively long life. In view of the existence of this variation, Reliability is the probability of a product functioning in the intended manner over its intended life under the environmental conditions encountered. From this definition, there are, four factors associated with reliability. These are : 1) Numerical value 2) Intended function 3) Life 4) Environmental condition The introduction of this element of probability really makes the quantitative measurement of reliability possible. In other words, such measurements help to make reliability a number-a probability-that can be expressed as a standard. The second consideration for a product to be reliable is that it must perform a certain function or do a certain job when called upon. The phrase 'functioning in the intended manner' (satisfactory performance) implies that the device is intended for certain application. For example, in the case of electric iron, the intended application is that of applying intended degree of heat to the various types of fabrics. If instead it is used to keep a room
at a certain temperature, the electric iron might be inadequate because of the greater area to be heated and the change in environment. The third consideration for a product to be reliable is that it of time which ensures that the product is capable of working satisfactorily throughout the expected life. The fourth consideration for a product to be reliable is that of the environment conditions which have to be viewed broadly so as to include storage and transport conditions. Since these conditions too have significant effect on product reliability. When an equipment works well, and works whenever called upon to do the job for which it is designed, such equipment is said to be reliable. Failure is defined as the inability of an equipment not to breakdown in operation. The causes of unreliability of the product are many: one of the major causes is the increasing complexity of product. The multiplication law of probability Illustrates this very simply. Given an assembly made up of five components, each of which has a reliability of function of 0.95, the reliability of function of assembly is (0.95)5 or about 0.78. Many assemblies which are electronic in nature involve thousands of parts (a ballistic missile has more than 40,000 parts). Therefore, to have reasonable chance of survival for such assemblies the component reliability is of prime importance. Basic Elements of Reliability The basic elements required for an adequate specification or definition of reliability are as follows:
1. Numerical value of probability. 2. Statement defining successful product performance. 3. Statement defining the environment in which the equipment must operate.
4 Statement of the required operating time. 5. The type of distribution likely to be encountered in reliability measurement.
For a constant failure rate, reliability over a required operating period T follows the exponential (Poisson-process) model
R = e^(−T/θ)
where θ is the mean life and T is the required period of operation.
Failure Pattern for Complex Products
Complex products often follow a familiar pattern of failure. When the failure rate (number of failures per unit time) is plotted against a continuous time scale, the resulting chart is known as the "bath tub curve" (because of its shape).
[Figure: bath-tub curve of failure rate versus time, showing the early-failure, random-failure (useful life) and wear-out zones.]
This curve exhibits three distinct zones. These zones differ from each other in frequency of failure and in the cause of failure pattern. These are as follows: 1.Early failure period: (or burn in or the debugging period). This is characterized by high failure rates. It begins at the first point during manufacture
at which total equipment operation is possible, and continues for such a period of time as permits (through maintenance and repairs) the elimination of marginal parts that are initially defective, though not inoperative, and are unrecognizable as such until premature failure. Commonly, these are early failures resulting from defects in manufacturing, or other deficiencies which can be detected by debugging, running-in or extended testing. Failures in this zone are due to one or more of the following causes (i.e., assignable causes):
• Design deficiency • Manufacturing error
2. Random failure period: The constant failure rate period. It is characterized by a more or less constant failure rate. This is the rate at which the normal usage of the product occurs without any expectation of failures. Failures in this zone are due to chance causes. These are chance failures which may result from the limitations inherent in the design plus accidents caused by usage or poor maintenance or hidden defects which escape inspection. The period from A to B is the normal operating period in which the average failure rate remains fairly constant. 3.Wear out period: These are failures due to abrasion fatigue, creep, corrosion, vibration etc., e.g., the metal becomes embrittled, the insulation dries out. A reduction in failure rate requires preventive replacement of these dying components before they result in catastrophic failure Failures in this zone are due to one or more of the following causes:
• Ageing • Reduction • Wear & tear.
Achievement of Reliability. There are five effective areas for the achievement of reliability of the product. They are (I) Design (ii) Production (iii) Measurement and testing (iv) Maintenance and (v) Field operation. Design is the main cause of unreliability and a greater percentage of causes of unreliability can be traced out in this area.
Designing for Reliability
The following factors should be considered for achieving a reliable design:
1. Simplicity of product. The design should be as simple as possible; the error rate is directly proportional to complexity. The greater the number of components, the greater the chance of failure. Increased reliability is a natural by-product of equipment simplification.
2. Derating. Derating means providing a large safety margin, and is also used as a method of achieving design reliability. For example, a material with a tensile strength of 10,000 kg/cm2 might be prescribed where only 7,000 kg/cm2 is required.
3. Redundancy. Redundancy is the provision of stand-by or parallel components or assemblies to take over in the event of failure of the primary item. Even when the most reliable components are used and their number is kept to a minimum, there may be one or two components with lower reliability. To overcome this, more of such components are included and so arranged that the whole equipment continues to survive as long as at least one of them survives. Auxiliary power generators are examples of redundant items; they are put into service when the primary system fails.
4. Safe operation. Parts should be designed with fail-safety in mind; how a component fails is important. If possible, failure should occur in a non-catastrophic manner and should do no harm to the operator.
5. Protection from extreme environmental conditions. An item protected from extremes of environmental conditions will have increased reliability. For example, pilots of supersonic spacecraft are protected from the effects of extremes of heat and cold, and the electric motors of common household appliances are rubber-mounted to protect them from vibration.
6. Maintainability and serviceability. These are important considerations in designing for reliability. Ease of maintenance and service contributes to higher field reliability; an item which is easy to maintain naturally receives better maintenance and service.
Reliability Tests. Reliability testing means the tests conducted to verify that a product will work satisfactorily for a given time period. Reliability testing therefore consists of functional testing, environmental testing and life testing.
Functional Test. Functional testing determines whether the product will function at time zero.
Environmental Test. Environmental conditions (temperature, humidity, vibration, etc.) are critical to many products.
Environmental testing consists of determining the expected environmental levels and then carrying out the functional test under the environments in which the product has to operate.
Relationship between the failure rate and the mean time between failures (MTBF), MTTF and MTTR
MTBF (repairable systems only) is defined as the mean time interval between successive failures; it is denoted by θ. Related to the MTBF is the failure rate, which is denoted by λ. The failure rate is the reciprocal of the MTBF:
MTBF = 1/λ
If a large number of items of the same type are placed on test and operated until each one fails, the mean time to failure (non-repairable systems) is
MTTF = (ΣTi)/n
where n = number of items failed and Ti = test duration (time to failure) of each item. MTTR denotes the mean time to repair.
Prove that the failure rate is the reciprocal of the MTBF:
Proof: Let n items be tested for t hours each, with failed items repaired and returned to test. Suppose there are r failures. Then the failure rate is
λ = no. of items failed / total test duration = r/(n t)
and
MTBF = total test duration / no. of items failed = (n t)/r
so that λ × MTBF = [r/(n t)] × [(n t)/r] = 1, i.e. λ = 1/MTBF. Hence proved.
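The definitions above translate into a few lines of code. A minimal Python sketch with illustrative test figures of my own (the function names are not from the text):

```python
import math

def failure_rate(failures, items, hours_each):
    """lambda = number of failures / total test duration (repairable items)."""
    return failures / (items * hours_each)

def mtbf(lam):
    """MTBF is the reciprocal of the failure rate."""
    return 1.0 / lam

def mttf(times_to_failure):
    """MTTF = sum of individual times to failure / number of items (non-repairable)."""
    return sum(times_to_failure) / len(times_to_failure)

def reliability(lam, t):
    """Exponential model: R(t) = e**(-lambda * t)."""
    return math.exp(-lam * t)

# Illustrative figures: 4 failures while 20 items each ran 500 hrs on test.
lam = failure_rate(failures=4, items=20, hours_each=500)
print(lam, mtbf(lam))                # 0.0004 failures/hr, MTBF = 2500 hrs
print(reliability(lam, 100))         # survival probability for a 100 hr mission, ~0.96
print(mttf([120.0, 380.0, 1000.0]))  # 500.0 hrs for three non-repairable items
```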
The reliability function of any system is given by
R = e^(-λt)
where R is the probability of survival (it may therefore also be denoted by P).
Problem 1: Determine the reliability of a system for a period of 100 hrs if the MTBF is 500 hrs.
Solution: MTBF = 500 hrs, t = 100 hrs
Failure rate λ = 1/MTBF = 1/500 = 0.002 failures/hr
R = e^(-λt) = e^(-0.002 × 100) = 0.8187
i.e. the probability of survival of the system is 0.8187, or 81.87%.
Problem 2: A device has a failure rate of 5 × 10^-6 failures/hr.
(a) What is the reliability for an operating period of 100 hrs?
(b) If 10,000 items are placed on test, how many failures are expected in 100 hrs?
(c) What is the MTBF?
(d) What is the reliability of the system for an operating time equal to the MTBF?
(e) If the useful life is 1,00,000 hrs, what is the reliability for operation over its useful life?
Solution: Failure rate of the system λ = 5 × 10^-6 failures/hr
(a) R = e^(-λt) = e^(-5 × 10^-6 × 100) = 0.9995
(b) n = 10,000, t = 100 hrs; expected no. of failures r = n t λ = (10,000)(100)(5 × 10^-6) = 5 failures
(c) MTBF = 1/λ = 1/(5 × 10^-6) = 2,00,000 hrs
(d) For t = MTBF = 2,00,000 hrs: R = e^(-5 × 10^-6 × 2,00,000) = e^(-1) = 0.3679
(e) For the useful life t = 1,00,000 hrs: R = e^(-5 × 10^-6 × 1,00,000) = 0.6065
Problem 3: A piece of ground support equipment has a specified mean time between failures of 100 hrs. What is its reliability for mission times of 1 hr, 10 hrs, 50 hrs, 100 hrs, 200 hrs and 300 hrs? Graph these answers by plotting mission time versus reliability. Assume an exponential distribution.
Solution: MTBF = 100 hrs
Failure rate λ = 1/MTBF = 1/100 = 0.01 failures/hr
(a) t = 1 hr, R = e^(-0.01 × 1) = 0.9900
(b) t = 10 hrs, R = e^(-0.01 × 10) = 0.9048
(c) t = 50 hrs, R = e^(-0.01 × 50) = 0.6065
(d) t = 100 hrs, R = e^(-0.01 × 100) = 0.3679
(e) t = 200 hrs, R = e^(-0.01 × 200) = 0.1353
(f) t = 300 hrs, R = e^(-0.01 × 300) = 0.0498
As the mission duration increases, the reliability of the system decreases.
Systems Reliability
1. Systems connected in series
2. Systems connected in parallel
Systems connected in series follow the multiplication law of probability. Consider 3 components A, B and C. If a system consists of 3 components A, B, C connected in series, then the reliability of the system is
[Graph for Problem 3: reliability plotted against mission time.]
[Figure: components A, B and C connected in series between input (I/P) and output (O/P).]
RS = RA . RB . RC
Systems connected in parallel
Here the function of A can be done by B, and vice versa. If the system consists of components A and B connected in parallel, with reliabilities RA and RB, then the reliability of the system is
RS = 1 - (1-RA)(1-RB)
If n components are connected in parallel, then
[Figure: components A and B connected in parallel between I/P and O/P, with reliabilities RA and RB.]
RS = 1 - [(1-RA)(1-RB) ... (1-Rn)]
If RA = RB = ... = Rn = R, then
RS = 1 - (1-R)^n
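The series and parallel rules can be captured in two small helpers. A minimal Python sketch (function names are mine; the numeric examples reuse values that appear later in this module):

```python
def series(reliabilities):
    """R_S = R_A * R_B * ...: every component must survive."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(reliabilities):
    """R_S = 1 - (1-R_A)(1-R_B)...: the system fails only if every component fails."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

def parallel_identical(r, n):
    """n identical components in parallel: R_S = 1 - (1-R)**n."""
    return 1.0 - (1.0 - r) ** n

print(series([0.8, 0.5, 0.7]))        # 0.28
print(parallel([0.8, 0.5, 0.7]))      # 0.97
print(parallel_identical(0.5, 2))     # 0.75
```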
MODULE NO -30 Reliability
Prove that the failure rate of a system is equal to the sum of the failure rates of its components, i.e. that for a series system the failure rates are additive.
Proof: Let RS, RA, RB, RC, ... be the reliabilities of the system and of its component parts A, B, C, ... For independent components in series, with exponentially distributed failure times,
RS = RA . RB . RC . ...   (1)
If λS, λA, λB, λC, ... are the failure rates of the system and of its component parts, then for a mean time tm equation (1) gives
e^(-λS tm) = e^(-λA tm) . e^(-λB tm) . e^(-λC tm) . ...
Since the base e is the same on both sides,
λS = λA + λB + λC + ... + λn
Hence proved.
Problem 1: A system has 3 units with failure rates of 1.5, 4 and 3.8 failures per 10^6 hrs.
(a) Find the MTBF of the system.
(b) Determine the reliability of the system for 10 hrs if the components are connected in series.
(c) Repeat if the components are connected in parallel.
Solution: Failure rates are always expressed per hour, but here they are given per 10^6 hrs; therefore
λA = 1.5 × 10^-6 failures/hr
λB = 4 × 10^-6 failures/hr
λC = 3.8 × 10^-6 failures/hr
Assuming the components are in series,
λS = λA + λB + λC = (1.5 + 4 + 3.8) × 10^-6 = 9.3 × 10^-6 failures/hr
(a) MTBF of the system: MTBFS = 1/λS = 1/(9.3 × 10^-6) = 1,07,526.88 hrs
(b) RS = e^(-λS t) = e^(-9.3 × 10^-6 × 10) = 0.9999
(c) If the components are connected in parallel:
RA = e^(-1.5 × 10^-6 × 10) = 0.9999
RB = e^(-4 × 10^-6 × 10) = 0.9999
RC = e^(-3.8 × 10^-6 × 10) = 0.9999
RS = 1 - [(1-RA)(1-RB)(1-RC)] = 1 - (1 - 0.9999)^3 ≈ 1
Problem 2: A series system has 3 independent parts A, B and C which have MTBFs of 100, 400 and 800 hrs respectively. Find
(a) the MTBF of the system,
(b) the failure rate of the system in failures per million hrs,
(c) the failure rate of the system in percent failures per 1000 hrs,
(d) the reliability of the system for 30 hrs.
Solution:
MTBFA = 100 hrs, MTBFB = 400 hrs, MTBFC = 800 hrs
λA = 1/100 = 0.01 failures/hr
λB = 1/400 = 0.0025 failures/hr
λC = 1/800 = 0.00125 failures/hr
(a) λS = λA + λB + λC = 0.01 + 0.0025 + 0.00125 = 0.01375 failures/hr
MTBFS = 1/λS = 1/0.01375 = 72.727 hrs
(b) λS = 0.01375 failures/hr = 0.01375 × 10^6 = 13,750 failures per million hrs
(c) λS = 0.01375 failures/hr = 0.01375 × 10^3 = 13.75 failures per 1000 hrs = 1,375 percent failures per 1000 hrs
(d) RS = e^(-λS t) = e^(-0.01375 × 30) = 0.66199
Problem 3: A system is composed of 10,000 parts. What average failure rate per part must be achieved to obtain a system MTBF of 250 hrs? Assume the parts are in series and independent.
Solution: MTBFS = 250 hrs, so λS = 1/MTBFS = 1/250 = 0.004 failures/hr for the 10,000 parts in series.
Average failure rate per part λ = λS/10,000 = 0.004/10,000 = 4 × 10^-7 failures/hr per part.
Problem 4: Determine the reliability of an equipment having an MTBF of 50 hrs for an operating period of 45 hrs. If the reliability has to be improved by 20%, what % change in the MTBF is required?
Solution: MTBF = 50 hrs, t = 45 hrs
λ = 1/MTBF = 1/50 = 0.02 failures/hr
R = e^(-λt) = e^(-0.02 × 45) = 0.40657
Reliability improved by 20%: RN = 0.40657 + (0.20)(0.40657) = 0.487884
RN = e^(-45 λN), so ln(0.487884) = -45 λN, giving λN = 0.0159 failures/hr
MTBFN = 1/λN = 1/0.0159 = 62.89 hrs
% improvement in MTBF = (62.89 - 50)/50 = 0.2578 = 25.78% increase
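Problems 1 to 3 above reduce to adding failure rates and converting units. A minimal Python sketch reproducing the Problem 2 and Problem 3 figures (variable names are mine):

```python
import math

# Series system: failure rates are additive (as proved above).
mtbfs = [100, 400, 800]                      # Problem 2: MTBFs of parts A, B, C (hrs)
lam_s = sum(1.0 / m for m in mtbfs)          # 0.01375 failures/hr
mtbf_s = 1.0 / lam_s                         # ~72.7 hrs
per_million_hrs = lam_s * 1e6                # 13,750 failures per 10^6 hrs
percent_per_1000_hrs = lam_s * 1000 * 100    # 1,375 percent failures per 1000 hrs
r_30 = math.exp(-lam_s * 30)                 # reliability for 30 hrs, ~0.662
print(lam_s, mtbf_s, per_million_hrs, percent_per_1000_hrs, r_30)

# Problem 3: average failure rate per part for a 10,000-part series system
# with a required system MTBF of 250 hrs.
lam_part = (1.0 / 250) / 10_000              # 4e-07 failures/hr per part
print(lam_part)
```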
MODULE NO -31
Reliability
Problem 5: The MTBF of a certain unit is 50 hrs. Calculate the reliability for a 75 hr operating period. If the reliability of the unit is increased by 10%, 20%, 30%, 40% and 50%, calculate
(a) the % change in the MTBF that is necessary;
(b) a graph of % change in reliability versus % change in MTBF.
Solution: MTBF = 50 hrs, t = 75 hrs
λ = 1/MTBF = 1/50 = 0.02 failures/hr
R = e^(-λt) = e^(-0.02 × 75) = 0.223
R increased by 10%: R10 = 1.10 × 0.223 = 0.2453
ln(0.2453) = -75 λ10, so λ10 = 0.0187 failures/hr and MTBF10 = 53.37 hrs
R increased by 20%: R20 = 1.20 × 0.223 = 0.2676
ln(0.2676) = -75 λ20, so λ20 = 0.01758 failures/hr and MTBF20 = 56.89 hrs
R increased by 30%: R30 = 1.30 × 0.223 = 0.2899
ln(0.2899) = -75 λ30, so λ30 = 0.0165 failures/hr and MTBF30 = 60.57 hrs
R increased by 40%: R40 = 1.40 × 0.223 = 0.3122
ln(0.3122) = -75 λ40, so λ40 = 0.01552 failures/hr and MTBF40 = 64.43 hrs
R increased by 50%: R50 = 1.50 × 0.223 = 0.3345
ln(0.3345) = -75 λ50, so λ50 = 0.01460 failures/hr and MTBF50 = 68.486 hrs
% change in MTBF for each case:
(53.37 - 50)/50 × 100 = 6.74%
(56.89 - 50)/50 × 100 = 13.78%
(60.57 - 50)/50 × 100 = 21.14%
(64.43 - 50)/50 × 100 = 28.86%
(68.486 - 50)/50 × 100 = 36.97%

Reliability | MTBF (hrs) | % change in MTBF
0.223       | 50         | -
0.2453      | 53.37      | 6.74
0.2676      | 56.89      | 13.78
0.2899      | 60.57      | 21.14
0.3122      | 64.43      | 28.86
0.3345      | 68.486     | 36.97
[Graph: % change in MTBF plotted against % change in reliability.]
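The table and graph for Problem 5 can be reproduced with a short loop under the same exponential assumption. A minimal Python sketch (variable names are mine; small differences from the tabulated values come from rounding 0.223):

```python
import math

MTBF, t = 50.0, 75.0
lam = 1.0 / MTBF
r0 = math.exp(-lam * t)                      # base reliability, ~0.223

for pct in (10, 20, 30, 40, 50):
    r_new = (1 + pct / 100.0) * r0           # improved reliability target
    lam_new = -math.log(r_new) / t           # failure rate needed to reach it
    mtbf_new = 1.0 / lam_new                 # MTBF needed to reach it
    change = (mtbf_new - MTBF) / MTBF * 100.0
    print(f"+{pct}% reliability -> MTBF {mtbf_new:.2f} hrs ({change:.2f}% increase)")
```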
Problem 6: An electronic system consists of a power supply whose failure rate is 30 failures per 10^6 hrs, a receiver whose failure rate is 25 failures per 10^6 hrs and an amplifier whose failure rate is 20 failures per 10^6 hrs, all in series. The equipment is to operate for 200 hrs. Determine its reliability.
Solution: Since the components are connected in series,
λP = 30 × 10^-6 failures/hr, λr = 25 × 10^-6 failures/hr, λa = 20 × 10^-6 failures/hr
λS = λP + λr + λa = (30 + 25 + 20) × 10^-6 = 75 × 10^-6 failures/hr
Reliability of the system R = e^(-λS t) = e^(-75 × 10^-6 × 200) = 0.9851
Problem-7: A cassette player has 4 subsystems, namely the spool control system, magazine pickup head, amplification & sound system, and other systems. All 4 subsystems must perform satisfactorily. The MTBFs of the various subsystems are:
Spool control system (SCS) = 2000 hrs
Magazine pickup head (MPH) = 2500 hrs
Amplification & sound system (A&S) = 4000 hrs
Other systems (OS) = 3000 hrs
Calculate the reliability of the cassette player for 1500 hrs of reception time. What is the mean time between failures of the cassette player?
Solution:
[Figure: the four subsystems SCS, MPH, A&S and OS connected in series.]
Since the components are connected in series:
λSCS = 1/2000 = 0.0005 failures/hr
λMPH = 1/2500 = 0.0004 failures/hr
λA&S = 1/4000 = 0.00025 failures/hr
λOS = 1/3000 = 0.00033 failures/hr
λS = 0.0005 + 0.0004 + 0.00025 + 0.00033 = 1.483 × 10^-3 failures/hr
(a) Reliability for 1500 hrs: R = e^(-λS t) = e^(-1.483 × 10^-3 × 1500) = 0.10807
(b) MTBF of the system = 1/λS = 1/(1.483 × 10^-3) = 675.68 hrs
Problem-8: An item is required to have a failure rate not greater than 0.1% per 1000 hrs of operation.
(a) Assuming a constant failure rate, what is the probability that one of these units will survive for a required 2000 hrs of service?
(b) Determine the minimum acceptable failure rate where the probability of survival for a required 2000 hrs of operation is 0.99.
Solution: λ = 0.1% per 1000 hrs = (0.1/100) failures per 1000 hrs = 0.001 × 10^-3 failures/hr = 0.1 × 10^-5 failures/hr
(a) R = e^(-λt) = e^(-0.1 × 10^-5 × 2000) = 0.998
(b) 0.99 = e^(-2000 λ), so ln 0.99 = -2000 λ and λ = 5.025 × 10^-6 failures/hr
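A short sketch of the cassette-player calculation and of Problem 8(b), under the same constant-failure-rate assumption (the subsystem abbreviations follow the text; the rest is mine):

```python
import math

# Problem 7: four cassette-player subsystems in series, MTBFs in hours.
mtbfs = {"SCS": 2000, "MPH": 2500, "A&S": 4000, "OS": 3000}
lam_s = sum(1.0 / m for m in mtbfs.values())   # ~1.483e-3 failures/hr
print(lam_s)
print(math.exp(-lam_s * 1500))                 # reliability for 1500 hrs, ~0.108
print(1.0 / lam_s)                             # system MTBF, ~674 hrs

# Problem 8(b): largest failure rate that still gives R = 0.99 over 2000 hrs.
print(-math.log(0.99) / 2000)                  # ~5.03e-6 failures/hr
```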
Problem-9: A participant in a motor rally is required to complete a mission of 2500 km. His vehicle can be imagined to have 3 subsystems, namely the fuel, ignition and other systems. The mean time to repair (MTTR) of the 3 subsystems is known to be 6000, 8000 and 10,000 km respectively. Find the reliability of his completing the mission without repair.
Solution:
λF = 1/6000 = 1.66 × 10^-4 failures/km
λI = 1/8000 = 1.25 × 10^-4 failures/km
λO = 1/10,000 = 1 × 10^-4 failures/km
Since the subsystems are connected in series,
λS = λF + λI + λO = (1.66 + 1.25 + 1) × 10^-4 = 3.91 × 10^-4 failures/km
R = e^(-λS t) = e^(-3.91 × 10^-4 × 2500) = 0.376
Problem-10: What is the failure rate for a piece of equipment if the probability of survival is 88% for a 900 hr operating period? Express the failure rate in terms of percent failures per 1000 hrs.
Solution: R = 0.88, t = 900 hrs
0.88 = e^(-900 λ), so ln 0.88 = -900 λ and λ = 1.42 × 10^-4 failures/hr
λ = 1.42 × 10^-4 failures/hr = 0.142 failures per 1000 hrs = 14.2 percent failures per 1000 hrs
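Problem 10, and Problem 11 below, both estimate a failure rate from observed data. A minimal Python sketch under the same constant-failure-rate assumption (variable names are mine):

```python
import math

# Problem 10: failure rate from an observed survival probability.
R, t = 0.88, 900.0
lam = -math.log(R) / t                 # ~1.42e-4 failures/hr
print(lam, lam * 1000 * 100)           # ~14.2 percent failures per 1000 hrs

# Problem 11 (below): one failure in a life test where the other items survive.
# lambda = number of failures / total unit-hours accumulated on test.
failures = 1
unit_hours = 5 * 750 + 1 * 350         # five survivors for 750 hrs + one failure at 350 hrs
print(failures / unit_hours)           # ~2.44e-4 failures/hr
```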
Problem-11: A 750 hr life test is performed on 6 components. One component fails after 350 hrs of operation; all others survive the test. Compute the failure rate.
Solution:
Failure rate λ = no. of items failed / total test duration = 1/[(5 × 750) + (1 × 350)] = 1/4100 = 2.439 × 10^-4 failures/hr
Redundancy
One of the methods for improving the reliability of a system is by utilizing the concept of redundancy. To enhance the reliability of the system, additional units are quite often built into the system to perform the same function. In such a system, one component failure will not necessarily cause system failure, since additional components are available to perform the same function. Redundancy is defined as the characteristic of a system by virtue of which marginal component failures are prevented from causing system failure, owing to the presence of additional components. In order to increase the reliability of a system, select the component which has the least reliability and arrange it in parallel. For example, consider the three components A, B and C connected in series:
[Figure: components A (RA = 0.8), B (RB = 0.5) and C (RC = 0.7) in series between I/P and O/P.]
RS = (RA)(RB)(RC) = (0.8)(0.5)(0.7) = 0.28
To increase the reliability of this system, select the component which has the least reliability (B) and arrange it in parallel.
RB’ = 1-[(1-RB) (1-RB)] = 1-(1-0.5)2 RB’ = 0.75
RS = (0.8)(0.75)(0.7) = 0.42
It is clear that the reliability of the system has increased from 0.28 to 0.42.
Derivation
Definition of improvement factor (IF)
In the case of n parallel redundancies, the improvement factor is defined as IF = (1-R)/(1-RS), where (1-R) is the unreliability of each component and (1-RS) is the unreliability of the system. Let n components with the same reliability R be connected in parallel, as shown in the figure below. We know that, if n components are connected in parallel, the reliability of the system is
[Figures: the example system with component B duplicated, i.e. A (0.8) in series with two parallel B blocks (0.5 each) and C (0.7), and its reduced equivalent A (0.8), B' (0.75), C (0.7) in series between I/P and O/P.]
RS = 1 - [(1-RA)(1-RB) ... (1-Rn)]
Since the reliabilities RA = RB = ... = Rn = R,
RS = 1 - (1-R)^n, and therefore 1 - RS = (1-R)^n
Hence the improvement factor is
IF = (1-R)/(1-RS) = (1-R)/(1-R)^n = (1-R)^(1-n) = Q^(1-n), where Q = 1-R.
Problem-1: A system consists of 3 components A, B and C. The configuration of the system and the reliabilities of the elements are given below. Calculate the reliability of the system.
Solution: RAB = 1 - [(1-RA)(1-RB)] = 1 - [(1-0.8)(1-0.7)] = 0.94
RS = (RAB)(RC) = (0.94)(0.9) = 0.846
Problem-2: Calculate the reliability of the configuration given below.
Solution: RAB = 1 - [ (1-RA)(1-RB) ] = 1 – [ (1-0.7) (1-0.7) ] = 0.91
RABC = (RAB) (RC) = (0.91) (0.9) = 0.819
RABCD = 1 - [ (1-RABC )(1-RD) ]
= 1 – [ (1-0.819) (1-0.8) ] = 0.9638
RABCDE = (0.9638) (0.9)
= 0.86742
Problem-3: What is the reliability of the system shown below? P(A) = P(B) = P(C) = 0.8, P(D) = 0.95, P(E) = 0.85
How would the reliability be improved further if subsystem E is also made parallel redundant? Show the configuration of the system.
Solution: Case 1: RABC = 1 - [(1-RA)(1-RB)(1-RC)]
= 1 - [(1-0.8)^3] = 0.992
RABCDE = (RABC) (RD) (RE)
= (0.992) (0.95) (0.85)
= 0.80104
Case 2 (with subsystem E made parallel redundant):
RABC = 1 - [(1-RA)(1-RB)(1-RC)]
= 1 - [(1-0.8)^3] = 0.992
RE' = 1 - [(1-RE)^2]
= 1 - [(1-0.85)^2] = 0.9775
RABCDE’ = (RABC) (RD) (RE’ ) = (0.992) (0.95) (0.9775) = 0.9212 % improvement in reliability = 0.9212-0.80104 / 0.80104 = 15% Problem-4: Determine the probability of success for the system with all units operating P(A) = 90%, P(A) =85%, P(c) =75% & P(D) = 80%
Solution:
P(A) = 0.9, P(B) =0.85, P(C) = 0.75, P(D) =0.80
RD’ = 1 - [ (1-RD)(1-RD) (1-RD) ] = 1 – [ (1-0.8)3 ] = 0.992
RABCD’ A = (RA) (RBC) (RD’ ) (RA) = (0.9) (0.9625) (0.992) (0.9) = 0.773388 Therefore probability of success for the system with all units operating ( i.e. reliability of the system) = 0.77388.
Problem-5: An electronic system consists of 5 subsystems (SS) with the following MTBFs:
SS A = 12,500 hrs, SS B = 2,830 hrs, SS C = 11,000 hrs, SS D = 9,850 hrs, SS E = 15,500 hrs.
These 5 subsystems are arranged in a series configuration. What is the probability of survival for an operating period of 800 hrs?
Solution:
λA = 1/MTBFA = 1/12,500 = 8 × 10^-5 failures/hr, so RA = e^(-8 × 10^-5 × 800) = 0.938
λB = 1/MTBFB = 4.2 × 10^-4 failures/hr, so RB = e^(-4.2 × 10^-4 × 800) = 0.7146
λC = 1/MTBFC = 1/11,000 = 9.09 × 10^-5 failures/hr, so RC = e^(-9.09 × 10^-5 × 800) = 0.9299
λD = 1/MTBFD = 1/9,850 = 1.02 × 10^-4 failures/hr, so RD = e^(-1.02 × 10^-4 × 800) = 0.9216
λE = 1/MTBFE = 1/15,500 = 6.4 × 10^-5 failures/hr, so RE = e^(-6.4 × 10^-5 × 800) = 0.95008
RS = (RA)(RB)(RC)(RD)(RE) = (0.938)(0.7146)(0.9299)(0.9216)(0.95008) = 0.5458
Problem-6: A step-down transformer, rectifier and filter comprise a series system. The failure rates of these components are as follows:
Transformer = 1.56 percent failures per 1000 hrs
Amplifier = 2 percent failures per 1000 hrs
Filter = 1.7 percent failures per 1000 hrs
The equipment is to operate for 1500 hrs. What is the probability of survival of the system?
Solution:
λT = 1.56 percent failures per 1000 hrs = 1.56 × 10^-5 failures/hr
λA = 2 percent failures per 1000 hrs = 2 × 10^-5 failures/hr
λF = 1.7 percent failures per 1000 hrs = 1.7 × 10^-5 failures/hr
RT = e^(-λT t) = e^(-1.56 × 10^-5 × 1500) = 0.97687
RA = e^(-λA t) = e^(-2 × 10^-5 × 1500) = 0.97044
RF = e^(-λF t) = e^(-1.7 × 10^-5 × 1500) = 0.97482
RS = (RT)(RA)(RF) = (0.97687)(0.97044)(0.97482) = 0.92412
Problem 7: Determine the reliability of the system for a 20 hr operating period. The configuration is given below; the failure rates per hour are also given.
λA = 0.01, λB = 0.015, λC = 0.02, λD = 0.02, λE = 0.025 failures/hr
RA = e^(-λA t) = e^(-0.01 × 20) = 0.8187
RB = e^(-λB t) = e^(-0.015 × 20) = 0.7408
RC = e^(-λC t) = e^(-0.02 × 20) = 0.6703
RD = e^(-λD t) = e^(-0.02 × 20) = 0.6703
RE = e^(-λE t) = e^(-0.025 × 20) = 0.6065
RBC = 1 - [(1-RB)(1-RC)] = 1 - [(1-0.7408)(1-0.6703)] = 0.9145
RABC = (RA)(RBC) = (0.8187)(0.9145) = 0.7487
RABCD = 1 - [(1 - RABC)(1 - RD)] = 1 - [(1 - 0.7487)(1 - 0.6703)] = 0.917146
RABCDE = (RABCD)(RE) = (0.917146)(0.6065) = 0.5562
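The final configuration combines both steps used throughout this module: exponential component reliabilities followed by network reduction. A minimal Python sketch (the block arrangement is read off the solution above; names are mine):

```python
import math

t = 20.0
lam = {"A": 0.01, "B": 0.015, "C": 0.02, "D": 0.02, "E": 0.025}   # failures/hr
R = {k: math.exp(-v * t) for k, v in lam.items()}                  # component reliabilities

R_BC = 1 - (1 - R["B"]) * (1 - R["C"])      # B in parallel with C
R_ABC = R["A"] * R_BC                        # in series with A
R_ABCD = 1 - (1 - R_ABC) * (1 - R["D"])      # that block in parallel with D
R_sys = R_ABCD * R["E"]                      # in series with E
print(round(R_sys, 4))                       # ~0.5563
```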