One-Step Screening and Process Optimization Experiments
Author(s): John Lawson
Source: The American Statistician, Vol. 57, No. 1 (Feb., 2003), pp. 15-20
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/3087272



One-Step Screening and Process Optimization Experiments

John LAWSON

It is often recommended that process optimization experiments be performed sequentially, starting with a screening design and followed by additional experiments to deconfound interactions, to proceed along the path of steepest ascent, or to fit a response surface. In many situations, however, experimenters do not have the luxury of follow-up experiments, and they must base their improvement recommendations on the result of one experiment. Better than choosing between an optimization or screening design, recent literature has shown that certain orthogonal array experiments can be used simultaneously for screening and optimization. This article describes these experiments and gives examples of one-step screening and optimization.

KEY WORDS: Complex confounding; Hidden projection properties; Main effect plan; Orthogonal array.

1. INTRODUCTION

It is often recommended that process optimization experiments be conducted sequentially. The suggested initial step is a screening experiment such as a 2^(k-p) or Plackett-Burman design to identify the important factors. Once this initial step is completed and the data analyzed, additional experiments may be recommended to deconfound main effects from potential interactions. After the important factors and interactions are identified, again additional experiments may be recommended. The purpose of more experiments at this stage is to expand the initial design to allow fitting a quadratic model to the data, or to proceed along the path of steepest ascent to a region where a response surface design can be conducted.

However, when experiments are conducted in a manufacturing facility, or a pilot plant (that can be scheduled only for a limited time), or when experiments are being conducted with prototype engineering designs with a fixed deadline for scale up and production, the possibility of follow-up experiments may be small. After beginning their conduct, some experimental plans must be revised when the experimenter finds he cannot vary factors independently, discovers important but previously unknown factors, or finds the chosen range for a factor is infeasible. But when time allotted for experimentation is tight, rarely is there liberty for follow-up experiments to deconfound interactions, or for follow-up experiments to allow fitting a quadratic model to the data. In many situations the hope of expanding the initial range of the factor settings in later experiments is bleak, and the best that can be hoped for is to find the optimum within the initial range tested.

John Lawson is Associate Professor, Department of Statistics, Brigham Young University, Provo, UT 84604 (E-mail: [email protected]).

For the reasons cited above, the recommended sequential experimentation strategy is often abandoned in many process improvement studies in favor of a single experiment. But the experimental designs normally recommended for screening and optimization experiments are mutually exclusive. If an experimenter can afford to run only one experiment, he is forced to choose between one objective or the other. If the experimenter chooses a classical response surface design, he must select a subset of the factors to work with and takes the chance of missing other important factors and improvement possibilities. On the other hand, if the experimenter decides to conduct a screening experiment, she may miss important interactions and quadratic effects that could lead to further process improvements and cost reductions.

It would be desirable to have experimental design plans that would allow for (1) screening a reasonably large number of factors, to identify the truly important ones, and (2) refitting a model, in the important factors, that would be useful in locating the optimum, or improved process operating conditions within the factor ranges studied.

Recent literature has shown that certain orthogonal-array main-effect plans have desirable properties for one-step screening and process improvement studies. Cheng and Wu (2001) proposed a strategy for using nonregular fractions of three-level designs for simultaneous factor screening and response surface exploration. Plackett-Burman designs can be used in a similar way for factor screening and constrained optimization. This article illustrates the use of both these designs, by example, for one-step screening and optimization experiments.

2. TRADITIONAL SCREENING AND OPTIMIZATION

Experimental objectives dictate the experimental design that should be used. The three experimental objectives encountered in process improvement studies are (1) screening, (2) constrained optimization, and (3) unconstrained optimization (see Lucas 1985; Lawson and Erjavec 2001). An experimental design suited for one of these objectives normally would not be considered for another objective.

When the objective is screening, a large number of factors are being varied with the goal of identifying the ones that have the most important effects on the response. In this environment the experimental designs usually recommended are highly fractionated 2^(k-p) designs and Plackett-Burman designs. These designs allow only two levels for each factor being studied, but as many as n - 1 factors can be studied in a design with n runs. The motive is that only the minimum and maximum of a wide range for quantitative factors need be tested, and only the most contrasting levels of factors with discrete levels need be tested in order to accentuate their effects. By restricting the levels to two, more factors can be studied so that no potentially important ones will be left out.
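As a concrete illustration of the screening-design arithmetic above, the 12-run Plackett-Burman design can be built by cyclically shifting a single generator row and appending a row of all minus signs. The sketch below (Python with NumPy; not part of the article, and the generator is one of several equivalent choices) verifies that all 11 columns are mutually orthogonal, which is what lets up to 11 main effects be estimated from 12 runs.

```python
import numpy as np

# Build the 12-run Plackett-Burman design: 11 cyclic shifts of a generator
# row plus a final row of all -1's.
gen = np.array([+1, -1, +1, -1, -1, -1, +1, +1, +1, -1, +1])
rows = [np.roll(gen, r) for r in range(11)]     # 11 cyclic shifts
rows.append(-np.ones(11, dtype=int))            # final all-minus run
X = np.array(rows)                              # 12 runs x 11 columns

# Every pair of distinct columns is orthogonal, so X'X = 12 * I
print(np.allclose(X.T @ X, 12 * np.eye(11)))    # -> True
```

Because the columns are orthogonal, each main-effect contrast is estimated independently of the other ten.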

(c) 2003 American Statistical Association, DOI: 10.1198/0003130031045. The American Statistician, February 2003, Vol. 57, No. 1.


After identifying the most important factors in the screening stage, an experimenter's next objective is constrained optimization or unconstrained optimization. When the objective is constrained optimization, the number of factors under study is usually not greater than 6, and the purpose is to identify the optimum combination of factor levels, among those actually tested. Full factorial designs (including multiple levels for factors with discrete levels) or resolution V fractional factorial designs are commonly employed for this objective. These designs allow estimation of interaction effects and identification of the optimum combination of factor levels. A useful model to fit to the data in a constrained optimization experiment is

y = β_0 + Σ_{i=1}^{k} β_i x_i + Σ_{i<j} β_ij x_i x_j,    (1)

which includes all main effects and two-factor interactions.

When only quantitative factors are being studied and curvilinear relationships with the response are possible, it may be possible to identify improved operating conditions by interpolating between the factor levels studied. When this is the goal, the objective is called unconstrained optimization. With an unconstrained optimization objective the experimenter is normally attempting to identify the optimum factor levels within a region defined by 5 or fewer quantitative factors. Response surface designs, such as the central composite or Box-Behnken, are normally used for this objective. These designs allow fitting the quadratic model given in Equation (2) to approximate the relationship between the response and the quantitative factors.

y = β_0 + Σ_{i=1}^{k} β_i x_i + Σ_{i<j} β_ij x_i x_j + Σ_{i=1}^{k} β_ii x_i^2    (2)

The surface defined by this quadratic model can be explored to identify the optimum operating conditions.
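Exploring the fitted quadratic surface amounts to locating its stationary point: writing model (2) in matrix form y = b_0 + b'x + x'Bx (with the β_ii on the diagonal of B and β_ij/2 off the diagonal), the gradient b + 2Bx vanishes at x* = -(2B)^(-1) b. A minimal sketch, with hypothetical fitted coefficients not taken from the article:

```python
import numpy as np

# Hypothetical fitted coefficients for a two-factor quadratic model (2):
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
b0 = 10.0
b = np.array([2.0, 1.0])                  # linear coefficients b1, b2
B = np.array([[-1.5, 0.25],               # B[i][i] = bii, B[i][j] = bij/2
              [0.25, -2.0]])

# Stationary point of y = b0 + b'x + x'Bx solves 2*B*x + b = 0
x_star = np.linalg.solve(-2 * B, b)

# Check: the gradient b + 2*B*x vanishes at x_star
grad = b + 2 * B @ x_star
print(np.allclose(grad, 0))               # -> True

# Negative definite B (all eigenvalues < 0) means x_star is a maximum
print(np.all(np.linalg.eigvalsh(B) < 0))  # -> True
```

In practice one also checks that the stationary point lies inside the region actually studied; otherwise the optimum on the boundary of the region is reported instead.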

3. ORTHOGONAL MAIN EFFECT PLANS WITH HIDDEN PROJECTION PROPERTIES

Orthogonal main effect plans are experimental designs in which all columns are orthogonal to each other but main effects are aliased with two-factor interactions. Orthogonal main effect plans, such as Plackett-Burman designs and certain fractions of 3^k designs, have complex confounding. By complex confounding we mean that interactions are partially confounded with many main effects rather than being completely confounded with one main effect, as they are in regular 2^(k-p) fractions. Lin and Draper (1992) and Wang and Wu (1995) showed that designs with complex confounding have a hidden projection property. The hidden projection property allows estimation of some interactions, even though the designs are of resolution III. This property makes these designs useful for one-step screening and optimization when projecting onto a subset of the factors.

Plackett-Burman designs are orthogonal two-level designs discovered during World War II for screening experiments with anti-aircraft proximity fuses (Box and Bisgaard 1993). These designs are fractions of a full 2^k factorial but are nonregular in the sense that they cannot be created from a defining relationship. The number of runs in these designs is a multiple of 4, and the 12-run, 20-run, and 24-run designs are useful for screening experiments in up to 11, 19, or 23 factors, respectively. Wang and Wu (1995) showed that when only a subset of the factors is important, it is possible to fit a model that includes main effects and two-factor interactions for all the important factors. This is a model usually fit to data from a resolution V fractional factorial design. Subsets of three factors from the Plackett-Burman designs form a partially replicated full factorial in these factors. Therefore Plackett-Burman designs appear to be excellent choices for one-step screening and constrained optimization experiments.
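The partially replicated full factorial claim can be checked directly: project a 12-run Plackett-Burman design onto three columns and count how often each of the 2^3 = 8 level combinations occurs. The sketch below (Python with NumPy; the cyclic construction and the choice of the first three columns are illustrative) shows that four cells appear twice and four appear once, so all eight combinations are covered.

```python
import numpy as np
from collections import Counter

# 12-run Plackett-Burman design: cyclic shifts of a generator row plus an
# all-minus row (a sketch; any valid PB generator gives the same pattern).
gen = np.array([+1, -1, +1, -1, -1, -1, +1, +1, +1, -1, +1])
X = np.vstack([np.roll(gen, r) for r in range(11)] +
              [-np.ones(11, dtype=int)])

# Project onto the first three columns and count each of the 8 cells
cells = Counter(map(tuple, X[:, :3]))
print(sorted(cells.values()))   # -> [1, 1, 1, 1, 2, 2, 2, 2]
```

Because every cell of the 2^3 factorial is occupied, the projected design supports estimation of all main effects and interactions among the three factors.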

Three nonregular fractions of 3^k factorials that would normally be considered screening designs were shown by Wang and Wu (1995) and Cheng and Wu (2001) to have desirable properties for fitting a full quadratic model to subsets of 3, 4, or 5 factors. These three designs are the OA(18, 3^7), the OA(27, 3^8), and the OA(36, 3^12). These designs are shown in the Appendix. The notation OA stands for orthogonal array. The first number in parentheses refers to the number of runs, and the superscript on the three shows the number of three-level factors that can be accommodated in the design. These designs can be used for screening 7, 8, or 12 three-level factors, respectively. Wang and Wu (1995) showed that a full quadratic model can be fit to 34 of the 35 subsets of three or four factors selected from the OA(18, 3^7) design. Cheng and Wu (2001) showed that a full quadratic model can be fit to 55 of the 56 subsets of five factors from the OA(27, 3^8). The subsets of factors that allow fitting the quadratic model are called eligible subsets. In almost all cases where these designs are used for screening, a full quadratic model in the important factors can also be fit to the same data by using methods of analysis that have been recommended for designs with complex confounding (such as those described in Section 4). Therefore these designs appear to be excellent choices for one-step screening and unconstrained optimization.
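Eligibility can be checked mechanically: a subset of factors is eligible when the full quadratic model matrix built from the projected design has full column rank. The sketch below (Python with NumPy; the OA(18, 3^7) is hard-coded from its standard tabulation, which is assumed to match Table A1 in the Appendix) counts the eligible three-factor projections, each of which needs rank 10 for the 10-parameter quadratic model.

```python
import numpy as np
from itertools import combinations

# OA(18, 3^7): the standard L18 array without its two-level column,
# levels coded 0, 1, 2 (assumed to match Appendix Table A1).
L18 = np.array([
    [0,0,0,0,0,0,0],[0,1,1,1,1,1,1],[0,2,2,2,2,2,2],
    [1,0,0,1,1,2,2],[1,1,1,2,2,0,0],[1,2,2,0,0,1,1],
    [2,0,1,0,2,1,2],[2,1,2,1,0,2,0],[2,2,0,2,1,0,1],
    [0,0,2,2,1,1,0],[0,1,0,0,2,2,1],[0,2,1,1,0,0,2],
    [1,0,1,2,0,2,1],[1,1,2,0,1,0,2],[1,2,0,1,2,1,0],
    [2,0,2,1,2,0,1],[2,1,0,2,0,1,2],[2,2,1,0,1,2,0]])
Z = L18 - 1.0          # recode levels as -1, 0, +1

def quad_model_matrix(cols):
    """Full quadratic model matrix for a subset of factors."""
    sub = Z[:, cols]
    parts = [np.ones((18, 1)), sub, sub**2]
    parts += [sub[:, [i]] * sub[:, [j]]
              for i, j in combinations(range(sub.shape[1]), 2)]
    return np.hstack(parts)

# A 3-factor projection is eligible when the 10-parameter quadratic model
# matrix has full column rank; count eligible triples out of C(7,3) = 35.
eligible = sum(np.linalg.matrix_rank(quad_model_matrix(c)) == 10
               for c in combinations(range(7), 3))
print(eligible)
```

The same rank check extends to four- and five-factor subsets by enlarging the model matrix accordingly.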

In addition to being able to fit a full quadratic model to subsets of the factors in the OA(18, 3^7), the OA(27, 3^8), and the OA(36, 3^12), the characteristics of the designs from a response surface perspective are quite good. Cheng and Wu (2001) showed that compared to the central composite designs (CCD) normally used for response surface experiments, the relative D-efficiency of the eligible nonregular fractions ranged from 81-99% for subsets of three factors, 65-96% for subsets of four factors, and 60-81% for subsets of five factors. The value in the range depends upon which of the three designs is used and the specific subset of factors. Guderian (1996) showed that the ratio of the maximum to minimum variance of a predicted value, and the degree of rotatability, for subsets of the OA(18, 3^7) were comparable to other traditional response surface designs with similar numbers of runs.

4. ANALYSIS OF ORTHOGONAL MAIN EFFECT PLANS WITH HIDDEN PROJECTION PROPERTIES

When analyzing data from orthogonal main effects plans, it is impossible to start with model (1) or model (2) in all the factors and then eliminate insignificant terms, because there are not enough degrees of freedom to fit the full model. Instead we must start by identifying important main effects, as in the analysis of a screening design, and then augment the model with interactions. This is not straightforward because of the complex confounding in orthogonal array designs, but it is possible since no interaction is completely confounded with a main effect.

Several methods have been proposed that allow detection of a few important interactions in orthogonal main effect plans with hidden projection properties. Using these methods with data from a Plackett-Burman design, the significant main effects and interactions in model (1) can be detected. When data from a three-level orthogonal array design are analyzed using these methods, the significant linear, quadratic, and interaction terms in model (2) can be detected.

Hamada and Wu (1992) proposed an iterative analysis strategy based on forward stepwise regression. Their strategy was guided by the principle of effect heredity. Models with effect heredity would not include a two-factor interaction unless the main effect of at least one of the two factors also appeared in the model. Lin (1998-1999) proposed a simpler forward stepwise regression, to detect interactions, that was not restricted by effect heredity. These two methods based on forward stepwise regression explore only a subset of the potential model space. Once a variable enters the model in a forward search, it will never be removed, even if it becomes superfluous after a subsequent variable enters the model. Therefore the model found by forward regression may not be the best for the data. When using this method, it may be unknown to the analyst that one, or several, other models may fit the data equally well as the one he has found.
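The heredity-restricted forward search can be sketched in a few lines. In the toy example below (Python with NumPy; the candidate terms and data are illustrative, not the article's), an interaction is allowed to enter only after a parent main effect is in the model, and each step adds the term that most reduces the residual sum of squares.

```python
import numpy as np

# A minimal forward-stepwise sketch with the effect-heredity restriction
# (in the spirit of Hamada and Wu 1992): an interaction may enter only if
# one of its parent main effects is already in the model.
def parents_ok(term, model):
    if "*" not in term:
        return True
    a, b = term.split("*")
    return a in model or b in model

def forward_select(cols, y, n_steps):
    model = []
    for _ in range(n_steps):
        def sse(term):
            X = np.column_stack([np.ones(len(y))] +
                                [cols[t] for t in model + [term]])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            return r @ r
        allowed = [t for t in cols if t not in model and parents_ok(t, model)]
        model.append(min(allowed, key=sse))
    return model

# Toy data: a replicated 2^2 factorial with y driven by A and A*B
A = np.array([-1., -1, 1, 1, -1, -1, 1, 1])
B = np.array([-1., 1, -1, 1, -1, 1, -1, 1])
y = 1.0 + 0.8 * A + 0.4 * A * B
cols = {"A": A, "B": B, "A*B": A * B}
print(forward_select(cols, y, 2))   # -> ['A', 'A*B']
```

Note that A*B is blocked at the first step (no parent in the model yet) and becomes admissible only after A enters, which is exactly the heredity restriction described above.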

Box and Meyer (1993) proposed a Bayesian method of identifying subsets of important factors in an orthogonal main effects plan with hidden projection properties. This method assigns prior probabilities to a series of models, M_i, that each consist of a subset of factors and interactions obeying the effect heredity principle. The marginal probability that a factor, f_j, is active is calculated by summing the posterior probabilities for each model it appears in. By including in the class of models all subsets up to a certain limit, this method can explore a much larger model space (like an all-subsets regression) than the methods based on forward stepwise regression.

Another Bayesian method of analysis, based on the stochastic search variable selection algorithm of George and McCulloch (1993), was proposed by Chipman, Hamada, and Wu (1997). They proposed to use priors that capture the dependence relation between the interactions and the main effects they are formed from. Using this method all possible models have a positive probability of being visited in a stochastic model search. This method computes posterior probabilities for the models, and when several models are identified with comparable posterior probabilities, the authors explain that the confounding in the design is such that the data are not informative about the exact model choice.

Table 1. Factors and Levels

Factor                       - Level     + Level
A. Hole geometry             Square      Chamfered
B. Capsule position          Vertical    Up/down
C. Feedback spring           V-shaped    U-shaped
D. Diaphragm material        Thin        Thick
E. O-ring material           Type A      Type B
F. Follower arm tolerance    Low end     High end

Lawson (2002) proposed an all-subsets regression analysis of the data after the list of candidates has been reduced by considering properties of the alias matrix. The alias matrix shows which interactions have the highest potential to contribute to the large contrasts. By eliminating the others, the list of candidates is reduced to the degree where computation time for all-subsets regression is feasible, and only the likely models are explored. The all-subsets regression explores a much larger model space, like the Bayesian methods, but can be performed using standard statistical packages such as SAS, Minitab, and S-Plus that include matrix computations.
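The all-subsets search itself is easy to sketch. In the toy example below (Python with NumPy; the data are simulated from a known model on a full 2^3 factorial, not taken from the article), an exhaustive search over subsets of a six-term candidate list recovers exactly the three terms that generated the response.

```python
import numpy as np
from itertools import product, combinations

# All-subsets regression over a reduced candidate list, in the spirit of
# Lawson (2002).  The response is generated without noise from
# y = 0.5 + 0.3*A + 0.2*B + 0.15*A*C on a full 2^3 factorial, so the
# search should recover exactly those three terms.
runs = np.array(list(product([-1.0, 1.0], repeat=3)))
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
y = 0.5 + 0.3 * A + 0.2 * B + 0.15 * A * C

names = ["A", "B", "C", "AB", "AC", "BC"]
cols = {"A": A, "B": B, "C": C, "AB": A * B, "AC": A * C, "BC": B * C}

def r_squared(terms):
    X = np.column_stack([np.ones(len(y))] + [cols[t] for t in terms])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sst = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / sst

for k in (1, 2, 3):
    best = max(combinations(names, k), key=r_squared)
    print(k, best, round(r_squared(best), 3))
# -> the size-3 winner is ('A', 'B', 'AC') with R^2 = 1.0
```

With real data the candidate list would first be pruned using the alias matrix, as described above, so that the exhaustive search stays computationally feasible.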

5. EXAMPLES OF ONE-STEP SCREENING AND OPTIMIZATION EXPERIMENTS

5.1 Example 1: Constrained Optimization

Six factors were studied to improve the performance of an electro-pneumatic valve positioner used for large valves in chemical processing plants and oil field applications. This was an existing product that was experiencing some problems in the field. Engineers were hoping to reduce the hysteresis, or the tardiness of the positioner to open or close the valve in response to a pneumatic signal. Table 1 lists the six factors studied. Five of the six factors shown had discrete levels. The high (+) levels for these factors represented the normal, and the low (-) levels represented experimental conditions. The sixth factor (follower arm tolerance) was quantitative, and the low and high levels were chosen to span the range of what was normally seen in practice. This product improvement study was allocated only a limited budget.

To find the optimum combination of levels for the six factors would require 2^6 = 64 experiments, or a 2^(6-1) resolution VI design. But the experiments were time consuming, since each experiment required a prototype valve positioner to be assembled and tested. There was not enough time or resources for that many experiments.

The number of experiments could be reduced to a feasible number by either performing a full 2^k experiment with a subset of three of the factors thought to be important, or by performing a screening experiment such as an 8-run resolution III 2^(6-3) fractional factorial, a resolution IV 16-run 2^(6-2), or a 12-run Plackett-Burman design. Running a full factorial in a subset of the factors was inadvisable, since an important factor might be left out. With the limited budget and priority for this project, there would be no time for follow-up experiments, so the results could be confusing if a regular fractional factorial were used and important interactions existed. A Plackett-Burman screening design was used, which could possibly allow for screening and constrained optimization with one design. Simulated data patterned after the actual experimental results are shown in Table 2.

Table 2. Plackett-Burman Design

Run   A  B  C  D  E  F  7  8  9 10 11      Y
  1   +  -  +  -  -  -  +  +  +  -  +  0.702
  2   +  +  -  +  -  -  -  +  +  +  -  0.667
  3   -  +  +  -  +  -  -  -  +  +  +  0.188
  4   +  -  +  +  -  +  -  -  -  +  +  0.786
  5   +  +  -  +  +  -  +  -  -  -  +  0.760
  6   +  +  +  -  +  +  -  +  -  -  -  0.605
  7   -  +  +  +  -  +  +  -  +  -  -  0.390
  8   -  -  +  +  +  -  +  +  -  +  -  0.367
  9   -  -  -  +  +  +  -  +  +  -  +  0.506
 10   +  -  -  -  +  +  +  -  +  +  -  0.471
 11   -  +  -  -  -  +  +  +  -  +  +  0.300
 12   -  -  -  -  -  -  -  -  -  -  -  0.320

A simple analysis of this data, including effect calculation and a half-normal plot of the effects, revealed the following. The largest effects in absolute value were: factor A (hole geometry), factor D (diaphragm material), unassigned columns 10 and 11, and factor E (O-ring material). Of these, only factor A had a clearly significant effect. R^2 for the model including only factor A was 0.72. Had time and resources been available, a follow-up 2^3 full factorial using factors A, D, and E might have been planned.

Without the luxury of follow-up experiments, more information was sought from the available data. The all-subsets regression procedure recommended by Lawson (2002) was used. Candidate independent variables included the main effects and the interactions AB, AC, AD, AE, BC, BD, BE, CD, CE, and DE. These interactions were identified by properties of the alias matrix as described by Lawson (2002). The model selected after the all-subsets regression included main effects A and D, and revealed the interaction AC to be significant. This three-term model had an R^2 of 0.97. Factor D (diaphragm material) had a positive effect, and the low level (thin) resulted in lower hysteresis. The interaction plot, shown in Figure 1, shows that when choosing the low level (square) for factor A (hole geometry), an additional reduction to Y (hysteresis) could be made by choosing the high level (U-shaped) of factor C (feedback spring).

The optimum combination of (- = square) hole geometry, (- = thin) diaphragm material, and (+ = U-shaped) feedback spring was predicted to result in greater than a 50% reduction in hysteresis. This combination of factor levels was found after one design with no follow-up experiments.
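The three-term fit can be reproduced with ordinary least squares. The sketch below (Python with NumPy; not part of the original article) regenerates the design columns under the cyclic Plackett-Burman construction assumed for Table 2 and refits the model with main effects A and D plus the AC interaction; it recovers the R^2 of 0.97 reported above and the predicted hysteresis at the recommended settings.

```python
import numpy as np

# Design columns: cyclic Plackett-Burman construction assumed for Table 2
gen = np.array([+1., -1, +1, -1, -1, -1, +1, +1, +1, -1, +1])
D12 = np.vstack([np.roll(gen, r) for r in range(11)] + [-np.ones(11)])
A, C, D = D12[:, 0], D12[:, 2], D12[:, 3]
y = np.array([0.702, 0.667, 0.188, 0.786, 0.760, 0.605,
              0.390, 0.367, 0.506, 0.471, 0.300, 0.320])

# Fit y ~ A + D + A*C by least squares
X = np.column_stack([np.ones(12), A, D, A * C])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 2))                      # -> 0.97, matching the article

# Predicted hysteresis at the recommended settings
# (A = -1 square, D = -1 thin, C = +1 U-shaped, so A*C = -1):
print(round(beta @ [1, -1, -1, -1], 3))  # -> 0.187, vs. a 0.505 mean
```

The predicted value of about 0.19 against an overall mean near 0.51 is consistent with the greater-than-50% reduction claimed above.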

5.2 Example 2: Unconstrained Optimization

Maruthi and Joseph (1999-2000) described an experiment wherein eight factors were studied to reduce inner layer defects of printed circuit boards (PCB). Although this experiment was not initially analyzed in the way described in this article, we will reanalyze the data as if it were from a one-step screening optimization experiment. Seven of the eight factors could be varied over a continuous range, and the levels are shown in Table 3. The purpose of the study was to find the optimal combination to reduce defects due to shorts.

Table 3. Factors and Levels

Factor                      1          2          3
A. Preheat                  No         Yes
B. Surface preparation      Scrub      Pumice     Chemical
C. Lamination speed         1.2 m/min  1.5 m/min  1.8 m/min
D. Lamination pressure      20 psi     40 psi     60 psi
E. Lamination temperature   95°        105°       115°
F. Exposure step (RST)      14         17         20
G. Developer speed          3          4          5
H. ORP                      500        530        560

Eight factors are generally too many for a traditional response surface design. The OA(18, 3^7) design can be augmented by one two-level factor that is orthogonal to the three-level factors, and that was the design used for this study.

The results were shown by Maruthi and Joseph (1999-2000). The response was the signal-to-noise ratio, η = 10 log((1 - p)/p), where p is the fraction defective for shorts.

A simple ANOVA of the data revealed large effects for factor A (preheat) and factor F (exposure step (RST)). Five reasonable models with R^2 > 0.95 and five or six terms were found after a further all-subsets regression. The candidate terms for the all-subsets regression were the main effects (linear and quadratic main effects for three-level factors) and a subset of the possible linear-by-linear interactions. The subset of interactions included was identified by properties of the alias matrix as described by Lawson (2002). The terms in all of these reasonable models were a subset of the full quadratic model involving factors C, D, F, and G with a two-level covariate, factor A.
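A quick numerical check of the response transformation (not from the article): because η = 10 log10((1 - p)/p) decreases in p, fewer shorts means a larger signal-to-noise ratio, so the analysis maximizes η.

```python
import math

# Signal-to-noise ratio for fraction defective: eta = 10*log10((1 - p)/p).
# Smaller p (fewer shorts) gives larger eta, so eta is to be maximized.
def sn_ratio(p):
    return 10 * math.log10((1 - p) / p)

print(round(sn_ratio(0.01), 2))   # -> 19.96  (1% defective)
print(round(sn_ratio(0.004), 2))  # -> 23.96  (0.4% defective, eta near 24)
```

The second value shows the scale of the predicted optimum discussed below: an η slightly over 24 corresponds to a fraction defective of roughly 0.4% or less.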

A full quadratic model was fit in these variables with the covariate. Numerical analysis of the fitted surface revealed that the maximum signal-to-noise ratio, or minimum defects due to shorts, was predicted to occur at the low levels of these five factors. Figure 2 is a contour plot that shows the maximum η is slightly over 24 at these conditions. This is very similar to the

[Figure 1. Interaction plot of the AC interaction: hysteresis Y (roughly 0.4 to 0.7) plotted against C = feedback spring (V-shaped, U-shaped), with separate lines for the square and chamfered levels of A = hole geometry.]

[Figure 2. Contour plot of signal-to-noise ratio for shorts: exposure step (RST) versus lamination pressure (20 to 60 psi). Constant values: Preheat: No, Lamination Speed: 1.2, Developer Speed: 3.0.]


optimum conditions and predicted value found by Maruthi and Joseph (1999-2000) and confirmed with a confirmation experiment.

6. CONCLUSIONS AND GUIDELINES FOR PLANNING ONE-STEP EXPERIMENTS

When time or budget constraints prevent sequential experimentation, one-step screening and optimization experiments are a viable alternative. Some designs normally thought to be useful only for screening experiments, when combined with appropriate methods of data analysis, are useful for (1) detecting interactions, (2) fitting quadratic models to a subset of the factors, and (3) identifying optimal operating conditions within the ranges studied.

In example 1, six factors were studied with the ultimate goal of finding the optimum combinations of discrete factor levels. A traditional design for constrained optimization would have required 32 or 64 experiments. Since it was not known whether all factors were important, the number of experiments could have been reduced with a sequential approach. This approach would start with an 8-run 2^(6-3) resolution III design and later follow-up experiments. Instead, a 12-run Plackett-Burman design was employed because of budget constraints. Due to the hidden projection properties of that design, and recent methods of data analysis, an interaction was discovered and the optimum combination of factor A (hole geometry), factor C (feedback spring), and factor D (diaphragm material) was identified after only 12 experiments.

In example 2, seven quantitative three-level factors and one discrete two-level factor were studied in order to find the optimal combination of factor levels. A traditional response surface design would have required in excess of 60 experiments. The sequential approach would have required at least two follow-up designs. Instead an 18-run OA(18, 3^7) design was used. An all-subsets regression of the data showed only four quantitative factors to have large effects and/or interactions. The projection properties of the design allowed a full quadratic model to be fit in these factors and the optimum experimental conditions to be determined after only 18 experiments.

Using the sequential experimental approach normally requires more experiments than a one-step screening and optimization experiment. More data will provide more complete information. However, when the number of factors under study is large and time or budget constraints prevent the use of sequential follow-up experiments, it is recommended that one-step screening optimization experiments be used, rather than choosing between a 2^(k-p) screening design and a response surface design in a subset of the factors. When constrained optimization is the ultimate objective, Plackett-Burman designs should be used. Data analysis by the Bayesian methods described by Chipman, Hamada, and Wu (1997), or the all-subsets regression described by Lawson (2002), is recommended and can detect important interactions so that an optimum combination of factor levels can be identified. When unconstrained optimization is the ultimate objective, the OA(18, 3^7), OA(27, 3^8), or OA(36, 3^12) should be used, depending on the number of factors under study. After identifying the important factors and interactions in the data from these designs, using the same methods cited above, a full quadratic model can be fit to the important factors, and response surface exploration methods can be used to identify the optimum operating conditions.

APPENDIX

A.1 TABLES OF USEFUL DESIGNS FOR ONE-STEP SCREENING OPTIMIZATION EXPERIMENTS

Table A1. The OA(18, 3^7)

Run  1 2 3 4 5 6 7
  1  0 0 0 0 0 0 0
  2  0 1 1 1 1 1 1
  3  0 2 2 2 2 2 2
  4  1 0 0 1 1 2 2
  5  1 1 1 2 2 0 0
  6  1 2 2 0 0 1 1
  7  2 0 1 0 2 1 2
  8  2 1 2 1 0 2 0
  9  2 2 0 2 1 0 1
 10  0 0 2 2 1 1 0
 11  0 1 0 0 2 2 1
 12  0 2 1 1 0 0 2
 13  1 0 1 2 0 2 1
 14  1 1 2 0 1 0 2
 15  1 2 0 1 2 1 0
 16  2 0 2 1 2 0 1
 17  2 1 0 2 0 1 2
 18  2 2 1 0 1 2 0

Table A2. The OA(27, 3^8)

Run  1 2 3 4 5 6 7 8
  1  0 0 0 0 0 0 0 0
  2  1 0 1 0 1 2 1 2
  3  2 0 2 0 2 1 2 1
  4  0 1 0 1 1 2 2 1
  5  1 1 1 1 2 1 0 0
  6  2 1 2 1 0 0 1 2
  7  1 2 1 2 2 1 1 2
  9  2 2 2 2 1 2 0 0
 10  0 0 1 2 2 0 1 0
 11  1 0 2 2 1 1 2 2
 12  2 0 0 2 0 2 0 1
 13  0 1 1 0 1 1 0 1
 14  1 1 2 0 0 2 1 0
 15  2 1 0 0 2 0 2 2
 16  0 2 1 1 0 2 2 2
 17  1 2 2 1 2 0 0 1
 18  2 2 0 1 1 1 1 0
 19  0 0 2 1 0 1 0 2
 20  1 0 0 1 1 0 1 1
 21  2 0 1 1 2 2 2 0
 22  0 1 2 2 1 2 2 0
 23  1 1 0 2 2 2 0 2
 24  2 1 1 2 0 1 1 1
 25  0 2 2 0 2 2 1 1
 26  1 2 0 0 0 1 2 0
 27  2 2 1 0 1 0 0 2


Table A3. The OA(36, 3^12)

Run  1 2 3 4 5 6 7 8 9 10 11 12
  1  0 0 0 0 0 0 0 0 0 0 0 0
  2  1 1 1 1 1 1 1 1 1 1 1 1
  3  2 2 2 2 2 2 2 2 2 2 2 2
  4  0 0 0 0 1 1 1 1 2 2 2 2
  5  1 1 1 1 2 2 2 2 0 0 0 0
  6  2 2 2 2 0 0 0 0 1 1 1 1
  7  0 0 1 2 0 1 2 2 0 1 1 2
  8  1 1 2 0 1 2 0 0 1 2 2 0
  9  2 2 0 1 2 0 1 1 2 0 0 1
 10  0 0 2 1 0 2 1 2 1 0 2 1
 11  1 1 0 2 1 0 2 0 2 1 0 2
 12  2 2 1 0 2 1 0 1 0 2 1 0
 13  0 1 2 0 2 1 0 2 2 1 0 1
 14  1 2 0 1 0 2 1 0 0 2 1 2
 15  2 0 1 2 1 0 2 1 1 0 2 0
 16  0 1 2 1 0 0 2 1 2 2 1 0
 17  1 2 0 2 1 1 0 2 0 0 2 1
 18  2 0 1 0 2 2 1 0 1 1 0 2
 19  0 1 0 2 2 2 0 1 1 0 1 2
 20  1 2 1 0 0 0 1 2 2 1 2 0
 21  2 0 2 1 1 1 2 0 0 2 0 1
 22  0 1 1 2 2 0 1 0 0 2 2 1
 23  1 2 2 0 0 1 2 1 1 0 0 2
 24  2 0 0 1 1 2 0 2 2 1 1 0
 25  0 2 1 0 1 2 2 0 2 0 1 1
 26  1 0 2 1 2 0 0 1 0 1 2 2
 27  2 1 0 2 0 1 1 2 1 2 0 0
 28  0 2 1 1 1 0 0 2 1 2 0 2
 29  1 0 2 2 2 1 1 0 2 0 1 0
 30  2 1 0 0 0 2 2 1 0 1 2 1
 31  0 2 2 2 1 2 1 1 0 1 0 0
 32  1 0 0 0 2 0 2 2 1 2 1 1
 33  2 1 1 1 0 1 0 0 2 0 2 2
 34  0 2 0 1 2 1 2 0 1 1 2 0
 35  1 0 1 2 0 2 0 1 2 2 0 1
 36  2 1 2 0 1 0 1 2 0 0 1 2

A.2 GLOSSARY OF TERMS

Alias: Two terms in a factorial are said to be aliased if their effects cannot be estimated independently.

Defining relationship: The key to the confounding pattern in a fractional factorial. It shows specifically the aliases of the mean term in the model.

Resolution: A number that describes the degree of confounding in a design. In a design of resolution R, main effects are confounded with interactions of order R - 1.

2^(k-p): A 1/2^p fraction of a two-level experiment with k factors. This design has 2^(k-p) runs.

2^(k-p) of resolution III: A 2^(k-p) design of resolution III, meaning main effects are confounded with two-factor interactions.

Rotatability: A property of second-order designs wherein the variance of a predicted value y at a design point described by the coded factor levels (x_1, x_2, ..., x_k) is a function of only the distance of (x_1, x_2, ..., x_k) from the center of the design.

D-efficiency: The ratio of the determinant of the moment matrix of a design to the determinant of the moment matrix of the design that maximizes that determinant. Since the determinant of the moment matrix is inversely proportional to the volume of the confidence region on the regression coefficients, the D-efficiency of a design shows how well the regression coefficients will be estimated using that design.

[Received May 2002. Revised September 2002.]

REFERENCES

Box, G., and Bisgaard, S. (1993), "George's Column: What Can You Find Out from Twelve Experimental Runs?" Quality Engineering, 5, 663-668.

Box, G. E. P., and Meyer, R. D. (1993), "Finding the Active Factors in Fractionated Screening Experiments," Journal of Quality Technology, 25, 94-105.

Cheng, S., and Wu, C. F. J. (2001), "Factor Screening and Response Surface Exploration," Statistica Sinica, 11, 553-604.

Chipman, H., Hamada, M., and Wu, C. F. J. (1997), "A Bayesian Variable-Selection Approach for Analyzing Designed Experiments with Complex Aliasing," Technometrics, 39, 372-381.

George, E. I., and McCulloch, R. E. (1993), "Variable Selection via Gibbs Sampling," Journal of the American Statistical Association, 88, 881-889.

Guderian, R. (1996), "Response Surface Properties of the L18 Design," unpublished Master's thesis, Brigham Young University.

Hamada, M., and Wu, C. F. J. (1992), "Analysis of Designed Experiments with Complex Aliasing," Journal of Quality Technology, 24, 130-137.

Lawson, J. (2002), "Regression Analysis of Experiments with Complex Confounding Patterns Guided by the Alias Matrix," Computational Statistics and Data Analysis, 39, 227-241.

Lawson, J., and Erjavec, J. (2001), Modern Statistics for Engineering and Quality Improvement, Pacific Grove, CA: Duxbury Press.

Lin, D. K. J. (1998-1999), "Spotlight Interaction Effects in Main-Effect Plans: A Supersaturated Design Approach," Quality Engineering, 11, 133-139.

Lin, D. K. J., and Draper, N. R. (1992), "Projection Properties of Plackett and Burman Designs," Technometrics, 34, 423-428.

Lucas, J. M. (1985), Discussion of "Off-Line Quality Control" by Kackar, Journal of Quality Technology, 17, 195-197.

Maruthi, B. N., and Joseph, V. R. (1999-2000), "Improving the Yield of Printed Circuit Boards Using Design of Experiments," Quality Engineering, 12, 259-265.

Wang, J. C., and Wu, C. F. J. (1995), "A Hidden Projection Property of Plackett-Burman and Related Designs," Statistica Sinica, 5, 235-250.