
A new constraint handling method based on the modified Alopex-based evolutionary algorithm




Computers & Industrial Engineering 73 (2014) 41–50



http://dx.doi.org/10.1016/j.cie.2014.04.011
0360-8352/© 2014 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +86 21 64253820.
E-mail addresses: [email protected] (Z. Wang), [email protected] (S. Li), [email protected] (Z. Sang).

Zhen Wang, Shaojun Li *, Zhixiang Sang
Key Laboratory of Advanced Control and Optimization for Chemical Processes, East China University of Science and Technology, Ministry of Education, Shanghai 200237, China

Article info

Article history:
Received 8 May 2013
Received in revised form 15 April 2014
Accepted 18 April 2014
Available online 30 April 2014

Keywords:
AEA
EDA
Constrained optimization problems
Adaptive penalty function

Abstract

In this paper, a new constraint handling method based on a modified AEA (Alopex-based evolutionary algorithm) is proposed. Combined with a newly proposed ranking and selection strategy, the algorithm gradually converges to the feasible region from a relatively feasible region. By introducing an adaptive relaxation parameter μ, the algorithm fully takes into account that different functions have feasible regions of different sizes. In addition, an adaptive penalty function method is employed, which adaptively adjusts the penalty coefficient so as to guarantee a moderate penalty. Experimental results on 11 benchmark test functions and two engineering problems indicate that the proposed method is reliable and efficient for solving constrained optimization problems. It also has great potential for handling engineering problems with constraints, even equality constraints.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Nowadays, constrained optimization problems are very important, as they frequently appear in many engineering applications (Zhang & Rangaiah, 2012). It is a great challenge to obtain the global optimal solution of constrained optimization problems, especially nonlinear ones. These problems involve restrictions imposed on the decision variables, tight interactions among the constraints, and complex interrelationships between the constraints and the objective function. Thus, the study of constrained optimization problems is an active research area.

Generally, the mathematical model of constrained optimization problems can be defined as follows:

\[
\begin{aligned}
\min\quad & f(X) \\
\text{s.t.}\quad & g_j(X) \le 0, \quad j = 1, 2, \ldots, p \\
& h_j(X) = 0, \quad j = p+1, p+2, \ldots, m \\
& l_i \le x_i \le u_i, \quad i = 1, 2, \ldots, N
\end{aligned}
\qquad (1)
\]

where X = (x_1, x_2, ..., x_N) is the N-dimensional vector of decision variables, f(X) is the objective function, and g_j(X) ≤ 0 and h_j(X) = 0 are the p inequality constraints and the (m − p) equality constraints, respectively. The functions f, g and h may be linear or nonlinear. l_i and u_i are the lower and upper bounds of the variable x_i, respectively (Tetsuyuki & Setsuko, 2010).
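As a minimal illustration of the form of Eq. (1), a problem instance can be held in a small container of callables together with its bounds; the class and toy problem below are illustrative assumptions and not part of the original paper.

```python
# Hypothetical container for a problem in the form of Eq. (1):
# minimize f(X) subject to g_j(X) <= 0, h_j(X) = 0 and l_i <= x_i <= u_i.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ConstrainedProblem:
    objective: Callable[[list], float]            # f(X)
    inequalities: List[Callable[[list], float]]   # g_j(X) <= 0
    equalities: List[Callable[[list], float]]     # h_j(X) = 0
    bounds: List[Tuple[float, float]]             # (l_i, u_i) for each variable

# Toy example: minimize x1 + x2 s.t. x1^2 + x2^2 - 1 <= 0 and x1 - x2 = 0.
toy = ConstrainedProblem(
    objective=lambda x: x[0] + x[1],
    inequalities=[lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],
    equalities=[lambda x: x[0] - x[1]],
    bounds=[(-2.0, 2.0), (-2.0, 2.0)],
)
```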

Global optimization methods for the above problem can be classified into two categories: deterministic methods and stochastic methods. Deterministic methods, also called traditional methods, include the feasible direction method, the gradient method, the outer approximation method, etc. (Floudas, Aggarwal, & Ciric, 1989; Kocis & Grossmann, 1998; Ryoo & Sahinidis, 1995). These methods require the objective function or the constraint conditions to be continuous and differentiable, and the establishment and computation of the objective function are often complex and difficult. In recent years, stochastic algorithms, including the genetic algorithm (GA) (Holland, 1975), differential evolution (DE) (Storn & Price, 1997), particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), simulated annealing and so on, have developed rapidly because of their high probability of finding the global optimum. Moreover, stochastic algorithms do not require explicit knowledge of the structure of the problem. Combined with constraint handling methods, stochastic algorithms have been widely used to solve constrained optimization problems.

There are three main categories of constraint handling in stochastic global optimization methods: (1) penalty function methods; (2) methods that separate the objective function from the constraints; (3) hybrid methods. The penalty function is the simplest and most commonly used method for handling constraints. Coello (2002) gives a detailed description of penalty function methods and their modes of operation. The main disadvantage of the penalty function method is that it is problem-dependent and it is difficult to choose a suitable penalty parameter. As a result, static penalty parameters, dynamic penalty parameters, and adaptive penalty parameters are widely used in the recent literature (Coello, 2000; Snyman, Nielen,


& Roux, 1994; Özgür & Beytepe, 2005). Useful information obtained during the search process is fed back in order to set the penalty parameter values more appropriately. Srinivas and Rangaiah (2007) employed the penalty function method to handle inequality constraints; they also introduced a tabu list into DE to prevent revisiting points throughout the whole search process, a technique that can greatly improve computational efficiency and reliability. The second kind of method explicitly uses the information of the constrained structure and a search operator to maintain the feasibility of the solutions. The advantage of this kind of method is that it does not need any parameters to handle the constraints, and it gives rather good results for inequality constraints in particular. This kind of method is also referred to as the feasibility approach. Its disadvantage is that it is difficult to maintain population diversity, which may lead to premature convergence. He and Wang (2007) employed the feasibility method for handling the constraints; their results show that the algorithm has rather good search quality and robustness for constrained engineering problems. The third kind of method comprises hybrid methods, which can take advantage of the superior characteristics of different algorithms. Zahara and Kao (2009) proposed a hybrid PSO algorithm by embedding Nelder-Mead simplex search; the algorithm combines the gradient repair method with a ranking strategy based on constraint fitness. Luo, Yuan, and Liu (2007) proposed an improved PSO algorithm, which eliminates equality constraints by partitioning variables and identifying reduced variables, for solving non-convex NLP problems with equality constraints. Yuan and Qian (2010) proposed an improved GA with a local solver; the results indicate that their algorithm is effective for solving twice-differentiable NLP problems with inequality constraints.

In recent years, the feasibility approach has been used by many researchers to solve constrained problems. Takahama and Sakai (2006) proposed the εDE algorithm, which combines a constraint handling method called the ε-constraint method with the DE algorithm. The ε-constraint method is similar to the feasibility approach: the relaxation of the constraints, controlled by the parameter ε, is gradually reduced to zero as the generations increase. The results show that the ε-constraint handling method is effective for problems with small feasible regions, but it employs more parameters than other approaches.

Inspired by the ε-constraint handling technique of Takahama and Sakai (2006), this paper modifies the iterative update of the relaxation parameter according to the fraction of individuals that are relatively feasible with respect to the relaxed constraints. By embedding EDAs into AEA (Li & Li, 2011) and introducing the μ-constraint handling method, a new constraint handling method, μ-AEA, is proposed. In the meantime, an adaptive penalty function method is used to enhance the stability of the algorithm. The rest of the paper is organized as follows: the modified AEA employing EDAs is introduced in Section 2. In Section 3, the new method to handle the constraints and its combination with the modified AEA are described in detail. In Section 4, comparison results for 11 benchmark test functions and different methods are shown. Two engineering problems optimized by the method are given in Section 5. Finally, in Section 6, the concluding remarks and relevant observations are presented.

2. Description of the modified AEA by EDAs

2.1. Introduction of AEA

Alopex (Algorithm of pattern extraction) was originally used to determine the shapes of visual receptive fields, and was first proposed by Harth and Tzanakou (1974). Later, it proved to be a very useful method for solving combinatorial optimization and pattern matching problems. Driven by the information of the previous change of the independent variables with respect to the objective function, the algorithm uses a parameter, the 'temperature', to control the probability of the walk direction in each variable. It also employs a 'noise' strategy to enhance the ability of the search to escape from local optima. In the Alopex algorithm, these parameters are introduced to compute a probability that enables the algorithm to accept a worse solution with some probability in order to improve the climbing capacity.

Studies on the Alopex algorithm have mainly concentrated on improving the annealing strategy and the correlation expression. Unnikrishnan and Venugopal (1994) suggested setting a large value of about 1000 for the first annealing cycle of Alopex, but the literature also showed that the appropriate temperature depends on the problem. Osman and Christofides (1994) proposed an annealing strategy which depends on the maximum change of the objective values in the first annealing cycle. Panagiotopoulos, Orovas, and Syndoukas (2010) employed the difference between the variables and the corresponding objective function values to compute the temperature, so that the temperature can be updated every few iterations. Haykin, Chen, and Becker (2004) added the correlation expression to the update strategy of the variables and proposed an Alopex-B algorithm, which proved very efficient in the experimental simulations reported in that work. After comparing the improved Alopex with the simulated annealing algorithm, Unnikrishnan and Venugopal (1994) reached the following conclusions: (1) both algorithms are similar in their temperature control strategy and probabilistic parameter update strategy; (2) Alopex differs from simulated annealing in the correlation expression and the update strategy of the variables.

Inspired by the information of the previous change of the independent variables with respect to the objective function in Alopex, and by the primary characteristics of evolutionary algorithms, Li and Li (2011) proposed the Alopex-based evolutionary algorithm (AEA). During the AEA iterative process, two populations are used to compute the correlation.

The main procedure of AEA can be described by the following equations:

\[ C_{ij}^{t} = \left(x_{ij}^{t} - y_{ij}^{t}\right)\left(f(X_i^{t}) - f(Y_i^{t})\right) \qquad (2) \]

\[ p_{ij}^{t} = \frac{1}{1 + e^{\left(\alpha C_{ij}^{t}/T^{t}\right)}}, \qquad \alpha = \pm 1 \qquad (3) \]

\[ D_{ij}^{t} = \begin{cases} 1, & \text{when } p_{ij}^{t} \ge \mathrm{rand}(0,1) \\ -1, & \text{otherwise} \end{cases} \qquad (4) \]

\[ (x_{ij}^{t})' = x_{ij}^{t} + \left|x_{ij}^{t} - y_{ij}^{t}\right| \cdot \mathrm{rand}(0,1) \cdot D_{ij}^{t} \qquad (5) \]

\[ x_{ij}^{t+1} = \begin{cases} (x_{ij}^{t})', & \text{when } f\big((X_i^{t})'\big) < f(X_i^{t}) \\ x_{ij}^{t}, & \text{otherwise} \end{cases} \qquad (6) \]

\[ T^{t} = \frac{1}{KN} \sum_{i=1}^{K} \sum_{j=1}^{N} \left|C_{ij}^{t}\right| \qquad (7) \]

where i = 1, 2, ..., K, K is the size of the population, j = 1, 2, ..., N, N is the number of variable dimensions, and t is the iteration index. x_ij^t denotes the j-th variable of the i-th individual in population P_1^t at the t-th iteration, and y_ij^t denotes the j-th variable of the i-th individual in population P_2^t at the t-th iteration. X_i^t denotes the i-th individual in population P_1^t at the t-th iteration, and Y_i^t denotes the i-th individual in population P_2^t at the t-th iteration. C_ij^t is the correlation value between the variable and its corresponding function value, and p_ij^t is the probability which determines the walking direction. The value of α in Eq. (3) depends on the specific problem: α = +1 is used to minimize a function and α = −1 to maximize it. D_ij^t in Eq. (4) denotes the changing direction of the j-th variable of the i-th individual at the t-th iteration, and Eq. (5) gives the strategy to generate a trial variable. Lastly, Eq. (6) gives the updating strategy.


The temperature T is an important parameter during the optimization process, because it has the strongest influence on the performance of the algorithm. Eq. (7) gives a self-adjusting annealing strategy for the parameter T.

The main procedure of the AEA algorithm is expressed as follows. Without loss of generality, minimizing the function F(x_1, x_2, ..., x_N) is used as an example. Two populations, P_1^t and P_2^t, are generated at generation t. Here, P_2^t is generated from P_1^t by disordering the arrangement of individuals in P_1^t; the detailed process can be found in Li and Li (2011). Two individuals are randomly selected from P_1^t and P_2^t, respectively, and the value of C_ij^t is calculated. Then the probability p_ij^t, which determines the move directions of the variables, is computed. These movement directions reflect only tendencies, so each variable of the population may move toward a better solution or a worse one. Then each individual in P_1^t is renewed according to Eq. (5) and a new population is formed. Finally, the function values of the individuals in P_1^t are compared with those in the new population, and the individuals with better results are retained for the next generation.
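A compact sketch of the per-iteration update of Eqs. (2)–(6) together with the annealing rule of Eq. (7), written for a minimization problem (α = +1); this is an illustrative NumPy reading of the procedure, not the authors' reference implementation.

```python
import numpy as np

def aea_step(P1, P2, f, alpha=1.0, rng=None):
    """One Alopex-style update of population P1 (K x N) against P2, following Eqs. (2)-(7)."""
    rng = np.random.default_rng() if rng is None else rng
    K, N = P1.shape
    f1 = np.apply_along_axis(f, 1, P1)                        # f(X_i^t)
    f2 = np.apply_along_axis(f, 1, P2)                        # f(Y_i^t)
    C = (P1 - P2) * (f1 - f2)[:, None]                        # Eq. (2): correlations C_ij
    T = np.abs(C).sum() / (K * N) + 1e-30                     # Eq. (7): temperature (guarded against 0)
    p = 1.0 / (1.0 + np.exp(alpha * C / T))                   # Eq. (3): walk-direction probability
    D = np.where(p >= rng.random((K, N)), 1.0, -1.0)          # Eq. (4): move direction
    trial = P1 + np.abs(P1 - P2) * rng.random((K, N)) * D     # Eq. (5): trial variables
    better = np.apply_along_axis(f, 1, trial) < f1            # Eq. (6): greedy acceptance
    return np.where(better[:, None], trial, P1), T
```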

In the literature of Li and Li (2011), comprehensive comparisons between AEA and GA (genetic algorithm), PSO (particle swarm optimization), as well as DE (differential evolution) showed that the performance of the AEA algorithm is better. AEA not only holds the primary characteristics of evolutionary algorithms, but also possesses the merits of the gradient descent method and the simulated annealing algorithm. In AEA, the two populations P_1^t and P_2^t are produced to compute the correlation C^t. The more information contained in P_1^t and P_2^t, the greater the probability of locating the optimum. However, the two populations in AEA contain the same information, which is not good for the evolution of the population. Therefore, EDA is embedded into AEA to generate the two populations, and this modified AEA is employed in this paper to handle constrained problems.

2.2. Introduction of the estimation of distribution algorithm and its combination with AEA

Generally, an efficient evolutionary algorithm should make use of both the local information of the best solutions found so far and the global information. The local information is helpful for the exploitation of the search space, while the global information can guide the search toward promising areas (González, Lozano, & Larrañaga, 2002). Based on this idea, this paper embeds the EDA algorithm, whose search is mainly based on global information, into the AEA algorithm, whose search can be seen as relying on local information.

The estimation of distribution algorithm (EDA), introduced by Mühlenbein, Bendisch, and Voigt (1996), is a very promising class of EAs (Sun, Zhang, & Edward, 2005). EDA generates the new population by estimating and sampling the joint probability distribution of the selected individuals instead of using the crossover and mutation operators employed in EAs. According to the dependence between variables, several kinds of EDAs have been proposed. In this paper, UMDAc (Univariate Marginal Distribution Algorithm for Continuous Domains) (González et al., 2002) is employed, in which the interdependence between the variables is not taken into consideration.

The joint density function of the N-dimensional variable is calculated as Eq. (8):

\[ f(X) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left(-\frac{1}{2}\left(\frac{x_i - \mu_i}{\sigma_i}\right)^{2}\right) \qquad (8) \]

where the mean parameter μ_i and the standard deviation σ_i are estimated by using their corresponding maximum likelihood estimates (Li, Li, & Mei, 2010).

Generally, the procedure of EDA can be summarized in four steps: (1) Selection: promising individuals, which contain the global statistical information, are extracted from the current population. (2) Modeling: a probability model of the promising solutions is built by using the mean μ and the standard deviation σ. (3) Sampling: the descendants are obtained by sampling from the probability model. (4) Replacement: the new population replaces the old population according to the replacement strategy.
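Under the UMDAc model of Eq. (8), the modeling and sampling steps reduce to fitting an independent Gaussian per variable and drawing new individuals from it; the following sketch assumes the promising set is a NumPy array whose rows are individuals, and is only illustrative.

```python
import numpy as np

def umdac_sample(promising, n_samples, rng=None):
    """Fit the factorized Gaussian of Eq. (8) to the promising set and sample descendants."""
    rng = np.random.default_rng() if rng is None else rng
    mu = promising.mean(axis=0)               # maximum-likelihood mean of each variable
    sigma = promising.std(axis=0) + 1e-12     # maximum-likelihood std (guarded against zero)
    return rng.normal(mu, sigma, size=(n_samples, promising.shape[1]))
```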

In order to increase the global information of P_1^t in the AEA algorithm, some promising solutions are selected from P_1^t in each generation to implement the EDA algorithm. The number of selected promising solutions ω and the replacement ratio ϑ are the two main parameters of the modified AEA algorithm. The parameter ω gives the number of promising solutions selected; the new population generated by EDA then replaces the old one with the replacement ratio ϑ, which ranges from 0 to 1. The values of ω and ϑ greatly influence the performance of the algorithm. If ω is too small, the probability model cannot express the global evolution information of the population. On the contrary, if ω is too large, the probability model cannot reflect the specific evolution information of the promising solution set, which reduces the convergence performance of the modified algorithm. Similarly, if the replacement ratio ϑ is too small, the descendants sampled from the probability model account for only a small proportion of the former population, and the convergence rate of the population is very low. If ϑ is too large, the algorithm converges quickly and has a higher probability of being trapped in a local optimum.

In order to choose proper values of ω and ϑ, a large number of experimental studies have been carried out. Statistical analysis of both parameter values on different functions shows that the best intervals for ϑ and ω are [0.5, 0.6] and [50, 60], respectively. In this manuscript, ϑ and ω are adopted as 0.5 and 50.

3. Method to handle the constraints and its combination with the modified AEA

To optimize constrained optimization problems by an evolutionary algorithm, the violation is usually defined as either the maximum or the sum of all the constraint violations of one individual. In this manuscript, the sum of all the constraint violations is adopted, and the total constraint violation (TCV) of individual k is defined as follows:

\[ \mathrm{TCV}_k = \sum_{j=1}^{p} \max\left(0,\, g_j(X_k)\right) + \sum_{j=p+1}^{m} \left|h_j(X_k)\right|, \qquad k = 1, 2, \ldots, K \qquad (9) \]
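In code, Eq. (9) is a direct sum of clipped inequality violations and absolute equality residuals; the function below is a minimal sketch with illustrative names.

```python
def total_constraint_violation(x, inequalities, equalities):
    """Eq. (9): TCV = sum of max(0, g_j(x)) plus sum of |h_j(x)|."""
    tcv = sum(max(0.0, g(x)) for g in inequalities)
    tcv += sum(abs(h(x)) for h in equalities)
    return tcv
```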

3.1. Adaptive relaxation method

Referring to the ε-constraint method in εDE (Takahama & Sakai, 2006), a parameter μ is introduced to control the relaxation of the constraints in every generation. For convenience, in this manuscript the newly proposed constraint handling method combined with AEA is called μ-AEA. In the initialization step, the objective function values and the TCV of all individuals in the initial population are calculated, and the median TCV of all individuals in the population is assigned as the initial value of μ. In every generation, the TCV of individual k is compared with μ to judge whether the individual is a relatively feasible solution or not: if the TCV of an individual is less than μ, it is called a relatively feasible solution; otherwise, it is treated as a relatively infeasible solution. The relaxation value μ is adjusted adaptively according to the number of relatively feasible solutions in every generation. After tests on different problems, the update of μ is finally defined in the form of Eq. (10).


\[ \mu(t+1) = \mu(t)\sqrt{1 - 0.34\,\frac{g(t)}{K}} \qquad (10) \]

where t = 1, 2, ..., t_m − 1, t_m is the maximum number of iterations, and g(t) is the number of relatively feasible solutions at the t-th generation.

In the early stage of the iteration, the relaxation of the constraints allows more infeasible individuals, which may contain useful information for finding the global optimum, into the next generation. This helps the algorithm to carry out a greater exploration of the search space. As the number of iterations increases, the relaxation value μ is adaptively reduced according to the fraction of relatively feasible solutions, so the relatively feasible region shrinks gradually until it converges to the feasible region.
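A direct transcription of Eq. (10), where the second argument counts the individuals with TCV ≤ μ(t) in a population of size K (function and argument names are illustrative):

```python
import math

def update_mu(mu, n_rel_feasible, K):
    """Eq. (10): shrink the relaxation level faster when more individuals already satisfy TCV <= mu."""
    return mu * math.sqrt(1.0 - 0.34 * n_rel_feasible / K)
```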

3.2. Adaptive penalty function method

In order to improve the stability and accuracy of μ-AEA, an adaptive penalty function method is also introduced. The main idea of the penalty function method is to construct a fitness function by adding a penalty value to the objective function, thereby transforming a constrained optimization problem into an unconstrained one. The proposed penalty function method, combined with the constraint relaxation method mentioned above, makes up the main constraint handling mechanism in μ-AEA.

Firstly, Eq. (11) defines the violation of the i-th constraint by an individual X_k of the population, v_i(X_k):

\[ v_i(X_k) = \begin{cases} \max\{0,\, g_i(X_k)\}, & i = 1, 2, \ldots, p \\ \left|h_i(X_k)\right|, & i = p+1, p+2, \ldots, m \end{cases} \qquad (11) \]

If an individual X_k violates the i-th constraint, then v_i(X_k) > 0; otherwise, v_i(X_k) = 0. If the individual X_k satisfies all m constraints, then X_k is a feasible solution.

Secondly, the fitness function after adding the penalty is defined as follows:

\[ F(X_k) = \begin{cases} f(X_k), & \text{if } X_k \text{ is feasible} \\ f(X_k) + \sum_{i=1}^{m} \lambda_i\, v_i(X_k), & \text{otherwise} \end{cases} \qquad (12) \]

where λ_i is the penalty coefficient. According to Eq. (12), F(X_k) is equal to the objective function value when the individual is a feasible solution. Otherwise, F(X_k) is adjusted adaptively according to the extent to which the different constraints are violated. In order to balance the objective function value and the penalty, this paper incorporates the maximum absolute value of the objective function into the penalty coefficient, so that the penalty does not grow too large or too small. Meanwhile, since different constraints are differently hard to satisfy, the algorithm should exert a large penalty on constraints which are difficult to satisfy and a small penalty on easily satisfied constraints. The difficulty of each constraint is estimated by counting, in every generation, the number of individuals in the population that violate that specific constraint; in other words, for one specific constraint, a larger violation count means it is more difficult to satisfy. Therefore, the penalty coefficient is defined as follows:

\[ \lambda_i = \left|f_{\max}\right| \cdot 10^{\,s_i/K}, \qquad i = 1, 2, \ldots, m \qquad (13) \]

where f_max is the maximum absolute value of the objective function, f_max = max{f(X_1), f(X_2), ..., f(X_K)}, and s_i is the number of individuals in the population that violate the i-th constraint.

With the gradual evolution of the population, the penalty coefficient decreases greatly as more and more solutions enter the feasible region. The algorithm then automatically switches its main task from locating the feasible region to searching for the global optimum.
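A minimal sketch of Eqs. (11)–(13), following the reading of Eq. (13) given above (λ_i = |f_max|·10^(s_i/K)); the names and the violation-count bookkeeping are illustrative assumptions rather than the paper's code.

```python
def penalized_fitness(x, f, inequalities, equalities, f_max, violation_counts, K):
    """Eqs. (11)-(13): add an adaptive penalty lambda_i * v_i(x) for each violated constraint."""
    v = [max(0.0, g(x)) for g in inequalities] + [abs(h(x)) for h in equalities]  # Eq. (11)
    if all(vi == 0.0 for vi in v):
        return f(x)                                    # feasible: fitness equals the objective
    penalty = 0.0
    for vi, s_i in zip(v, violation_counts):
        lam = abs(f_max) * 10.0 ** (s_i / K)           # Eq. (13): harder constraints get a larger lambda_i
        penalty += lam * vi
    return f(x) + penalty                              # Eq. (12)
```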

By the definition of relatively feasible solutions, the selection strategy between target and trial individuals in μ-AEA temporarily treats an individual as relatively feasible if its total absolute constraint violation is less than μ, which is similar to the feasibility approach in Deb (2000). The selection strategy of μ-AEA is adopted as follows:

(1) If both individuals satisfy TCV ≤ μ, then the individual with the better fitness function value is preferred.

(2) If only one of the two individuals satisfies TCV ≤ μ, then that relatively feasible solution is preferred.

(3) If neither individual satisfies TCV ≤ μ and their TCV values are not equal, then the individual with the smaller TCV is chosen.

(4) If neither individual satisfies TCV ≤ μ and their TCV values are equal, then the solution with the better fitness function value is preferred.

The adaptive relaxation method and the relatively feasible selection strategy enhance the global search by gradually forcing the population toward the feasible region; a minimal sketch of the pairwise rule is given below.
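The sketch below follows the four rules above; the dictionary-based representation of an individual (keys "tcv" and "fit") is an illustrative assumption.

```python
def select(target, trial, mu):
    """Pairwise selection between target and trial per the four rules above.
    Each argument is a dict with keys 'tcv' (total constraint violation) and 'fit' (fitness)."""
    t_ok, r_ok = target["tcv"] <= mu, trial["tcv"] <= mu
    if t_ok and r_ok:                        # rule (1): both relatively feasible -> better fitness
        return trial if trial["fit"] < target["fit"] else target
    if t_ok != r_ok:                         # rule (2): only one relatively feasible -> prefer it
        return target if t_ok else trial
    if target["tcv"] != trial["tcv"]:        # rule (3): both infeasible -> smaller violation
        return trial if trial["tcv"] < target["tcv"] else target
    return trial if trial["fit"] < target["fit"] else target  # rule (4): tie -> better fitness
```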

Because the ranking strategies for constrained and unconstrained optimization problems differ, the solutions of a constrained optimization problem cannot be ranked by the objective function value alone. A novel ranking strategy is therefore proposed based on the constraint relaxation method and the adaptive penalty function method. The ranking strategy is as follows: feasible individuals are the solutions users ultimately want, so they are ranked in the first grade; relatively feasible solutions have the potential to enter the feasible region, so they are ranked in the second grade; and the remaining individuals in the population belong to the third grade. In a word, the individuals in the population are ranked in three grades: individuals in the first grade are the best, and individuals in the third grade are the worst.
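The three-grade ranking can be expressed as a sort key; the helper below is an illustrative sketch (grade 0 = feasible, 1 = relatively feasible, 2 = the rest), with exact feasibility tested against a zero tolerance.

```python
def rank_grade(tcv, mu, tol=0.0):
    """Three-grade ranking key: 0 = feasible, 1 = relatively feasible (TCV <= mu), 2 = the rest."""
    if tcv <= tol:
        return 0
    return 1 if tcv <= mu else 2

# Individuals can then be ordered by (grade, fitness), best first, e.g.:
# population.sort(key=lambda ind: (rank_grade(ind["tcv"], mu), ind["fit"]))
```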

3.3. Implementation steps of the μ-AEA algorithm

According to the discussion given above, the implementation steps of the μ-AEA algorithm can be described as follows:

Step 1. Set the population size K, the maximal number of iterations t_m, and ω, ϑ; initialize the population P_1^t randomly within the search scope and set the iteration counter t = 1.

Step 2. Evaluate the objective function values, the fitness function values using Eq. (12) and the total absolute violations using Eq. (9) for the population P_1^t; set μ(1) as the median value of the TCVs.

Step 3. Generate another population P_2^t by disordering the arrangement of individuals in P_1^t, as follows: suppose the sequence numbers of the individuals in population P_1^t run from 1 to K; randomly choose an integer I within [2, K] and rearrange the individual order of population P_1^t according to the sequence [I, I+1, ..., K, 1, 2, ..., I−1] to form the population P_2^t.

Step 4. Adjust the order of the objective function values, fitness function values and total absolute violations TCV from Step 2 so that they correspond to the order of individuals in population P_2^t, and then calculate C_ij^t from x_ij^t − y_ij^t using Eq. (2) and the fitness difference F(X_i^t) − F(Y_i^t) using Eq. (12).

Step 5. Compute the probability p_ij^t according to Eq. (3) and the annealing temperature T^t according to Eq. (7), and renew the individuals in P_1^t to form an intermediate population P_3^t.

Step 6. Evaluate the objective function values, fitness function values and total absolute violations of P_3^t, compare the populations P_1^t and P_3^t using the relatively feasible selection strategy proposed in Section 3, and retain the good solutions for the next step.

Step 7. Update the value of μ by Eq. (10), rank the retained solutions from Step 6 with the ranking strategy described in Section 3, and select the ω best solutions to form the promising solution set.

Step 8. Compute the mean and standard deviation of every variable in the promising solution set, establish the probability model of Eq. (8), and generate new solutions by sampling from this model.

Step 9. Replace the retained solutions from Step 6 with the ratio ϑ to form a new population P_1^t; check whether the maximal number of iterations t_m has been reached: if yes, go to Step 10; otherwise, set t = t + 1 and go to Step 2.

Step 10. Output the best solution.
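Putting the steps together, the following skeleton sketches the overall loop under the assumption that a problem container like the one shown in Section 1 and the helper functions from the earlier sketches (aea_step, total_constraint_violation, penalized_fitness, select, rank_grade, update_mu, umdac_sample) are in scope; bound handling, the rotation of Step 3 and the replacement scheme of Step 9 are simplified and illustrative, not the authors' code.

```python
import numpy as np

def mu_aea(problem, K=100, t_max=2000, omega=50, theta=0.5, delta=1e-4, rng=None):
    """High-level, illustrative skeleton of Steps 1-10."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.array([b[0] for b in problem.bounds])
    hi = np.array([b[1] for b in problem.bounds])
    tcv1 = lambda x: total_constraint_violation(x, problem.inequalities, problem.equalities)

    P1 = rng.uniform(lo, hi, size=(K, len(problem.bounds)))            # Step 1
    mu = float(np.median([tcv1(x) for x in P1]))                       # Step 2
    for t in range(t_max):
        P2 = np.roll(P1, -int(rng.integers(1, K)), axis=0)             # Step 3: rotated copy of P1
        f_obj = np.array([problem.objective(x) for x in P1])
        f_max = float(np.max(np.abs(f_obj)))
        # s_i of Eq. (13): per-constraint violation counts over the population (delta as in Eq. (14))
        counts = ([sum(1 for x in P1 if g(x) > 0) for g in problem.inequalities]
                  + [sum(1 for x in P1 if abs(h(x)) > delta) for h in problem.equalities])
        fit = lambda x: penalized_fitness(x, problem.objective, problem.inequalities,
                                          problem.equalities, f_max, counts, K)
        P3, _ = aea_step(P1, P2, fit, rng=rng)                         # Steps 4-5
        P3 = np.clip(P3, lo, hi)
        keep = []                                                      # Step 6: pairwise selection
        for x_old, x_new in zip(P1, P3):
            a = {"x": x_old, "tcv": tcv1(x_old), "fit": fit(x_old)}
            b = {"x": x_new, "tcv": tcv1(x_new), "fit": fit(x_new)}
            keep.append(select(a, b, mu))
        mu = update_mu(mu, sum(k["tcv"] <= mu for k in keep), K)       # Step 7
        keep.sort(key=lambda k: (rank_grade(k["tcv"], mu), k["fit"]))
        promising = np.array([k["x"] for k in keep[:omega]])
        offspring = umdac_sample(promising, int(theta * K), rng)       # Step 8
        P1 = np.array([k["x"] for k in keep])                          # Step 9: partial replacement
        P1[rng.choice(K, size=len(offspring), replace=False)] = np.clip(offspring, lo, hi)
    return min(keep, key=lambda k: (rank_grade(k["tcv"], mu), k["fit"]))   # Step 10
```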

4. Experimental results on benchmark functions

In order to test the performance of the proposed algorithm μ-AEA, 11 benchmark problems with equality and/or inequality constraints are used to test and compare μ-AEA against other constrained optimization algorithms. These 11 benchmark problems, which are given in CEC 2006 (Liang et al., 2006), include linear, nonlinear, quadratic, cubic and polynomial objective functions with different numbers of variables and different types of constraints. The mathematical characteristics of these benchmark problems are summarized in Table 1, where, for the convenience of the experiments, the maximization functions f2, f3 and f8 have been transformed into minimization functions by adding a minus sign.

In Table 1, LI, NI, LE and NE denote linear inequalities, nonlinear inequalities, linear equations and nonlinear equations, respectively. The feasibility ratio ρ is an estimate of the ratio of the feasible space to the entire search space. The number of active constraints is represented by a, N denotes the number of variables involved, and Optimum is the published optimal value.

Table 1
Summary of main characteristics of the benchmark problems.

Test function  Optimum         N   Type of function  ρ (%)    LI  NI  LE  NE  a
f1             Min −15         13  Quadratic         0.011    9   0   0   0   6
f2             Min −0.804      20  Nonlinear         99.990   1   1   0   0   1
f3             Min −1          10  Nonlinear         0.0000   0   0   0   1   1
f4             Min −30665.5    5   Quadratic         52.1230  0   6   0   0   2
f5             Min 5126.4965   4   Nonlinear         0.0000   2   0   0   3   3
f6             Min −6961.8     2   Nonlinear         0.0066   0   2   0   0   2
f7             Min 24.306      10  Quadratic         0.0003   3   5   0   0   6
f8             Min −0.09583    2   Nonlinear         0.8560   0   2   0   0   0
f9             Min 680.63      7   Nonlinear         0.5256   0   4   0   0   2
f10            Min 7049.25     8   Linear            0.0005   3   3   0   0   3
f11            Min 0.7499      2   Quadratic         0.0000   0   0   0   1   1

4.1. Parameter settings and testing details

Usually, an equality constraint is very difficult to satisfy exactly. So, in realistic optimization problems, an equality constraint is usually translated into an inequality constraint by introducing a tolerance δ, as shown in Eq. (14) (Barbosa & Lemonge, 2005):

\[ \left|h(X)\right| \le \delta \qquad (14) \]

where δ is a very small positive number.

The following parameters were used throughout this study: the population size K is equal to 100; the maximum number of iterations t_m is set to 2000; ω and ϑ are 50 and 0.5, respectively; and the tolerance δ in Eq. (14) is set to 10^−4, the same value used in the other literature. A total of 30 independent runs are performed on each benchmark problem.

4.2. Experimental results on benchmark functions and the comparison with other methods

The algorithms DE + APM, PSO-DE, EA and ANT-b used for comparison with μ-AEA are described as follows: DE + APM was proposed


by Eduardo and Silva (2011), who combined an adaptive constraint handling technique with differential evolution with dynamic use of variants in engineering optimization; Liu, Cai, and Wang (2010) proposed the hybrid PSO-DE algorithm, which uses the feasibility approach to handle the constraints while optimizing the objective function with two populations of the same size; EA is a hybrid constraint handling method based on an evolutionary algorithm, proposed by Mani and Patvardhan (2009) for constrained optimization problems; and ANT-b, proposed by Leguizamon and Coello (2009), combines a boundary search with an ant colony metaphor algorithm for constrained optimization problems. PSO-DE was run 100 times; the other algorithms were all run 30 times.

The experimental results of μ-AEA and the other four algorithms are listed in Table 2, in which the worst function value (WORST), best function value (BEST), average function value (AVERAGE), standard deviation (STD) and MAPE (Mean Absolute Percentage Error) of the 30 independent runs are compared. The MAPE is defined as follows:

\[ \mathrm{MAPE} = \left|\frac{\mathrm{optim} - \mathrm{AVERAGE}}{\mathrm{optim}}\right| \times 100\% \qquad (15) \]

where optim is the published optimal value of the benchmark function.

The best results among the different algorithms are marked in bold face in Table 2. Data not given and/or not available in the literature are denoted as NA. The number of no-worse results (NNWR) compared with the others and the average of MAPE (%) (AMAPE) are listed at the bottom of Table 2.

From Table 2, it is clear that the algorithm μ-AEA can find all the global optima of the 11 benchmark functions. Also, except for functions f7 and f10, the global optima of the remaining 9 functions are found by μ-AEA in almost all of the 30 runs with very small standard deviations. It performs especially well on function f2, which has 20 dimensions and many local optima. For f4, f5, f6 and f8, μ-AEA and DE + APM obtain the same results, but for the remaining 7 problems μ-AEA obtains much better results than DE + APM. PSO-DE performs better than μ-AEA on f7 and f10, and equally good solutions are obtained on 7 problems, but on f2 PSO-DE is worse than μ-AEA. Compared with EA, μ-AEA obtains better results for 4 out of 10 problems, and it performs better for 7 out of 10 problems compared with ANT-b.

From the perspective of MAPE (%), μ-AEA's performance is equal to that of PSO-DE in terms of NNWR and better than that of the other three algorithms. However, when it comes to the average of MAPE (%) at the bottom of Table 2, PSO-DE is not as good as μ-AEA. Furthermore, on the whole, μ-AEA is superior to the other techniques in the number of NNWR, and it performs especially well on the standard deviations of the results over the 30 independent runs. This demonstrates that the proposed μ-AEA can find the global optimum with greater stability.



Table 2
Comparison results of the 11 functions among μ-AEA, DE + APM, PSO-DE, EA and ANT-b (BEST, AVERAGE, WORST, STD and MAPE (%) of each algorithm on each function; NNWR and AMAPE (%) are given in the last rows).


In order to test the experimental time complexity of the μ-AEA algorithm, the average CPU time (ACPU) is measured once the algorithm's number of function evaluations (FENs) reaches a preset value. According to Table 2, PSO-DE gives the best results among DE + APM, PSO-DE, EA and ANT-b, so in this section the results of the μ-AEA algorithm are compared only with PSO-DE. For the convenience of comparison, the stopping criterion of both algorithms is a maximum number of FENs taken from the PSO-DE literature. μ-AEA is implemented in MATLAB, and the results are obtained on a DELL PRECISION T5500 computer with an Intel Xeon E5645 CPU at 2.40 GHz and 24 GB of RAM under the Windows 7 operating system. A total of 30 runs are conducted for μ-AEA, and the best (BEST), worst (WORST) and average (AVERAGE) function values, the standard deviation (STD) and the ACPU of the 30 independent runs of μ-AEA are listed in Table 3. The results of PSO-DE in Table 3 are taken from the literature. The algorithms μ-AEA and PSO-DE share the same FENs, which are listed in the middle of Table 3.

As can be seen from Table 3, μ-AEA shows better results than PSO-DE for g01, g02, g04, g06, g08 and g11 when they share the same FENs, while PSO-DE obtains better results than μ-AEA for g03, g07, g09 and g10. For g01, g04, g06, g08 and g09, μ-AEA and PSO-DE obtain the same good results in terms of BEST, AVERAGE and WORST, with only a small gap between the two algorithms in terms of STD. In terms of ACPU, the maximum value consumed by μ-AEA is 2.27 s and the minimum value is 0.05 s. It can therefore be concluded that the proposed μ-AEA has a reasonable experimental time complexity, in general, for hard problems with high-dimensional functions or large computational requirements.

In order to observe the convergence of μ-AEA in a more visual way, Fig. 1 displays the best function value of each generation for all 11 benchmark test functions. In Fig. 1, only the first 300 iterations are plotted, because the changes in the best function values after 300 iterations are insignificant with respect to the earlier iterations. After 300 iterations, the best function values of the 11 benchmark test functions achieved by μ-AEA are −14.9999, −0.80338, −0.99934, −30665.5, 5126.505, −6961.81, 24.9089, −0.09583, 680.6362, 7054.887, and 0.7499, respectively.

The best function value at each iteration has been normalized to [0, 1] according to the ceilings and floors of the different function results listed in Table 4, so that all functions can be shown on a unified coordinate axis. Combining the best function values listed above with Fig. 1, it is obvious that f4, f6, f8, f9 and f11 rapidly converge to the optimum within no more than 200 iterations, and the remaining test functions also converge gradually toward the optimum within 300 iterations.

4.3. Nonparametric statistical analysis of the results using the Wilcoxon signed ranks test method

In order to make a further comparison between μ-AEA and the other algorithms, a nonparametric statistical procedure, Wilcoxon's test (Joaquín, Salvador, Daniel, & Francisco, 2011), is conducted. This test can detect whether or not there are significant differences between μ-AEA and the other algorithms. Wilcoxon's test is described as follows: define d_i as the difference between the results of the two algorithms on the i-th of the n problems. This paper applies average ranks for dealing with ties; for example, if two differences are tied at ranks 1 and 2, rank 1.5 is assigned to both of them (functions f1 and f4 in Table 5 are such a case).

Following Joaquín et al. (2011), as Eq. (16) shows, let R+ be the sum of the ranks on which the first algorithm outperforms the second one, and R− the sum of the ranks on which the first algorithm underperforms the second one. The ranks of d_i = 0 are split evenly between the two sums; if there is an odd number of them, one is ignored.


Table 3
Comparison of the experimental time complexity between PSO-DE and μ-AEA.

F    PSO-DE (2010): BEST / AVERAGE / WORST / STD       FENs     μ-AEA: BEST / AVERAGE / WORST / STD               ACPU (s)
g01  −15.000 / −15.000 / −15.000 / 2.1E−08             140,100  −15.000 / −15.000 / −15.000 / 1.82E−09            0.9350
g02  −0.8036 / −0.7567 / −0.6368 / 3.3E−02             140,100  −0.8036 / −0.7710 / −0.7709 / 1.82E−03            1.1489
g03  −1.0005 / −1.0005 / −1.0005 / 3.8E−12             140,100  −1.0005 / −1.0005 / −1.0004 / 1.83E−05            0.6543
g04  −30665.54 / −30665.54 / −30665.54 / 8.3E−10       70,100   −30665.54 / −30665.54 / −30665.54 / 6.72E−11      0.3691
g05  NA / NA / NA / NA                                 140,100  5126.497 / 5126.497 / 5126.497 / 8.00E−08         0.7454
g06  −6961.81 / −6961.81 / −6961.81 / 2.3E−09          140,100  −6961.81 / −6961.81 / −6961.81 / 1.85E−12         0.6993
g07  24.306 / 24.306 / 24.306 / 1.3E−06                140,100  24.306 / 24.328 / 24.366 / 1.36E−02               0.9270
g08  −0.09583 / −0.09583 / −0.09583 / 1.3E−12          10,600   −0.09583 / −0.09583 / −0.09583 / 7.33E−17         0.0529
g09  680.63 / 680.63 / 680.63 / 4.6E−13                140,100  680.63 / 680.63 / 680.63 / 5.09E−05               0.7584
g10  7049.25 / 7049.25 / 7049.25 / 3.0E−05             140,100  7049.25 / 7052.13 / 7068.44 / 4.63E+00            2.2749
g11  0.7499 / 0.7499 / 0.7500 / 2.5E−07                70,100   0.7499 / 0.7499 / 0.7499 / 1.51E−14               0.3114

Fig. 1. The convergence graphs of μ-AEA on the 11 benchmark test functions.


\[ R^{+} = \sum_{d_i > 0} \mathrm{rank}(d_i) + \frac{1}{2}\sum_{d_i = 0} \mathrm{rank}(d_i), \qquad R^{-} = \sum_{d_i < 0} \mathrm{rank}(d_i) + \frac{1}{2}\sum_{d_i = 0} \mathrm{rank}(d_i) \qquad (16) \]

Based on the above definitions, let T = min(R+, R−). If T is no larger than the critical value of Wilcoxon's distribution for n degrees of freedom (listed in Table B.12 of Zar's book (Zar, 2009)), the null hypothesis of equality of the two algorithms is rejected, and it can be concluded that the first algorithm outperforms the other with the associated p-value. The p-value can be obtained with well-known statistical software packages (SPSS, SAS, R, etc.).
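For reference, the same kind of paired comparison can be run with standard statistical software; the SciPy sketch below uses zero_method="zsplit", which splits the ranks of zero differences evenly as in Eq. (16). The numbers shown are taken from Table 5 purely for illustration.

```python
from scipy.stats import wilcoxon

# Paired AVERAGE results of two algorithms on the same benchmark set (values from Table 5, for illustration).
alg_a = [-15.000, -0.804, -1.001, -30665.54, 5126.497, -6961.81, 24.317, 680.63]
alg_b = [-15.000, -0.803, -1.000, -30665.54, 5138.37, -6961.74, 24.640, 680.67]

# zero_method="zsplit" splits the ranks of zero differences evenly between R+ and R-.
stat, p_value = wilcoxon(alg_a, alg_b, zero_method="zsplit")
print(stat, p_value)
```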

Table 5 lists the differences d_i, their signs and their ranks for the 10 comparable functions when Wilcoxon's test is applied to the AVERAGE values of the 30 runs of μ-AEA and ANT-b.


Table 4
Ceilings and floors of the different functions for normalization.

Normalized range  f1        f2        f3        f4        f5        f6        f7        f8        f9        f10       f11
Ceiling           −2.07077  −0.14956  −0.29297  −28971.8  7842.667  −6609.84  85.77252  −0.00798  902.4805  8773.654  0.806975
Floor             −14.9999  −0.80338  −0.99934  −30665.5  5126.505  −6961.81  24.9089   −0.09583  680.6362  7054.887  0.7499

Table 5
Wilcoxon signed ranks test results 1 (μ-AEA versus ANT-b).

Functions  μ-AEA      ANT-b      d          Sign  Rank
f1         −15.000    −15.000    0          0     1.5
f2         −0.804     −0.803     −0.001     −     4.5
f3         −1.001     −1.000     −0.001     −     4.5
f4         −30665.54  −30665.54  0          0     1.5
f5         5126.497   5138.37    −11.873    −     9
f6         −6961.81   −6961.740  −0.07      −     7
f7         24.31688   24.64      −0.32312   −     8
f8         −0.09583   NA         NA         NA    NA
f9         680.63     680.67     −0.04      −     6
f10        7050.792   7199.01    −148.218   −     10
f11        0.7499     0.75       −1.00E−04  −     3

Table 6
Wilcoxon signed ranks test results 2 (μ-AEA versus the other algorithms).

Comparison             R+    R−    n   p-Value
μ-AEA versus ANT-b     1.5   53.5  10  0.00391
μ-AEA versus DE + APM  7.5   58.5  11  0.01563
μ-AEA versus PSO-DE    32    23    10  0.37500
μ-AEA versus EA        10.5  34.5  9   0.09072



The AVERAGE values of ANT-b come from Table 2. In Table 6, the values of R+ and R− are computed, and the p-value has been obtained with the R software package. The p-value of μ-AEA versus ANT-b is 0.00391, which is less than 0.01; this means that μ-AEA shows a significant improvement over ANT-b at a significance level of α = 0.01.

The Wilcoxon test is also used to detect the differences between the proposed μ-AEA and the other algorithms presented in Table 6. Analogously to the procedure for μ-AEA versus ANT-b, the tests of μ-AEA against DE + APM, PSO-DE and EA are calculated, and the results are listed in Table 6, where n is the number of functions compared; the AVERAGE values of these algorithms also come from Table 2. As the p-values in Table 6 show, μ-AEA exhibits a significant improvement over ANT-b at significance level α = 0.01, a clear improvement over DE + APM at significance level α = 0.05, an improvement over EA at a significance level of α = 0.1, and only a slight improvement over PSO-DE, with p-value = 0.375.


5. Applications and discussion on two engineering problems

In this section, the μ-AEA algorithm is applied to two engineering problems. These problems have previously been solved by other techniques, so they can be used to test the validity and effectiveness of μ-AEA by comparison with those techniques.

5.1. Engineering problem 1: Tension string design problem

Engineering problem 1, first described by Arora (1989), is the minimization of the weight of a spring subject to several stress constraints. As shown in Fig. 2, the variables to be determined are the wire diameter d, the mean coil diameter D and the number of active coils N. For convenience, the variables d, D and N are denoted x1, x2 and x3, respectively. The description of the problem is given in Eq. (17).

Fig. 2. Tension string design problem.

The tension string design problem can be mathematically formulated as follows:

\[
\begin{aligned}
\text{Minimize}\quad & f(X) = (x_3 + 2)\,x_2\,x_1^2 \\
\text{s.t.}\quad & g_1(X) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0 \\
& g_2(X) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0 \\
& g_3(X) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0 \\
& g_4(X) = \frac{x_1 + x_2}{1.5} - 1 \le 0
\end{aligned}
\qquad (17)
\]
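For illustration, Eq. (17) can be transcribed directly into two small functions and evaluated at any candidate design, for example the best μ-AEA design reported in Table 7; the helpers below are illustrative and not part of the paper.

```python
def spring_objective(x):
    """Eq. (17) objective: weight of the spring, x = (d, D, N)."""
    d, D, n = x
    return (n + 2.0) * D * d ** 2

def spring_constraints(x):
    """Eq. (17) constraints g1..g4, each required to be <= 0."""
    d, D, n = x
    return [
        1.0 - D ** 3 * n / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4)) + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * n),
        (d + D) / 1.5 - 1.0,
    ]

# Evaluate the best design reported for mu-AEA in Table 7:
x_best = (0.051689, 0.356718, 11.288966)
print(spring_objective(x_best), spring_constraints(x_best))
```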

This problem has been well studied by a GA-based co-evolution model (Coello, 2000), CPSO (He & Wang, 2006), a GA using dominance-based tournament selection (Coello & Montes, 2002) and a hybrid particle swarm optimization with a feasibility-based rule, HPSO (He & Wang, 2007). Table 7 lists the best solutions obtained by these techniques as well as by the proposed μ-AEA, and Table 8 shows the statistics of the results over 30 independent runs.

Table 7
Comparison of the best solution for engineering problem 1 by different methods.

Methods            x1(d)     x2(D)     x3(P)      f(x)
Coello             0.051480  0.351661  11.632201  0.0127048
Coello and Montes  0.051989  0.363965  10.890522  0.0126810
CPSO               0.051728  0.357644  11.244543  0.0126747
HPSO               0.051706  0.357126  11.265083  0.0126652
μ-AEA              0.051689  0.356718  11.288966  0.0126652

Table 8
Statistical results of different methods for engineering problem 1.

Methods            BEST       AVERAGE    WORST      STD
Coello             0.0127048  0.0127690  0.012822   3.9390e−005
Coello and Montes  0.0126810  0.0127420  0.012973   5.9000e−005
CPSO               0.0126747  0.0127300  0.012924   5.1985e−004
HPSO               0.0126652  0.0127072  0.0127191  1.5824e−005
μ-AEA              0.0126652  0.0126652  0.0126652  6.3387e−013

Table 7 shows that the best solution obtained by μ-AEA is equal to that of HPSO and is much better than the other previously reported solutions. Furthermore, as shown in Table 8, the AVERAGE and WORST solutions obtained by μ-AEA are superior to those of the other techniques, and the standard deviation of μ-AEA is close to zero.

5.2. Engineering problem 2: Welded beam design problem

The second engineering problem is the welded beam design problem shown in Fig. 3, which is taken from Rao (2009). The goal is to minimize the fabrication cost subject to constraints on the shear stress τ, the bending stress σ, the buckling load Pc and the end deflection δ, together with several side constraints. Eq. (18) gives the mathematical formulation of the problem. The four variables h, l, t and b, denoted x1, x2, x3 and x4 in Eq. (18), respectively, have been determined by several optimization tools (Coello, 2000; Coello & Montes, 2002; He & Wang, 2006; He & Wang, 2007) as well as by the newly proposed μ-AEA. The comparison of the best solutions of the different techniques and the statistics of the results over 30 independent runs are shown in Table 9 and Table 10, respectively.

Fig. 3. Welded beam design problem.

Table 9
Comparison of the best solution for engineering problem 2 by different methods.

Methods            x1(h)     x2(l)     x3(t)     x4(b)     f(x)
Coello             0.208800  3.420500  8.997500  0.210000  1.748309
Coello and Montes  0.205986  3.471328  9.020224  0.206480  1.728226
CPSO               0.202369  3.544214  9.048210  0.205723  1.728024
HPSO               0.205730  3.470489  9.036624  0.205730  1.724852
μ-AEA              0.205729  3.470489  9.036624  0.205730  1.724852

Table 10
Statistical results of different methods for engineering problem 2.

Methods            BEST      AVERAGE   WORST     STD
Coello             1.748309  1.771973  1.785835  0.011220
Coello and Montes  1.728226  1.792654  1.993408  0.074713
CPSO               1.728024  1.748831  1.782143  0.012926
HPSO               1.724852  1.749040  1.814295  0.040049
μ-AEA              1.724852  1.724852  1.724582  1.13e−015

The welded beam design problem can be mathematically formulated as follows:


\[
\begin{aligned}
\text{Minimize}\quad & f(X) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14.0 + x_2) \\
\text{s.t.}\quad & g_1(X) = \tau(X) - 13600 \le 0 \\
& g_2(X) = \sigma(X) - 30000 \le 0 \\
& g_3(X) = x_1 - x_4 \le 0 \\
& g_4(X) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4 (14.0 + x_2) - 5.0 \le 0 \\
& g_5(X) = 0.125 - x_1 \le 0 \\
& g_6(X) = \delta(X) - 0.25 \le 0 \\
& g_7(X) = P - P_c(X) \le 0 \\
& 0.1 \le x_1, x_4 \le 2, \qquad 0.1 \le x_2, x_3 \le 10
\end{aligned}
\qquad (18)
\]

where

\[
\begin{aligned}
\tau(X) &= \sqrt{(\tau')^2 + 2\tau'\tau''\,\frac{x_2}{2R} + (\tau'')^2}, \qquad
\tau' = \frac{P}{\sqrt{2}\,x_1 x_2}, \qquad
\tau'' = \frac{QR}{J}, \qquad
Q = P\left(L + \frac{x_2}{2}\right), \\
R &= \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \qquad
J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}, \\
\sigma(X) &= \frac{6PL}{x_4 x_3^2}, \qquad
\delta(X) = \frac{4PL^3}{E x_3^3 x_4}, \qquad
P_c(X) = \frac{4.013E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right), \\
P &= 6000, \quad L = 14, \quad E = 30 \times 10^6, \quad G = 12 \times 10^6
\end{aligned}
\]
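Likewise, Eq. (18) and its auxiliary expressions can be transcribed into a single evaluation routine; the function below is an illustrative sketch, checked here at the best design reported for μ-AEA in Table 9.

```python
from math import sqrt

P, L, E, G = 6000.0, 14.0, 30e6, 12e6

def welded_beam(x):
    """Eq. (18): return (cost, [g1..g7]) for x = (h, l, t, b); each g must be <= 0."""
    x1, x2, x3, x4 = x
    tau_p = P / (sqrt(2.0) * x1 * x2)                                  # tau'
    Q = P * (L + x2 / 2.0)
    R = sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (sqrt(2.0) * x1 * x2 * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2))
    tau_pp = Q * R / J                                                 # tau''
    tau = sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp ** 2)
    sigma = 6.0 * P * L / (x4 * x3 ** 2)
    delta = 4.0 * P * L ** 3 / (E * x3 ** 3 * x4)
    Pc = 4.013 * E * sqrt(x3 ** 2 * x4 ** 6 / 36.0) / L ** 2 * (1.0 - x3 / (2.0 * L) * sqrt(E / (4.0 * G)))
    cost = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    g = [tau - 13600.0, sigma - 30000.0, x1 - x4,
         0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
         0.125 - x1, delta - 0.25, P - Pc]
    return cost, g

# Evaluate the best design reported for mu-AEA in Table 9:
print(welded_beam((0.205729, 3.470489, 9.036624, 0.205730)))
```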

From Table 9, it is clear that the best solution obtained by the μ-AEA algorithm is as good as that of HPSO and much better than those of the other methods. Also, according to Table 10, the stability of μ-AEA is superior to that of HPSO: the standard deviation of the solutions obtained by μ-AEA is nearly zero, which is far smaller than those of the other methods.

Overall, the proposed constraint handling algorithm μ-AEA is very efficient and reliable for both general and complex constrained problems, so it has great potential for handling many engineering application problems with constraints.

6. Conclusion

This paper presents a new constraint handling method, μ-AEA, based on the AEA modified by embedding the estimation of distribution algorithm, together with the μ-constraint technique. By introducing μ into the iteration process, the proposed constraint handling method can gradually converge to the feasible region from the relatively feasible region. The relaxation of the constraints also allows more infeasible individuals, which may contain useful information, into the next generation, enabling greater exploration of the search space for locating the global optimum. In the meantime, the introduction of an adaptive penalty function method, which adaptively adjusts the penalty coefficient according to the features of the constraints, keeps the penalty moderate. Comparison with four other algorithms on 11 well-known benchmark functions shows that the new method can locate the global optimum reliably and efficiently. Overall, the proposed algorithm μ-AEA is very suitable for constrained optimization problems and also has great potential for handling engineering optimization problems with constraints.

Acknowledgments

The authors of this paper appreciate the National NaturalScience Foundation of China (under Project No. 21176072) andthe Fundamental Research Funds for the Central Universities fortheir financial support.

References

Arora, J. S. (1989). Introduction to optimum design. New York: McGraw-Hill.

Barbosa, H. J. C., & Lemonge, A. C. C. (2005). A genetic algorithm encoding for a class of cardinality constraints. In Proceedings of the 2005 conference on genetic and evolutionary computation (pp. 1193–1200).

Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41(2), 113–127.

Coello, C. A. C., & Montes, E. M. (2002). Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16(3), 193–203.

Coello, C. A. C. (2002). Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191(11–12), 1245–1287.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2–4), 311–338.

Eduardo, K., & Silva, D. (2011). An adaptive constraint handling technique for differential evolution with dynamic use of variants in engineering optimization. Optimization and Engineering, 12(1–2), 31–54.

Floudas, C. A., Aggarwal, A., & Ciric, A. R. (1989). Global optimum search for non-convex NLP and MINLP problems. Computers and Chemical Engineering, 13(10), 1117–1132.

González, C., Lozano, J. A., & Larrañaga, P. (2002). Mathematical modeling of UMDAc algorithm with tournament selection. Behavior on linear and quadratic functions. International Journal of Approximate Reasoning, 31(3), 313–340.

Harth, E., & Tzanakou, E. (1974). Alopex: A stochastic method for determining visual receptive fields. Vision Research, 14(12), 1475–1482.

Haykin, S., Chen, Z., & Becker, S. (2004). Stochastic correlative learning algorithms. IEEE Transactions on Signal Processing, 52(8), 2200–2209.

He, Q., & Wang, L. (2006). An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Engineering Applications of Artificial Intelligence, 20(1), 89–99.

He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Applied Mathematics and Computation, 186(2), 1407–1422.

Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: The University of Michigan Press.

Joaquín, D., Salvador, G., Daniel, M., & Francisco, H. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.

Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks (pp. 1942–1948).

Kocis, G. R., & Grossmann, I. E. (1998). Global optimization of non-convex mixed-integer non-linear programming (MINLP) problems in process synthesis. Industrial and Engineering Chemistry Research, 27(8), 1407–1421.

Leguizamon, G., & Coello, C. A. C. (2009). Boundary search for constrained numerical optimization problems with an algorithm inspired by the ant colony metaphor. IEEE Transactions on Evolutionary Computation, 13(2), 350–368.

Li, S. J., Li, F., & Mei, Z. Z. (2010). A hybrid evolutionary algorithm based on Alopex and estimation of distribution algorithm and its application for optimization. In Proceedings of the 1st international conference on advances in swarm intelligence (pp. 549–557).

Li, S. J., & Li, F. (2011). Alopex-based evolutionary optimization algorithms and its application to reaction kinetic parameter estimation. Computers & Industrial Engineering, 60(2), 341–348.

Liang, J. J., Runarsson, T. P., Mezura-Montes, E., Clerc, M., Suganthan, P., Coello, C., et al. (2006). Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. Technical Report. Singapore: Nanyang Technological University.

Liu, H., Cai, Z. X., & Wang, Y. (2010). Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Applied Soft Computing, 10(2), 629–640.

Luo, Y. Q., Yuan, X. G., & Liu, Y. G. (2007). An improved PSO algorithm for solving non-convex NLP/MINLP problems with equations. Computers and Chemical Engineering, 31(3), 153–162.

Mani, A., & Patvardhan, C. (2009). A novel hybrid constraint handling technique for evolutionary optimization. In Proceedings of the eleventh conference on congress on evolutionary computation (pp. 2577–2583).

Mühlenbein, H., Bendisch, J., & Voigt, H. M. (1996). From recombination of genes to the estimation of distributions II. Continuous parameters. Parallel Problem Solving from Nature - Lecture Notes in Computer Science, 1141, 188–197.

Osman, I. H., & Christofides, N. (1994). Capacitated clustering problems by hybrid simulated annealing and tabu search. International Transactions in Operational Research, 11(3), 317–336.

Özgür, Y., & Beytepe, A. (2005). Penalty function method for constrained optimization with genetic algorithms. Mathematical and Computational Applications, 10(1), 45–56.

Panagiotopoulos, D., Orovas, C., & Syndoukas, D. (2010). A heuristically enhanced gradient approximation (HEGA) algorithm for training neural networks. Neurocomputing, 73(7–9), 1303–1323.

Rao, S. S. (2009). Engineering optimization. New York: Wiley.

Ryoo, H. S., & Sahinidis, N. V. (1995). Global optimization of non-convex NLPs and MINLPs with applications in process design. Computers and Chemical Engineering, 19(5), 551–566.

Snyman, J. A., Nielen, S., & Roux, W. J. (1994). A dynamic penalty function method for the solution of structural optimization problems. Applied Mathematical Modelling, 18(8), 453–460.

Srinivas, M., & Rangaiah, G. P. (2007). Differential evolution with tabu list for solving nonlinear and mixed-integer nonlinear programming problems. Industrial and Engineering Chemistry Research, 46(22), 7126–7135.

Storn, R., & Price, K. (1997). Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.

Sun, J. Y., Zhang, Q. F., & Edward, P. K. T. (2005). DE/EDA: A new evolutionary algorithm for global optimization. Information Sciences, 169(3–4), 249–262.

Takahama, T., & Sakai, S. (2006). Constrained optimization by the e constrained differential evolution with gradient-based mutation and feasible elites. In Proceedings of the IEEE congress on evolutionary computation (pp. 16–21).

Tetsuyuki, T., & Setsuko, S. (2010). Efficient constrained optimization by the e constrained adaptive differential evolution. In Proceedings of the WCCI-2010 IEEE world congress on computational intelligence (pp. 18–23).

Unnikrishnan, K. P., & Venugopal, K. P. (1994). Alopex: A correlation-based learning algorithm for feedforward and recurrent neural networks. Neural Computation, 6(3), 469–490.

Yuan, Q., & Qian, F. (2010). A hybrid genetic algorithm for twice continuously differentiable NLP problems. Computers and Chemical Engineering, 34(1), 36–41.

Zahara, E., & Kao, Y. T. (2009). Hybrid Nelder-Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36(2), 3880–3886.

Zar, J. H. (2009). Biostatistical analysis. Prentice Hall.

Zhang, H. B., & Rangaiah, G. P. (2012). An efficient constraint handling method with integrated differential evolution for numerical and engineering optimization. Computers and Chemical Engineering, 37, 74–88.