
Hybrid Particle Swarm-Based-Simulated Annealing Optimization Techniques

Nasser Sadati
Intelligent Systems Laboratory, Electrical Engineering Department
Sharif University of Technology, Tehran, IRAN

Majid Zamani
Intelligent Systems Laboratory, Electrical Engineering Department
Sharif University of Technology, Tehran, IRAN

Hamid Reza Feyz Mahdavian
Intelligent Systems Laboratory, Electrical Engineering Department
Sharif University of Technology, Tehran, IRAN

Abstract— Particle Swarm Optimization (PSO) algorithms have recently been introduced as intelligent optimizers with several highly desirable attributes. In this paper, two new hybrid Particle Swarm Optimization schemes are proposed. The proposed hybrid algorithms use the Particle Swarm Optimization technique in conjunction with the Simulated Annealing (SA) approach. By simulating three different test functions, it is shown how the proposed hybrid algorithms are capable of converging toward the global minimum or maximum points. More importantly, the simulation results indicate that the proposed hybrid particle swarm-based simulated annealing approaches have far better convergence characteristics than the previously developed PSO methods.

I. INTRODUCTION

Particle Swarm Optimization (PSO) is an algorithm for global optimization that was originally introduced by Kennedy and Eberhart in 1995 [1], [2]. This approach differs from other well-known Evolutionary Algorithms (EA) that have already been developed, as shown in [1], [3]-[6], in that no operators inspired by evolutionary procedures are applied to the population to generate new promising solutions. Instead, in PSO each individual of the population (named a particle; the population itself is called the swarm) adjusts its trajectory toward its own previous best position and toward the previous best position attained by any member of its topological neighborhood [7]. In the global variant of PSO, the whole swarm is considered as the neighborhood. Thus, global sharing of information takes place, and the particles profit from the discoveries and previous experience of all other companions during the search for promising regions of the landscape. For example, in the single-objective minimization case, such regions possess lower function values than others visited previously. However, PSO can fail to find the global minimum or maximum point.

The simulated annealing (SA) algorithm is a global optimization method that mimics the way a thermal system is cooled down to its lowest energy states. It has an explicit strategy to avoid local minima [8]. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them a better chance of finding configurations with lower internal energy than the initial one.

The SA algorithm searches for the global minimum stochastically, and probability theory assures that the global minimum will be found if the parameter space is sampled infinitely many times during the annealing period.

II. SWARM OPTIMIZATION

Assuming that the search space is $D$-dimensional, the $i$-th particle of the swarm is represented by the $D$-dimensional vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, and the best particle in the swarm, i.e. the particle with the smallest function value, is denoted by the index $g$, with $P_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$. The best previous position of the $i$-th particle is recorded and represented as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, while the position change (velocity) of the $i$-th particle is represented as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$, which is clamped to a maximum velocity $V_{max} = (v_{max,1}, v_{max,2}, \ldots, v_{max,D})$ specified by the user. Following this notation, the particles are manipulated according to the following equations:

$v_{id} = w\, v_{id} + c_1\, \mathrm{rand}(\cdot)\,(p_{id} - x_{id}) + c_2\, \mathrm{rand}(\cdot)\,(p_{gd} - x_{id})$   (1)

$x_{id} = x_{id} + v_{id}$   (2)

where $w$ is the inertia weight, which can be set according to the inertia weights approach [9], $c_1$ and $c_2$ are the acceleration constants which influence the convergence speed of each particle [10], and $\mathrm{rand}(\cdot)$ is a random number in the range $[0, 1]$. In equation (1), the first part represents the inertia of the previous velocity; the second part is the "cognition" part, which represents the particle's own private thinking; and the third part is the "social" part, which represents the cooperation among the particles [11]. If the sum in (1) would cause the velocity $v_{id}$ on some dimension to exceed $v_{max,d}$, then $v_{id}$ is limited to $v_{max,d}$. $V_{max}$ determines the resolution with which the regions between the present position and the target position are searched [11], [12]. If $V_{max}$ is too large, the particles might fly past good solutions; if $V_{max}$ is too small, the particles may not explore sufficiently beyond local solutions. In many applications of PSO, $V_{max}$ is set to the maximum of the dynamic range of the variables on each dimension.


The constants $c_1$ and $c_2$ represent the weighting of the stochastic acceleration terms that pull each particle toward the $p_i$ and $p_g$ positions. Low values allow particles to roam far from the target regions before being tugged back, while high values result in abrupt movement toward, or past, the target regions. Hence, the acceleration constants $c_1$ and $c_2$ are often set to 2.0, according to past experience [3]. Suitable selection of the inertia weight $w$ provides a balance between global and local exploration, thus requiring fewer iterations on average to find a sufficiently optimal solution. As originally developed, $w$ often decreases linearly from about 0.9 to 0.4 during a run. In general, the inertia weight $w$ is set according to the following equation:

$w = w_{max} - \dfrac{w_{max} - w_{min}}{iter_{max}} \cdot iter$   (3)

where $iter_{max}$ represents the maximum number of iterations, and $iter$ is the current number of iterations or generations. Moreover, $w_{max}$ and $w_{min}$ are the maximum and minimum weight values, respectively. From the above discussion, it is obvious that PSO resembles, to some extent, the "mutation" operator of Genetic Algorithms through the position update equations (1) and (2). However, it should be noted that in PSO the "mutation" operator is guided by the particle's own "flying" experience and benefits from the swarm's "flying" experience. In other words, PSO is considered as performing mutation with a "conscience", as pointed out by Eberhart and Shi [13].
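As an illustration, the following is a minimal Python sketch of the update rules (1)-(3); the function and parameter names are our own illustrative choices, not part of the paper.

```python
import random

def inertia_weight(w_max, w_min, iteration, iter_max):
    """Linearly decreasing inertia weight, equation (3)."""
    return w_max - (w_max - w_min) * iteration / iter_max

def pso_step(x, v, p_best, g_best, w, c1, c2, v_max):
    """One velocity/position update per equations (1) and (2).

    x, v, p_best, g_best and v_max are lists of length D; the update is
    applied in place and the velocity is clamped to [-v_max, v_max].
    """
    for d in range(len(x)):
        v[d] = (w * v[d]
                + c1 * random.random() * (p_best[d] - x[d])
                + c2 * random.random() * (g_best[d] - x[d]))
        v[d] = max(-v_max[d], min(v_max[d], v[d]))  # velocity clamping
        x[d] += v[d]
```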

III. SIMULATED ANNEALING

As its name implies, simulated annealing (SA) exploits an analogy between the way in which a metal cools into a minimum-energy crystalline structure (the annealing process) and the search for a minimum in a more general system. According to its computational procedure, simulated annealing can be divided into four basic steps:

1) A new state generator, which is used to generate a new solution based on the previous one. The changes to the previous solution are randomly generated in each iteration, with a statistical distribution of Cauchy or Gaussian type.

2) An acceptance function, which is used to accept or reject the new solution based on the change of the cost function. The new solution is always accepted if the cost function decreases; otherwise, it is accepted randomly with a specified probability.

3) A temperature schedule, which determines how the temperature is to be cooled and generates the new temperature value used in the next iteration.

4) A stop criterion, which determines the point at which to stop the temperature cooling and finish the optimization. (A minimal sketch of these four steps is given below.)
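The following Python sketch illustrates the four steps above. The Gaussian state generator, Metropolis-style acceptance rule, geometric cooling schedule and all default parameter values are our illustrative assumptions; the paper does not fix them.

```python
import math
import random

def simulated_annealing(cost, x0, t0=1.0, t_min=1e-4, alpha=0.95,
                        sigma=0.1, moves_per_temp=50):
    """Minimal SA sketch; all defaults are illustrative."""
    x, fx, t = list(x0), cost(x0), t0
    while t > t_min:                                   # 4) stop criterion
        for _ in range(moves_per_temp):
            # 1) new state generator: Gaussian perturbation of the current state
            y = [xi + random.gauss(0.0, sigma) for xi in x]
            fy = cost(y)
            # 2) acceptance function: keep improvements; accept worse
            #    solutions with probability exp(-(fy - fx) / t)
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
        t *= alpha                                     # 3) geometric cooling
    return x, fx
```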

SA has proved to be an effective global optimization algorithm because of the following advantages: 1) suitability to problems in a wide range of areas, 2) no restriction on the form of the cost function, 3) a high probability of finding the global optimum, and 4) easy implementation by programming. But SA is not universal, and its performance mainly depends on the following four "enough"s: 1. the initial temperature is high enough, 2. the temperature is cooled slowly enough, 3. the parameter space is sampled often enough, and 4. the stop temperature is low enough. These requirements make SA converge very slowly in most cases.

IV. PARTICLE SWARM-BASED-SIMULATED ANNEALING

PSO-B-SA is an optimization algorithm which combines PSO with SA. In fact, by combining PSO with SA, the strong points of SA can be used within PSO; this is the basic idea of PSO-B-SA. The PSO-B-SA search process starts by initializing a group of random particles. In this paper, if only $p_g$, the leader of the swarm, is subjected to SA, independently of the other particles, the algorithm is named PSO-B-SA1. But if all particles are subjected to SA, the algorithm is named PSO-B-SA2, and a group of new individuals is generated. In this case, the particles of the new generation are obtained after transforming each particle's velocity and position according to equations (1) and (2). This process evolves through time until the terminating condition is satisfied.

In the process of simulated annealing, new individuals are generated randomly around the original individuals. Here, a perturbation with scale parameter $r_t$ is applied to each particle:

$present = present + r_t \cdot \mathrm{rand}(\cdot)$   (4)

where $\mathrm{rand}(\cdot)$ is a random number between 0 and 1. Now, to find the global minimum of the following optimization problem

$\min f(x_1, x_2, \ldots, x_n)$   (5)
s.t. $x_i \in [a_i, b_i]$, $i = 1, 2, \ldots, n$

the steps of the particle swarm-based simulated annealing optimization are as follows:

1) Initialize a group of particles (the swarm size is m), including random positions and velocities.

2) Evaluate each particle's fitness.

3) If the chosen algorithm is PSO-B-SA1, then only $p_g$ is subjected to SA, independently of the other particles, and a new global best position $p_g$ is obtained. If the algorithm is PSO-B-SA2, then each particle is subjected to SA independently, and a group of new individuals is obtained.

4) For each particle, compare its fitness with that of its personal best position $p_i$. If its fitness is better, replace $p_i$ with its current position.

5) For each particle, compare its fitness with that of the global best position $p_g$. If its fitness is better, replace $p_g$ with its current position.

6) Transform each particle's velocity and position according to expressions (1) and (2).

7) This process evolves through time until the terminating condition is satisfied. (A sketch of the PSO-B-SA2 variant following these steps is given below.)
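The following compact Python sketch follows these seven steps for the PSO-B-SA2 variant (every particle is annealed each generation). It is an interpretation under stated assumptions: the perturbation of equation (4) is applied independently in each dimension, SA acceptance uses the usual Metropolis rule with a geometric cooling schedule, and all parameter names and defaults (including $r_t = 0.01$, taken from Fig. 1) are illustrative. Velocity clamping is omitted for brevity.

```python
import math
import random

def pso_b_sa2(cost, dim, lo, hi, m=40, w=0.729, c1=1.494, c2=1.494,
              r_t=0.01, t0=1.0, alpha=0.95, iterations=1000):
    """Sketch of PSO-B-SA2: every particle undergoes an SA move per generation."""
    # 1) random initial positions and velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    v = [[random.uniform(-(hi - lo), hi - lo) for _ in range(dim)] for _ in range(m)]
    p = [list(xi) for xi in x]               # personal best positions
    fp = [cost(xi) for xi in x]              # 2) evaluate each particle's fitness
    g = min(range(m), key=lambda i: fp[i])   # index of the global best
    t = t0
    for _ in range(iterations):              # 7) evolve until termination
        for i in range(m):
            # 3) SA move around the particle, equation (4), Metropolis acceptance
            y = [xd + r_t * random.random() for xd in x[i]]
            fy, fx = cost(y), cost(x[i])
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x[i] = y
            # 4), 5) update personal and global bests
            fi = cost(x[i])
            if fi < fp[i]:
                p[i], fp[i] = list(x[i]), fi
                if fi < fp[g]:
                    g = i
            # 6) velocity and position update, equations (1) and (2)
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (p[i][d] - x[i][d])
                           + c2 * random.random() * (p[g][d] - x[i][d]))
                x[i][d] += v[i][d]
        t *= alpha                           # cool the SA temperature
    return p[g], fp[g]
```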

The proposed approaches are used to optimize three different test functions, as described in Tables I and II.

TABLE I
THE TEST FUNCTIONS

Sphere:      $F_{Sph}(\vec{x}) = \sum_{i=1}^{n} x_i^2$

Rosenbrock:  $F_{Ros}(\vec{x}) = \sum_{i=1}^{n-1} \left( 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right)$

Rastrigin:   $F_{Ras}(\vec{x}) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2 \pi x_i) + 10 \right)$

TABLE II
PARAMETERS OF THE TEST FUNCTIONS

Function     Dim.   Initial Range      Goal
Sphere       30     [-100; 100]^n      0.01
Rosenbrock   30     [-30; 30]^n        100
Rastrigin    30     [-5.12; 5.12]^n    100
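For reference, the three test functions of Table I can be written directly in Python; these are the standard definitions and match the formulas above.

```python
import math

def sphere(x):
    """F_Sph(x) = sum_i x_i^2; global minimum 0 at x = 0."""
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    """F_Ros(x) = sum_i 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2; minimum 0 at x = 1."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """F_Ras(x) = sum_i x_i^2 - 10 cos(2 pi x_i) + 10; minimum 0 at x = 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```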

V. SIMULATION RESULTS

In this section, the experiments that have been done to compare the different variants of PSO with PSO-B-SA1 and PSO-B-SA2 for continuous function optimization are described. Since we are interested in understanding whether the proposed modifications of the standard PSO algorithm can improve its performance, we have focused our experimental evaluation on the comparison with other PSO algorithms. It should be noted, however, that PSO is known to be a competitive method which often produces results that are comparable to, or even better than, those produced by other metaheuristics (e.g., see [14]). In all our experiments, the PSO algorithms use the parameter values $w = 0.729$ and $c_1 = c_2 = 1.494$, as recommended in [15], unless stated otherwise. Each run has been repeated 100 times and the average results are presented. The particles have been initialized with a random position and a random velocity, where in both cases the values in every dimension have been randomly chosen according to a uniform distribution over the initial range $[X_{min}, X_{max}]$. The values of $X_{min}$ and $X_{max}$ depend on the objective function. During a run of an algorithm, the position and velocity of a particle are not restricted to the initialization intervals, but a maximum velocity $V_{max} = X_{max}$ has been used for every component of the velocity vector $v_i$. The set of test functions (see Table I) contains functions that are commonly used in the field of continuous function optimization. Table II shows the values that have been used for the dimension of these functions, the range of the corresponding initial positions and velocities of the particles, and the goals that have to be achieved by the algorithms. The first two functions (Sphere and Rosenbrock) are unimodal (i.e., they have a single local optimum that is also the global optimum) and the remaining function (Rastrigin) is multimodal (i.e., it has several local optima). All tests have been run over 10000 iterations. The swarm size used in the experiments is 40 (m = 40). In this experiment, the number of iterations required to reach a certain goal has been determined for each test function, comparing PSO-g, PSO-l, H-PSO, A H-PSO, V H-PSO, PSO-B-SA1 and PSO-B-SA2. We used the results of [16] for the different variants of PSO. For PSO-g, PSO-l and H-PSO, two different parameter sets taken from the literature have been used. One parameter set is $w = 0.6$ and $c_1 = c_2 = 1.7$, as suggested in [15] for a faster convergence rate. The other parameter set has the common parameter values $w = 0.729$ and $c_1 = c_2 = 1.494$. Algorithms that use the first (second) set of parameter values are denoted by appending "-a" (respectively, "-b") to their names; e.g., PSO-g-a denotes PSO-g with the first parameter set.

A. Significance

In the experiments done using the above algorithms, the number of iterations required to reach a specified goal for each function was recorded. If the goal could not be reached within the maximum number of 10000 iterations, the run was considered unsuccessful. The success rate denotes the percentage of successful runs. For the successful runs, the average, median, maximum and minimum numbers of iterations required to achieve the goal value were calculated. The expected number of iterations has been determined as (average/success rate). The results are all shown in Table III.
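As a small worked example, the expected number of iterations is simply the average divided by the success rate; the helper below reproduces the PSO-g-a row for the Rastrigin function in Table III.

```python
def expected_iterations(avg, success_rate):
    """Expected iterations to reach the goal, penalizing unsuccessful runs."""
    return avg / success_rate

print(round(expected_iterations(104, 0.98), 1))  # 106.1, as in Table III
```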

B. Results

The proposed approaches are first compared with PSO; the results are shown in Fig. 1. Moreover, PSO-B-SA1 and PSO-B-SA2 are compared with the other PSO algorithms. The comparison is based on the number of iterations required to reach a certain goal for each of the test functions. All algorithms use a swarm size of m = 40.




In Table III, the average, median, maximum and minimum numbers of iterations required to achieve the goal value are shown. Also, the success rate and the expected number of iterations (average/success rate) to reach the goal are given. PSO-B-SA2 is the fastest algorithm, achieving the desired goal in the fewest iterations. Only for the Sphere function does PSO-B-SA1 require a lower average of 9.6 iterations, compared to 10 for PSO-B-SA2. For all other test functions, PSO-B-SA2 obtains the lowest average and expected number of iterations required to reach the goal.

[Fig. 1 (plots not reproduced): solution quality versus iteration on the test functions; panels for the Rastrigin and Rosenbrock functions are visible in the original.]

Fig. 1. Solution quality for PSO, PSO-B-SA1 (dashed) and PSO-B-SA2, with $w = 0.729$, $c_1 = c_2 = 1.494$, $r_t = 0.01$ and swarm size m = 40.

VI. CONCLUSION

In this paper, two novel approaches for optimization, based on Particle Swarm and Simulated Annealing optimization techniques, are presented. The Particle Swarm Optimization-based Simulated Annealing can narrow the field of search and continually speed up the rate of convergence during the optimization process, as shown in Table III. It is shown that the proposed approaches have higher search efficiency. They can also escape from local minima, as shown in Fig. 1. These two algorithms are applied to several test functions as optimization problems. The PSO-B-SA1 algorithm has better speed than PSO-B-SA2, but PSO-B-SA2 has better convergence than PSO-B-SA1. However, both need a much smaller number of iterations and have faster convergence rates than the other PSO algorithms. It is shown that PSO-B-SA2 can be used as a new and promising technique for solving optimization problems.

VII. REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. on Neural Networks, vol. 4, 1995, pp. 1942-1947.

[2] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proc. 6th Int. Symposium on Micro Machine and Human Science, 1995, pp. 39-43.

[3] R. C. Eberhart, P. Simpson, and R. Dobbins, Computational Intelligence PC Tools, Academic Press, 1996, pp. 212-226.

[4] J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, 2001.

[5] R. C. Eberhart and Y. Shi, "Comparison between genetic algorithms and particle swarm optimization," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 611-616.

[6] P. J. Angeline, "Evolutionary optimization versus particle swarm optimization: philosophy and performance differences," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 601-610.

[7] J. Kennedy, "The behavior of particles," in Evolutionary Programming VII, 1998, pp. 581-587.

[8] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, 1983, pp. 671-680.

[9] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proc. IEEE Int. Conf. on Evolutionary Computation, 1998, pp. 69-73.

[10] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proc. IEEE Int. Conf. on Evolutionary Computation, 2001, pp. 81-86.

[11] J. Kennedy, "The particle swarm: social adaptation of knowledge," in Proc. IEEE Int. Conf. on Evolutionary Computation, 1997, pp. 303-308.

[12] Y. Shi and R. Eberhart, "Parameter selection in particle swarm optimization," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 591-600.

[13] R. Eberhart and Y. H. Shi, "Evolving artificial neural networks," in Proc. Int. Conf. on Neural Networks and Brain, 1998.


[14] J. Kennedy and W. M. Spears, "Matching algorithms to problems: an experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator," in Proc. Int. Conf. on Evolutionary Computation, 1998, pp. 78-83.

[15] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Inform. Process. Lett., vol. 85, 2003.

[16] S. Janson and M. Middendorf, "A hierarchical particle swarm optimizer and its adaptive variant," IEEE Trans. on Systems, Man, and Cybernetics, vol. 35, 2005.

TABLE III
STEPS REQUIRED TO ACHIEVE A CERTAIN GOAL FOR PSO-g, PSO-l, H-PSO, A H-PSO, V H-PSO, PSO-B-SA1 AND PSO-B-SA2. AVERAGE (AVG), MEDIAN (MED), MAXIMUM (MAX), MINIMUM (MIN) AND EXPECTED (EXP := AVG/SUCC) NUMBER OF ITERATIONS; "-a" AND "-b" DENOTE THE PARAMETER VALUES USED, AS DESCRIBED IN SECTION V.

Sphere
Algorithm        Avg    Med    Max   Min   Succ   Exp
PSO-g-a (1)      309.4  303    530   238   1      309.4
PSO-l-a (2)      449.4  452    482   406   1      449.4
H-PSO-a (3)      360    361    414   301   1      360
PSO-g-b (4)      363    355    539   289   1      363
PSO-l-b (5)      563.2  563    625   516   1      563.2
H-PSO-b (6)      453.9  453    529   388   1      453.9
A H-PSO (7)      351.5  348    503   301   1      351.5
V H-PSO (8)      209.6  205    319   173   1      209.6
PSO-B-SA1 (9)    9.6    10     25    3     1      9.6
PSO-B-SA2 (10)   10     9      14    5     1      10

Rastrigin
Algorithm        Avg    Med    Max   Min   Succ   Exp
PSO-g-a (1)      104    101.5  191   64    .98    106.1
PSO-l-a (2)      185.7  176    611   102   .99    187.6
H-PSO-a (3)      432.7  291    2482  95    .99    437.1
PSO-g-b (4)      142    132    282   82    .98    144.9
PSO-l-b (5)      270    220.5  3657  119   1      270
H-PSO-b (6)      500.9  367.5  1703  142   1      500.9
A H-PSO (7)      151.2  127    677   65    1      151.2
V H-PSO (8)      184.4  117    4192  52    1      184.4
PSO-B-SA1 (9)    147    148    231   98    1      147
PSO-B-SA2 (10)   38.8   36     124   13    1      38.8

Rosenbrock
Algorithm        Avg    Med    Max   Min   Succ   Exp
PSO-g-a (1)      497.1  302.5  4189  195   1      497.1
PSO-l-a (2)      704.3  497    7851  326   1      704.3
H-PSO-a (3)      528.3  340.5  5924  247   1      528.3
PSO-g-b (4)      641.2  382.5  5165  230   1      641.2
PSO-l-b (5)      798    615    5164  408   1      798
H-PSO-b (6)      780.2  471    5315  295   1      780.2
A H-PSO (7)      702    369.5  4183  238   1      702
V H-PSO (8)      352.7  262.5  2251  144   1      352.7
PSO-B-SA1 (9)    270.2  260    389   203   1      270.2
PSO-B-SA2 (10)   52.64  54     132   37    1      52.64
