8/10/2019 A Review of Particle Swarm Optimization- V2
Review of Particle Swarm Optimization: Basic Concepts, Variants and
Applications
Abstract: Particle swarm optimization is a stochastic, evolutionary, nature-inspired optimization algorithm. PSO has gained wide appeal due to its ease of implementation, its special characteristic of memory, and its potential for specialization and hybridization. Particle swarm optimization has few parameters to adjust. This paper presents an extensive review of the literature available on the concept, development and modification of particle swarm optimization. The study comprises a snapshot of particle swarm optimization from the authors' perspective, including variations in the algorithm, modifications and refinements introduced to prevent swarm stagnation, and hybridization of PSO with other heuristic algorithms. Issues related to parameter tuning are also discussed.
1. Introduction
Conventional computing paradigms often have difficulty in dealing with real-world problems, such as those characterized by noisy or incomplete data or multimodality, because of their inflexible construction. Natural computing paradigms seem to be a suitable replacement for solving such problems. Natural systems of simple organisms working together have inspired several computing paradigms that can be used where conventional computing techniques perform unsatisfactorily. In the past few decades several optimization algorithms have been developed based on this nature-inspired analogy. These are mostly population-based meta-heuristics, also called general-purpose algorithms because of their applicability to a wide range of problems.
Global optimization techniques are fast growing tools that can overcome most of the limitations
found in derivative-based techniques. Some popular global optimization algorithms include
Genetic Algorithms (GA) (Holland, 1992), Particle Swarm Optimization (PSO) (Kennedy andEberhart, 1995), Differential Evolution (DE) (Price and Storn, 1995), Evolutionary Programming
(EP) (Maxfield and Fogel, 1965), Ant Colony Optimization (ACO) (Dorigo et al., 1991) etc. A
chronological order of the development of some of the popular nature-inspired meta-heuristics is given in Fig. 1. These algorithms have proved their mettle in solving complex and intricate
optimization problems arising in various fields.
PSO is a well-known and popular search strategy that has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridization and specialization. It is a simple and robust strategy based on the social and cooperative behavior shown by various species, such as flocks of birds and schools of fish. PSO and its variants have been effectively applied to a wide range of benchmark as well as real-life optimization problems.
As PSO has undergone many structural changes and has been tried with many parameter adjustment methods, a large body of work has accumulated over the past decade. This paper presents a state-of-the-art review of previous studies that have proposed modified versions of particle swarm optimization and applied them to optimization problems in different fields. The review demonstrates that particle swarm optimization and its modified versions have
v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))   (1)

x_i(t+1) = x_i(t) + v_i(t+1)   (2)

where v_i(t+1) is the new velocity after the update, v_i(t) is the velocity of the particle before the update, r1 and r2 are random numbers generated within the range [0, 1], c1 is the cognitive learning factor, c2 the social learning factor, pbest_i the particle's individual best (i.e., its local-best position, or its experience), gbest the global best (the best position among all particles in the population), and x_i(t) is the current particle's position in the search space. The steps of the PSO algorithm are as follows:
1. Initialize a population of particles with random velocities and positions in d dimensions of the problem space.
2. For each particle, evaluate the objective fitness function in d variables.
3. Compare each particle's fitness value with its pBest value. If the current value is better than pBest, set pBest equal to the current value and the pBest location equal to the current location in the d-dimensional search space.
4. Compare the fitness evaluation with the population's overall previous best. If the current value is better than gBest, reset gBest to the current particle's array index and value.
5. Update the velocity and position of the particles with equations (1) and (2), respectively.
6. Loop to step 2 until a criterion is met, usually a sufficiently good fitness or a maximum number of iterations.
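The steps above can be sketched as a minimal global-best PSO in Python. This is an illustrative sketch, not the paper's implementation: the sphere objective, swarm size, iteration count and c1 = c2 = 2 are assumptions, and the bare updates of equations (1) and (2) are used without the inertia or clamping refinements discussed later.

```python
import random

def pso(f, dim, n_particles=30, iters=100, c1=2.0, c2=2.0,
        xmin=-5.0, xmax=5.0, seed=1):
    """Minimal global-best PSO following equations (1) and (2)."""
    rng = random.Random(seed)
    # Step 1: random positions; velocities start at zero.
    x = [[rng.uniform(xmin, xmax) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]        # each particle's best position
    pbest_val = [f(p) for p in x]       # step 2: evaluate fitness
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):              # step 6: iterate
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Equation (1): velocity update.
                v[i][d] += (c1 * r1 * (pbest[i][d] - x[i][d])
                            + c2 * r2 * (gbest[d] - x[i][d]))
                # Equation (2): position update.
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:      # step 3: update pBest
                pbest[i], pbest_val[i] = list(x[i]), val
                if val < gbest_val:     # step 4: update gBest
                    gbest, gbest_val = list(x[i]), val
    return gbest, gbest_val

sphere = lambda p: sum(xi * xi for xi in p)  # assumed test objective
best, best_val = pso(sphere, dim=2)
```

Without the vmax, inertia or constriction refinements described next, the swarm can wander, but the gBest memory guarantees that the returned value never worsens.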
As PSO was applied to solve problems in different domains, several considerations have been taken into account to facilitate convergence and prevent an explosion of the swarm. A few important and interesting modifications to the basic structure of PSO focus on limiting the maximum velocity and adding an inertia weight or constriction factor. Details are discussed in the following:
2.1 Evolution of PSO models
In the basic model of PSO, particles were found to be accelerated out of the search space. To control this, Eberhart et al. (1996) put forward a clamping scheme that limits the velocity of each particle to a range [-vmax, vmax], with vmax usually set somewhere between 0.1 and 1.0 times the maximum position of the particle.
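A minimal sketch of this clamping step (the vmax value would be chosen per problem):

```python
def clamp_velocity(v, vmax):
    """Limit each velocity component to [-vmax, vmax] (Eberhart et al. 1996)."""
    return [max(-vmax, min(vmax, vd)) for vd in v]
```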
However, by far the most problematic characteristic of PSO is its tendency to converge prematurely on early best solutions. Many strategies have been developed in attempts to overcome this, but by far the most popular are inertia and constriction. The inertia term w was introduced by Shi and Eberhart (1998b). The inertia weight controls the exploration of the search space. The velocity update function with inertia weight is given in equation (3).
v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))   (3)
Further analysis led to other strategies for adjusting the inertia weight. The adaption of the inertia weight using a fuzzy system (Eberhart and Shi 2000) was reported to significantly improve PSO performance. The optimal strategy is to initially set w to 0.9 and reduce it linearly to 0.4, allowing initial exploration followed by acceleration toward an improved global optimum. Another effective strategy is to use an inertia weight with a random component rather than a time-decreasing value; Eberhart and Shi (2001) successfully used w = U(0.5, 1).
Clerc's analysis of the iterative system led him to propose a strategy for the placement of constriction coefficients on the terms of the formulas; Clerc and Kennedy (2002) noted that there can be many ways to implement the constriction coefficient. One of the simplest methods of incorporating it is the following:
v_i(t+1) = χ { v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t)) }   (4)

χ = 2 / | 2 - φ - sqrt(φ^2 - 4φ) |,  where φ = c1 + c2, φ > 4   (5)

When Clerc's constriction method is used, φ is commonly set to 4.1, c1 = c2 = 2.05, and the constant multiplier χ is approximately 0.7298.
The constricted particles will converge without using any Vmax at all. However, subsequent experiments and applications (Eberhart and Shi 2000) concluded that a better approach, as a prudent rule of thumb, is to limit Vmax to Xmax, the dynamic range of each variable on each dimension, in conjunction with (4) and (5).
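Equation (5) can be checked numerically; with the common setting c1 = c2 = 2.05 (so φ = 4.1), the multiplier comes out near 0.7298:

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi, per equation (5)."""
    phi = c1 + c2               # must exceed 4 for the square root to be real
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()
```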
3. PSO: A Review
As a member of the family of stochastic search algorithms, PSO has two major drawbacks (Lovbjerg, 2002). The first drawback is that PSO's performance is problem-dependent. This dependency is usually caused by the way the parameters are set, i.e. assigning different parameter settings to PSO will result in high performance variance. In general, no single parameter setting exists which can be applied to all problems and performs dominantly better than other parameter settings. The common way to deal with this problem is to use self-adaptive parameters. The second drawback of PSO is its premature character, i.e. it may converge to a local minimum. According to Angeline (1998a, b), although PSO converges to an optimum much faster than other evolutionary algorithms, it usually cannot improve the quality of the solutions as the number of iterations increases. PSO usually suffers from premature convergence when highly multimodal problems are being optimized.
Several attempts have been made to overcome or reduce the impact of these disadvantages. Some of them have already been discussed, including the inertia weight, the constriction factor and so on. Further modifications, as well as hybridizations of PSO with other kinds of optimization algorithms, are discussed in this section.
The changes attempted in PSO methodology can be categorized into
1. Parameter Setting
2. Changes in methodology
3. Hybridization with other methodologies
3.1 Parameter Setting
In solving problems with PSO, its properties affect performance. The properties of PSO depend on the parameter setting, and hence users need to find apt values for the parameters to optimize performance. The interactions between the parameters have a complex behavior, and so each parameter will give a different effect depending on the values set for the others. Without prior knowledge of the problem, parameter setting is difficult and time consuming. The two major ways of parameter setting are parameter tuning and parameter control (Eiben and Smith 2003). Parameter tuning is the commonly practiced approach, which amounts to finding appropriate values for the parameters before running the algorithm. Parameter control steadily modifies the control parameter values during the run. This can be achieved through deterministic, adaptive or self-adaptive techniques (Boyabalti and Sabuncuoglu 2007). A brief discussion of the adjustment of the parameters of particle swarm optimization is summed up from the literature review.
The issue of parameter setting has been relevant since the beginning of EA research and practice. Eiben et al. (2007) emphasized that parameter values greatly determine the success of an EA in finding an optimal or near-optimal solution to a given problem. Choosing appropriate parameter values is a demanding task. There are many methods to control the parameter settings during an EA run. Parameters are not independent, but trying all combinations is impossible, and the parameter values found are appropriate only for the tested problem instances. Nannen et al. (2008) state that a major problem of parameter tuning is the weak understanding of the effects of EA parameters on algorithm performance.
3.1.1 Inertia Weight Adaption Mechanisms
The inertia weight parameter was originally introduced by Shi and Eberhart (1998a). A range of constant values was tested for w; they showed that large values, i.e. w > 1.2, result in weak exploration, while with low values, i.e. w < 0.8, PSO tends to get trapped in local optima. They suggest that with w within the range [0.8, 1.2], PSO finds the global optimum in a reasonable number of iterations. Shi and Eberhart (1998) analyzed the impact of the inertia weight and the maximum velocity on the performance of PSO.
A random value of the inertia weight was used to enable PSO to track optima in a dynamic environment in Eberhart and Shi (2001):
w = 0.5 + rand()/2   (6)

where rand() is a random number in [0, 1]; w is then a uniform random variable in the range [0.5, 1].
Most of the PSO variants use time-varying inertia weight strategies in which the value of theinertia weight is determined based on the iteration number. These methods can be either linear or
non-linear and increasing or decreasing. A linear decreasing inertia weight was introduced and
was shown to be effective in improving the fine-tuning characteristic of the PSO. In this method,
the value of w is linearly decreased from an initial value (wmax) to a final value (wmin) according to the following equation [refs 5, 8, 20, 24, 31, 32, 56, 58, 67, 92]:

w = wmax - (wmax - wmin) * Iteration_i / Iteration_max   (7)

where Iteration_i is the current iteration of the algorithm and Iteration_max is the maximum number of iterations the PSO is allowed to continue. This strategy is very common, and most PSO algorithms adjust the value of the inertia weight using this updating scheme.
Chatterjee and Siarry (2006) proposed a nonlinear decreasing variant of the inertia weight in which, at each iteration of the algorithm, w is determined based on the following equation:

w = (wmax - wmin) * ((Iteration_max - Iteration_i) / Iteration_max)^n + wmin   (8)

where n is the nonlinear modulation index. Different values of n result in different variations of the inertia weight, all of which start from wmax and end at wmin.
Feng et al. (2007) use a chaotic model of inertia weight in which a chaotic term is added to the linearly decreasing inertia weight. The proposed w is as follows:

w = (wmax - wmin) * (Iteration_max - Iteration_i) / Iteration_max + wmin * z   (9)

where z = 4z(1 - z) is a logistic map; the initial value of z is selected randomly within the range (0, 1). Lei et al. (2006) used a Sugeno function as the inertia weight decline curve:

w = (1 - λ) / (1 + s*λ)   (10)

where λ = iter/iter_max and s is a constant larger than 1. In Alireza and Hamidreza (2011), every particle in the swarm dynamically adjusts its inertia weight according to feedback taken from the particles' best memories, using a measure called the adjacency index (AI), which characterizes the nearness of the individual fitness to the real optimal solution. Based on this index, every particle can decide how to adjust the value of its inertia weight.
   (11)

where the coefficient is a positive constant in the range (0, 1].
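The chaotic schedule of equation (9) can be sketched as follows; the wmax/wmin values and the seeding of the chaotic state are assumptions:

```python
import random

def chaotic_inertia(it, it_max, z, w_max=0.9, w_min=0.4):
    """Equation (9): linear decrease plus a logistic-map chaotic term."""
    z = 4.0 * z * (1.0 - z)                              # z = 4z(1 - z)
    return (w_max - w_min) * (it_max - it) / it_max + w_min * z, z

z = random.Random(0).uniform(0.01, 0.99)  # initial chaotic state in (0, 1)
w, z = chaotic_inertia(0, 100, z)
```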
To resolve conflicts in engineering optimization design with highly complex and nonlinear constraints, the inertia weight is adaptively changed according to the current evolution speed and
aggregation degree of the swarm (Xiaolei 2011). This provides the algorithm with dynamic adaptability and enhances its search ability and convergence performance.
An adaptive-inertia-weight integer-programming particle swarm optimization was proposed in Chunguo et al. (2012) to solve integer programming problems. Based on grey relational analysis, two grey-based parameter automation strategies for particle swarm optimization (PSO) were proposed (Liu and Yeh 2012): one for the inertia weight and the other for the acceleration coefficients. With the proposed approaches, each particle has its own inertia weight and acceleration coefficients, whose values depend on the corresponding grey relational grade. The adaption of the inertia weight is given in equation (12).

   (12)

An adaptive parameter tuning of particle swarm optimization based on velocity information was proposed by Gang Xu (2013). This algorithm introduces the velocity information, defined as the average absolute value of the velocity of all the particles, based on which the inertia weight is dynamically adjusted in steps of size Δw between the largest inertia weight wmax and the smallest inertia weight wmin.

   (13)

Arumugam and Rao (2008) used the ratio of the global best fitness and the average of the particles' local best fitness to determine the inertia weight in each iteration.
   (14)

In the adaptive particle swarm optimization algorithm proposed by Panigrahi et al. (2008), different inertia weights are assigned to different particles based on the ranks of the particles:

w_i = w_min + (w_max - w_min) * Rank_i / Total_population   (15)

where Rank_i is the position of the i-th particle when the particles are ordered by their personal best fitness, and Total_population is the number of particles. The rationale of this approach is that the positions of the particles are adjusted in such a way that highly fit particles move more slowly than less fit ones.

Nickabadi et al. (2011) proposed an adaptive inertia weight based on the success rate of the swarm:

w(t) = (w_max - w_min) * Ps(t) + w_min   (16)
where Ps(t) is the success percentage of the swarm, i.e. the average success of the particles in the swarm; the success of a particle is based on the distance of the particle from the optimum.
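A sketch of a success-rate-driven inertia weight in the spirit of equation (16); the linear mapping and the bounds are reconstructed assumptions, not the authors' exact formula:

```python
def success_rate_inertia(successes, n_particles, w_max=0.9, w_min=0.4):
    """Map the swarm's success percentage Ps(t) linearly into [w_min, w_max]."""
    ps = successes / n_particles      # Ps(t): fraction of improving particles
    return (w_max - w_min) * ps + w_min
```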
3.1.2 Acceleration coefficients Adaption Mechanisms
The acceleration coefficients c1 (cognitive) and c2 (social) in PSO represent the stochastic acceleration terms that pull each particle towards the pBest and gBest positions. Suitable fine-tuning of the cognitive and social parameters c1 and c2 may result in faster convergence of the algorithm and alleviate the risk of settling in one of the local minima.
Initially the values of c1 and c2 were fixed to a constant value of 2 (Kennedy and Eberhart 1995). Rather than fixing the acceleration factors, Suganthan and Zhao (2009), through empirical studies, suggested that the acceleration coefficients should not always be equal to 2 for obtaining better solutions. From the literature it can be noted that the majority of works set c1 and c2 to 2; the next most common values were found to be around 1.49, and a few works set c1 and c2 to 2.05 each.
As c1 is said to be the cognitive factor, making the swarm search the optima effectively (exploration), and c2 the social factor, helping the swarm move toward the optima (convergence), it is evident from the literature that larger values of c1 and smaller values of c2 in the initial stages help in locating the global optimum, while smaller values of c1 and larger values of c2 in later stages make convergence easier.
To alter the values of c1 and c2 during evolution, many adaptive methods have been proposed in the literature. To overcome the premature convergence of particle swarm optimization, the evolution direction of each particle is redirected dynamically by adjusting the two sensitive parameters, i.e. the acceleration coefficients of PSO, during the evolution process (Ying et al. 2012):

   (17)
   (18)

where d1 and d2 are constant factors and MaxGen is the maximum generation number.
A multi-swarm and multi-best particle swarm optimization was proposed by Li and Xiao (2008). According to the authors, advantage should be taken of the information at every position of a particle, unlike in SPSO, where only the information at the best positions (pBest and gBest) is used. For this, the authors suggest new values of c1 and c2, instead of a constant value of 2, to be used in the SPSO equations:

   (19)
   (20)

where Pfit is the fitness value of pBest and Gfit is the fitness value of gBest.
To improve optimization efficiency and stability, new metropolis coefficients were given by Jie and Deyun (2008), representing a fusion of simulated annealing and particle swarm optimization. The metropolis coefficients Cm1 and Cm2 vary according to the distance between the current and best positions and according to the generation of the particles. This method gives better results in fewer iteration steps and less time. These coefficients are given as

   (21)
   (22)

The acceleration coefficients are adapted dynamically in Ratnaweera et al. (2004), and in Arumugam et al. (2008) by balancing the cognitive and the social components. Zhan et al. (2009) adjust the acceleration coefficients based on an evolutionary state estimator derived from the Euclidean distance, to perform a global search over the entire search space with faster convergence. These adaptive methodologies adjust the parameters based on strategies independent of the data used in the applications.
Yau-Tarng et al. (2011) proposed an adaptive fuzzy particle swarm optimization algorithm utilizing fuzzy set theory to adjust the PSO acceleration coefficients adaptively, and were thereby able to improve the accuracy and efficiency of searches. A PSO in which the acceleration coefficient c1 decreases linearly while c2 increases linearly over time is proposed in Jing and David (2012) and Gang et al. (2012). This promotes more exploration in the early stages, while encouraging convergence to a good optimum near the end of the optimization process by attracting particles more towards the global best positions.

c1 = c1,max - (c1,max - c1,min) * Iter / Iter_max   (23)
c2 = c2,min + (c2,max - c2,min) * Iter / Iter_max   (24)

c1,min, c1,max, c2,min, and c2,max can be set to 0.5, 2.5, 0.5, and 2.5, respectively; Iter_max is the maximum number of iterations to be done.
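Equations (23) and (24) in code form, using the 0.5/2.5 bounds quoted above:

```python
def tvac(it, it_max, c_min=0.5, c_max=2.5):
    """Equations (23)-(24): c1 decreases and c2 increases linearly with time."""
    frac = it / it_max
    c1 = c_max - (c_max - c_min) * frac   # cognitive: large early, small late
    c2 = c_min + (c_max - c_min) * frac   # social: small early, large late
    return c1, c2
```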
To improve the convergence speed, Xingjuan et al. (2008) proposed a new setting for the social coefficient by introducing an explicit selection pressure, in which each particle decides its search direction toward the personal memory or the swarm memory. The dispersed social coefficient of particle j at time t is set as follows:

   (25)

where c_up and c_low are two predefined numbers, c2,j(t) represents the social coefficient of particle j at time t, and Grade_j(t) is a new index based on the performance differences (fitness values).
Liu and Yeh (2012) proposed an adaptive method based on grey relational analysis for adjusting the acceleration coefficients during evolution. The adaption of the acceleration coefficients is as follows:

   (26)
   (27)

where Cmax ≥ Cfinal ≥ Cmin, and Cfinal represents the final value of the acceleration coefficient c2.
3.2 Modification in Methodology
Like other evolutionary computation techniques, particle swarm optimization has the drawback of premature convergence. To reach an optimum value, particle swarm optimization depends on the interaction between particles. If this interaction is restrained, the algorithm's searching capacity will be limited, and it will require a long time to escape local optima.

To overcome the shortcomings of PSO, many modifications have been made to the methodology, such as altering the velocity update function, the fitness function, etc. The changes proposed in the velocity update function noted in the literature are listed in Table 1.
Table 1. Modification in Velocity Update

- Acceleration concept (Liu et al. 2012): the acceleration produced by the near-neighborhood attraction and repulsion forces is considered in the velocity update.
- Diversity (Lu and Lin 2011): the direction of a particle's movement is controlled by the diversity information of the particles.
- Chaotic maps (Yang et al. 2012): an improved logistic map, namely a double-bottom map, is applied for local search.
- Acceleration coefficient c3 (refs 13, 14, 15, 16 and 31): swarm behavior or neighborhood behavior is considered in the velocity update.
- Detecting particle swarm (Zang and Teng 2009): detecting particles search in approximately spiral trajectories created by the new velocity updating formula in order to find better solutions.
- Euclidean PSO (Zhu and Pu 2009): if the global best fitness has not been updated for K iterations, the velocities of the particles receive an interference factor to make most particles fly out of the local optimum, while the best one continues the local search.
- Modified PSO (Deepa and Sugumaran 2011): better optimization results are achieved by splitting the cognitive component of the general PSO into two components, called the good experience component and the bad experience component.
- Diversity-maintained QPSO (Jun Sun et al. 2012): based on an analysis of quantum-based PSO (QPSO); integrates a diversity control strategy into QPSO to enhance the global search ability of the particle swarm.
A parallel asynchronous PSO (PAPSO) algorithm was proposed to enhance computational efficiency (Kho et al. 2006). This design follows a master/slave paradigm. The master processor holds the queue of particles ready to send to the slave processors and performs all decision-making processes, such as velocity/position updates and convergence checks; it does not perform any function evaluations. The slave processors repeatedly evaluate the analysis function using the particles assigned to them.
A new approach extending PSO to solve optimization problems by using a feedback control mechanism (FCPSO) was proposed by Wong et al. (2012). The proposed FCPSO consists of two major steps. First, by evaluating the fitness value of each particle, a simple particle evolutionary
fitness function is designed to automatically control parameters including the acceleration coefficients, refreshing gap, learning probabilities and the number of potential exemplars. With such a simple particle evolutionary fitness function, each particle has its own search parameters in the search environment.
Chuang and Sai (2006) proposed catfish particle swarm optimization (CatfishPSO), a novel optimization algorithm in which catfish particles are incorporated into the linearly decreasing weight particle swarm optimization (LDWPSO). The catfish particles initialize a new search from the extreme points of the search space when the gBest fitness value (the global optimum at each iteration) has not changed for a given time. This creates further opportunities to find better solutions by guiding the whole swarm to promising new regions of the search space, and accelerates convergence.
Chaotic maps were introduced into catfish particle swarm optimization (CatfishPSO) to increase its search capability via the chaos approach (Chuang et al. 2011). Simple CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization (PSO); the introduced catfish particles improve the performance of PSO considerably. Unlike ordinary particles, the catfish particles initialize a new search from the extreme points of the search space when the gBest fitness value (the global optimum at each iteration) has not changed for a certain number of consecutive iterations.
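The stagnation trigger behind the catfish idea can be sketched as follows; the stall threshold, replacement fraction and search bounds are illustrative assumptions rather than the published settings:

```python
import random

def catfish_reset(positions, gbest_history, stall_limit=5,
                  xmin=-5.0, xmax=5.0, frac=0.1, rng=random):
    """If gBest has not improved for `stall_limit` iterations, re-initialize a
    fraction of the particles at the extreme points of the search space."""
    if len(gbest_history) < stall_limit:
        return positions
    recent = gbest_history[-stall_limit:]
    if max(recent) - min(recent) > 1e-12:     # gBest is still improving
        return positions
    n_reset = max(1, int(frac * len(positions)))
    for i in range(n_reset):                  # catfish particles at the bounds
        positions[i] = [rng.choice([xmin, xmax]) for _ in positions[i]]
    return positions

swarm = [[0.0, 0.0] for _ in range(10)]
swarm = catfish_reset(swarm, [1.0, 1.0, 1.0, 1.0, 1.0],
                      rng=random.Random(7))
```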
Liu et al. (2008) suggested two improvement strategies. In the first, the solution is initialized in a limited range at the start; then, according to the limits of the search range, steps of a size proportional to the population size are selected to distribute the points uniformly. This combination of uniform and random solutions in the initial iterations enhances diversity. In the second, the authors suggest that speed and location updates are not necessary after every iteration: coordinates can be assigned to each optimal fitness value, i.e. a certain incremental or decremental value is intercepted from every dimension of each optimal solution and then compared with the corresponding fitness value.
According to Maeda et al. (2009), one reason for the premature convergence of PSO is that its search intensity is the same throughout the process, with the process treated as a whole rather than divided into exclusive segments. Based on the hierarchical control concept of control theory, a two-layer particle swarm optimization was introduced, with one swarm at the top and L swarms at the bottom layer, where parallel computation takes place. The total number of particles thus increases to L times the number of particles in each bottom-layer swarm, increasing diversity. When the velocity of the particles on the bottom layer falls below a marginal value and the positions of the particles can no longer be updated by it, the particle velocities are re-initialized without considering the former strategy, to avoid premature convergence.
Voss (2005) introduced principal component particle swarm optimization (PCPSO), which can be an economic alternative for a large number of engineering problems, as it reduces the time complexity of high-dimensional problems. In PCPSO, particles fly in two separate axial spaces at the same time, one being the original space and the other a rotated space. The new z locations are mapped into the x space using a parameter that defines the fraction of the rotated-space flight considered, and a weighted average of the two is taken. pBest and gBest are found and updated using covariance.
Ji et al. (2010) proposed a bi-swarm PSO with cooperative co-evolution. Here the swarm consists of two parts: the first swarm is generated randomly in the whole search space, and the second swarm is generated periodically, centered between the largest and smallest bounds of the best and worst particles of the first swarm in all directions. The two swarms share information with each other during each generation. Velocity and position updates follow the SPSO equations (1) and (2). The best particles of swarm one are then compared with those of swarm two, and in this way the overall best particle is found. Experiments show that this method performs better than SPSO regarding convergence, speed and precision.
A multi-swarm self-adaptive and cooperative particle swarm optimization (MSCPSO) based on four sub-swarms is employed to avoid falling into local optima, improve diversity and achieve better solutions. Particles in each sub-swarm share the single global historical best optimum to enhance cooperative capability. Besides, the inertia weight of a particle in each sub-swarm is modified subject to the fitness information of all particles, and an adaptive strategy is employed to control the influence of the historical information, creating greater potential search ability. To effectively keep the balance between global exploration and local exploitation, the particles in each swarm take advantage of the shared information to cooperate with each other and guide their own evolution. On the other hand, in order to increase the diversity of the particles and avoid falling into a local optimum, a diversity operation is adopted to guide the particles out of local optima and toward the global best position smoothly.
Another approach to multi-swarm particle swarm optimization was proposed in which the initialized population of particles is divided randomly into n groups; every group is regarded as a new population, which updates its velocities and positions synchronously, so n gBests are obtained; the groups are then recombined into one population, making it easy to calculate the real optimum out of the n gBests (Li and Xiao, 2008). This helps in the effective exploration of the global search area. A particle's quality is always estimated based on fitness value and not on dimensional behavior, but some particles may have different dimensional behavior in spite of the same fitness value. Also, in SPSO each velocity update is considered an overall improvement, while it is possible that some particles move away from the solution with the update. To beat these problems a new parameter called the particle distribution degree, dis(s), was introduced (Zu et al. 2008).
   (28)

where s is the swarm, dim is the dimensionality of the problem, N is the equal separation size of the particle swarm, i indexes the dimension, and l the separation area. The bigger dis(s) is, the more centrally the particles crowd together.
Another approach, given by Wang et al. (2009) for avoiding local convergence in multimodal and multidimensional problems, is called group decision particle swarm optimization (GDPSO). It takes into account every particle's information when making a group decision in the early stages (mirroring human intelligence, in which everybody's individual talent and intelligence makes up for lack of experience); then in later stages the original decision making of particle swarm
optimization is used. Thus in GDPSO the search space is enlarged and diversity is increased to solve high-dimension functions.
3.3 Hybridization with other Methodologies
Like other evolutionary computation techniques, particle swarm optimization faces problemregarding their local search abilities in optimization problems. More specifically, although PSOis capable of detecting the region of attraction of the global optimizer fast, once there, PSO
suffers to perform a refined local search to compute the optimum with high accuracy, unless
specific procedures are incorporated in their operators. This triggered the development ofMemetic Algorithms (MAs), which incorporate local search components. MAs constitute a class
of metaheuristics that combines population-based optimization algorithms with local search
procedures.
Recently many local search methods have been incorporated with PSO. Table 2 lists the various
search methods combined with PSO from the literature.
Table 2. Local Search Methods applied in PSO

Chaos Searching Technique (Wan et al. 2012): Transform the variables of the problem from the solution space to chaos space and then search for the solution by virtue of the randomicity, orderliness and ergodicity of the chaotic variable. A logistic map is used.

Chaotic Local Search + Roulette Wheel Mechanism (Xia 2012): A well-known logistic equation is employed for the hybrid PSO, defined as x(n+1) = μ x(n)(1 − x(n)), where μ is the control parameter, x is a variable and n = 0, 1, 2, .... Although the equation is deterministic, it exhibits chaotic dynamics when μ = 4 and x(0) ∉ {0, 0.25, 0.5, 0.75, 1}.

Adaptive Local Search (Wang et al. 2012): Two different LS operators are considered: Cognition-Based Local Search (CBLS) and Random Walk with Direction Exploitation (RWDE).

Intelligent multiple search methods (Mengqi et al. 2012): In the non-uniform mutation-based method, the dth dimension of the solution x_i^g is randomly picked and mutated to generate a new solution, where i is the current iteration index of PSO, Ud and Ld are the upper and lower bounds of the dth dimension, and r is a uniform random number from (0, 1). In the adaptive sub-gradient method, a new solution is generated from the sub-gradient of the objective function and a step size.

Reduced Variable Neighborhood Search (Zulal and Erdogan 2010): RVNS is a variation of Variable Neighborhood Search (VNS); VNS becomes RVNS if the local search step is removed. RVNS is usually preferred as a general optimization method for problems where exhaustive local search is expensive.

Expanding Neighborhood Search (Marinakis and Marinaki 2010): Based on a method called Circle Restricted Local Search Moves (CRLSM) and, in addition, a number of local search phases.

Chaotic and Gaussian local search (Jia et al. 2011): A logistic chaotic map is employed, updating the jth chaotic variable in the kth generation; when μ = 4, the logistic function enters a thorough chaos state.

Variable Neighborhood Descent algorithm (Goksal et al. 2012): An enhanced local improvement strategy which is commonly used as a subordinate in Variable Neighborhood Search.

Two Stage Hill Climbing (Ahandani et al. 2012): The proposed local search employs two stages to find a better member.

Simulated Annealing (Safaei et al. 2012): The Metropolis-Hastings strategy is the key idea behind the simulated annealing (SA) algorithm.

Extremal Optimization (Chen et al. 2010): EO successively updates extremely undesirable variables of a single sub-optimal solution, assigning them new, random values.
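Several of the entries above rely on the logistic map to generate chaotic sequences in (0, 1) that are then mapped into a neighbourhood of the current best solution. The sketch below illustrates this chaotic local search idea; the mapping into a fixed-radius neighbourhood and all names are illustrative assumptions, not the exact procedure of any cited method.

```python
def logistic(x, mu=4.0):
    """One step of the logistic map x(n+1) = mu*x(n)*(1 - x(n));
    chaotic for mu = 4 when x(0) is not in {0, 0.25, 0.5, 0.75, 1}."""
    return mu * x * (1.0 - x)

def chaotic_local_search(f, best, radius=0.5, steps=50, x0=0.7):
    """Probe a neighbourhood of `best` using a chaotic sequence
    instead of pseudo-random numbers (illustrative sketch)."""
    cx = x0
    cand_best = best[:]
    for _ in range(steps):
        cand = []
        for d in range(len(best)):
            cx = logistic(cx)                 # chaotic variable in (0, 1)
            cand.append(best[d] + radius * (2.0 * cx - 1.0))
        if f(cand) < f(cand_best):
            cand_best = cand                  # keep only improvements
    return cand_best

sphere = lambda x: sum(v * v for v in x)
refined = chaotic_local_search(sphere, [0.3, -0.2])
```

The appeal of the chaotic sequence is its ergodicity: it visits the neighbourhood densely without the repetition risk of a coarse pseudo-random generator.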
Hybridization is a growing area of intelligent systems research which aims to combine the desirable properties of different approaches to mitigate their individual weaknesses. A natural evolution of population-based search algorithms such as PSO can be achieved by integrating methods that have already been tested successfully for solving complex problems (Thangaraj et al., 2011). Researchers have enhanced the performance of PSO by incorporating into it the fundamentals of other popular techniques, such as the selection, mutation and crossover operators of GA and DE.

Many developments combining PSO with Genetic Algorithms have been tried out for balancing exploration and exploitation. Table 3 lists the works in the literature on combinations of PSO and GA.
Table 3. Hybrid of PSO and GA

Binary PSO + Crossover (Vikas et al. 2007): After updating position and velocity, each particle is sent to the crossover step, where a crossover probability is generated that decides whether crossover should be performed on the particle. This process is iterated until the result is obtained or the maximum number of evaluations is reached. One-point crossover is chosen.

PSO + Crossover + Mutation (Kuo et al. 2012): A parent is generated through PSO, and that parent goes through the crossover and mutation operators of GA to generate another parent. Finally, the next iterative parent is generated by elitist selection. Crossover rate: 60%, mutation rate: 1%.

PSO + Intelligent Multiple Search + Cauchy Mutation (Mengqi et al. 2012): An extended Cauchy mutation is employed to increase the diversity of the swarm. In the extended Cauchy mutation operator, a randomly selected dimension d of a randomly selected particle m is mutated, using the scale parameter of the Cauchy distribution.

PSO + Cauchy Mutation + Adaptive Mutation (Wu 2011): The version adopting the adaptive and Cauchy mutation operators uses two parameters of the operator set together with stochastic variables following the Cauchy distribution.

GA + PSO (Mousa et al. 2012): The algorithm is initialized with a set of random particles which is flown through the search space. In order to obtain approximate nondominated solutions PND, an evolution of these particles is performed. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value.

PSO + GA (Abd-El-Wahed 2011): First, the algorithm is initialized with a set of random particles which travel through the search space. During this travel an evolution of these particles is performed by integrating PSO and GA. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value.

PSO + Mutation + Recombination (Ahandani et al. 2012): The research employs the specialist recombination operator as the crossover operator. Three types of mutation operator are utilized for the DPSO algorithm:
NS1: select a random exam and move it to a new non-conflict time slot.
NS2: select a random pair of time slots and swap all the exams in one time slot with all the exams in the other.
NS3: select a set of random exams, order them by a considered graph coloring heuristic, then assign non-conflict time slots to them.
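Most rows of Table 3 share the same skeleton: run the usual PSO update, then pass position vectors through GA operators. As a generic sketch of the two most frequently borrowed operators — one-point crossover and uniform reset mutation on real-valued position vectors — the following is illustrative only; the function names and rates are assumptions, not taken from any specific row of the table.

```python
import random

def one_point_crossover(a, b, rng):
    """Swap the tails of two position vectors at a random cut point."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(x, low, high, rate, rng):
    """Reset each coordinate to a uniform random value with probability `rate`."""
    return [rng.uniform(low, high) if rng.random() < rate else v for v in x]

rng = random.Random(0)
p1, p2 = [1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]
c1, c2 = one_point_crossover(p1, p2, rng)
m = mutate(p1, -5.0, 5.0, rate=0.1, rng=rng)
```

In a hybrid, the offspring would re-enter the swarm (often via elitist selection, as in Kuo et al.) rather than replace particles unconditionally.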
Elite particle swarm optimization with mutation (EPSOM) was suggested by Wei et al. 2008. According to the authors, after some initial iterations each particle is ranked according to its fitness value. Particles of higher fitness are sorted out to form a separate swarm, giving better convergence. A mutation operator, driven by a random number uniformly distributed between 0 and 1, is introduced to avoid decreasing diversity and an increasing chance of being trapped in local minima. EPSOM gives better results than the random inertia weight and linearly decreasing inertia weight variants.
The study carried out by Paterlini and Krink 2006 has shown that DE is much better than PSO in terms of giving accurate solutions to numerical optimization problems. On the other hand, the study by Angeline 1998 has shown that PSO is much faster in identifying the promising region of the global minimizer but encounters problems in reaching the global minimizer itself. The complementary strengths of the two algorithms have been integrated to give efficient and reliable PSO hybrid algorithms.
The DE evolution steps, namely mutation, crossover and selection, are performed on the best particles' positions in each iteration of the PSO by Epitropakis et al. 2012. The differential evolution point generation scheme (mutation and crossover rules) is applied to the new particles generated by PSO in each iteration in Ali and Kaelo 2008. When DE is introduced into PSO at every iteration, the computational cost increases sharply and the fast convergence ability of PSO may be weakened. In order to integrate PSO with DE more effectively, DE was introduced into PSO only at specified intervals of iterations. At these intervals the PSO swarm serves as the population for the DE algorithm, and DE is executed for a number of generations (Sedki and Ouazar 2012).
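The interval-based scheme can be sketched as follows: every k-th PSO iteration, the swarm's positions serve as a DE population to which the classic DE/rand/1 mutation with binomial crossover and greedy selection is applied for a few generations. All names and settings below (k, F, CR, population size) are illustrative assumptions about the general scheme, not the exact algorithm of Sedki and Ouazar.

```python
import random

def de_step(pop, f, F=0.5, CR=0.9, rng=random.Random(2)):
    """One DE/rand/1/bin generation over a population of real vectors."""
    out = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = rng.randrange(len(x))          # force at least one DE gene
        trial = [a[d] + F * (b[d] - c[d])
                 if (rng.random() < CR or d == jrand) else x[d]
                 for d in range(len(x))]
        out.append(trial if f(trial) < f(x) else x)   # greedy selection
    return out

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(3)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(12)]
for g in range(60):
    # In the hybrid, a DE phase like this would run only every k-th PSO
    # iteration, using the swarm's (personal-best) positions as `pop`.
    pop = de_step(pop, sphere)
best = min(pop, key=sphere)
```

Because selection is greedy, each individual's fitness never worsens during the DE phase, so handing the population back to PSO is safe.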
Shi et al. 2011 proposed cellular particle swarm optimization, hybridizing cellular automata (CA) and particle swarm optimization for function optimization. In the proposed methodology, a CA mechanism is integrated into the velocity update to modify the trajectories of particles and avoid their being trapped in local optima. Two versions of cellular particle swarm optimization were devised: CPSO-inner and CPSO-outer. CPSO-inner uses the information inside the particle swarm for interaction, considering every particle as a cell; CPSO-outer enables cells belonging to the particle swarm to communicate with cells outside it, with every potential solution defined as a cell and every particle of the swarm defined as a smart-cell.
A novel hybrid algorithm based on particle swarm optimization and ant colony optimization, called the hybrid ant particle optimization algorithm (HAP), was proposed for finding global minima by Kiran et al. 2012. In the proposed method, ACO and PSO work separately at each iteration and produce their solutions. The best solution is selected as the global best of the system, and its parameters are used to select the new positions of particles and ants at the next iteration. Thus, the ants and particles are steered toward new solutions by the parameters of the system's best solution.
To compensate for the sensitivity of the Nelder-Mead Simplex Method (NMSM) to the initial solutions presented, the good global search ability of particle swarm optimization is combined with NMSM, giving hybrid particle swarm optimization (hPSO). The choice of initial points in the simplex search method is predetermined, whereas PSO starts from random initial points (particles). PSO proceeds by decreasing the difference between the current and best positions, whereas the simplex search method evolves by moving away from the point with the worst performance. In each iteration the worst particle is replaced by a new particle generated by one iteration of NMSM; then all particles are again updated by PSO. PSO and NMSM are performed iteratively (Ouyang et al. 2009).
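The replace-the-worst step can be sketched with a single Nelder-Mead-style reflection: the worst particle is reflected through the centroid of the remaining particles and kept only if it improves. This is an illustrative fragment of one simplex operation under assumed names, not the full hPSO of Ouyang et al.

```python
def reflect_worst(swarm, f, alpha=1.0):
    """Replace the worst particle by its reflection through the centroid
    of the others (one Nelder-Mead-style step); keep it only if better."""
    worst = max(swarm, key=f)
    others = [p for p in swarm if p is not worst]
    dim = len(worst)
    centroid = [sum(p[d] for p in others) / len(others) for d in range(dim)]
    reflected = [centroid[d] + alpha * (centroid[d] - worst[d])
                 for d in range(dim)]
    if f(reflected) < f(worst):
        return [reflected if p is worst else p for p in swarm]
    return swarm

sphere = lambda x: sum(v * v for v in x)
swarm = [[0.1, 0.0], [0.0, 0.1], [3.0, 3.0]]   # last particle is worst
new_swarm = reflect_worst(swarm, sphere)
```

A full NMSM iteration would also try expansion and contraction moves; the reflection alone already pulls the straggler back toward the good region.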
For effectively solving multidimensional problems, Hsu and Gao 2008 further suggested a hybrid approach incorporating NMSM together with a centre particle in PSO. Combining the exploration property of PSO with the exploitation property of NMSM, the centre particle, which dwells near the optimum and attracts many particles toward convergence, further improves the accuracy of PSO.
Pan et al. 2008 combined PSO with simulated annealing, and also proposed swarm-core evolutionary particle swarm optimization, to improve the local search ability of PSO. In PSO with simulated annealing, a particle moves to its next position not directly through a comparison against the best position but according to a probability function controlled by a temperature. In swarm-core evolutionary particle swarm optimization, the particle swarm is divided into three sub-swarms (core, near and far, according to distance) which are assigned different tasks. This process works better in cases where the optima change frequently.
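The temperature-controlled move rule described above is typically the Metropolis criterion: an improving move is always accepted, while a deterioration of size delta is still accepted with probability exp(-delta/T). A minimal sketch (all names are illustrative assumptions, not Pan et al.'s exact rule):

```python
import math
import random

def accept(delta, temperature, rng=random.Random(4)):
    """Metropolis acceptance: always take improvements; accept a
    deterioration `delta` > 0 with probability exp(-delta / temperature)."""
    if delta <= 0:                 # new position is no worse
        return True
    return rng.random() < math.exp(-delta / temperature)

# High temperature: uphill moves are often accepted (exploration);
# low temperature: they are almost always rejected (exploitation).
hot = sum(accept(1.0, 10.0) for _ in range(1000))
cold = sum(accept(1.0, 0.01) for _ in range(1000))
```

Gradually lowering the temperature therefore shifts the particle from exploration toward exploitation, which is exactly why SA helps PSO escape local optima early on.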
PSO converges early in highly discrete problems and becomes trapped in local optima. To improve PSO, Lien and Chen 2012 proposed a hybrid swarm algorithm called the particle-bee algorithm (PBA), which combines the Bee Algorithm with PSO; it imitates the intelligent behaviour of bird and honey bee swarms and integrates their advantages.
4. Applications of PSO
PSO is an important optimization algorithm and, because of its high adaptability, has many applications in diverse fields such as medicine, finance, economics, security and the military, biology, and system identification. Research on PSO applications in some fields, such as electrical engineering and mathematics, is extensive, whereas in others, for example chemical and civil engineering, it is still exceptional. In mathematics PSO finds application in multimodal function optimization, multi-objective and constrained optimization, the travelling salesman problem, data mining, and modelling. To quote examples from engineering
fields: PSO can be used in materials engineering; in electronics (antennas, image and sound analysis, sensors and communication); in computer science and engineering (visuals, graphics, games, music, animation); in mechanical engineering (robotics, dynamics, fluids); in industrial engineering (job and resource allocation, forecasting, planning, scheduling, sequencing, maintenance, supply chain management); in traffic management in civil engineering; and in chemical processes in chemical engineering. In electrical engineering PSO finds uses in generation, transmission, state estimation, unit commitment, fault detection and recovery, economic load dispatch, control applications, optimal use of electrical motors, structuring and restructuring of networks, neural network and fuzzy systems, and Renewable Energy Systems (RES). As a method that finds the optima of complex search processes through iteration of each particle of a population, PSO can provide answers to the planning, design and control of RES (Yang et al. 2007, Wai et al. 2011).
5. Conclusions
Like other evolutionary algorithms, PSO has become an important tool for optimization and other complex problem solving. It is an interesting and intelligent computational technique with a high capability for finding the global minima and maxima of multimodal functions in practical applications. Particle swarm optimization works on the principles of cooperation and competition between particles. Many applications of PSO are given in the literature, such as neural fuzzy networks, optimization of artificial neural networks, computational biology, image processing and medical imaging, optimization of electricity generation and network routing, and financial forecasting.
A few challenges remain to be overcome, such as handling dynamic problems, avoiding stagnation, and dealing with constraints and multiple objectives; these are important research points apparent from the literature. The drawbacks to be worked on are the tendency of particles to converge to local optima, slow convergence speed, and very large search spaces. PSO sometimes cannot solve nonlinear equations effectively and accurately. Hybridizing with other algorithms generally demands a higher number of function evaluations. Most of the proposed approaches, such as the inertia weight method, adaptive variation and hybrid PSO, do solve the premature convergence problem but suffer from low convergence speed. Therefore no generalized solution applicable to all types of problems can be given. Yet PSO is a promising method for the simulation and optimization of difficult engineering and other problems. To overcome the stagnation of particles in the search space, to improve efficiency, and to achieve better adjustability, adaptability and robustness of parameters, researchers are taking it up as an active research topic and coming up with new ideas applicable to different problems. Further analysis of the comparative strength of PSO, and of the problems in using a PSO-based system, is needed.
References
1. Abd-El-Wahed W F, A.A. Mousa, M.A. El-Shorbagy, Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems, Journal of Computational and Applied Mathematics 235 (2011) 1446-1453
2. Ahmad Nickabadi, Mohammad Mehdi Ebadzadeh, Reza Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Applied Soft Computing 11 (2011) 3658-3670
3. Aise Zulal Sevkli, Fatih Erdogan Sevilgen, StPSO: Strengthened Particle Swarm Optimization, Turkish Journal of Electrical Engineering & Computer Science, Vol. 18, No. 6, 2010, 1095-1114
4. Ali M M, P. Kaelo, Improved particle swarm algorithms for global optimization, Applied Mathematics and Computation 196 (2008) 578-593
5. Alireza Alfi, Hamidreza Modares, System identification and control using adaptive particle swarm optimization, Applied Mathematical Modelling 35 (2011) 1210-1221
6. Amaresh Sahu, Sushanta Kumar Panigrahi, Sabyasachi Pattnaik, Fast Convergence Particle Swarm Optimization for Functions Optimization, Procedia Technology 4 (2012) 319-324
7. Angeline P J, Evolutionary optimization versus particle swarm optimization: philosophy and performance differences, Evolutionary Computation VII, Lecture Notes in Computer Science 1447, Springer, Berlin, 1998, pp. 601-610
8. Angeline P J, Using selection to improve particle swarm optimization, The IEEE International Conference on Evolutionary Computation Proceedings, Anchorage, AK, USA, 1998b, pp. 84-89
9. Arumugam M S, M.V.C. Rao, On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems, Applied Soft Computing (2008) 324-336
10. Boyabalti O, Sabuncuoglu I, 2007, Parameter Selection in Genetic Algorithms, Systems, Cybernetics & Informatics, Volume 2, Number 4, pp. 78-83
11. Byung-Il Koh, Alan D. George, Raphael T. Haftka, Benjamin J. Fregly, Parallel asynchronous particle swarm optimization, International Journal for Numerical Methods in Engineering 2006; 67:578-595
12. Chatterjee A, P. Siarry, Nonlinear inertia weight variation for dynamic adaption in particle swarm optimization, Computers and Operations Research 33 (2006) 859-871, March 2006
13. Cheng-Hong Yang, Sheng-Wei Tsai, Li-Yeh Chuang, Cheng-Huei Yang, An improved particle swarm optimization with double-bottom chaotic maps for numerical optimization, Applied Mathematics and Computation 219 (2012) 260-279
14. Chunguo Fei, Fang Ding, Xinlong Zhao, Network Partition of Switched Industrial Ethernet by Using Novel Particle Swarm Optimization, Procedia 24 (2012) 1493-1499
15. Clerc M, Kennedy J, The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6(1) (2002) 58-73
16. Deepa S N, G. Sugumaran, Model order formulation of a multivariable discrete system using a modified particle swarm optimization approach, Swarm and Evolutionary Computation 1 (2011) 204-212
17. DongLi Jia, GuoXin Zheng, BoYang Qu, Muhammad Khurram Khan, A hybrid particle swarm optimization algorithm for high-dimensional problems, Computers & Industrial Engineering 61 (2011) 1117-1122
18. Dorigo M, V. Maniezzo, A. Colorni, Positive feedback as a search strategy, Technical Report 91-016, Dipartimento di Elettronica, Politecnico di Milano, IT, 1991, pp. 91-106
19. Eberhart R C, Shi Y, Comparing inertia weights and constriction factors in particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2000, pp. 84-88, San Diego, CA, Piscataway: IEEE
20. Eberhart R C, Shi Y, Tracking and optimizing dynamic systems with particle swarms, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 94-100, Seoul, Korea, Piscataway: IEEE, 2001
21. Eberhart R C, Simpson P K, Dobbins R W (1996), Computational intelligence PC tools, Boston: Academic Press
22. Eiben A E, Michalewicz Z, Schoenauer M, Smith J E, Parameter Control in Evolutionary Algorithms, in: Lobo F G, Lima C F, Michalewicz Z (eds.), Parameter Setting in Evolutionary Algorithms, Springer, 2007, pp. 19-46
23. Eiben A E, Smith J E, Introduction to Evolutionary Computing, Springer-Verlag, Berlin Heidelberg New York, 2003
24. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social experience in particle swarm optimization through differential evolution, in: IEEE, 2010
25. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution: A hybrid approach, Information Sciences 216 (2012) 50-92
26. Feng Y, G. Teng, A. Wang, Y.M. Yao, Chaotic inertia weight in particle swarm optimization, in: Second International Conference on Innovative Computing, Information and Control (ICICIC 07), 2007, pp. 4751475
27. Gang Xu, An adaptive parameter tuning of particle swarm optimization algorithm, Applied Mathematics and Computation 219 (2013) 4560-4569
28. Goksal F P, et al., A hybrid discrete particle swarm optimization for vehicle routing problem with simultaneous pickup and delivery, Computers & Industrial Engineering (2012), doi:10.1016/j.cie.2012.01.005
29. Guan-Chun Luh, Chun-Yi Lin, Optimal design of truss-structures using particle swarm optimization, Computers and Structures 89 (2011) 2221-2232
30. He S, Wu Q H, et al., A particle swarm optimizer with passive congregation, Biosystems 2004; 78(1-3):135-147
31. Holland J H, Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence, The MIT Press, 1992
32. Hongbing Zhu, Chengdong Pu, Euclidean Particle Swarm Optimization, 2009 Second International Conference on Intelligent Networks and Intelligent Systems, 669-672
33. Hongfeng Wang, Ilkyeong Moon, Shengxiang Yang, Dingwei Wang, A memetic particle swarm optimization algorithm for multimodal optimization problems, Information Sciences 197 (2012) 38-52
34. Hsu C C, C.H. Gao, Particle swarm optimization incorporating simplex search and center particle for global optimization, in: Conference on Soft Computing in Industrial Applications, Muroran, Japan, 2008
35. Ji H, J. Jie, J. Li, Y. Tan, A bi-swarm particle optimization with cooperative co-evolution, in: International Conference on Computational Aspects of Social Networks, IEEE, 2010
36. Jie X, Deyun X, New metropolis coefficients of particle swarm optimization, in: IEEE, 2008
37. Jing Cai, W. David Pan, On fast and accurate block-based motion estimation algorithms using particle swarm optimization, Information Sciences 197 (2012) 53-64
38. Jiuzhong Zhang, Xueming Ding, A Multi-Swarm Self-Adaptive and Cooperative Particle Swarm Optimization, Engineering Applications of Artificial Intelligence 24 (2011) 958-967
39. Jun Sun, Xiaojun Wu, Wei Fang, Yangrui Ding, Haixia Long, Webo Xu, Multiple sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved particle swarm optimization, Information Sciences 182 (2012) 93-114
40. Kennedy J, R. Eberhart, 1995, Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, Piscataway, pp. 1942-1948
41. Kuo R J, Y.J. Syu, Zhen-Yao Chen, F.C. Tien, Integration of particle swarm optimization and genetic algorithm for dynamic clustering, Information Sciences 195 (2012) 124-140
42. Lei K, Y. Qiu, Y. He, A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization, in: ISSCAA, 2006
43. Li J, X. Xiao, Multi swarm and multi best particle swarm optimization algorithm, in: IEEE, 2008
44. Li-Chuan Lien, Min-Yuan Cheng, A hybrid swarm intelligence based particle-bee algorithm for construction site layout optimization, Expert Systems with Applications 39 (2012) 9642-9650
45. Lili Liu, Shengxiang Yang, Dingwei Wang, Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima, Information Sciences 182 (2012) 139-155
46. Liu E, Y. Dong, J. Song, X. Hou, N. Li, A modified particle swarm optimization algorithm, in: International Workshop on Geosciences and Remote Sensing, 2008, pp. 666-669
47. Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang, Chaotic catfish particle swarm optimization for solving global numerical optimization problems, Applied Mathematics and Computation 217 (2011) 6900-6916
48. Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang, Catfish Particle Swarm Optimization, 2008 IEEE Swarm Intelligence Symposium
49. Lovbjerg M, Improving particle swarm optimization by hybridization of stochastic search heuristics and self-organized criticality, Master's Thesis, Department of Computer Science, University of Aarhus, 2002
50. Maeda Y, N. Matsushita, S. Miyoshi, H. Hikawa, On simultaneous perturbation particle swarm optimization, in: CEC 2009, IEEE, Proceedings of the Eleventh Conference on Congress on Evolutionary Computation, 2009
51. Maxfield A C M, L. Fogel, Artificial intelligence through a simulation of evolution, Biophysics and Cybernetics Systems: Proceedings of the Second Cybernetics Sciences, Spartan Books, Washington DC, 1965
52. Mengqi Hu, Teresa Wu, Jeffery D. Weir, An intelligent augmentation of particle swarm optimization with multiple adaptive methods, Information Sciences 213 (2012) 68-83
53. Min-Rong Chen, Xia Li, Xi Zhang, Yong-Zai Lu, A novel particle swarm optimizer hybridized with extremal optimization, Applied Soft Computing 10 (2010) 367-373
54. Min-Shyang Leu, Ming-Feng Yeh, Grey particle swarm optimization, Applied Soft Computing 12 (2012) 2985-2996
55. Moayed Daneshyari, Gary G. Yen, Constrained Multiple-Swarm Particle Swarm Optimization Within a Cultural Framework, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 42, No. 2, March 2012, 475-490
56. Morteza Alinia Ahandani, Mohammad Taghi Vakil Baghmisheh, Mohammad Ali Badamchi Zadeh, Sehraneh Ghaemi, Hybrid particle swarm optimization transplanted into a hyper-heuristic structure for solving examination time tabling problem, Swarm and Evolutionary Computation 7 (2012) 21-34
57. Mousa A A, M.A. El-Shorbagy, W.F. Abd-El-Wahed, Local search based hybrid particle swarm optimization algorithm for multiobjective optimization, Swarm and Evolutionary Computation 3 (2012) 1-14
58. Mustafa Servet Kiran, Mesut Gunduz, Omer Kaan Baykan, A novel hybrid algorithm based on particle swarm and ant colony optimization for finding the global minimum, Applied Mathematics and Computation 219 (2012) 1515-1521
59. Nannen V, S. K. Smit, A. E. Eiben, Costs and benefits of tuning parameters of evolutionary algorithms, Proceedings of the 10th International Conference on Parallel Problem Solving from Nature, PPSN X, Dortmund, Germany, 2008, pp. 528-538
60. Nima Safaei, Reza Tavakkoli-Moghaddam, Corey Kiassat, Annealing-based particle swarm optimization to solve the redundant reliability problem with multiple component choices, Applied Soft Computing 12 (2012) 3462-3471
61. Ouyang A, Y. Zhou, Q. Luo, Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations, in: IEEE International Conference on Granular Computing, 2009, pp. 460-465
62. Pan G, Q. Dou, X. Liu, Performance of two improved particle swarm optimization in dynamic optimization environments, in: Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications, 2006
63. Panigrahi B K, V.R. Pandi, S. Das, Adaptive particle swarm optimization approach for static and dynamic economic load dispatch, Energy Conversion and Management 49 (2008) 1407-1415
64. Paterlini S, T. Krink, Differential evolution and particle swarm optimization in partitional clustering, Computational Statistics and Data Analysis 50 (1) (2006) 1220-1247
65. Price K, R. Storn, 1995, Differential Evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces, International Computer Science Institute Publications
66. Qi Wu, Hybrid forecasting model based on support vector machine and particle swarm optimization with adaptive and Cauchy mutation, Expert Systems with Applications 38 (2011) 9070-9075
67. Ratnaweera A, Halgamuge S K, Watson H C, 2004, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (3), 240-255
68. Sedki A, D. Ouazar, Hybrid particle swarm optimization and differential evolution for optimal design of water distribution systems, Advanced Engineering Informatics 26 (2012) 582-591
69. Shi Y, Eberhart R C, A modified particle swarm optimizer, in: Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69-73, Piscataway: IEEE, 1998
70. Shing Wa Leung, Shiu Yin Yuen, Chi Kin Chow, Parameter control system of evolutionary algorithm that is aided by the entire search history, Applied Soft Computing 12 (2012) 3063-3078
71. Shuguang Zhao, Ponnuthurai N. Suganthan, Diversity enhanced particle swarm optimizer for global optimization of multimodal problems, IEEE Congress on Evolutionary Computation 2009, 590-597
72. Thangaraj R, M. Pant, A. Abraham, P. Bouvry, Particle swarm optimization: Hybridization perspectives and experimental illustrations, Applied Mathematical Computation 217, 2011, 5208-5226
73. Vikas Singh, Deepak Singh, Ritu Tiwari, Discrete Optimization Problem Solving with three Variants of Hybrid Binary Particle Swarm Optimization, BADS '11, June 14, 2011, ACM, 43-48
74. Voss M S, Principle component particle swarm optimization, in: IEEE Congress on Evolutionary Computation, vol. 1, 2005, pp. 298-305
75. Wai R J, S. Cheng, Y.-C. Chen, 6th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2011
76. Wan Z, Guangmin Wang, Bin Sun, A hybrid intelligent algorithm by combining particle swarm optimization with chaos searching technique for solving nonlinear bilevel programming problems, Swarm and Evolutionary Computation (2012), http://dx.doi.org/10.1016/j.swevo.2012.08.001
77. Wang L, Z. Cui, J. Zeng, Particle swarm optimization with group decision making, in: Ninth International Conference on Hybrid Intelligent Systems, 2009
78. Wei J, L. Guangbin, L. Dong, Elite particle swarm optimization with mutation, in: Asia Simulation Conference - 7th Intl. Conf. on Sys. Simulation and Scientific Computing, IEEE, 2008, pp. 800-803
79. Wong W K, S.Y.S. Leung, Z.X. Guo, Feedback controlled particle swarm optimization and its application in time-series prediction, Expert Systems with Applications 39 (2012) 8557-8572
80. Xiaohua Xia, Particle Swarm Optimization Method Based on Chaotic Local Search and Roulette Wheel Mechanism, Physics Procedia 24 (2012) 269-275
81. Xiaolei Wang, Conflict Resolution in Product Optimization Design based on Adaptive Particle Swarm Optimization, Procedia Engineering 15 (2011) 4920-4924
82. Xingjuan Cai, Zhihua Cui, Jianchao Zeng, Ying Tan, Dispersed particle swarm optimization, Information Processing Letters 105 (2008) 231-235
83. Xu J J, Xin Z H, An extended particle swarm optimizer, in: Parallel and Distributed Processing Symposium, Proceedings of the 19th IEEE International, Denver, CO, U.S.A., 2005
84. Yang B, Y. Chen, Z. Zhao, Survey on Applications of Particle Swarm Optimization in Electric Power Systems, IEEE International Conference, May 30 - June 1, 2007
85. Yang Shi, Hongcheng Liu, Liang Gao, Guohui Zhang, Cellular particle swarm optimization, Information Sciences 181 (2011) 4460-4493
86. Yannis Marinakis, Magdalene Marinaki, Georgios Dounias, A hybrid particle swarm optimization algorithm for the vehicle routing problem, Engineering Applications of Artificial Intelligence 23 (2010) 463-472
87. Yannis Marinakis, Magdalene Marinaki, A Hybrid Multi-Swarm Particle Swarm Optimization algorithm for the Probabilistic Traveling Salesman Problem, Computers & Operations Research 37 (2010) 432-442
88. Yau-Tarng Juang, Shen-Lung Tung, Hung-Chih Chiu, Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions, Information Sciences 181 (2011) 4539-4549
89. Ying Wang, Jianzhong Zhou, Chao Zhou, Yongqiang Wang, Hui Qin, Youlin Lu, An improved self-adaptive PSO technique for short-term hydrothermal scheduling, Expert Systems with Applications, Volume 39, Issue 3, 2012, pp. 2288-2295
90. Ying-Nan Zhang, Hong-Fei Teng, Detecting particle swarm optimization, Concurrency and Computation: Practice and Experience, 2009; 21:449-473
91. Ying-Nan Zhang, Qing-Ni Hu, Hong-Fei Teng, Active target particle swarm optimization, Concurrency and Computation: Practice and Experience, 2007; 20:29-40
92. Zu W, Y.l. Hao, H.t. Zeng, W.Z. Tang, Enhancing the particle swarm optimization based on equilibrium of distribution, in: Control and Decision Conference, China, 2008, pp. 285-289