A Multiobjective Memetic Algorithm
Based on Particle Swarm Optimization
Dr. Liu Dasheng
James Cook University, Singapore
Outline of Talk
1. Particle Swarm Optimization
2. Multiobjective Particle Swarm Optimization
3. A Multiobjective Memetic Algorithm Based on
Particle Swarm Optimization
4. Research Ideas
Particle Swarm Optimization
▪ Particle swarm optimization (PSO) was first introduced by James Kennedy (a social psychologist) and Russell Eberhart (an electrical engineer) in 1995; it originates from simulating the behavior of bird flocks.
Particle Swarm Optimization
▪ There are a number of algorithms to
simulate the movement of a bird flock or
fish school.
▪ Kennedy and Eberhart became particularly
interested in the models developed by
Heppner (a zoologist) [62].
Heppner’s Model
▪ In Heppner’s model, birds would begin by
flying around with no particular destination
and in spontaneously formed flocks until
one of the birds flew over the roosting
area.
Particle Swarm Optimization
▪ To Eberhart and Kennedy, finding a roost
is analogous to finding a good solution in
the field of possible solutions.
▪ They revised Heppner’s methodology so that particles fly over a solution space and try to find the best solution based on their own discoveries and the past experiences of their neighbors.
Working Principle of PSO
Original Version
▪ In the original version of PSO, each individual is treated as a volume-less particle in the D-dimensional solution space.
▪ The equations for calculating velocity and position of particles are shown below:

v_{i,d}^{k+1} = v_{i,d}^k + c_1 r_1 (p_{i,d}^k - x_{i,d}^k) + c_2 r_2 (p_{g,d}^k - x_{i,d}^k)

x_{i,d}^{k+1} = x_{i,d}^k + v_{i,d}^{k+1}
Adjustable Step Size
▪ Further research shows that to adjust
velocity not by a fixed step size but
according to the distance between current
position and best position can improve
performance.
Vmax
▪ A parameter Vmax is introduced; a particle’s velocity in each dimension cannot exceed Vmax.
▪ If Vmax is too large, particle may fly past
good solutions.
▪ If Vmax is too small, particle may not
explore sufficiently beyond locally good
regions.
▪ Vmax is usually set at 10-20% of the
dynamic range of each dimension.
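As a minimal sketch of the 10-20% rule above (the function name, the array interface, and the 15% fraction are illustrative choices, not from the slides):

```python
import numpy as np

def clamp_velocity(v, x_min, x_max, fraction=0.15):
    """Clamp each velocity component to Vmax, here set to a fraction
    (within the suggested 10-20%) of the dynamic range of each dimension."""
    v_max = fraction * (x_max - x_min)   # per-dimension Vmax
    return np.clip(v, -v_max, v_max)

# Example: each dimension spans [0, 10], so Vmax = 1.5 per dimension
v = np.array([2.0, -0.5, -3.0])
clamped = clamp_velocity(v, np.zeros(3), np.full(3, 10.0))
```

Clamping per dimension (rather than by velocity magnitude) matches the slide's statement that the velocity "on each dimension" cannot exceed Vmax.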
Inertial Weight
▪ To better control the exploration and
exploitation in particle swarm optimization,
the concept of inertial weight (w) was
developed.
With inertia weight, the velocity and position updates become:

v_{i,d}^{k+1} = w v_{i,d}^k + c_1 r_1 (p_{i,d}^k - x_{i,d}^k) + c_2 r_2 (p_{g,d}^k - x_{i,d}^k)

x_{i,d}^{k+1} = x_{i,d}^k + v_{i,d}^{k+1}

w is the inertia weight; c1 is the cognition weight and c2 is the social weight; r1 and r2 are two random values uniformly distributed in the range [0, 1].

Each individual in PSO is assigned a random velocity and flies across the solution space with a memory of its own best position, called pbest (p_i), and knowledge of the whole swarm’s global best position, called gbest (p_g).

Particle Swarm Optimization Formula
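A minimal sketch of this update rule for a whole swarm (the values of w, c1, and c2 are illustrative, not prescribed by the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One inertia-weighted PSO update.
    x, v, pbest: (N, D) arrays; gbest: (D,) array."""
    n, d = x.shape
    r1 = rng.random((n, d))   # cognitive random factors, uniform in [0, 1]
    r2 = rng.random((n, d))   # social random factors, uniform in [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new         # position update: x^{k+1} = x^k + v^{k+1}
    return x_new, v_new
```

With per-particle random factors r1 and r2, each particle is pulled toward a stochastic blend of its own best position and the swarm's global best.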
Terminology
▪ Real world problems usually involve simultaneous
optimization of several competing objectives.
▪ Solutions exist in the form of alternative trade-offs.
A minimization problem
✓ Non-inferior solutions are known as
nondominated solutions
✓ The set of nondominated solutions
form the Pareto solution set
[Figure: objective space (f1, f2) for a minimization problem, showing the infeasible region and the trade-off curve]
Multiobjective Optimization
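The dominance relation behind "non-inferior" solutions can be sketched as follows (minimization assumed; the helper names are illustrative):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def nondominated(points):
    """Return the nondominated (Pareto) subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); the other three points form the Pareto set
front = nondominated([(1, 5), (2, 2), (4, 1), (3, 3)])
```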
▪ Multi-objective particle swarm optimization (MOPSO) is a
powerful tool for solving MO optimization problems.
✓ Capable of searching for the global
trade-off.
✓ Maintain a diverse set of solutions
✓ Robust and applicable to a wide
variety of problems.
▪ Conventional optimization search techniques
✓ Can hardly handle multiple objectives
✓ Require gradients to be well-defined and differentiable
✓ May become trapped in local optima
MOPSO
Performance Assessments
▪ For MOO, performance metrics
must be able to measure
quality in terms of:
✓ Diversity.
✓ Proximity between the generated and true Pareto front.
[Figure: objective space (f1, f2) under minimization, marking a non-dominated solution, the non-dominated set, and the Pareto frontier]
▪ Generational Distance (GD) (Veldhuizen, 1999)
✓ Represents how far the evolved solution set is from the
true Pareto front.
▪ Spacing (S) (Schott, 1995)
✓ Measures how “evenly” the evolved solutions distribute themselves.
▪ Maximum Spread (MS) (Zitzler, 1999)
✓ Measures how well the true Pareto front is covered by the
evolved solution set.
Performance Measures
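As an illustrative sketch of GD: a common variant averages each evolved solution's distance to its nearest point on a sampled true front (Veldhuizen's original formulation uses a root-mean-square of these distances; the plain average here is a simplifying assumption):

```python
import numpy as np

def generational_distance(evolved, true_front):
    """Average Euclidean distance from each evolved solution to its
    nearest point on a (sampled) true Pareto front. Lower is better;
    0 means every evolved point lies on the sampled front."""
    evolved = np.asarray(evolved, float)
    true_front = np.asarray(true_front, float)
    # pairwise distance matrix of shape (n_evolved, n_true)
    d = np.linalg.norm(evolved[:, None, :] - true_front[None, :, :], axis=2)
    return d.min(axis=1).mean()
```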
Problem Test Suite

1. ZDT1 — Pareto front is convex.
2. ZDT2 — Pareto front is non-convex.
3. ZDT3 — Pareto front consists of several noncontiguous convex parts.
4. ZDT4 — Pareto front is highly multi-modal; there are 21^9 local Pareto fronts.
5. ZDT6 — The Pareto optimal solutions are non-uniformly distributed along the global Pareto front. The density of the solutions is low near the Pareto front and high away from the front.
[Figures: Pareto fronts of ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6]
6. FON — Pareto front is non-convex.
7. KUR — Pareto front consists of several noncontiguous convex parts.
8. POL — Pareto front and Pareto optimal solutions consist of several noncontiguous convex parts.
[Figures: Pareto fronts of FON, KUR, and POL]
Problem Test Suite
Two modifications introduced to improve performance:
▪ Fuzzy global best
▪ Synchronous particle local search
Memetic algorithm: an evolutionary algorithm combined with a local improvement technique.
Two Modifications
▪ Fuzzy Global Best (f-gbest)
✓ A new particle updating strategy is proposed based upon the
concept of fuzzy global-best to deal with the problem of premature
convergence and diversity maintenance within the swarm.
▪ Synchronous Particle Local Search (SPLS)
✓ Hybridized with a directed local search operator for local fine tuning,
which helps to discover a well-distributed Pareto front.
Fuzzy gbest
▪ Fuzzy Global Best (f-gbest) accounts for the uncertainty of
global best knowledge to prevent premature convergence
✓ Incorporates fuzzy number
to represent global best.
✓ F-gbest is characterized by
normal distribution.
✓ Degree of uncertainty reduces over generations, reflecting information gain.
[Figure: particle search region incorporating f-gbest (possible locations of gbest) vs. the search trajectory using conventional gbest]
The calculation of particle velocity can be rewritten as

v_{i,d}^{k+1} = w v_{i,d}^k + c_1 r_1 (p_{i,d}^k - x_{i,d}^k) + c_2 r_2 (p_{c,d}^k - x_{i,d}^k)

where p_{c,d}^k ~ N(p_{g,d}^k, δ), i.e. f-gbest is characterized by a normal distribution whose spread δ = f(k) represents the degree of uncertainty about the optimality of the global best.

Formula for Fuzzy gbest
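A sketch of the f-gbest velocity update (the parameter values and the suggested δ decay schedule are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def fgbest_step(x, v, pbest, gbest, delta, w=0.4, c1=1.0, c2=1.0):
    """Velocity update with fuzzy gbest: the global best is replaced by a
    sample from a normal distribution centred on gbest with spread `delta`,
    modelling uncertainty about gbest's optimality.
    x, v, pbest: (N, D) arrays; gbest: (D,) array."""
    n, d = x.shape
    p_c = rng.normal(loc=gbest, scale=delta, size=(n, d))  # fuzzy gbest sample
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (p_c - x)
    return x + v_new, v_new

# delta would shrink with generation k (e.g. delta = delta0 * decay**k),
# reflecting information gained as the search proceeds — an assumed schedule.
```

With δ = 0 the sample collapses onto gbest and the rule reduces to the conventional update.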
▪ SPLS of assimilated particles along x1 and x3
[Figure: assimilated particles A' and B' moving along assigned search directions x1 and x3 toward possible locations of the assigned gbest]
Synchronous Particle Local Search
▪ SPLS is performed in the vicinity of the particles.
SPLS:
1. Select LS_S particles randomly from the particle swarm.
2. Select LS_N nondominated particles with the best niche count from the archive into a selection pool.
3. Assign an arbitrary dimension to each of the LS_S particles.
4. Assign an arbitrary nondominated solution from the selection pool to each of the LS_S particles as gbest.
5. Assimilation: with the exception of the assigned dimension, update the position of particles in the decision space with the selected gbest position.
6. Update the position of all LS_S assimilated particles using fuzzy gbest along the pre-assigned dimension.
Synchronous Particle Local Search
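The SPLS steps above can be sketched as follows; niche-count-based selection from the archive is simplified to uniform sampling, and the decision-space bounds [lo, hi] are assumed, so this is an illustrative outline rather than the paper's operator:

```python
import numpy as np

rng = np.random.default_rng(2)

def spls(swarm, archive, ls_s=5, delta=0.1, lo=0.0, hi=1.0):
    """Sketch of Synchronous Particle Local Search on a (N, D) swarm.
    `archive` is an (M, D) array standing in for nondominated solutions."""
    n, d = swarm.shape
    idx = rng.choice(n, size=ls_s, replace=False)   # step 1: pick LS_S particles
    dims = rng.integers(0, d, size=ls_s)            # step 3: one dimension each
    # steps 2 & 4, simplified: uniform sampling instead of niche-count selection
    pool = archive[rng.integers(0, len(archive), size=ls_s)]
    for i, (p, dim) in enumerate(zip(idx, dims)):
        g = pool[i]
        # step 5 (assimilation): copy gbest in all dims except the assigned one
        keep = swarm[p, dim]
        swarm[p] = g
        swarm[p, dim] = keep
        # step 6: fuzzy-gbest move along the assigned dimension only
        target = rng.normal(g[dim], delta)
        swarm[p, dim] = np.clip(swarm[p, dim] + rng.random() * (target - swarm[p, dim]), lo, hi)
    return swarm
```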
[Flowchart: Initialize Particle Swarm → Evaluate Particles → Archiving → Select Global Best → Select Personal Best → Update Particle Position using f-gbest → Select LS_S particles for SPLS → SPLS → if cycle = max_cycles, Return Archive; otherwise repeat]
Implementation
[Figure: evolved tradeoffs by a) FMOPSO, b) CMOPSO, c) SMOPSO, d) IMOEA, e) NSGA II, and f) SPEA2 for ZDT1]
Simulation Results
[Figure: statistical performance of the different algorithms: a) GD, b) MS, c) S for ZDT4; d) GD, e) MS, f) S for ZDT6; and g) GD, i) MS, h) S for FON]
Simulation Results
Research Ideas
▪ Handling of high-dimensional problems.
✓ Use of dimensionality-reduction techniques.
✓ Use of learning techniques to gain information on the shape and position of the Pareto front/Pareto set.
▪ Apply surrogates to reduce evaluation time.
✓ Surrogate models are cheap and approximate
evaluation models.
▪ Solving real-world problems of your interest.
Researchers face the challenge of the increasing dimensionality and computational cost of today’s applications.
▪ Zhinzhong Ding, Fuqiang Lu, and Hualing Bi, “A Two-Stage Particle Swarm Optimization for Virtual Enterprise Risk Management”, International Journal of Innovative Computing, Information and Control, vol. 10, no. 4, pp. 1495-1508, 2014.
▪ Marco Corazza, Giovanni Fasano, S. Y., and Riccardo Gusso, “Particle Swarm Optimization with non-smooth penalty reformulation, for a complex portfolio selection problem”, Applied Mathematics and Computation, vol. 244, pp. 611-624, 2013.
▪ Kuo, R. J. and Hong, C. K., “Integration of Genetic Algorithm and Particle Swarm Optimization for Investment Portfolio Optimization”, Applied Mathematics & Information Sciences, vol. 7, no. 6, pp. 2397-2408, 2013.
▪ Jui-Fang Chang and Peng Shi, “Using investment satisfaction capability index based particle swarm optimization to construct a stock portfolio”, Information Sciences, vol. 181, pp. 2989-2999, 2011.
▪ Liu, D. S., Tan, K. C., Goh, C. K. and Ho, W. K., “A Multiobjective Memetic Algorithm Based on Particle Swarm Optimization”, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 37, no. 1, pp. 42-50, 2007.
The Papers