
International J. Soc. Sci. & Education 2020 Vol.10 Issue 1, ISSN: 2223-4934 E and 2227-393X Print


A Hybrid Algorithm with Particle Swarm Optimization and Differential Evolution Algorithm

Shi Jia 1, Xiaodan Liang 2, Maowei He 3, Liling Sun 4, Hanning Chen 5

1-4 School of Computer Science and Technology, Tianjin Polytechnic University, Tianjin, 300387;
5 School of Computer Science and Technology, Tianjin Polytechnic University, Tianjin, 300387, CHINA.

[email protected], [email protected], [email protected], [email protected], [email protected]

ABSTRACT

Intelligent optimization algorithms have been applied to solve optimization problems in various fields of engineering. Particle swarm optimization (PSO) and differential evolution (DE) are two representative intelligent optimization algorithms. In this paper, a hybrid algorithm with particle swarm optimization and differential evolution (APSODE) is proposed. In APSODE, evolution starts with both PSO and DE; each subsequent generation then switches to either PSO or DE based on the fitness value. We divide the individuals into two sub-swarms, and two different mutation operations are employed to breed the individuals of these two sub-swarms. Thus, one sub-swarm takes responsibility for exploration, while the other owns better exploitation ability. The numbers of individuals in the two sub-swarms change dynamically with the iterations. In addition, a parameter adaptive strategy is adopted. To examine the success of APSODE in solving optimization problems, 30 benchmark functions with different specifications are selected. The experimental results show that APSODE provides competitive performance compared with three comparison algorithms in terms of both solution quality and efficiency.

Keywords: particle swarm optimization, differential evolution, global optimization

1. INTRODUCTION

In recent years, optimization problems have arisen frequently in practical applications such as computer science [1], artificial intelligence [2] and information theory [3]. Many swarm intelligence and evolutionary algorithms have been proposed, such as particle swarm optimization (PSO) [4-5], genetic algorithm (GA) [6] and differential evolution (DE) [7,8]. Among these intelligent optimization algorithms, particle swarm optimization and differential evolution are the most popular. Particle swarm optimization (PSO) is a nature-inspired global optimization algorithm developed by Kennedy and Eberhart [9], which mimics bird foraging behavior. It has the advantages of a simple structure and fast global convergence, but PSO easily loses diversity and falls into a local optimum in the late stage of evolution. Differential evolution (DE) is a random search algorithm proposed by Storn and Price [7,8]. DE hunts for optimal solutions by using crossover, mutation and selection operators. DE is good at exploring the search space and locating the region of the global optimum, but it is slow at exploiting the solution. Hybridization is one of the most efficient strategies, in which the merits of the PSO and DE algorithms are combined to improve the performance of the optimizers. Therefore, we propose a hybrid algorithm with particle swarm optimization and differential evolution (APSODE)


that combines PSO and DE. Our experimental results show that APSODE not only has the advantage of fast convergence but also owns a high-accuracy search ability.

The main contributions of this paper can be summarized as follows:

(1) APSODE is based on differential mutation and a modified velocity update scheme. PSO and DE evolve together in a cooperative and competitive way.

(2) APSODE adopts a dual-swarm strategy. The two sub-swarms apply different mutation strategies, which gives a good balance between exploration ability and exploitation ability. An adjustment scheme for the number of individuals in the two sub-swarms is proposed.

(3) A parameter adaptive strategy is adopted in this paper. This scheme is simple and effective.

The remainder of this paper is organized as follows: the related work about PSO, DE and their hybrid algorithms is introduced in Section 2. Section 3 presents the proposed APSODE algorithm. The experimental results and comparisons are shown in Section 4. The conclusion and future work are given in Section 5.

2. RELATED WORK

2.1. Overview of particle swarm optimization theory

Particle swarm optimization (PSO) is one of the swarm intelligence algorithms. Its basic idea is to find the optimal solution through cooperation and information sharing among the individuals in the swarm. A particle represents a potential solution to the optimization problem. The particles fly through the search space to search for the global optimum by iteration. When searching in a D-dimensional space, each particle i has a velocity vector $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ and a position vector $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. The position $x_i$ and velocity $v_i$ are initialized randomly. The velocity and position of the particles are updated as follows:

$$v_{id}^{t+1} = w v_{id}^{t} + c_1 \cdot rand \cdot (P_{id}^{t} - x_{id}^{t}) + c_2 \cdot rand \cdot (P_{gd}^{t} - x_{id}^{t}) \qquad (1)$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1} \qquad (2)$$

where N denotes the population size and t denotes the current iteration number. w is named the inertia weight, c1 and c2 are called the acceleration parameters, and rand is a random number in the range [0, 1]. $P_i$ is the individual optimal solution and $P_g$ is the global optimal solution.
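As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch performs PSO iterations for a generic minimization problem; the objective `sphere` and all parameter values here are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def sphere(x):
    # Illustrative objective: f(x) = sum(x^2), minimum at the origin.
    return np.sum(x ** 2)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.4, c2=1.4):
    """One PSO update following Eqs. (1)-(2): velocities, then positions."""
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    x = x + v                                                    # Eq. (2)
    return x, v

# Minimal usage: 40 particles in 30 dimensions on [-100, 100].
N, D = 40, 30
x = np.random.uniform(-100, 100, (N, D))
v = np.zeros((N, D))
fit = np.array([sphere(p) for p in x])
pbest, pbest_fit = x.copy(), fit.copy()
gbest = pbest[np.argmin(pbest_fit)]
for t in range(100):
    x, v = pso_step(x, v, pbest, gbest)
    fit = np.array([sphere(p) for p in x])
    improved = fit < pbest_fit                   # refresh personal bests
    pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)]          # refresh global best
```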

In order to improve the performance of the original PSO, many different PSO variants have been designed. Kiran [10] presents a distribution-based update rule for the PSO algorithm. Chen et al. [11] proposed a chaotic dynamic weight particle swarm optimization (CDW-PSO); in the CDW-PSO algorithm, a chaotic map and a dynamic weight are introduced to modify the search process. Zhan et al. [12] incorporated an orthogonal learning strategy into PSO, named orthogonal learning PSO (OLPSO). OLPSO has higher efficiency compared with other PSO variants. However, OLPSO is easily trapped in local optima on some difficult problems.


2.2. Overview of differential evolution theory

Differential evolution (DE) is a simple and powerful evolutionary algorithm. DE guides the population to the global optimum through repeated cycles of mutation, crossover and selection. The main procedure of DE is explained in detail as follows.

(1) Mutation: DE achieves individual variation through a differential strategy. Five widely used mutation operators are listed as follows.

DE/rand/1:
$$v_{i,d} = x_{r1,d} + F(x_{r2,d} - x_{r3,d}) \qquad (3)$$

DE/current-to-best/1:
$$v_{i,d} = x_{i,d} + F(P_{i,d} - x_{i,d}) + F(x_{r1,d} - x_{r2,d}) \qquad (4)$$

DE/best/1:
$$v_{i,d} = P_{i,d} + F(x_{r1,d} - x_{r2,d}) \qquad (5)$$

DE/best/2:
$$v_{i,d} = P_{i,d} + F(x_{r1,d} - x_{r2,d}) + F(x_{r3,d} - x_{r4,d}) \qquad (6)$$

DE/rand/2:
$$v_{i,d} = x_{r1,d} + F(x_{r2,d} - x_{r3,d}) + F(x_{r4,d} - x_{r5,d}) \qquad (7)$$

where F is the scale factor, and r1, r2, r3, r4 and r5 are distinct integers randomly selected from [1, N], with N the population size. $P_i$ is the individual optimal solution.

The boundary handling technique is employed to ensure the validity of the solution.

$$v_{i,d} = \begin{cases} l_d, & \text{if } v_{i,d} < l_d \\ u_d, & \text{if } v_{i,d} > u_d \end{cases} \qquad (8)$$

where $l_d$ and $u_d$ are the lower and upper bounds of the solution in dimension d.
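To make the mutation step concrete, here is a small Python sketch of the two strategies APSODE actually uses, DE/rand/1 (Eq. 3) and DE/current-to-best/1 (Eq. 4), together with the boundary handling of Eq. (8); the array layout and helper names are our own assumptions.

```python
import numpy as np

def mutate(pop, pbest, i, F, strategy, lo=-100.0, hi=100.0):
    """Build a mutant vector for individual i (Eqs. 3, 4 and 8)."""
    n = len(pop)
    # Pick distinct random indices, all different from i.
    r1, r2, r3 = np.random.choice([k for k in range(n) if k != i], 3, replace=False)
    if strategy == "rand/1":                      # Eq. (3)
        v = pop[r1] + F * (pop[r2] - pop[r3])
    elif strategy == "current-to-best/1":         # Eq. (4)
        v = pop[i] + F * (pbest[i] - pop[i]) + F * (pop[r1] - pop[r2])
    else:
        raise ValueError(strategy)
    return np.clip(v, lo, hi)                     # boundary handling, Eq. (8)
```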

(2) Crossover: The crossover operation combines the individuals of the current population with the individuals of the mutant population to generate a trial population according to certain rules. The process can be expressed as follows:

$$u_{i,d} = \begin{cases} v_{i,d}, & \text{if } r_1 \le CR \text{ or } d = j_{rand} \\ x_{i,d}, & \text{otherwise} \end{cases} \qquad (9)$$

where $r_1$ is a random number in the range [0, 1], CR is the crossover rate, and $j_{rand}$ is an integer randomly generated from the range [1, D].

(3) Selection: The selection operation aims at selecting the better individual by comparing the fitness values. It can be expressed as follows:


$$x_i^{de} = \begin{cases} u_i, & \text{if } f(u_i) \le f(x_i) \\ x_i, & \text{otherwise} \end{cases} \qquad (10)$$
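The following Python sketch, continuing the `mutate` helper above, shows how Eqs. (9)-(10) turn a mutant vector into a trial vector and then keep the better of trial and parent; the variable names are illustrative.

```python
import numpy as np

def crossover(x, v, CR):
    """Binomial crossover, Eq. (9): take v with probability CR,
    forcing at least one dimension (j_rand) to come from v."""
    D = len(x)
    mask = np.random.rand(D) <= CR
    mask[np.random.randint(D)] = True      # j_rand guarantees one mutant gene
    return np.where(mask, v, x)

def select(x, u, f):
    """Greedy selection, Eq. (10): keep the trial vector if it is no worse."""
    return (u, f(u)) if f(u) <= f(x) else (x, f(x))
```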

Next, we introduce some DE variants. Tian et al. [13] proposed a novel differential evolution algorithm that improves the search efficiency of DE through a combined mutation strategy and a diversity-based selection strategy. Mallipeddi et al. [14] introduced a new DE variant with an ensemble of parameters and mutation strategies, which employs a set of mutation strategies along with a set of parameter values that compete to generate offspring. A self-adaptive DE algorithm is enhanced in [15] by a teaching and learning mechanism.

2.3. Differential evolution and particle swarm optimization hybrids

PSO and its variants show great advantages in solving optimization problems, and DE has shown superior performance for global optimization. However, the challenge of premature convergence still exists. To a degree, PSO and DE are complementary, and their combination improves the performance of the algorithm. In order to break the limitations of a single algorithm, many scholars have researched hybrid algorithms further. Next, we give a brief introduction to hybrid algorithms of DE and PSO.

To improve the convergence speed, reference [16] proposed a simple and compact evolutionary algorithm which applied different DE mutations to evolve the cognitive and social experience of different PSO variants. An adaptive hybrid algorithm based on PSO and DE for global optimization was proposed by Yu et al. [17]: adaptive mutation is applied to the current population when the individuals cluster around a local optimum. A hybrid DE algorithm based on PSO [18] was designed for solving practical nonsmooth and nonconvex economic load dispatch problems. In this method, a differentially evolved population is generated by the DE method and then used in the PSO procedure.

3. APSODE ALGORITHM

The basic idea of APSODE is to use DE and PSO to achieve coevolution. In this paper, evolution starts with both PSO and DE, and each subsequent generation switches to either PSO or DE based on the fitness value. In order to improve the convergence accuracy of PSO, a new velocity update strategy is proposed: DE operations are utilized to breed a promising population, and PSO applies this promising population in its velocity formula to update positions. In addition, a two sub-swarm strategy with different mutations is adopted to enhance population diversity. One sub-swarm owns good global exploration ability and the other owns better local exploitation ability. Meanwhile, an adjustment scheme keeps a good balance between the sizes of the two sub-swarms. The following sections introduce these strategies in detail.

3.1. Architecture of APSODE

In the first generation, DE generates a new population named $x^{de}$ (see Eq. (10)) through mutation, crossover and selection, which is then applied in the velocity update formula of PSO. From the second generation to the last, PSO or DE is chosen to evolve according to the calculated values. The calculated values are defined as follows.

$$\lambda_1 = (f_{pg} - f_{de}) / f_{pg} \qquad (11)$$

$$\lambda_2 = (f_{de} - f_{pso}) / f_{de} \qquad (12)$$


where $f_{pg}$ is the best fitness value found so far, $f_{de}$ is the fitness value of the optimal solution obtained by DE evolution, and $f_{pso}$ is the fitness value of the optimal solution obtained by PSO evolution. During the evolution process, we select the better algorithm to participate in the next generation based on these calculated values: when $rand < e^{\lambda_1}/(e^{\lambda_1} + e^{\lambda_2})$, PSO is chosen; otherwise, DE is chosen.
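A minimal Python sketch of this softmax-like switching rule, assuming the three fitness values are tracked elsewhere, might look as follows.

```python
import math, random

def choose_pso(f_pg, f_de, f_pso):
    """Return True to run PSO this generation, False to run DE.
    Implements Eqs. (11)-(12) and the rule rand < e^l1 / (e^l1 + e^l2).
    Assumes positive fitness values (as with the CEC2014 biases)."""
    lam1 = (f_pg - f_de) / f_pg     # Eq. (11): how far DE sits from the best
    lam2 = (f_de - f_pso) / f_de    # Eq. (12): how far PSO sits from DE
    return random.random() < math.exp(lam1) / (math.exp(lam1) + math.exp(lam2))
```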

In the evolutionary phase of DE, we divide the population into two sub-swarms. Swarm1 adopts the DE/rand/1 mutation operation and swarm2 adopts the DE/current-to-best/1 mutation operation. The DE/rand/1 mutation strategy has a strong global search ability, and the DE/current-to-best/1 mutation strategy has a strong local search ability. The two mutation strategies allow the population to keep a good balance between exploration and exploitation. When CR is small, the swarm diversity increases, which is conducive to global search; when CR is large, the local search ability is enhanced and the convergence speed is accelerated. Therefore, the CR of swarm1 should be small and the CR of swarm2 should be large.

In the evolutionary phase of PSO, a new velocity formula is designed. In this formula, the population $x^{de}$ evolved by DE replaces the personal optimal position of the original PSO. Obviously, $x^{de}$ can provide good guidance for the individuals.

$$v_{id}^{t+1} = w v_{id}^{t} + c_1 \cdot rand \cdot (x_{id}^{de,t} - x_{id}^{t}) + c_2 \cdot rand \cdot (P_{gd}^{t} - x_{id}^{t}) \qquad (13)$$

where $x^{de}$ is the population evolved by DE and $P_g$ is the global optimal position.
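In code, the modified update of Eq. (13) is a one-line change to the PSO step sketched in Section 2.1: the personal bests are replaced by the DE-bred population (here called `x_de`, an assumed variable name).

```python
import numpy as np

def apsode_velocity(x, v, x_de, gbest, w, c1=1.4, c2=1.4):
    """Eq. (13): the DE-evolved population x_de guides the particles
    in place of the personal best positions of standard PSO."""
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    return w * v + c1 * r1 * (x_de - x) + c2 * r2 * (gbest - x)
```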

3.2. Adjustment scheme for the number of individuals in two sub-swarms (AS)

Here, we use N1 and N2 to represent the numbers of individuals in swarm1 and swarm2. They satisfy the following relationship.

$$N_1 + N_2 = N \qquad (14)$$

During the initial evolution stages, there should be more individuals in swarm1. As the iterations increase, the exploitation capability needs to be enhanced, so swarm2 should gain more individuals. A dynamic adjustment scheme is therefore proposed for the number of individuals in the two sub-swarms:

$$N_2 = \lfloor t / Maxit \cdot N \rfloor \qquad (15)$$

where Maxit is the maximum number of iterations and $\lfloor \cdot \rfloor$ is the floor (round-down) operator.
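For instance, with the swarm size N = 40 used in the experiments, the split of Eqs. (14)-(15) evolves as in this small sketch (a direct transcription; the Maxit value is illustrative).

```python
def subswarm_sizes(t, maxit, n):
    """Eqs. (14)-(15): swarm2 grows linearly with the iteration count."""
    n2 = (t * n) // maxit      # floor of t/Maxit * N
    return n - n2, n2          # (N1, N2)

# Early on, most individuals explore; late on, most exploit:
print(subswarm_sizes(1, 2000, 40))     # -> (40, 0)
print(subswarm_sizes(1000, 2000, 40))  # -> (20, 20)
print(subswarm_sizes(1999, 2000, 40))  # -> (1, 39)
```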

3.3. Parameter adaptive strategy

In this paper, a parameter adaptive strategy is adopted: the scale factor F in the DE algorithm and the inertia weight w in the PSO algorithm are adjusted adaptively. The specific formulas are as follows.

$$F = F_{max} - t(F_{max} - F_{min}) / Maxit \qquad (16)$$

$$w = w_{min} + (w_{max} - w_{min}) \cdot \frac{1}{1 + 10t/Maxit} \qquad (17)$$


where $F_{max}$ and $F_{min}$ are the upper and lower bounds of the scale factor F, and $w_{max}$ and $w_{min}$ are the upper and lower bounds of the inertia weight w.
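A direct transcription of Eq. (16) and of Eq. (17) as reconstructed above; the bound values follow Table 2 (F from 0.6 down to 0.2, w from 0.9 down to roughly 0.7).

```python
def adapt_parameters(t, maxit, f_max=0.6, f_min=0.2, w_max=0.9, w_min=0.7):
    """Eq. (16): F decays linearly; Eq. (17): w decays hyperbolically."""
    F = f_max - t * (f_max - f_min) / maxit
    w = w_min + (w_max - w_min) / (1.0 + 10.0 * t / maxit)
    return F, w
```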

3.4. Procedures of the APSODE

The flowchart of APSODE is illustrated in Figure 1.

[Flowchart: after initialization, the first generation runs DE on both sub-swarms (swarm1: DE/rand/1 mutation with CR1; swarm2: DE/current-to-best/1 mutation with CR2), followed by crossover, selection and the update of vi and xi; from the second generation on, each iteration chooses PSO or DE according to rand < e^{λ1}/(e^{λ1}+e^{λ2}), evaluates, updates Pi and Pg, and loops until t reaches Maxit, then outputs the best solution.]

Figure 1. The flowchart of APSODE
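Putting the pieces together, the overall flow of Figure 1 can be sketched as below. This is our reconstruction from the text, built on the helpers sketched in the previous subsections (`choose_pso`, `mutate`, `crossover`, `select`, `apsode_velocity`, `subswarm_sizes`, `adapt_parameters`); details such as how f_de and f_pso are refreshed are assumptions, and the elided bodies are placeholders.

```python
def apsode(f, pop, v, maxit, n=40):
    """Skeleton of the APSODE main loop (our reconstruction of Figure 1)."""
    f_pg = f_de = f_pso = min(f(x) for x in pop)     # assumed initialization
    for t in range(1, maxit):
        F, w = adapt_parameters(t, maxit)
        n1, n2 = subswarm_sizes(t, maxit, n)
        if t == 1 or not choose_pso(f_pg, f_de, f_pso):
            # DE phase: swarm1 explores (rand/1, CR1), swarm2 exploits
            # (current-to-best/1, CR2); selection yields x_de and f_de.
            ...
        else:
            # PSO phase: Eq. (13) velocity update guided by x_de,
            # position update, evaluation; yields f_pso.
            ...
        f_pg = min(f_pg, f_de, f_pso)                # refresh the global best
    return f_pg
```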

4. EXPERIMENTS AND COMPARISON

4.1. Tests and benchmark problems

In this section, the CEC2014 benchmark functions are employed to test the performance of APSODE. These shifted and rotated functions are complex and make the test results more convincing. Table 1 summarizes several features of the benchmark problems.


Table 1. The features of CEC2014 benchmark problems. [U: Unimodal; M: Multimodal; H: Hybrid; C: Composition]

Problem  Name                                                                Type  Dim  Low   Up   Bias
F1       Rotated High Conditioned Elliptic Function                          U     30   -100  100  100
F2       Rotated Bent Cigar Function                                         U     30   -100  100  200
F3       Rotated Discus Function                                             U     30   -100  100  300
F4       Shifted and Rotated Rosenbrock's Function                           M     30   -100  100  400
F5       Shifted and Rotated Ackley's Function                               M     30   -100  100  500
F6       Shifted and Rotated Weierstrass Function                            M     30   -100  100  600
F7       Shifted and Rotated Griewank's Function                             M     30   -100  100  700
F8       Shifted Rastrigin's Function                                        M     30   -100  100  800
F9       Shifted and Rotated Rastrigin's Function                            M     30   -100  100  900
F10      Shifted Schwefel's Function                                         M     30   -100  100  1000
F11      Shifted and Rotated Schwefel's Function                             M     30   -100  100  1100
F12      Shifted and Rotated Katsuura Function                               M     30   -100  100  1200
F13      Shifted and Rotated HappyCat Function                               M     30   -100  100  1300
F14      Shifted and Rotated HGBat Function                                  M     30   -100  100  1400
F15      Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function  M     30   -100  100  1500
F16      Shifted and Rotated Expanded Scaffer's F6 Function                  M     30   -100  100  1600
F17      Hybrid Function 1 (N = 3)                                           H     30   -100  100  1700
F18      Hybrid Function 2 (N = 3)                                           H     30   -100  100  1800
F19      Hybrid Function 3 (N = 4)                                           H     30   -100  100  1900
F20      Hybrid Function 4 (N = 4)                                           H     30   -100  100  2000
F21      Hybrid Function 5 (N = 5)                                           H     30   -100  100  2100
F22      Hybrid Function 6 (N = 5)                                           H     30   -100  100  2200
F23      Composition Function 1 (N = 5)                                      C     30   -100  100  2300
F24      Composition Function 2 (N = 3)                                      C     30   -100  100  2400
F25      Composition Function 3 (N = 3)                                      C     30   -100  100  2500
F26      Composition Function 4 (N = 5)                                      C     30   -100  100  2600
F27      Composition Function 5 (N = 5)                                      C     30   -100  100  2700
F28      Composition Function 6 (N = 5)                                      C     30   -100  100  2800
F29      Composition Function 7 (N = 3)                                      C     30   -100  100  2900
F30      Composition Function 8 (N = 3)                                      C     30   -100  100  3000


4.2. Parameter settings

The parameter settings of APSODE and the comparison algorithms are shown in Table 2. To be fair, the swarm size is set to 40 for all tested algorithms and the maximal number of fitness evaluations (FEs) is 80,000. The dimensions of all functions are set to 30, and each benchmark function is run 10 times independently.

Table 2. Parameter settings

Algorithm  Parameter settings
APSODE     c = 1.4, w = 0.9 ~ 0.7, CR1 = 0.025, CR2 = 0.9, F = 0.6 ~ 0.2
PSO        c1 = c2 = 1.4, w = 0.7
DE         F = 0.5, CR = 0.3
GA         F = 0.1, CR = 0.95

4.3. Experimental results and analyses between APSODE and comparisons

The numerical comparison results for the CEC2014 benchmark functions are presented in Table 3. The best mean fitness value for each function is marked with an asterisk (*). In order to observe the performance of the algorithms more intuitively, the convergence curves for each test function are shown in Figure 2.

Table 3. Comparison results of four algorithms on the CEC 2014 test sets (F1-F30)

      PSO                     DE                      GA                      APSODE
      Mean       Std          Mean       Std          Mean       Std          Mean       Std
F1    3.125E+07  1.787E+08    1.007E+08  1.954E+08    6.068E+07  2.575E+08    1.512E+07* 5.735E+06
F2    1.898E+09  7.126E+09    2.371E+09  1.388E+10    5.218E+09  1.379E+10    1.912E+07* 5.328E+06
F3    1.949E+04  1.710E+05    1.230E+04* 3.782E+04    4.315E+05  1.166E+06    1.325E+04  1.005E+04
F4    4.148E+02* 2.201E+03    8.821E+02  2.951E+03    6.711E+02  1.688E+03    4.193E+02  5.785E+01
F5    5.203E+02* 2.080E+02    5.210E+02  4.790E-02    5.210E+02  2.081E+02    5.209E+02  5.642E-02
F6    6.339E+02  2.538E+02    6.257E+02  4.508E+00    6.424E+02  2.569E+02    6.220E+02* 2.727E+00
F7    7.221E+02  3.092E+02    7.169E+02  1.064E+02    8.084E+02  4.345E+02    7.010E+02* 4.221E-02
F8    9.506E+02  3.876E+02    9.285E+02* 4.783E+01    1.100E+03  4.706E+02    9.325E+02  2.578E+01
F9    1.100E+03  4.685E+02    1.245E+03  5.246E+01    1.231E+03  5.004E+02    1.071E+03* 3.437E+01
F10   4.126E+03  2.046E+03    4.205E+03  8.126E+02    5.225E+03  3.337E+03    3.887E+03* 8.163E+02
F11   6.073E+03  2.606E+03    7.883E+03  4.454E+02    5.999E+03  3.569E+03    4.019E+03* 1.210E+03
F12   1.212E+03  4.804E+02    1.217E+03  5.064E-01    1.223E+03  4.816E+02    1.201E+03* 2.943E-01
F13   1.312E+03  5.201E+02    1.316E+03  9.213E-01    1.310E+03  5.239E+02    1.300E+03* 9.149E-02
F14   1.422E+03  5.822E+02    1.400E+03* 3.277E+01    1.533E+03  6.715E+02    1.400E+03* 4.701E-02
F15   2.639E+04  7.065E+05    8.484E+04  1.205E+06    1.218E+05  9.304E+05    1.521E+03* 1.976E+00
F16   1.613E+03* 6.448E+02    1.613E+03* 3.844E-01    1.614E+03  6.452E+02    1.614E+03  1.512E-01
F17   2.448E+06  4.413E+07    1.006E+07  4.125E+07    1.029E+07  2.188E+07    1.362E+06* 9.172E+05
F18   3.309E+07* 4.640E+08    6.467E+07  5.590E+08    2.219E+08  5.355E+08    1.016E+08  5.650E+08
F19   1.919E+03* 7.883E+02    1.930E+03  5.229E+01    2.100E+03  8.353E+02    1.919E+03* 2.136E+00
F20   3.278E+04  5.661E+05    2.738E+04  4.643E+04    1.259E+05  5.000E+06    3.551E+03* 1.936E+03
F21   9.832E+05* 1.635E+07    1.578E+06  7.703E+06    2.332E+06  5.233E+06    1.857E+06  1.142E+06
F22   2.791E+03  2.051E+03    2.792E+03  2.045E+03    8.381E+02  1.894E+03    7.112E+02* 1.594E+02
F23   2.711E+03  1.086E+03    2.639E+03  1.583E+02    2.790E+03  1.218E+03    2.620E+03* 2.947E+00
F24   2.642E+03  1.067E+03    2.651E+03  2.626E+01    2.797E+03  1.154E+03    2.630E+03* 1.070E+01
F25   2.721E+03* 1.092E+03    2.735E+03  1.843E+01    2.772E+03  1.125E+03    2.721E+03* 9.432E-01
F26   2.752E+03  1.082E+03    2.761E+03  4.542E+00    2.713E+03  1.101E+03    2.703E+03* 7.035E-02
F27   3.170E+03* 1.680E+03    3.902E+03  2.574E+02    4.016E+03  1.630E+03    3.198E+03  2.535E+02
F28   8.875E+03  3.286E+03    3.346E+03* 6.355E+02    4.993E+03  2.959E+03    3.361E+03  5.721E+01
F29   7.998E+06  9.009E+07    2.493E+06  2.592E+07    3.751E+06  3.182E+07    3.122E+03* 2.358E+00
F30   4.047E+04  4.405E+05    4.092E+03* 5.719E+05    9.555E+04  5.484E+05    7.714E+04  4.343E+02

Mean: mean value; Std: standard deviation; * marks the best mean per function.

The means and standard deviations obtained by the four algorithms on the CEC2014 benchmark functions are presented in Table 3. Looking over the 30 functions, it can be seen from the data that for most functions APSODE is significantly better than the other single algorithms in terms of convergence accuracy (Mean value) and robustness (Std value). Specifically, APSODE ranks first 17 times, second 8 times and third 2 times. For the multimodal F14, APSODE and DE rank first together. For F19 and F25, APSODE and PSO achieve the optimal solution at the same time; however, APSODE has the smallest variance and hence better stability. This may be due to the modified velocity update scheme, which improves the accuracy of the optimal solution and guarantees the convergence speed. The above results verify that APSODE has more robustness and a strong global optimization ability.

[Figure 2 contains thirty panels, F1-F30, each plotting the fitness value against the number of fitness evaluations (FEs) for PSO, GA, DE and APSODE.]

Figure 2. Convergence progresses of four algorithms on CEC2014


Figure 2 plots the convergence curves of a typical run on the 30 benchmark functions. As shown in Figure 2, APSODE has relatively better performance and relatively faster convergence speed for most functions. APSODE performs better than the other algorithms on the test functions F1, F3, F6, F9, F11, F12, F17, F20, F21, F22, F23, F24, F25 and F27; it has the highest convergence accuracy, and its global optimization capability is superior to the other comparison algorithms. For the unimodal F2, APSODE has a smooth convergence curve without falling into a local optimum. For the multimodal F15, all four algorithms slow down and then flatten out to a nearly horizontal line. For F28, F29 and F30, the convergence curves of APSODE and DE are approximately the same; this shows that the effects of the two mutation operators are prominent in solving complex optimization problems. From the experimental results, we observe that APSODE has the ability to balance between exploration and exploitation.

5. CONCLUSION

In this paper, we have proposed a hybrid optimization algorithm termed APSODE. APSODE adopts a dual-population strategy: one swarm is good at exploring, and the other owns better exploitation ability. Each sub-swarm adopts a different mutation operator. This strategy can effectively balance exploration and exploitation. Meanwhile, a simple but effective dynamic strategy for adjusting the number of individuals between the two sub-swarms is proposed, which can enhance the exploitation ability of APSODE. Moreover, a new velocity update strategy is proposed to improve the convergence accuracy. To test the performance of APSODE, the 30 benchmark functions of CEC2014 are adopted. The experimental results demonstrate that the proposed APSODE performs well compared to the other algorithms on the majority of the test functions. Our future work is to study the control parameters used in hybrid PSO and DE, and to design hybrid algorithms for solving practical optimization problems.

6. ACKNOWLEDGMENTS

This work is supported by the National Key Research and Development Program of China under Grant no. 2017YFB1103604, the National Natural Science Foundation of China under Grants nos. 41772123, 61802280, 61806143 and 61772365, Tianjin Province Science and Technology Projects under Grants nos. 17JCQNJC04500 and 17JCYBJC15100, and Basic Scientific Research Business Funded Projects of Tianjin nos. 2017KJ093 and 2017KJ094.

REFERENCES

[1] V. Ho-Huu, T. Nguyen-Thoi, T. Vo-Duy et al. (2016). An adaptive elitist differential evolution for optimization of truss structures with discrete design variables. Comput. Struct., 165, pp. 59-75.

[2] G. Bartsch, A.P. Mitra, S.A. Mitra, et al. (2016). Use of artificial intelligence and machine learning algorithms with gene expression profiling to predict recurrent nonmuscle invasive urothelial carcinoma of the bladder. J. Urol., 195(2), pp. 493-498.

[3] C.M. Lai, W.C. Yeh, Y.C. Huang (2017). Entropic simplified swarm optimization for the task assignment problem. Appl. Soft Comput., 58, pp. 115-127.

[4] R.C. Eberhart, J. Kennedy (1995). "A new optimizer using particle swarm theory". Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1, New York, NY, pp. 39-43.

[5] J. Kennedy, R. Eberhart (1995). "Particle swarm optimization". Proc. IEEE Int. Conf. Neural Netw., 4, pp. 1942-1948.

[6] T. Vidal, T.G. Crainic, M. Gendreau, et al. (2013). "A hybrid genetic algorithm with adaptive diversity management for a large class of vehicle routing problems with time-windows". Comput. Oper. Res., 40(1), pp. 475-489.

[7] R. Storn, K. Price (1997). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim., 11(4), pp. 341-359.

[8] K. Price, R. Storn (1995). Differential Evolution - A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Space. Technical Report, International Computer Science Institute.

[9] R. Eberhart, J. Kennedy (1995). "Particle swarm optimization". Proceedings of the IEEE International Conference on Neural Networks, 4, Perth, Australia, pp. 1942-1948.

[10] M.S. Kiran (2017). Particle swarm optimization with a new update mechanism. Applied Soft Computing, 60, pp. 670-678.

[11] K. Chen, F.Y. Zhou, A. Liu (2018). Chaotic dynamic weight particle swarm optimization for numerical function optimization. Knowledge-Based Systems, 139, pp. 23-40.

[12] Z.H. Zhan, J. Zhang, Y. Li, et al. (2011). Orthogonal learning particle swarm optimization. IEEE Trans. Evol. Comput., 15(6), pp. 832-847.

[13] M.N. Tian, X.B. Gao, C. Dai (2017). Differential evolution with improved individual-based parameter setting and selection strategy. Applied Soft Computing, 56, pp. 286-297.

[14] R. Mallipeddi, P.N. Suganthan, Q.K. Pan, M.F. Tasgetiren (2011). Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput., 11(2), pp. 1679-1696.

[15] S. Biswas, S. Kundu, S. Das, A.V. Vasilakos (2013). Teaching and learning best differential evolution with self-adaptation for real parameter optimization. Proc. IEEE Congr. Evol. Comput., Cancun, Mexico, pp. 1115-1122.

[16] M.G. Epitropakis, V.P. Plagianakos, M.N. Vrahatis (2012). Evolving cognitive and social experience in particle swarm optimization through differential evolution: a hybrid approach. Inf. Sci., 216, pp. 50-92.

[17] X. Yu, J. Cao, H. Shan, et al. (2014). An adaptive hybrid algorithm based on particle swarm optimization and differential evolution for global optimization. Sci. World J., 2014, pp. 1-16.

[18] S. Sayah, A. Hamouda (2013). A hybrid differential evolution algorithm based on particle swarm optimization for nonconvex economic dispatch problems. Appl. Soft Comput., 13, pp. 1608-1619.