

Evolutionary Multi-Objective Bacterial Swarm Optimization (MOBSO): A Hybrid Approach

Indranil Banerjee1 and Prasun Das2

1 Software Engineer Videonetics Technology Pvt. Ltd.

Sector-V, Salt Lake Block EP, Kolkata 700 091, India [email protected]

2 SQC & OR Division Indian Statistical Institute

203, B.T. Road, Kolkata 700 108, India [email protected]

Abstract. The field of evolutionary multi-objective optimization (MOO) has witnessed an ever-growing number of studies that use artificial swarm behavior. In this paper, the authors attempt to minimize the computational burden associated with the global ranking methods and local selection modules used in many multi-objective particle swarm optimizers. Two different swarm strategies are employed for global and local search, using particle swarms and bacterial chemotaxis respectively. The authors show comparative improvements of the proposed method, named MOBSO, over a benchmark evolutionary MOO method, NSGA-II. The paper also highlights the reduction in computational complexity that the proposed method achieves for large populations.

Keywords: Swarm intelligence, bacterial chemotaxis, computational complexity, particle swarm, Pareto optimality, crowding distance, multi-objective optimization, bio-inspired systems.

1 Introduction

Evolutionary multi-objective optimization (MOO) started becoming popular with the advent of algorithms such as SPEA, VEGA and MOEA [1], followed by NSGA (Non-dominated Sorting Genetic Algorithm) and its subsequent improvement, NSGA-II [2]. In recent times, there has been a growing trend of using biologically inspired evolutionary techniques such as swarm-based heuristics, artificial ant and bee systems, bird flocking, and bacterial foraging (BFOA) [3]. Current research in this field is concerned with applying the basic concepts and ideas of particle swarm optimization (PSO) to MOO problems [4, 5, 6, 7]. Some of the main drawbacks of conventional PSO techniques, as highlighted by the authors of [5, 6], are high computational complexity, a large number of fitness function evaluations, and premature convergence. Recently, there have been several studies [8, 9] on the hybridization of PSO and BFOA, especially in the single-objective domain. Promising results from these studies encourage extending these techniques to the multi-objective domain.

In this paper, a fast hybrid intelligent swarm-based optimizer is developed. A general-purpose MOO algorithm employing some form of global ranking scheme has a complexity of O(MN^2) per iteration (for M objectives and a population of N individuals), which makes the computational time large enough that online optimization becomes impractical. The proposed method (MOBSO) has a reduced complexity of O(kMN^2) per iteration, with k < 1. In MOBSO, the search space is explored both globally and locally with two mutually independent swarm-based strategies.

The rest of this paper is organized as follows. Section 2 describes the details of the algorithm proposed in this work. In Section 3, comparative studies with NSGA-II are elaborated with the results obtained for the selected benchmark test problems. The paper is concluded in Section 4.

2 The Proposed Algorithm: MOBSO

MOBSO is a hybridization of PSO and BFOA in the MOO domain, where the global search is carried out using the social part of the standard PSO model. In standard PSO, the particles traverse a multi-dimensional search space (the parameter space) using two decision-making components: the cognitive part (for local search) and the social part (for global search). A particle updates its velocity and position in each iteration as follows,

Vnew(i, d) = w * Vold(i, d) + C1*R1*(Xgbest(d) − Xold(i, d)) + C2*R2*(Xlbest(d) − Xold(i, d)) .  (1)

Xnew(i, d) = Xold(i, d) + Vnew(i, d) .  (2)

where V(i, d) and X(i, d) are, respectively, the velocity and position of the i-th particle in a d-dimensional parameter space. Xgbest is a particle chosen from the global best solution set, which, in the case of single-objective optimization, reduces to the single global best solution. The global best set in the new approach is chosen using a crowding rank, which is explained later. The term Xlbest denotes the position of a particle chosen from the neighbourhood of the particle, or the previous best position of the particle itself. In the MOO domain, the definition of this term is often arbitrary. In the proposed approach, this term is avoided and local search is performed using bacterial chemotaxis. Here C1 and C2 are acceleration constants, R1 and R2 are random numbers in (0, 1), and w is the inertia constant.
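A minimal Python sketch of this update for a single particle is given below; it is illustrative only, with the function name and array-based formulation assumed here, w and C1 set to the Table 1 values, and the Xlbest term dropped (C2 = 0), as described next.

```python
import numpy as np

def pso_update(x_old, v_old, x_gbest, w=0.9, c1=0.5):
    """One velocity/position update per Eqs. (1)-(2) for a single particle.

    x_old, v_old : current position and velocity (d-dimensional arrays)
    x_gbest      : a member drawn from the global best set
    The cognitive term is omitted (C2 = 0); local search is done by chemotaxis.
    """
    r1 = np.random.rand(x_old.size)          # R1 ~ U(0, 1), drawn per dimension
    v_new = w * v_old + c1 * r1 * (x_gbest - x_old)
    x_new = x_old + v_new
    return x_new, v_new
```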

In MOBSO, cognitive local search is applied using bacterial chemotaxis and the third term of the velocity update equation is not used; hence C2 = 0. The global best set is chosen in the following way. First, each non-dominated solution in the population is given a rank based on its crowding distance [1]; this mechanism is used to maintain the diversity of the solutions. At the beginning of each iteration, a fraction κ of the best (in terms of crowding distance) non-dominated solutions is chosen as the global best set, where κ = min(NE / (N − NE), 1) and NE is the number of non-dominated solutions in a population of N individuals. In the proposed method, only the dominated solutions in the population update their position and velocity using Eq. 1.
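The following Python sketch illustrates this selection step, assuming the non-dominated set has already been extracted; the helper names are hypothetical and the crowding distance follows the usual formulation of [1].

```python
import numpy as np

def crowding_distance(obj):
    """Crowding distance of each point in an (NE, M) matrix of objective values [1]."""
    NE, M = obj.shape
    dist = np.zeros(NE)
    for m in range(M):
        order = np.argsort(obj[:, m])
        f = obj[order, m]
        span = (f[-1] - f[0]) or 1.0                  # guard against a degenerate objective
        dist[order[0]] = dist[order[-1]] = np.inf     # boundary solutions are always kept
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span  # normalized gap between neighbours
    return dist

def select_global_best(nd_obj, N):
    """Indices of the kappa-fraction of non-dominated solutions with the largest
    crowding distance, where kappa = min(NE / (N - NE), 1)."""
    NE = nd_obj.shape[0]
    kappa = 1.0 if N <= NE else min(NE / (N - NE), 1.0)
    n_best = max(1, int(round(kappa * NE)))
    return np.argsort(-crowding_distance(nd_obj))[:n_best]
```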

The evolution process creates more non-dominated solutions in the population, which then search their local space using bacterial chemotaxis. In the standard BFOA model (developed from the study of the E. coli bacterium [3]), each bacterium moves mainly by two mechanisms. One is the stumble, in which the bacterium takes a short jump (defined by the stumble length) in a random direction in the search space. The other is the run, in which a bacterium that has stumbled to a better solution continues to take short steps in that direction until the maximum number of run steps is exhausted or no better solution is obtained, whichever comes first. The following equations summarize this mechanism.

St(i) = Rd / √(Rd^T * Rd) .  (3)

Xnew(i, d) = Xold(i, d) + c(i, d) * St(i, d) .  (4)

where Rd ∈ [0, 1]^d is a d-dimensional random vector and (·)^T denotes its transpose. St(i) is a random unit vector giving the stumble direction of the i-th bacterium, and c(i, d) is the stumble step size of the i-th bacterium in the d-th dimension. In MOBSO, an independent stumble by a bacterium is accepted only if it leads to a run; otherwise the stumble is discarded.
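A simplified Python sketch of this stumble-and-run mechanism is given below; it accepts moves with a scalar improvement test for brevity (MOBSO applies the mechanism within a dominance-based setting), and the function and parameter names are assumptions, with the run limit matching Nc = 4 from Table 1.

```python
import numpy as np

def chemotaxis_step(x, fitness, step, max_run=4):
    """One stumble-and-run local search per Eqs. (3)-(4) for a single bacterium.

    x       : current position (d-dimensional array)
    fitness : scalar objective to minimize (a simplification of MOBSO's acceptance rule)
    step    : stumble step size c(i, d)
    max_run : maximum chemotactic run count Nc
    """
    rd = np.random.rand(x.size)            # Rd: random vector with components in [0, 1]
    direction = rd / np.sqrt(rd @ rd)      # Eq. (3): unit stumble direction St(i)
    x_try = x + step * direction           # Eq. (4): stumble
    if fitness(x_try) >= fitness(x):       # a stumble is kept only if it starts a run
        return x
    x = x_try
    for _ in range(max_run - 1):           # run: keep stepping in the same direction
        x_try = x + step * direction
        if fitness(x_try) >= fitness(x):
            break
        x = x_try
    return x
```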

The computational complexity of the proposed approach is O(kMN^2), where k ∝ NE/N, so the average k is kept below 1. Since the number of non-dominated solutions required is essentially specified in advance by the user, increasing the population size N does not demand a corresponding increase in NE in the final non-dominated set. Hence, for large N, NE/N << 1, and taking k ≈ NE/N, the complexity becomes O(MN^(2−ε)), where 0 < ε < 1.

3 Experimental Results

To study the performance of MOBSO, the authors have taken several benchmark functions from [1, 10, 11]. These are unconstrained MOO problems, bi-objective in particular, with search spaces of varying size. Table 1 lists the parameters chosen for simulation; to maintain consistency, they were kept constant throughout the simulations. Two performance metrics have been used in this study, namely the Generational Distance metric (γ) and the Inverted Generational Distance metric (IGD) [1]. The comparative performance of MOBSO and NSGA-II with respect to γ is given in Table 2, and Table 3 gives the IGD values for selected problems from [11]. In all cases, the first row gives the mean value of the measure over 30 runs and the second row gives the standard deviation (SD).
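For reference, both metrics can be computed as in the Python sketch below, assuming the obtained front and a sampled true Pareto front are supplied as arrays of objective vectors; this is the commonly used averaged minimum-distance form, and the function names are illustrative.

```python
import numpy as np

def generational_distance(front, true_front):
    """gamma: mean distance from each obtained point to its nearest true-front point."""
    d = np.linalg.norm(front[:, None, :] - true_front[None, :, :], axis=2)
    return d.min(axis=1).mean()

def inverted_generational_distance(front, true_front):
    """IGD: mean distance from each true-front point to its nearest obtained point."""
    d = np.linalg.norm(true_front[:, None, :] - front[None, :, :], axis=2)
    return d.min(axis=1).mean()
```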


Table 1. Simulation parameters for MOBSO

Algorithm parameters used in the simulation:
Population size = 100
Maximum chemotactic run count, Nc = 4
Acceleration constant, C1 = 0.5
Inertia weight, w = 0.9
Maximum number of iterations (generations) = 250
Maximum fitness function evaluations = 10,000
Minimum fitness function evaluations = 1,000
Scaling factor (µ) = 0.01
Stopping fraction of non-dominated solutions (NE/N) = 0.8
Number of runs per benchmark problem = 30

Table 2. Comparative results for the γ metric (mean and SD over 30 runs)

Problem                 SCH     FON     KUR     ZDT1    ZDT2    ZDT3    ZDT4    ZDT5

γ metric
NSGA-II (Real Coded)
  Mean                  0.003   0.002   0.0289  0.0334  0.0723  0.1145  0.5131  0.2965
  SD                    0.0     0.0     0.0     0.0048  0.0317  0.0079  0.1184  0.0131
MOBSO
  Mean                  0.001   0.000   0.0006  0.0025  0.0015  0.0014  0.4208  0.0014
  SD                    0.0     0.0     0.0     0.0     0.0     0.0     0.0008  0.0

Table 3. Results of the IGD metric on selected problems from the CEC 2009 special session [11]

Problem        UF1     UF2     UF3     UF4     UF5     UF6     UF7

IGD metric
MOBSO
  Mean         0.0265  0.0271  0.2316  0.0445  0.3732  0.6700  0.0784
  SD           0.0035  0.0009  0.0822  0.0062  0.1202  0.1552  0.0020

4 Concluding Remarks

This study introduces a hybrid swarm strategy to search simultaneously for local and global solutions in the problem space. The MOBSO method gives a diverse Pareto-optimal front with a reduced computational burden within a limited number of fitness function evaluations. For problems with large search spaces, MOBSO can be implemented with O(kMN^2) complexity, with k < 1. It has been found to be competitive with benchmark algorithms in this domain. The results encourage further examination of the parameters of MOBSO, along with its scalability and convergence, in order to make them problem-space independent in the future.


References

1. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, Chichester (2001)

2. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)

3. Passino, K.M.: Biomimicry of Bacterial Foraging for Distributed Optimization and Control. IEEE Control Systems Magazine 22(3), 52–67 (2002)

4. Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, pp. 39–43 (1995)

5. Nebro, A.J., Durillo, J.J., Garcia-Nieto, J., Coello Coello, C.A., Luna, F., Alba, E.: SMPSO: A New PSO-based Metaheuristic for Multi-objective Optimization. In: IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73 (2009)

6. Zhang, X.-h., Meng, H.-y., Jiao, L.-c.: Intelligent Particle Swarm Optimization in Multiobjective Optimization. In: The 2005 IEEE Congress on Evolutionary Computation, vol. 1, pp. 714–719 (2005)

7. Guzmán, M.A., et al.: A novel multi-objective optimization algorithm based on bacterial chemotaxis. Engineering Applications of Artificial Intelligence (2009), doi:10.1016/j.engappai.2009.09.010

8. Biswas, A., Dasgupta, S., Das, S., Abraham, A.: Synergy of PSO and bacterial foraging optimization: a comparative study on numerical benchmarks. In: Corchado, E., et al. (eds.) Second International Symposium on Hybrid Artificial Intelligent Systems (HAIS 2007), Innovations in Hybrid Intelligent Systems. Advances in Soft Computing, vol. 44, pp. 255–263. Springer, Germany (2007)

9. Dasgupta, S., Das, S., Abraham, A., Biswas, A.: Adaptive computational chemotaxis in bacterial foraging optimization: an analysis. IEEE Transactions on Evolutionary Computation 13(4), 919–941 (2009)

10. Zitzler, E., Deb, K., Thiele, L.: Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation 8(2), 173–195 (2000)

11. Li, C., Yang, S., Nguyen, T.T., Yu, E.L., Yao, X., Jin, Y., Beyer, H.G., Suganthan, P.N.: Benchmark Generator for CEC 2009 Competition on Dynamic Optimization. Technical Report, University of Leicester, University of Birmingham, Nanyang Technological University (2008)