
MULTI-SPECIES EVOLUTIONARY ALGORITHMS FOR COMPLEX OPTIMISATION PROBLEMS

by

XIAOFEN LU

A thesis submitted to The University of Birmingham for the degree of DOCTOR OF PHILOSOPHY

School of Computer Science
College of Engineering and Physical Sciences
The University of Birmingham
November 2018


University of Birmingham Research Archive

e-theses repository This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by The Copyright Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.


ABSTRACT

Evolutionary algorithms (EAs) face challenges when applied to optimisation problems that are large-scale, multi-disciplinary, or dynamic. To address these challenges, this thesis focuses on developing specific and efficient multi-species EAs to deal with concurrent engineering (CE) problems and dynamic constrained optimisation problems (DCOPs).

The main contributions of this thesis are:

First, to achieve better collaboration among the optimisation processes of different sub-problems, it proposes two novel collaboration strategies for using cooperative co-evolution to solve two typical kinds of CE problems. Both help to obtain designs of higher quality. An effective method is also given to adjust the communication frequency among the different sub-problem optimisation processes.

Second, it develops a novel dynamic handling strategy for DCOPs, which applies

speciation methods to maintain individuals in different feasible regions. Experimental

studies show that it generally reacts faster than the state-of-the-art algorithms on a test

set of DCOPs.

Third, it proposes another novel dynamic handling strategy based on competitive co-evolution (ComC) to address fast-changing DCOPs. It employs ComC to find a promising solution set beforehand and uses it for initialisation when a change is detected. Experiments show that this strategy helps the algorithm adapt well to environmental changes, especially for DCOPs with very fast changes.


ACKNOWLEDGEMENTS

First and foremost, I would like to express my deepest gratitude to my supervisor Prof.

Xin Yao. This thesis would not have been completed without his patient supervision,

valuable guidance and substantial support. During my PhD study, his enthusiasm for research, optimism, scrupulous attention to detail, and discipline have inspired me a lot. He has always been a role model of a scholar to me. I thank him very much for the encouragement before my PhD viva.

My second thanks go to my two co-supervisors, Prof. Ke Tang from SUSTech and Dr. Stefan Menzel from the Honda Research Institute Europe, both of whom

offered me many valuable comments and insightful suggestions on both my research and

future career. This thesis would not have been completed without their careful guidance.

I am also greatly indebted to the Honda Research Institute Europe for funding my PhD

study.

Special thanks to my thesis group members, Prof. John Barnden (RSMG represen-

tative) and Dr. Ata Kaban, who have given helpful comments and valuable suggestions

in every thesis group meeting. All of these have greatly improved the thesis and my way of doing research. I thank Dr. Kaban again for being the internal examiner for my thesis and for arranging my PhD viva. I really appreciate her valuable correction advice.

I would also like to thank Prof. Shengxiang Yang for being the external examiner for

my thesis and Prof. Andrew Howes for being the chairman for my PhD viva. I thank Prof. Yang again for all the carefully marked corrections.

Luckily, I have met some excellent colleagues, senior researchers and friends during

my PhD study. I thank Dr. Haobo Fu, Ruiwen Zhang, Dr. Weiqi Chen, Dr. Di Zhang,


Liyan Song, Dr. Fengzhen Tang, Dr. Renzhi Chen, Dr. Guanbo Jia, Dr. Shuo Wang, Dr.

Leandro Minku, Mirabella, Michael, Dr. Mang Wang, Pietro, Hanno, Esra, Momodou,

Xu Wang, Dr. Dongsheng He, Dr. Miqing Li, Dr. Ke Li, Dr. Ran Cheng, Prof. Guofu

Zhang, Prof. Junfeng Chen, Prof. Yaoyao He, Tingting Yang, Ying Chen, Wenjing Hong,

Yao Zhao, Lukas, Guiying Li, Dr. Yunwen Lei, Dr. Peng Yang, Dr. Bo Yuan, and Farhad for either their discussions and suggestions on the research or their help and encouragement in daily life. I thank Chengbin Hou very much for his help before my PhD viva.

Finally, I wish to thank my great parents, my parents-in-law and my husband for their consistent support, care, encouragement and love. This thesis is dedicated to them.


CONTENTS

1 Introduction
1.1 Background
1.2 Scope of the Thesis
1.3 Research Questions
1.3.1 Collaboration Strategies among Sub-Problem Optimisation in CE
1.3.2 Niching Methods for Dynamic Constrained Optimisation
1.3.3 Competitive Co-evolution for Fast-Changing Dynamic Constrained Optimisation
1.4 Contributions of the Thesis
1.5 Organisation of the Thesis
1.6 Publications Resulting from the Thesis

2 Background and Literature Review
2.1 Problem Formulations
2.1.1 Two Types of CE Problems
2.1.2 Dynamic Constrained Optimisation Problems (DCOPs)
2.2 Introduction to EAs
2.3 Extensions to Traditional EA Model
2.3.1 Niching Techniques and Multi-Population EAs
2.3.2 Cooperative Co-evolution (CooC)
2.3.3 Competitive Co-evolution (ComC)
2.4 Issues in Cooperative Co-evolution
2.4.1 Individual Evaluation
2.4.2 Communication Frequency in CooC
2.5 Use of Evolutionary Algorithms in MDO
2.5.1 Existing MDO Methods
2.5.2 Discussions
2.6 Dynamic Constrained Optimisation
2.6.1 Dynamic Optimisation Methods
2.6.2 Dynamic Constrained Optimisation Methods
2.6.3 Discussions
2.7 Chapter Summary

3 Cooperative Co-evolution for Design Optimisation in Concurrent Engineering
3.1 Motivation
3.2 The Proposed Methods


3.2.1 A New Novelty-Driven Cooperative Co-evolution with Stochastic Selection (NDCC-SS)
3.2.2 A Novel Co-evolutionary Concurrent Design Method (CCDM) for Quasi-Separable MDO Problems
3.3 Experimental Studies
3.3.1 Case Studies
3.3.2 Overview on Compared Algorithms
3.3.3 Comparison Results
3.3.4 The Effect of Communication Frequency
3.4 Communication Frequency Self-Adaptation
3.4.1 The Proposed Self-Adaptive Method
3.4.2 Experimental Results
3.5 Chapter Summary

4 A New Speciation Method for Dynamic Constrained Optimisation
4.1 Motivation
4.2 The Proposed SELS Method
4.2.1 Deterministic Crowding (DC) and Assortative Mating (AM)
4.2.2 Feasibility Rules
4.2.3 Local Search (LS) Strategy
4.2.4 Change Detection and Diversity Introduction
4.3 Experimental Studies
4.3.1 Experimental Setup
4.3.2 Comparison Results with Existing Algorithms on 1000 FEs
4.3.3 Comparison Results on 500 FEs and 2000 FEs
4.3.4 The Performance Effect of Change Severity and lsnum
4.3.5 The Performance Effect of AM and LS
4.4 Chapter Summary

5 Competitive Co-evolution for Fast-Changing Dynamic Constrained Optimisation
5.1 Motivation
5.2 The Proposed Approach
5.2.1 General Framework
5.2.2 Competitive Co-evolutionary Search for Solution Set
5.2.3 Online Optimisation
5.3 Experimental Studies
5.3.1 Benchmark Problems
5.3.2 Overview on Compared Algorithms
5.3.3 Performance Measures
5.3.4 Parameter Settings
5.3.5 Experimental Results
5.4 Chapter Summary


6 Conclusions and Future Work
6.1 Contributions
6.1.1 Novel Collaboration Strategies When Using CooC to Solve CE Problems
6.1.2 A New Dynamic Handling Technique based on Speciation Methods for DCOPs
6.1.3 A New Dynamic Handling Technique based on ComC for Fast-Changing DCOPs
6.2 Future Work

Appendix: The Baseline UEM Design Problem Formulation

List of References


LIST OF TABLES

3.1 Range setting for motor parameters
3.2 A summary of compared methods on the test problems; here 'a' means with an archive, and 0.25, 0.50 and 0.75 mean ρ = 0.25, ρ = 0.5 and ρ = 0.75, respectively.
3.3 Comparison between CooC, NDCC-WS, NDCC-MO and NDCC-SS for single motor design problems. The minus sign (−) denotes that the method in the corresponding row is statistically worse than NDCC-SS on the test problem in the corresponding column. The best results are marked in bold.
3.4 Comparison between NDCC-SS and NDCC-SS with pn = 0.25, 0.50, 0.75 and multi-objective trade-off for single motor design problems. The best results are marked in bold.
3.5 Comparison between NDCC-SS and the methods that used the novelty calculation in [66]. The best results are marked in bold.
3.6 Comparison between the baseline method, COSSOS and CCDM for overlapping motors design. The minus sign (−) and the approximation sign (≈) denote that the compared method is statistically worse than, and similar to, CCDM, respectively. The best results are marked in bold.
3.7 Comparison between CCDM, AAO and ATC with the MO formulation on the geometric programming problem. The best result is marked in bold.
3.8 Best communication intervals of NDCC-SS on scenario 1 without communication cost
3.9 Best communication intervals of CCDM on scenario 2 and the geometric programming problem without communication cost
3.10 Best communication intervals of NDCC-SS on scenario 1 with different communication costs
3.11 Best communication intervals of CCDM on scenario 2 and the geometric programming problem with different communication costs
3.12 Comparison between the CFS method and the compared methods for NDCC-SS on scenario 1. The best results are marked in bold.
3.13 Comparison between the CFS method and the compared methods for CCDM on scenario 2 and the geometric programming problem. The best results are marked in bold.
4.1 Comparison results between DDECV, EBBPSO-T and SELS based on experimental results of DDECV and EBBPSO-T in their original papers [4] and [16], respectively. The best result obtained on each function is marked in bold.


4.2 Comparison results among eSELS and the other 4 repair algorithms based on the experimental results in their original papers.
4.3 Comparison results between DDECV and SELS based on experimental results of DDECV in the original paper [4] under change frequencies of 500 FEs and 2000 FEs. The better results obtained on each function are marked in bold.
4.4 Comparison results between DDECV+Repair and eSELS based on experimental results of DDECV+Repair in the original paper [5] under change frequencies of 500 FEs and 2000 FEs. The better results obtained on each function are marked in bold.
4.5 Comparison results among different lsnum. The best results achieved on each test function are marked in bold.
4.6 Comparison results among SELS-am-ls, SELS-ls, and SELS based on experimental results implemented on DCOP test functions. Here, +, −, and ≈ denote whether one algorithm is better, worse or equal to another according to the Wilcoxon rank-sum test with a level of 0.05.
5.1 Environmental parameters for each test function
5.2 Parameter settings for the co-evolutionary search process when comparing CCDO with SELS
5.3 Parameter settings for online optimisation when comparing CCDO with SELS
5.4 Parameter settings for the co-evolutionary search process when comparing the proposed CCDO method with LTFR-DSPSO
5.5 Parameter settings for online optimisation when comparing the proposed CCDO method with LTFR-DSPSO
5.6 Comparison results between the co-evolutionary method and the random method with fixed environments. The better result obtained on each test function is marked in bold.
5.7 Comparison results between SELS and the CCDO method under the change frequency of 1000 FEs. The better result on each test function is marked in bold.
5.8 Comparison results between the LTFR-DSPSO method and the CCDO+Repair method under the change frequency of 1000 FEs. The better result is marked in bold.
5.9 Comparison results between SELS and the CCDO method under the change frequency of 500 FEs. The better result obtained on each test function is marked in bold.
5.10 Comparison results between SELS and the CCDO method under the change frequency of 100 FEs. The better result obtained on each test function is marked in bold.
1 Values of constants


LIST OF FIGURES

2.1 A two-disciplinary system.
2.2 ATC decomposition and coordination.
3.1 Parallel cooperative co-evolutionary optimisation process.
3.2 Evolution curves of CooC, NDCC-WS-a(0.25), NDCC-MO, and NDCC-SS on the single objective case (a = 0.3) in Scenario 1.
3.3 Evolution curves of CooC, NDCC-WS-a(0.25), NDCC-MO, and NDCC-SS on the single objective case (a = 0.5) in Scenario 1.
4.1 The evolutionary difference curves of SELS between different change severity
5.1 Representation for SP and EP
5.2 Evolutionary curves of SELS and the CCDO method on the first 3 functions
5.3 Evolutionary curves of SELS and the CCDO on the last 6 test functions


List of Abbreviations

AM  Assortative mating
EA(s)  Evolutionary algorithm(s)
ATC  Analytical target cascading
ATC-MO  ATC with multi-objective formulation
CooC  Cooperative co-evolution
ComC  Competitive co-evolution
CCDM  Co-evolutionary concurrent design method
CCDO  Competitive co-evolution for dynamic optimisation
CE  Concurrent engineering
CFS  Communication frequency self-adaptation
COSMOS  Collaborative optimisation strategy for multi-objective system
COSSOS  Collaborative optimisation strategy for single-objective system
DC  Deterministic crowding
DCOP(s)  Dynamic constrained optimisation problem(s)
DO  Dynamic optimisation
DOP(s)  Dynamic optimisation problems
HyperM  Hyper-mutation
LS  Local search
MDO  Multidisciplinary design optimisation
MORDACE  Multidisciplinary optimisation and robust design approaches applied to concurrent engineering
NDCC  Novelty-driven cooperative co-evolution
NDCC-MO  NDCC with multi-objective method
NDCC-MO-a  NDCC-MO with archive
NDCC-SS  NDCC with stochastic selection
NDCC-WS  NDCC with weighted sum method
NDCC-WS-a  NDCC-WS with archive
RI  Random immigrants
SELS  Speciated evolution with local search


CHAPTER 1

INTRODUCTION

1.1 Background

Optimisation problems widely exist in the real world. Inspired by nature, evolutionary algorithms (EAs) are a class of population-based stochastic optimisation methods

[41]. Compared to traditional gradient-based optimisation methods, EAs show some spe-

cific advantages. They do not require gradient information of problems and thus can be

applied to solve non-differentiable, discontinuous, noisy, and multi-objective optimisation

problems. Moreover, they have global search ability and can be conducted in parallel.

These advantages have helped EAs attract the attention of both researchers and practitioners.

EAs have become one of the most popular optimisation techniques and have achieved

a great success on a variety of real-world applications such as engineering design [115],

financial forecasting [7], job shop scheduling [51], music composition [56], drug design [65]

and so on.

However, with the rapid development of technology, real-world optimisation problems

become increasingly complex. On one hand, optimisation problems may have a lot of deci-

sion variables or involve multiple disciplines (e.g., aerodynamics, structures in car design).

One such situation occurs in the field of concurrent engineering (CE). Modern products

and their design processes have become much more complex than before. The wide use

of CE further increases the complexity of product design. As an organisation method,


CE advocates considering all parts or all aspects of the product design simultaneously

[63, 130]. This leads to various design problems that have many design variables (if the

product to design has a complex physical structure) or require different disciplinary ex-

pertise to solve. On the other hand, many optimisation problems nowadays change over time due to dynamic environments, such as vehicle routing [44], load balancing [134], and service composition [57].

For the above-mentioned complex problems, basic EAs tend to perform poorly, and thus simply applying them cannot solve these problems. The reasons behind this are as follows. First, problems with many decision variables usually have a very large search space. When dealing with such problems, EAs cannot search efficiently due to the curse

of dimensionality. Second, for problems involving multiple disciplines, the designer might

not have access to the simulation models of each discipline, or the integration of different

analysis codes might be too difficult or costly [58, 90]. As a result, EAs cannot be directly applied to solve this kind of problem as a whole. Third, in the context of dynamic

problems, the optimum of the problem might change as time goes by. This requires EAs

to quickly find the new optimum once the problem changes. However, neither restarting the optimisation after a change nor continuing the search with the current population unchanged can react quickly to changes [93].

The difficulties in using EAs to handle these complex optimisation problems have been drawing growing interest from academia and industry. Decomposition strategies

are introduced into traditional EAs and different types of multi-species EAs appear. A

species or sub-population denotes a group of similar individuals capable of interbreeding,

but not with individuals from a different group, which is also the definition of species

in biology [72]. In this thesis, multi-species EAs are used to denote the evolutionary

approaches that explicitly or implicitly maintain multiple species (sub-populations) to

evolve different sub-components or search different regions of the search space. According

to this definition, multi-population EAs [12, 69] and co-evolutionary EAs [52, 104] are all

considered as multi-species EAs.


For problems with many decision variables or involving multiple disciplines, co-evolutionary

EAs have been widely applied to address them through decomposing the original problem

into smaller sub-problems and evolving each sub-problem with a species (sub-population)

of individuals [6, 79, 83, 127, 129]. In another context, multi-population EAs [12, 69] and

niching EAs [86, 100] have become a major technique for handling dynamic optimisation

problems (DOPs) [12, 69, 86, 100]. They can maintain population diversity by explicitly

or implicitly using multiple species (sub-populations) in the population to search different

regions of the decision space.

The basis of multi-species EAs is a decomposition of the problem or the search space. Co-evolutionary EAs depend on the decomposition of decision variables, while multi-population EAs depend on the decomposition of the search space. For some problems, an appropriate

decomposition may be known beforehand with a priori knowledge [105]. For example, in

the engineering design field, the product decomposition method usually decomposes the

design problem based on the predefined product structure and the process decomposition

method breaks up the design problem according to engineering discipline [64]. However,

in most cases, the decomposed sub-problems are usually related to each other. The

performance of a candidate solution for one sub-problem might depend on the solution

for another sub-problem, or different sub-problems need to coordinate on their shared

decision variables. The difficulty that exists in these cases is how to make different sub-

problem optimisation collaborate with each other efficiently when using multi-species

EAs.

For many other problems, we may have little a priori information about the roles or

the number of sub-problems [105]. For example, if the task is to locate different feasible

regions for a constrained optimisation problem, we probably will not know beforehand

the number of feasible regions or how to allocate individuals to search different feasible

regions. Moreover, when facing a dynamic optimisation problem, simply using multiple

species (sub-populations) to search different feasible regions of the decision space may not

be enough; a good balance between exploration and exploitation is also necessary.


The aforementioned issues pose a demand on efficient decomposition strategies and

collaboration strategies when applying multi-species EAs to address such complex op-

timisation problems. Motivated by this, the focus of this thesis is to develop specific

and efficient multi-species EAs with good decomposition and collaboration strategies to

address different complex optimisation problems. Section 1.2 will describe the scope of

optimisation problems considered in this thesis. In Section 1.3, the research questions of

this thesis and their motivations will be clearly explained. After this, the contributions

of this thesis will be outlined in Section 1.4. Then, in Section 1.5, the thesis organisation

is given and the content of each following chapter is summarised. Finally, Section 1.6 will

list the published or submitted papers resulting from the thesis.

1.2 Scope of the Thesis

The range of complex optimisation problems is broad and diverse. However, only CE

problems and dynamic optimisation problems (DOPs) are of particular interest in this

thesis. Concretely, two typical types of CE problems are considered in this thesis. The

first problem is the design of a product that has several different parts with respect to

one discipline, and the other is the quasi-separable multi-disciplinary design optimisation

(MDO) problem in which different disciplines share parts of design variables. Quasi-

separable MDO problems are frequently encountered in MDO [123] and a general MDO

problem can be easily transformed into a quasi-separable MDO problem [82]. Dynamic

optimisation problems include unconstrained and constrained problems. In view of their

high popularity in real-world applications, dynamic constrained optimisation problems

(DCOPs) are taken into consideration in this thesis.


1.3 Research Questions

As mentioned above, the central point of the thesis is to develop efficient multi-species

EAs with good decomposition and collaboration strategies to address CE problems and

DCOPs. In this section, a clear explanation of the research questions of the thesis around

this central point as well as their motivations is given.

1.3.1 Collaboration Strategies among Sub-Problem Optimisa-tion in CE

Design problems in CE usually have many decision variables or require different disci-

plinary expertise to solve. To enable concurrent design, decomposition-based optimisation

strategies are preferred in CE. In the literature, there exist three decomposition meth-

ods, i.e., product decomposition, process decomposition and problem decomposition [64].

Product decomposition is based on the predefined product structure, problem decompo-

sition depends on the dependencies among design parameters, and process decomposition

breaks up MDO problems according to engineering disciplines.

The divide-and-conquer idea behind CE is essentially similar to that of cooperative

co-evolution (CooC) [77]. CooC usually assumes that the design parameters are divided

into several disjoint parts and evolves each part with an EA [105]. Thus, CooC can

be directly applied to CE problems in which decomposition is based on the predefined

product structure and there are no shared design variables between different sub-problems.

In the literature, CooC has also been employed in the field of CE to deal with MDO

problems [17, 18, 90, 127].

However, some questions arise when using CooC to address CE problems. First, in

most cases, the decomposed sub-problems are usually related to each other, and the

performance of a candidate solution for one sub-problem depends on the solution for

another sub-problem. In such cases, it has been shown that CooC can easily converge

to mediocre stable states [49]. To alleviate this situation, researchers have proposed to


use a large number of collaborators for individual evaluation [98, 49] or employ novelty

search methods [47, 49] to compare individuals based not only on their fitness but also

their novelty. However, these existing methods are computationally inefficient and thus restrict the application of CooC in CE. This will be further discussed in Section 2.4.1 of Chapter 2.

Second, CooC assumes that the design parameters are divided into several disjoint

parts. However, such a decomposition does not necessarily hold for CE since the decom-

position is mainly motivated by the real-world applications, and it is likely that different

sub-problems share the same design parameters in MDO problems. In the above-mentioned CooC work for MDO problems [17, 18, 90, 127], to make existing CooC methods directly applicable to CE, the shared design parameters are still handled as if they were normal parameters and decomposed into several fully disjoint parts. However, such a strategy introduces additional questions, such as which shared variable should be evolved with which sub-problem, and restricts the efficiency of the whole CE procedure.

Third, as the decomposed sub-problems are usually related to each other, when using

CooC to solve CE problems, different sub-populations in CooC should communicate with

each other. Frequent communications help to ensure that the latest information from

other sub-populations is used in optimisation, but slow down the parallel CooC and thus reduce computational efficiency. Infrequent communications help to increase parallelism and

thus computational efficiency, but can lead to optimisation in a sub-population based on

out-of-date information from other sub-populations. This brings a non-trivial issue of how

often different sub-populations should communicate with each other, which is also a very

important research topic in CE. However, although some communication strategies have

been proposed in CE [62, 74, 75, 108, 131], they were mainly developed for concurrent

design of dependent tasks rather than interdependent tasks, which is the case for the two CE problems considered in this thesis. In the field of CooC, there are few studies conducted

along this direction especially for the situation in which communication incurs a cost.

For the above reasons, this thesis firstly aims to investigate CooC-based design op-


timisation in the context of CE, and study the question of how to make sub-problems

(sub-populations) collaborate with each other efficiently when using CooC to address CE

problems, which actually includes three sub-questions: how to better evaluate individ-

uals in each sub-population when using CooC to address the first type of CE problem

(sub-question 1), how to better handle the shared design variables when using CooC

to address the second type of CE problem (sub-question 2), and how often different

sub-populations should communicate with each other in the CooC-based concurrent design

optimisation (sub-question 3).

To answer these questions, in Chapter 3, a computationally efficient novelty-driven

CooC method is proposed to address the first type of CE problem. Moreover, a novel

CooC-based concurrent design method is given to address the second type of CE problem.

Chapter 3 also studies how the communication frequency among sub-populations affects

the performance of these two CooC-based methods, and introduces a self-adaptive method

to adapt the communication frequency during the optimisation process.

1.3.2 Niching Methods for Dynamic Constrained Optimisation

In the field of concurrent engineering, changes in the design objectives or constraints usu-

ally happen suddenly due to the varying customer needs, and fast response to the changes

is required [110, 130]. This gives rise to dynamic constrained optimisation problems (DCOPs) in CE. Besides, DCOPs also widely exist in other fields due to the

dynamic environments, such as vehicle routing [44], load balancing [134], service com-

position [57], etc. In a DCOP, either the objective function or the constraints, or both,

may change over time due to dynamic environments. This causes the optimum of the

problem to change as time goes by. Therefore, DCOPs require an optimisation algorithm

to quickly find the new optimum once the problem changes [93].

Addressing DCOPs needs to combine dynamic optimisation strategies and constraint

handling techniques. However, directly combining them may not perform effectively.

In [94], the authors combined two popular dynamic strategies, random-immigrants (RI)


[50] and hyper-mutation (HyperM) [22], separately with a constraint handling method,

the penalty function [87], to address DCOPs. However, it has been found that both

RI and HyperM are not so effective in solving DCOPs as in solving unconstrained or

bound-constrained dynamic optimisation problems. The reasons are as follows. Although

RI/HyperM generates and introduces some random solutions into the current population after a change is detected, most of them are infeasible and thus rejected by the penalty function used. As a result, these methods cannot perform effectively when facing DCOPs in which the global optimum moves from one feasible region to another disconnected one, as this kind of DCOP needs an infeasible path connecting two disconnected feasible regions.

Therefore, the authors in [94] suggested using constraint handling techniques that can

accept diversified infeasible solutions or tracking moving feasible regions when dealing

with DCOPs. In the literature, researchers have carried out some work to allow diver-

sified infeasible solutions distributed in the whole search space. For example, the repair

method [112] was applied to handle constraints in DCOPs in [5, 92, 97]. In [15, 16],

constraint handling techniques that accept both feasible and infeasible solutions are em-

ployed. However, repair methods usually need a lot of feasibility checks. The use of constraint handling techniques that accept both feasible and infeasible solutions might not react quickly when the global optimum moves to another disconnected feasible region, especially when the feasible region in which the new global optimal solution is located is far away.

Intuitively, if the optimisation algorithm already has individuals in a feasible region,

it will react quickly when the global optimum moves to this feasible region. One method

to implement this is to locate multiple feasible regions. The location of multiple feasible

regions can also maintain the diversity of the population, which can help the algorithm react quickly to other types of dynamic changes. However, little work has been done along this direction before 2016.

To locate multiple feasible regions, individuals in the population need to be assigned

to search different parts of the search space. This brings an issue of how to divide the


search space and allocate individuals to different feasible regions. As the feasible regions in

DCOPs might move due to environmental changes, an optimisation algorithm for DCOPs

should also be able to track moving feasible regions. Although these issues have been

rarely studied in DCOPs, how to locate multiple optima has been studied a lot in the

field of multi-modal optimisation [70, 118, 128]. One of the major existing techniques

is the use of niching methods [113] which explicitly or implicitly maintain multiple sub-

populations to search different promising regions of the decision space. In the field of

unconstrained DOPs or DCOPs with bounded constraints, niching EAs have also been

applied to locate multiple optima and track multiple moving optima [12, 69, 86, 100].

Motivated by this, the thesis secondly aims to apply niching methods to address DCOPs and study the question of whether niching methods can help better solve DCOPs.

Research questions of how to locate multiple feasible regions in a DCOP and how to track

moving feasible regions in a DCOP are studied. As niching methods focus on exploration,

newly changed optima might not be found quickly as promising regions are not exploited

sufficiently. Therefore, the question of how to make a good trade-off between exploration

and exploitation is also investigated.

To close this research gap and answer these questions, Chapter 4 introduces a speciation-

based EA to address the challenges of DCOPs. It employs deterministic crowding and

assortative mating to allocate individuals to search different feasible regions, and applies

a local search strategy to promote exploitation of the promising regions. To track moving

feasible regions, it uses the simple and parameter-free feasibility rules [85] which prefer

feasible solutions to infeasible ones to deal with constraints, and adds random solutions

into the population to introduce diversity once a change is detected.

1.3.3 Competitive Co-evolution for Fast-Changing Dynamic Con-strained Optimisation

Dynamic optimisation problems require an optimisation algorithm to quickly find the new

optimum once the problem changes. Thus, a good dynamic optimisation algorithm should


satisfy either of the following two conditions:

1. It is able to track the moving optimum once the problem changes.

2. It can obtain good initial individuals once the problem changes.

Existing dynamic optimisation approaches can be categorised according to the two con-

ditions. Diversity-driven approaches (e.g., diversity introducing or maintaining methods

and multi-population methods [93]) satisfy the first condition. Prediction approaches and

memory approaches satisfy the second condition.

In Section 1.3.2, the idea of addressing DCOPs by locating and tracking feasible solutions aims to satisfy the first condition. However, this approach may not have enough time to locate good solutions when the problem changes rapidly. Fast-changing DCOPs need optimisation algorithms that satisfy the second condition. In the literature, only the prediction

approaches [37, 38] and memory approaches [107] that have been developed for DCOPs

satisfy the second condition. However, they are inappropriate when a DCOP is not cyclic or predictable. Moreover, as the time for optimisation is generally very short, the memory approaches do not have optimal solutions to archive, and thus the prediction approaches do not have useful samples for accurate prediction, even when the DCOP to be solved is cyclic or predictable. This motivates the thesis to further address DCOPs with fast changes

from the aspect of satisfying the second condition.

Intuitively, suppose a set of good solutions is already obtained and stored in the system

before it goes online, and for any possible change induced by the environments, one of the

solutions can be efficiently modified to get a good solution. Then, the second condition

would be satisfied, and the DOP or DCOP in the online phase would not be as challenging

as we expected. Such an idea of preparing beforehand is just like what is done in reality.

In most real-world scenarios, one may have a fairly long time to improve a system (and

the algorithm behind it) before putting it into full use. For example, a company may

spend months to polish its backend system before launching a new service composition

system, and once the system is put online, it may need to react to changes on a daily


basis. Considering these, this thesis thirdly studies the questions of whether identifying

a set of promising solutions offline could be beneficial to solving DCOPs online and how to obtain such a solution set.

To search a set of promising solutions for a DCOP beforehand, both the solution

space and the environment space need to be considered. One naive method is to find

the optimal solution under each environment. However, this process is computationally intractable.

An alternative method is to sample the environments. To have a solution set of good

coverage on all environments, the sampled environment each time should be the one that

challenges the current solution set most. Moreover, the solution set needs to be updated to

conquer the newly sampled environments. The relationship between the solution set and

environments in this way is similar to the relationship between the host population and the

parasite population in competitive co-evolution (ComC) [52, 109]. Inspired by this,

this thesis employs ComC to search a set of promising solutions for DCOPs, and studies

the question of whether decomposing DCOPs into solution population and environment

population and co-evolving them using ComC can find a better solution set.

To answer the aforementioned questions, Chapter 5 introduces a new dynamic handling

strategy, competitive co-evolution for DCOPs, which searches for a set of good solutions using ComC in an offline manner, and conducts online optimisation by using this set for initialisation each time a change is detected. This new method is tested on a DCOP

benchmark and compared to existing methods. Furthermore, ComC is compared to a

random sampling method in searching the solution set.

1.4 Contributions of the Thesis

This thesis tries to close the research gaps and answer the research questions listed in Sec-

tions 1.3.1 to 1.3.3. The contributions of the thesis are summarised as follows. They will

be discussed in more detail in Chapter 6.

• A new novelty-driven cooperative co-evolution (CooC) algorithm, which uses a com-


putationally efficient novelty calculation and applies a stochastic selection strategy

to decide whether to evaluate individuals based on their fitness or novelty. The pro-

posed algorithm is designed for the first type of CE problems as defined in Eq.(2.1).

In addition to its computational efficiency, it is shown to obtain designs of higher

quality on the single universal electric motor (UEM) design problem. This is an

answer to the sub-question 1 in Section 1.3.1, and can be found in Chapter 3.

• A novel CooC-based concurrent design method for quasi-separable MDO problems,

which enables concurrent design by using consistency constraints to deal with com-

mon variables among different disciplines and stochastic ranking method [111] to

handle the constraints. It achieves designs of higher quality in comparison to other

MDO methods on a general MDO problem and the design of multiple UEMs that

have common design variables. This is an answer to the sub-question 2 in Sec-

tion 1.3.1, and can be found in Chapter 3.

• A systematic study on how the communication frequency among sub-problem opti-

misation affects the final design in CE. The optimal communication frequencies un-

der different communication costs are reported for the two above-mentioned CooC-

based concurrent design methods. An effective self-adaptive method is then given

for them to adapt the communication frequency during optimisation. This is an

answer to the sub-question 3 in Section 1.3.1, and can be found in Chapter 3.

• A new approach to deal with environmental changes in DCOPs, which uses spe-

ciation methods to locate and track moving feasible regions. This approach can

react faster when the global optimum moves from one feasible region to another

disconnected feasible region in comparison to other approaches. This is an answer

to questions in Section 1.3.2, and can be found in Chapter 4.

• A novel strategy to deal with environmental changes in fast-changing DCOPs, which

uses competitive co-evolution (ComC) to find a set of good solutions beforehand and

uses it for initialisation once an environmental change happens. It is shown to react


more quickly to environmental changes especially for DCOPs with fast changes

through empirical studies. This is an answer to questions in Section 1.3.3, and can

be found in Chapter 5.

• A wider application of multi-species EAs. As far as we know, for the first time,

this thesis investigates the use of speciation methods in the context of DCOPs, and

attempts to employ ComC to address fast-changing DCOPs. This thesis widens the

application scope of multi-species EAs on complex optimisation problems.

1.5 Organisation of the Thesis

The remaining content of the thesis is organised as follows. Chapter 2 reviews the basic

information of multi-species EAs and existing research work related to this thesis. The

purpose of this chapter is to present a general review of what has been done in terms of

solving CE problems and DCOPs using EAs and multi-species EAs, and demonstrate the

challenges that still exist.

Chapter 3 introduces two new concurrent design methods based on the parallel CooC

framework to address two kinds of CE problems. Additionally, Chapter 3 studies how the

communication frequency among sub-populations affects the performance of the proposed

CooC methods, and introduces a self-adaptive method to adapt the communication fre-

quency during optimisation. Experiments conducted on universal electric motor (UEM)

design problems and a geometric programming problem are detailed in this chapter.

Chapter 4 details a new speciation-based method, SELS, to address DCOPs. The

SELS method is tested on DCOP benchmark problems and compared to several state-

of-the-art methods. Experimental results are shown in this chapter.

Chapter 5 presents a new dynamic optimisation approach, CCDO, to deal with DCOPs

from a different aspect. The CCDO method is tested on DCOP benchmark test problems and

experimental studies are presented to show its efficiency.

Finally, in Chapter 6, contributions of the thesis are summarised and future research


directions are suggested.

1.6 Publications Resulting from the Thesis

The published or submitted papers resulting from the thesis are listed as follows.

1. Refereed or Submitted Journal Papers

[1]. X. Lu, K. Tang, B. Sendhoff and X. Yao, “A review of concurrent optimisation

methods”, Int. J. of Bio-Inspired Computation, Vol.6, No.1, pp.22 - 31, 2014.

[2]. X. Lu, S. Menzel, K. Tang and X. Yao, “Cooperative co-evolution based design opti-

misation: a concurrent engineering perspective”, IEEE Transactions on Evolutionary

Computation, Vol. 22, Issue 2, pp. 173-188, 2018.

[3]. X. Lu, K. Tang and X. Yao, “Competitive co-evolution for dynamic constrained

optimisation”, submitted to IEEE Transactions on Evolutionary Computation.

2. Refereed Conference Papers

[4]. X. Lu, S. Menzel, K. Tang and X. Yao, “The performance effects of interaction

frequency in parallel cooperative coevolution”, In: Dick G. et al. (eds) Simulated

Evolution and Learning. SEAL 2014. Lecture Notes in Computer Science, vol 8886.

Springer, Cham.

[5]. X. Lu, K. Tang, X. Yao, “Speciated evolutionary algorithm for dynamic constrained

optimisation”, In: Handl J., Hart E., Lewis P., López-Ibáñez M., Ochoa G., Paechter B.

(eds) Parallel Problem Solving from Nature PPSN XIV. PPSN 2016. Lecture Notes

in Computer Science, vol 9921. Springer, Cham.


The following lists the published or submitted papers that are (partly)

presented in each chapter of the thesis:

• Chapter 2: publication [1]

• Chapter 3: publication [2,4]

• Chapter 4: publication [5]

• Chapter 5: publication [3]


CHAPTER 2

BACKGROUND AND LITERATURE REVIEW

This chapter introduces the background knowledge and reviews existing research work

related to this thesis. Section 2.1 gives the problem formulations for the two typical

types of concurrent engineering (CE) problems and dynamic constrained optimisation

problems (DCOPs) that are considered in this thesis. In Section 2.2, an introduction to

evolutionary algorithms (EAs) is given. Section 2.3 reviews different types of EA exten-

sions which include niching EAs, multi-population EAs, cooperative co-evolution (CooC)

and competitive co-evolution (ComC). In Section 2.4, the issues of individual evaluation

and communication frequency in CooC as well as existing methods are introduced and

discussed. In Section 2.5, a review of using EAs to address multi-disciplinary design op-

timisation (MDO) problems is presented, and Section 2.6 gives an overview of existing

dynamic constrained optimisation methods. Finally, Section 2.7 summarises this chapter.

2.1 Problem Formulations

2.1.1 Two Types of CE Problems

In this thesis, two types of complex design problems, which are frequently encountered in

CE, are considered. They are:


1. Mono-disciplinary product design with multiple parts,

2. Quasi-separable MDO problems.

For the first type of design problems, assuming the product is composed of m parts

and f denotes the performance metric of the design, the problem formulation can be given

as follows:

$$\min_{\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m} f(\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m) \qquad (2.1)$$

where xi denotes the design parameter vector related to the i-th part of the product

(i = 1, 2, ...,m). Note that we assume throughout this thesis that the decomposition

of the product is known beforehand and there are no shared design variables between

different subproblems.

For the second type of design problems, different disciplines aim to find a complete

design that improves different physical aspects of a product. A quasi-separable MDO

problem [123] with m disciplines can be formulated as:

$$\min_{\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m,\mathbf{z}} \sum_{i=1}^{m} f_i(\mathbf{x}_i,\mathbf{z}) \qquad (2.2)$$

where (x1,x2, ...,xm, z) denotes the design parameter vector of the entire design. z is the

shared design parameter vector among disciplines. xi denotes the local design parameter

vector with respect to the i-th discipline and fi denotes the objective of the i-th discipline

(i = 1, 2, ...,m). The final design from each discipline must be consistent on z. Note that

we assume throughout this thesis that the shared design variables are known beforehand

for this kind of problem.
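To make the structure of Eq. (2.2) concrete, the following minimal Python sketch encodes a hypothetical two-discipline quasi-separable problem. The quadratic objectives, variable dimensions and values are illustrative assumptions only, not a design problem studied in this thesis.

```python
import numpy as np

# Toy quasi-separable MDO problem with m = 2 disciplines, in the form of Eq. (2.2).
# Each discipline i has its own local variables x_i; both share the vector z.
# The quadratic objectives below are hypothetical and purely illustrative.

def f1(x1, z):
    # Discipline 1 prefers x1 close to 1 and z close to 0.5.
    return np.sum((x1 - 1.0) ** 2) + np.sum((z - 0.5) ** 2)

def f2(x2, z):
    # Discipline 2 prefers x2 close to -1 and z close to 2.
    return np.sum((x2 + 1.0) ** 2) + np.sum((z - 2.0) ** 2)

def total_objective(x1, x2, z):
    # Overall design quality is the sum of the disciplinary objectives.
    # Both disciplines must use the same z, i.e. be consistent on the shared variables.
    return f1(x1, z) + f2(x2, z)

if __name__ == "__main__":
    x1 = np.zeros(3)  # local variables of discipline 1
    x2 = np.zeros(2)  # local variables of discipline 2
    z = np.ones(2)    # shared design variables
    print(total_objective(x1, x2, z))
```

In this toy setting the two disciplines pull the shared vector z in different directions; coordinating such shared variables between disciplines is the difficulty that the consistency-constraint-based method of Chapter 3 is designed to handle.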


2.1.2 Dynamic Constrained Optimisation Problems (DCOPs)

Without loss of generality, the dynamic constrained optimisation problems considered in

this thesis have the following formulation:

$$\begin{aligned} \min_{\mathbf{x}} \quad & f(\mathbf{x}, \boldsymbol{\alpha}(t)) \\ \text{subject to:} \quad & g_i(\mathbf{x}, \boldsymbol{\alpha}(t)) \le 0, \quad i = 1, 2, \ldots, k \end{aligned} \qquad (2.3)$$

where x denotes the decision variable vector, k denotes the number of constraints, α(t)

is the vector of environmental parameters. They vary at a certain frequency as time goes

by. This kind of dynamic problem requires an optimisation method to quickly find the new optimal solution each time the problem changes.
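As an illustration of Eq. (2.3), the sketch below defines a hypothetical two-dimensional DCOP in which a single environmental parameter α(t) moves both the objective and the feasible region over time. The sinusoidal α(t) and the ball-shaped constraint are assumptions made purely for illustration.

```python
import numpy as np

# Toy DCOP in the form of Eq. (2.3): the objective and the single constraint
# both depend on an environmental parameter alpha(t), so the optimum and the
# feasible region move as t advances. All functions are hypothetical.

def alpha(t, period=10.0):
    # Environmental parameter: drifts periodically over time.
    return 2.0 * np.sin(2.0 * np.pi * t / period)

def objective(x, a):
    # f(x, alpha(t)): distance of x from the moving point (a, a).
    return np.sum((x - a) ** 2)

def constraint(x, a):
    # g(x, alpha(t)) <= 0: x must lie inside a unit ball centred at (a, 0),
    # so the feasible region translates along the first axis as alpha changes.
    centre = np.array([a, 0.0])
    return np.sum((x - centre) ** 2) - 1.0

def is_feasible(x, a):
    return constraint(x, a) <= 0.0

if __name__ == "__main__":
    x = np.array([0.5, 0.0])
    for t in range(6):
        a = alpha(t)
        print(t, round(float(objective(x, a)), 3), is_feasible(x, a))
```

A fixed solution that is feasible at one time step can become infeasible a few steps later, which is why a DCOP solver has to re-locate or re-enter feasible regions after every change.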

2.2 Introduction to EAs

Evolutionary algorithms (EAs), which originated in the 1960s, are a class of stochastic optimisation methods inspired by natural evolution [40, 41, 36]. They maintain a population of individuals, each of which denotes a candidate solution to the optimisation problem, and improve the population through selecting the fittest ones and generating offspring generation by generation. For single-objective optimisation problems, the fitness of an individual is usually set to the objective function value of the individual (if it is a maximising problem) or the negative of the objective function value (if it is a minimising problem).

Without loss of generality, assume a single-objective optimisation problem has the

following formulation:

$$\min_{\mathbf{x}} f(\mathbf{x}) \qquad (2.4)$$

where x = (x1, x2, ..., xn) denotes the vector of decision variables and n is the number of decision variables. The range of each xi is [Li, Ui], i.e. xi ∈ [Li, Ui]. f : Ω ⊆ ℝⁿ → ℝ is the objective function of the problem. Note that a maximising optimisation problem can be easily transformed into a minimising optimisation problem by multiplying the objective


function f with −1.

The process of using an EA to address this optimisation problem can be found in Algorithm 1. The EA begins with a population of randomly generated candidate solutions, P_G = {x_{i,G} | i = 1, 2, ..., NP} (G = 0). Here, G denotes the generation number, NP is the population size, and each x_{i,G} denotes a candidate solution to the optimisation problem in Eq. (2.4). After initialisation, the EA iteratively uses selection, recombination, and mutation operations at each generation G to evolve the population until a stopping criterion is met. The best individual x_{i,G} in the final population is the output.

Algorithm 1 The Framework of EA

1: Initialise a population P_G = {x_{i,G} | i = 1, 2, ..., NP}
2: Evaluate P_G with the objective function f
3: while the stopping criterion is not met do
4:   Select parents P_par from P_G based on the fitness of each x_{i,G}
5:   Conduct the recombination operation on P_par to get P_off
6:   Conduct the mutation operation on P_off to get P'_off
7:   Evaluate P'_off with the objective function f
8:   Select NP individuals P_S from P_G ∪ P'_off according to the individuals' fitness
9:   Set P_{G+1} = P_S
10:  Set G = G + 1
11: end while
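The Python sketch below is one possible instantiation of Algorithm 1 for a real-coded minimisation problem. The specific operators (binary tournament selection, arithmetic recombination, Gaussian mutation, elitist survivor selection) and the sphere test function are illustrative choices, not the particular operators used later in this thesis.

```python
import numpy as np

# A minimal real-coded EA following the structure of Algorithm 1:
# initialise -> evaluate -> (select parents -> recombine -> mutate ->
# evaluate offspring -> survivor selection) until the budget is spent.

def sphere(x):
    return np.sum(x ** 2)  # toy minimisation problem

def simple_ea(f, dim=10, lower=-5.0, upper=5.0, pop_size=30,
              generations=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))  # step 1
    fit = np.array([f(ind) for ind in pop])                # step 2

    for _ in range(generations):                           # step 3
        # Step 4: binary tournament selection of parents (lower fitness wins).
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Step 5: arithmetic recombination between randomly paired parents.
        mates = parents[rng.permutation(pop_size)]
        w = rng.random((pop_size, 1))
        offspring = w * parents + (1.0 - w) * mates
        # Step 6: Gaussian mutation, clipped to the box constraints.
        offspring = np.clip(offspring + rng.normal(0.0, sigma, offspring.shape),
                            lower, upper)
        # Step 7: evaluate the offspring.
        off_fit = np.array([f(ind) for ind in offspring])
        # Steps 8-10: keep the best pop_size individuals of parents + offspring.
        merged = np.vstack([pop, offspring])
        merged_fit = np.concatenate([fit, off_fit])
        best = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[best], merged_fit[best]

    return pop[np.argmin(fit)], fit.min()

if __name__ == "__main__":
    best_x, best_f = simple_ea(sphere)
    print("best objective value:", best_f)
```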

Note that most EAs follow a similar process as in Algorithm 1. Different EAs differ

from each other in the use of different recombination, mutation or selection operators.

The most popular EAs include genetic algorithm (GA) [55, 29], evolutionary strategy

(ES) [10, 8], evolutionary programming (EP) [42], genetic programming (GP) [61], ant

colony optimisation (ACO) [23], particle swarm optimisation (PSO) [35, 102], differential

evolution (DE) [120, 121], and so on. Each of them has been studied a lot and has various

algorithm variants nowadays.

Compared to traditional optimisation methods, e.g. gradient-based methods, hill climbing, and simulated annealing, EAs show some specific advantages. First, they search with a population of individuals, aiming to explore the search space globally. Second, they use the fitness of individuals to guide the search direction, and thus can be applied to non-differentiable, discontinuous, or black-box optimisation problems. Third, the


individual-based evolutionary operations and evaluations enable the parallel execution of an EA. Therefore, EAs have become one of the most popular optimisation techniques and have been applied to various real-world applications, e.g. engineering design [115], financial forecasting [7], job shop scheduling [51], music composition [56] and drug design [65].

2.3 Extensions to Traditional EA Model

However, real-world optimisation problems have become increasingly complex, and traditional EA models are often not adequate to deal with them. For example, when an optimisation problem involves many decision variables, optimising all the decision variables as a whole might not be the best approach due to the curse of dimensionality. Also, many engineering optimisation problems require locating multiple optima in the search space, but traditional EA models tend to converge to only one solution due to the genetic drift caused by the selection operator. To better address these problems, various extensions to the basic EAs have been proposed. The following subsections introduce different types of EA extensions.

2.3.1 Niching Techniques and Multi-Population EAs

A niche in optimisation commonly refers to an area of the fitness landscape where only one peak resides [72]. Niching methods are developed to reduce the effect of genetic drift in traditional EAs by altering the selection operator to provide selection pressure within, but not across, regions of the search space [118]. In this way, niching methods can maintain individuals in different regions and thus permit EAs to locate multiple optima in parallel. In the literature, there exist various ways of implementing niching.

One classic niching method is fitness sharing [46], which makes individuals share their fitness with nearby individuals and uses the shared fitness in place of the original fitness


for selection. The shared fitness for an individual in fitness sharing is defined as:

\[
f_{\mathrm{shared}}(i) = \frac{f_{\mathrm{original}}(i)}{n_i}, \quad \text{where } n_i = \sum_{j=1}^{N} Sh(d_{ij}) \qquad (2.5)
\]

where f_original(i) represents the original fitness of the i-th individual, and Sh(d_{ij}) denotes the sharing function:

\[
Sh(d_{ij}) =
\begin{cases}
1 - \left( \dfrac{d_{ij}}{\delta_{\mathrm{share}}} \right)^{\alpha} & \text{if } d_{ij} < \delta_{\mathrm{share}} \\
0 & \text{otherwise}
\end{cases}
\qquad (2.6)
\]

Here, α and δ_share denote the scaling factor and the niche radius, respectively. Through the use of fitness sharing, the growth of the number of individuals near one peak is restricted and individuals are encouraged to cluster around different peaks (niches).
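As an illustration of Eqs. (2.5) and (2.6), the following Python sketch computes the shared fitness of a population of real-valued individuals whose fitness is to be maximised. The Euclidean distance and the values of α and δ_share are illustrative assumptions.

import math

def sharing(d, delta_share, alpha=1.0):
    """Sharing function Sh(d) in Eq. (2.6)."""
    return 1.0 - (d / delta_share) ** alpha if d < delta_share else 0.0

def shared_fitness(pop, fitness, delta_share=0.5, alpha=1.0):
    """Return the shared fitness (Eq. 2.5) of every individual in pop."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    shared = []
    for i, xi in enumerate(pop):
        # niche count n_i: how crowded the region around xi is
        n_i = sum(sharing(dist(xi, xj), delta_share, alpha) for xj in pop)
        shared.append(fitness[i] / n_i)   # n_i >= 1 because Sh(0) = 1
    return shared

# Individuals in a crowded region have their fitness reduced more strongly.
pop = [[0.0], [0.1], [3.0]]
print(shared_fitness(pop, [1.0, 1.0, 1.0]))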

Another classic method is crowding, originally proposed by De Jong in [29], which performs replacement between similar individuals. In crowding, for each newly generated offspring, a sample of individuals is taken from the current population and the individual most similar to the offspring is replaced. To reduce replacement errors and eliminate the setting of the sample size, Mahfoud further improved crowding and proposed deterministic crowding (DC) in [80]. Algorithm 2 gives the pseudo-code of DC.

Algorithm 2 Deterministic Crowding [80]

1: Randomly pair all individuals in the population
2: for each pair of individuals, p1 and p2, do
3:   Generate two offspring, o1 and o2, based on EA operators
4:   if dist(p1, o1) + dist(p2, o2) ≤ dist(p1, o2) + dist(p2, o1) then
5:     p1 = fitter(p1, o1)
6:     p2 = fitter(p2, o2)
7:   else
8:     p1 = fitter(p1, o2)
9:     p2 = fitter(p2, o1)
10:  end if
11: end for

The DC method pairs all population elements randomly and generates two offspring for each pair based on EA operators. Selection then operates on these four individuals,


and a similarity measure is used to decide which offspring competes against which parent.

The offspring will replace the compared parent and enter the next generation if it is fitter.

Mengshoel et al. further improved DC by employing a probabilistic replacement strategy in [84]. Suppose individuals u and v are competing against each other; in the probabilistic replacement, the probability of u winning and replacing v is set to:

\[
p_u = \frac{f_u}{f_u + f_v} \qquad (2.7)
\]

where f_u and f_v are the fitness of u and v, respectively.
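The following Python sketch illustrates one generation of deterministic crowding (Algorithm 2) combined with the probabilistic replacement rule of Eq. (2.7). The variation operator crossover_mutate is a hypothetical placeholder, and fitness is assumed to be positive and maximised.

import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def prob_replace(parent, child, f):
    """Child replaces parent with probability f(child)/(f(child)+f(parent))."""
    p = f(child) / (f(child) + f(parent))
    return child if random.random() < p else parent

def crowding_generation(pop, f, crossover_mutate):
    random.shuffle(pop)                       # pair individuals randomly
    new_pop = []
    for i in range(0, len(pop) - 1, 2):
        p1, p2 = pop[i], pop[i + 1]
        o1, o2 = crossover_mutate(p1, p2)     # generate two offspring
        # pair each offspring with its most similar parent
        if dist(p1, o1) + dist(p2, o2) <= dist(p1, o2) + dist(p2, o1):
            pairs = [(p1, o1), (p2, o2)]
        else:
            pairs = [(p1, o2), (p2, o1)]
        # replacement: plain DC would keep the fitter of each pair;
        # here the probabilistic rule of Eq. (2.7) is used instead
        new_pop.extend(prob_replace(par, off, f) for par, off in pairs)
    if len(pop) % 2 == 1:
        new_pop.append(pop[-1])               # odd leftover survives unchanged
    return new_pop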

The authors in [70] proposed a species conservation method which partitions the population into a set of species, each dominated by its best individual, which is marked as the species seed. At the beginning, this method sorts the population from the best to the worst. Then, for each individual that does not belong to any species, the distance between this individual and the seed of each existing species is calculated. If the distance to some seed is less than the pre-defined species radius, the individual is assigned to that species; otherwise, the individual founds a new species. In later generations, to conserve species, each species seed is compared with the worst offspring individual that lies within the species radius of the seed and has not been replaced by any species seed before, and the species seed replaces it if the seed is better. If no such offspring individual exists, the species seed replaces the worst offspring individual that has not been replaced by any species seed before. In [71], the

authors further combined this species partition method with PSO. In this species-based

PSO method, the population is partitioned into species at every generation. Each species

seed is used as the local best for individuals in the species and replaces the global best

in PSO to generate offspring individuals. This species-based PSO method has been later

applied to address dynamic optimisation problems [100].
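The species-partition step that underlies these speciation-based methods can be sketched as follows in Python: individuals are sorted by fitness and each one either joins the species of the nearest seed within the species radius or founds a new species. The Euclidean distance and the parameter names are illustrative assumptions.

import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def partition_into_species(pop, fitness, radius):
    """Return a list of species, each a dict with a 'seed' and its 'members'."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i], reverse=True)
    species = []
    for i in order:                      # from the best to the worst
        for sp in species:
            if dist(pop[i], sp["seed"]) < radius:
                sp["members"].append(pop[i])
                break
        else:                            # no seed is close enough: new species
            species.append({"seed": pop[i], "members": [pop[i]]})
    return species

# Two clusters of points yield two species, each seeded by its fittest member.
pop = [[0.0], [0.2], [5.0], [5.1]]
fit = [1.0, 0.9, 0.8, 1.2]
print([sp["seed"] for sp in partition_into_species(pop, fit, radius=1.0)])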

Other niching techniques include the clearing method, clustering methods, restricted tournament selection, and so on. More information about them can be found in [72, 118].


Note that niching EAs were originally devised to maintain the population diversity of EAs and were later extended to deal with multi-modal optimisation problems [72]. Furthermore, niching EAs have since been extended to deal with dynamic optimisation problems, as they can maintain population diversity and permit tracking moving optima. A recent survey in this direction can be found in [69].

2.3.2 Cooperative Co-evolution (CooC)

The performance of EAs deteriorates as the dimension of the problem increases [95]. To give EAs better scalability to high-dimensional problems, Potter and De Jong proposed cooperative co-evolution (CooC). As a biological concept, co-evolution refers to the phenomenon in which two or more species evolve simultaneously and affect each other's evolution through coupled fitness. The interaction of different species in co-evolution can be cooperative (e.g. mutualism) or competitive (e.g. predator and prey, host and parasite). Algorithm 3 shows the framework of CooC.

CooC was inspired by the cooperative interaction between different species. In CooC, the decision variables are first decomposed into several sub-components. Then, these sub-components are evolved in different sub-populations, each with a specified EA. Fitness evaluation for each sub-population individual is carried out by combining it with representative individuals (usually the current best individuals) from the other sub-populations. The whole optimisation process of CooC is decomposed into several cycles. In every cycle, each sub-population is evolved with a specified EA for a fixed number of generations. At the end of each cycle, the sub-populations exchange the best individuals in their current populations, which are used as representative individuals to evaluate the individuals in the other sub-populations in the next cycle.


2.3.3 Competitive Co-evolution (ComC)

Competitive co-evolution was proposed to model the competitive interaction between

different species in biology (e.g. predator and prey, host and parasite) [109]. In [52], ComC was first applied to search for a good set of test cases for sorting networks. In the software engineering field, finding a good test suite is as important as developing the software itself. The work in [52] treated the sorting networks as the host population and the test suite as the parasite population. The fitness of each sorting network in the host population is set to the number of test samples in the parasite population it correctly sorts, and the fitness of each test sample in the parasite population is set to the number of sorting networks in the host population that cannot correctly sort this test sample. In this way, the sorting networks and the test suite are evolved simultaneously.

Algorithm 3 The Framework of CooC

1: Initialise P_{1,0}, P_{2,0}, ..., P_{m,0} for m sub-components
2: Randomly combine P_{1,0}, P_{2,0}, ..., P_{m,0} into a big population P
3: Evaluate the population P with the objective function
4: Select the best individual in P, denoted as C = (x_1^best, x_2^best, ..., x_m^best)
5: Set cycle = 0
6: while cycle < the maximum number of cycles do
7:   for each P_{i,cycle} do
8:     Set G = 0 and P_{i,G} = P_{i,cycle}
9:     while G < the maximum number of generations do
10:      Conduct evolutionary operation on P_{i,G} to generate P_{i,G+1}
11:      for each x_{i,G+1}^j in P_{i,G+1} do
12:        Set C_i = (x_1^best, x_2^best, ..., x_{i,G+1}^j, ..., x_m^best)
13:        Evaluate x_{i,G+1}^j by applying the objective function f to C_i
14:      end for
15:      Set G = G + 1
16:    end while
17:    Set x_i^best as the best individual in P_{i,G}
18:    Set P_{i,cycle} = P_{i,G}
19:  end for
20:  Set C = (x_1^best, x_2^best, ..., x_m^best)
21:  Set cycle = cycle + 1
22: end while
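The following Python sketch illustrates the cycle structure of Algorithm 3 for two sub-components of a real-valued minimisation problem. For readability the two sub-populations are updated one after the other within a cycle, and the toy mutation-only EA inside evolve_subpop is an illustrative assumption rather than a prescribed sub-optimiser; all names are hypothetical.

import random

def evolve_subpop(subpop, evaluate, generations=10, sigma=0.1):
    """Evolve one sub-population with a toy mutation-only hill-climbing EA."""
    for _ in range(generations):
        for i, ind in enumerate(subpop):
            child = [v + random.gauss(0, sigma) for v in ind]
            if evaluate(child) < evaluate(ind):
                subpop[i] = child
    return min(subpop, key=evaluate)          # collaborator for the next cycle

def cooc(f, dim1, dim2, pop_size=20, cycles=10):
    pop1 = [[random.uniform(-5, 5) for _ in range(dim1)] for _ in range(pop_size)]
    pop2 = [[random.uniform(-5, 5) for _ in range(dim2)] for _ in range(pop_size)]
    best1, best2 = pop1[0], pop2[0]           # initial representatives
    for _ in range(cycles):
        # each sub-population is evaluated against the other's representative
        best1 = evolve_subpop(pop1, lambda x1: f(x1, best2))
        best2 = evolve_subpop(pop2, lambda x2: f(best1, x2))
    return best1, best2, f(best1, best2)

# Example: minimise a separable quadratic split into two sub-components.
print(cooc(lambda a, b: sum(v * v for v in a + b), dim1=3, dim2=3))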

Algorithm 4 gives the competitive co-evolutionary framework for two populations,

host population Ph and parasite population Pp. In Algorithm 4, the two populations are


evolved with specific EAs, and f denotes the fitness function. In this algorithm, the fitness of an individual in one population is evaluated as the number of individuals in the other population it can defeat. Through such a competition-and-evolution pattern, both the host population and the parasite population are expected to improve.

Algorithm 4 The Framework of ComC

1: Initialise host population P_{h,G} and parasite population P_{p,G}
2: Set G = 0
3: while G < the maximum number of generations do
4:   for each x_i^h in P_{h,G} do
5:     Set count = 0
6:     for each x_j^p in P_{p,G} do
7:       if x_i^h beats x_j^p then
8:         count = count + 1
9:       end if
10:    end for
11:    Set f(x_i^h) = count
12:  end for
13:  Select parent population P_{h,par} from P_{h,G}
14:  Do evolutionary operation on P_{h,par} to get P_{h,G+1}
15:  for each x_i^p in P_{p,G} do
16:    Set count = 0
17:    for each x_j^h in P_{h,G} do
18:      if x_i^p beats x_j^h then
19:        count = count + 1
20:      end if
21:    end for
22:    Set f(x_i^p) = count
23:  end for
24:  Select parent population P_{p,par} from P_{p,G}
25:  Do evolutionary operation on P_{p,par} to get P_{p,G+1}
26:  Set G = G + 1
27: end while
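The following Python sketch illustrates the competitive evaluation loop of Algorithm 4. The predicate beats and the variation step vary are hypothetical placeholders; in the sorting-network example, beats(h, p) would correspond to network h sorting test case p correctly.

import random

def competitive_fitness(attackers, defenders, beats):
    """Fitness of each attacker = number of defenders it beats (Algorithm 4)."""
    return [sum(1 for d in defenders if beats(a, d)) for a in attackers]

def comc(hosts, parasites, beats_host, vary, generations=50):
    for _ in range(generations):
        # hosts are scored against the current parasites, and vice versa;
        # a parasite "beats" a host exactly when the host fails against it
        host_fit = competitive_fitness(hosts, parasites, beats_host)
        para_fit = competitive_fitness(parasites, hosts,
                                       lambda p, h: not beats_host(h, p))
        hosts = vary(hosts, host_fit)
        parasites = vary(parasites, para_fit)
    return hosts, parasites

def vary(pop, fit):
    """Toy variation: keep the better half and duplicate it with small mutation."""
    ranked = [x for _, x in sorted(zip(fit, pop), key=lambda t: -t[0])]
    half = ranked[: len(pop) // 2]
    return half + [[v + random.gauss(0, 0.1) for v in x] for x in half]

# Example: hosts and parasites are scalars wrapped in lists; a host beats a
# parasite when its value exceeds the parasite's value (a simple arms race).
hosts = [[random.uniform(0, 1)] for _ in range(10)]
parasites = [[random.uniform(0, 1)] for _ in range(10)]
hosts, parasites = comc(hosts, parasites, lambda h, p: h[0] > p[0], vary)
print(max(h[0] for h in hosts), max(p[0] for p in parasites))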

The fitness of individuals in ComC is subjective. Therefore, ComC is suitable for problems without a known fitness function or for which the fitness of an individual is very hard to compute. In addition to sorting networks, the idea of ComC has been extensively applied to solve real-world problems such as the iterated prisoner's dilemma [20, 27] and playing chess games [19]. However, one requirement that restricts the application range of ComC is that the problem needs to be hand-decomposed into two antagonistic sub-components


before using ComC [105].

2.4 Issues in Cooperative Co-evolution

2.4.1 Individual Evaluation

Cooperative co-evolution encounters several issues because the fitness of individuals in each sub-population depends highly on the representative individuals (also referred to as collaborative individuals) exchanged from the other sub-populations. It can easily be misled by the chosen collaborators and get trapped in suboptimal equilibrium states in which changing any single team member results in lower performance [49]. Moreover, CooC tends to identify local optima that have large basins of attraction [14]. However, these local optima may not correspond to global optima. This is known as relative overgeneralisation [99].

To prevent CooC from prematurely converging to mediocre stable states, it has been found that each individual in CooC may need to be evaluated with a large number of collaborators [98, 49]. However, this increases the cost of evaluating each individual and is thus inefficient, especially when CooC involves more than three sub-populations. To address this issue, researchers in [49, 47] have proposed considering not only the fitness but also the novelty of individuals when performing selection in each sub-population.

Novelty search is a recently proposed evolutionary approach to solving deceptive problems [68]. In the original novelty search method, individuals are scored based on their behavioural novelty rather than their fitness. This scheme makes the evolution continuously explore regions with behavioural innovation rather than converging to one region. A novelty metric was proposed in [66] to measure how far an individual is from other individuals in the behaviour space, which calculates the novelty score of each individual as follows:

\[
nov(\mathbf{x}) = \frac{1}{k} \sum_{i=1}^{k} dist(\mathbf{x}, \boldsymbol{\mu}_i) \qquad (2.8)
\]

where µi is the i-th nearest neighbour of the individual x according to the distance


metric dist. The neighbours include the other individuals in the current population and

optionally archived past individuals. In previous studies, the archive is composed of the

most novel individuals [66, 73] or stochastically selected individuals from every generation

[67]. This kind of novelty-based optimisation has shown better performance than the

fitness-based optimisation methods in many different applications.
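As an illustration of Eq. (2.8), the following Python sketch computes the novelty of an individual as the mean distance to its k nearest neighbours among the current population and an optional archive. The Euclidean behaviour distance is an illustrative assumption.

import math

def novelty(x, population, archive=(), k=5):
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    others = [y for y in list(population) + list(archive) if y is not x]
    nearest = sorted(dist(x, y) for y in others)[:k]
    return sum(nearest) / len(nearest)

# Individuals in sparsely populated regions of the behaviour space score higher.
pop = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [4.0, 4.0]]
print([round(novelty(x, pop, k=2), 3) for x in pop])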

The use of novelty as the sole criterion does not always lead to satisfactory outcomes. It has been found that selecting individuals based only on novelty scores does not necessarily lead to a high average fitness [26]. In the literature, researchers have combined novelty and fitness objectives to score individuals and have shown through experimental studies that this is a more effective way to make use of novelty [49]. Different combination methods have been proposed in the literature [26, 48, 88]. In [26], the authors achieved the combination through a linearly weighted sum of the two, as shown below:

\[
score(\mathbf{x}) = (1 - \rho) \cdot f_{\mathrm{norm}}(\mathbf{x}) + \rho \cdot nov_{\mathrm{norm}}(\mathbf{x}) \qquad (2.9)
\]

where ρ controls the relative importance of fitness and novelty and is kept fixed during the optimisation process, and f_norm(x) and nov_norm(x) denote the normalised fitness and novelty, respectively. The normalisation proceeds according to:

\[
f_{\mathrm{norm}}(\mathbf{x}) = \frac{f(\mathbf{x}) - f_{\min}}{f_{\max} - f_{\min}}, \qquad nov_{\mathrm{norm}}(\mathbf{x}) = \frac{nov(\mathbf{x}) - nov_{\min}}{nov_{\max} - nov_{\min}} \qquad (2.10)
\]

where fmin and fmax are the lowest and highest fitness in the current population, respec-

tively; novmin and novmax are the corresponding lowest and highest novelty scores. In

[88], a multi-objective evolutionary algorithm, NSGA-II [31], was employed to balance

between the novelty and fitness objectives. The weighted sum method and the multi-

objective combination method were applied in CooC in [47] and [49], respectively.
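The weighted combination of Eqs. (2.9) and (2.10) can be sketched as follows in Python. The min-max normalisation is guarded against degenerate populations in which all values are equal; the parameter values are illustrative assumptions.

def normalise(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def combined_scores(fitness, novelty, rho=0.5):
    f_norm, n_norm = normalise(fitness), normalise(novelty)
    return [(1.0 - rho) * f + rho * n for f, n in zip(f_norm, n_norm)]

# rho = 0 selects purely on fitness, rho = 1 purely on novelty.
print(combined_scores([1.0, 2.0, 3.0], [0.9, 0.1, 0.5], rho=0.3))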

However, these novelty-driven CooC methods have some disadvantages. First, the time complexity of calculating the novelty for a whole population according to Eq. (2.8) without considering archived individuals is O(NP² · complexity(dist) + NP² log(NP))


(NP denotes the population size), which is not computationally efficient. Second, using either a fixed weight or a multi-objective method might not be the best way to trade off exploration and exploitation, as it might be better to shift the emphasis between exploration and exploitation as the evolution proceeds.

2.4.2 Communication Frequency in CooC

Another issue in CooC is the setting of the interaction frequency among different sub-populations. The interaction frequency denotes the maximum number of evolution generations before different sub-populations exchange collaborative individuals with each other. In the literature, one systematic study investigating the performance effects of the interaction frequency was conducted by Popovici and De Jong in [103]. In that paper, the sequential update scenario of CooC was studied, in which sub-populations take turns in evolution. That is, during each cycle, only one sub-population is active and the others are frozen. At the end of each cycle, the sub-population that was evolved communicates its best individual to the other frozen sub-populations, and then they switch roles.

By using different cycle sizes and analysing the dynamics of the best individuals, the work in [103] showed that the performance effect of the interaction frequency depends on a problem property called best-response curves. It also provided some insight into how the performance changes as the cycle size increases for different kinds of best-response curves. However, as this work considered the sequential scenario of CooC, the communication cost between different sub-populations was not considered. The knowledge learned in this study cannot be applied to the use of parallel CooC, in which different sub-populations are evolved simultaneously.

Considering this, we have conducted a study on the performance effect of the communication frequency in a parallel CooC in [76]. It is found that the best communication frequency changes when the problem and the communication cost change. Moreover, the best communication frequency of the parallel CooC framework might change during the optimisation process. Therefore, it is not trivial to find a good communication


frequency for a new problem. How to set the communication frequency in parallel CooC

is still worth studying.

2.5 Use of Evolutionary Algorithms in MDO

A general MDO problem involves multiple disciplines that share some design variables and are coupled by coupling variables. Fig. 2.1 shows a two-disciplinary system. In this figure, x1 and x2 denote the local variable vectors for disciplines 1 and 2, respectively; z stands for the common variable vector; f1 and f2 denote the corresponding objective functions of disciplines 1 and 2; g1 and g2 stand for the constraints of disciplines 1 and 2, respectively; f_system and g_system are the system objective and constraint functions, respectively, where f_system is a function of f1 and f2. Disciplines 1 and 2 are coupled by the coupling variables, y12 and y21. They are the outputs of the corresponding analyses of disciplines 1 and 2, and are needed in calculating the objective and constraint functions of disciplines 2 and

1, respectively. In this thesis, we focus on MDO problems that are coupled through

only shared design variables, i.e., quasi-separable MDO problems, which are frequently

encountered in MDO and the formulation for which is given in Eq. (2.2).

Figure 2.1: A two-disciplinary system.


2.5.1 Existing MDO Methods

An MDO problem can be solved as a single optimisation problem. However, this approach requires a high degree of integration between different disciplines, which is impractical in some cases, e.g., when each disciplinary analysis is developed under a different specialised computer code and it is hard to combine these codes [123]. To better handle MDO problems in such situations, researchers have developed distributed optimisation methods based on decomposition, which allow disciplinary autonomy. Since EAs can solve non-differentiable and multimodal optimisation problems, they are increasingly employed as optimisers in MDO. Existing EA-based distributed optimisation methods for MDO include the MORDACE method (multidisciplinary optimisation and robust design approaches applied to concurrent engineering) [45], the COSMOS method (collaborative optimisation strategy for multi-objective systems) [106], and the ATC method (analytical target cascading) with the multi-objective formulation [89]. The following paragraphs explain each of these methods in more detail.

In the MORDACE method [45], each disciplinary optimisation is performed indepen-

dently. When disciplinary optimisation is finished, the MORDACE method employs a

compromise method on the common decision variables. As changes in common variable

values due to compromise will make each disciplinary performance worse, a robust design

approach is employed in each disciplinary optimisation to alleviate this. In the robust

design approach, in addition to disciplinary objectives, disciplinary optimisation also aims

to minimise the sensitivity of performance values to variations of common variables.

The COSMOS method [106] tries to solve multi-objective problems in a multidis-

ciplinary context. It uses a nested decomposition and includes two-level optimisation:

supervisor-level and disciplinary-level optimisation. The supervisor level optimises the

common variables and provides the values of common variables to the disciplinary level.

The disciplinary level optimises the disciplinary variables based on the given common

variable values. It then returns the best function value to the supervisor level, which will

then be used as the performance value of the given common variables. It can be seen


that the optimisation in the disciplinary level aims to calculate the fitness of the given

common variable values, and thus is a nested optimisation. In the COSMOS method, the

supervisor level employs a MOGA to search for the best common variable values based on the performance values returned from the disciplinary-level optimisation.

The ATC method [89] is a hierarchical multi-level methodology. It propagates system

targets through a hierarchical structure and minimises the unattainability when the tar-

gets are unattainable [82]. When using ATC to solve the MDO problem in Fig. 2.1, copies

of common design variables and coupling variables as well as consistency constraints are

created to make the common variables and coupling variables consistent in the final de-

sign. ATC uses a penalty function to deal with the consistency constraints. Fig. 2.2

illustrates the decomposition and coordination of ATC in solving the problem in Fig. 2.1.

Figure 2.2: ATC decomposition and coordination.

In this figure, T denotes the design targets. z^u denotes the duplicate common variables for the system-level optimisation; z^{l1} and z^{l2} are for the disciplinary optimisations (z^u = z^{l1} = z^{l2} at optimality). y_{12}^u and y_{21}^u, and y_{12}^l and y_{21}^l are the duplicate coupling variables for the system-level and disciplinary optimisations, respectively (y_{12}^u = y_{12}^l and y_{21}^u = y_{21}^l at optimality). The


system-level and discipline-level optimisations are carried out in turn in a loop. When the system-level optimisation is completed, the optimal f_1^u, f_2^u, z^u, y_{12}^u and y_{21}^u are passed to the discipline-level optimisation, which in turn passes the optimal f_1^l, f_2^l, z^{l1}, z^{l2}, y_{12}^l and y_{21}^l back to the system-level optimisation. Then, another loop begins. In the study in [89],

the penalty function in the system optimisation in ATC was transformed into a multi-

objective formulation as shown in Eq. (2.11), and the NSGA-II [31] was applied to solve

the multi-objective problem.

\[
\begin{aligned}
\min \quad & \|f_{\mathrm{system}} - T\|_2^2 \\
\min \quad & \|f_1^u - f_1^l\|_2^2 + \|f_2^u - f_2^l\|_2^2 \\
\min \quad & \|z^u - z^{l1}\|_2^2 + \|z^u - z^{l2}\|_2^2 + \|y_{12}^u - y_{12}^l\|_2^2 + \|y_{21}^u - y_{21}^l\|_2^2 \\
\text{subject to:} \quad & g_{\mathrm{system}}
\end{aligned}
\qquad (2.11)
\]

A cooperative co-evolutionary algorithm was employed to address MDO problems

[90, 17, 18, 127]. In these studies, the common variables of an MDO problem are de-

composed into disjoint sets and assigned to different disciplines. The coupling variables

are handled with duplicate variables or using the implicit iteration strategy [90]. These approaches require deciding how the common variables should be decomposed, but methods for making such decisions have not been developed yet. In [91], the authors suggested decomposing

common variables based on their main effects on the disciplinary objectives via orthogonal

array-based experimental design before starting optimisation.

2.5.2 Discussions

The aforementioned MDO methods have the following disadvantages. First, the MOR-

DACE method needs to spend a lot of function evaluations in calculating the sensitivity

of performance values to variations of common variables. Second, for each common vari-


able value given by the supervisor level, the COSMOS method needs to restart a new

disciplinary-level optimisation. This slows down the concurrent design process. The ATC method uses a point-based iterative process, so it can easily converge to local optima and is highly sensitive to the numerical inaccuracy of solutions communicated among the subsystems [124]. The existing CooC methods designed for MDO problems need to decompose the common variables, but a good decomposition might not exist or may require very costly experimental design to find. Thus, efficient MDO methods are still needed in the field

of CE.

Through decomposition, the original design problem in MDO is divided into several smaller sub-problems, which can be solved in a concurrent fashion. However, as the sub-problems usually depend on each other, they cannot be solved in isolation but need to collaborate with each other. In this collaboration, timely information exchange is always needed between the optimisations of different sub-problems. However, communication usually has a cost, which may differ between problems in practice. As a result, when to exchange information is not easy to determine. In the field of CE, how to set the communication frequency among different subtasks is a very active topic [62, 74, 75, 108, 131], but the communication frequency issue among different disciplinary designs has not been studied in the literature.

2.6 Dynamic Constrained Optimisation

2.6.1 Dynamic Optimisation Methods

Dynamic constrained optimisation problems belong to the domain of dynamic optimisation problems, which comprise dynamic unbounded and dynamic constrained optimisation problems. The challenges of dynamic optimisation problems lie in two aspects. Firstly, restarting the optimisation after a change happens often needs a long time to find the new optimum and is thus impractical. Secondly, continuing the search with the cur-


rent population without any change is inefficient as the current population might partially

converge and thus cannot react quickly to changes.

Considering these, researchers have proposed different approaches to handle dynamic

optimisation problems. Generally, these approaches can be grouped into three categories:

diversity-driven approaches, memory approaches and prediction approaches. Diversity-

driven approaches mainly include introducing diversity after a change happens, main-

taining diversity during search, and multi-population approaches that maintain multiple

populations to locate and track multiple optima simultaneously [93]. Memory approaches store and reuse good-performing solutions from previous searches. They are effective in cyclic environments. Prediction approaches predict new optimal solutions and use the pre-

dicted optimal solutions for initialisation once the problem changes. They are suitable for

predictable problems. Although a lot of studies have been done in the field of dynamic optimisation, most of them aim at dynamic unbounded optimisation problems or dynamic optimisation problems with only bound constraints. As the majority of dynamic optimisation problems in the real world are dynamic constrained optimisation problems (DCOPs) and DCOPs have their own characteristics, DCOPs have attracted researchers' attention recently.

2.6.2 Dynamic Constrained Optimisation Methods

Dynamic constrained optimisation problems have constraints beyond simple bound constraints, and changes can occur in their objectives, constraints or both. Therefore, the distribution of feasible/infeasible regions in a DCOP might change during the optimisation process, and the global optimum might appear in a newly feasible area without the existing optimum changing, or switch from one feasible region to another disconnected one [93].

These raise new challenges. It has been shown in [93] that simple diversity main-

taining/introducing strategies (e.g. random-immigrants (RI) [50] and hyper-mutation

(HyperM) [22]) become less effective when combined with penalty functions that prefer

feasible solutions to infeasible ones. The reasons are as follows. Although RI and Hy-


perM introduce/generate some random solutions, most of them are likely to be rejected by the penalty function as the majority of them are infeasible. When faced with DCOPs in which the global optimum moves from one feasible region to another disconnected one, the RI and HyperM methods cannot perform effectively, as this kind of problem needs an infeasible path connecting two disconnected feasible regions.

To address this problem, the authors in [94] suggested using constraint handling tech-

niques that can maintain diversified infeasible solutions or tracking the moving feasible

regions instead of tracking the moving existing optima. Existing dynamic constrained optimisation methods have mainly been developed along this direction. In addition, dynamic opti-

misation methods like memory methods and prediction methods have also been modified

to address DCOPs. In this section, the existing dynamic constrained optimisation meth-

ods are categorised into three classes: diversity-driven approaches, memory approaches

and prediction approaches. Each of them will be introduced in the following part of this

section.

Diversity-driven Approaches

To maintain diversified infeasible solutions, researchers have used constraint handling

techniques that accept both feasible and infeasible individuals. In [92], the authors used

the repair method [112] along with RI and HyperM, respectively. For an infeasible in-

dividual, the repair method first repairs it to obtain a feasible individual and then uses

the fitness of the obtained feasible individual as the fitness of the infeasible individual. In

this case, infeasible solutions that can be repaired into good feasible solutions can survive the selection process. Similar repair methods were also used in [96] and [5]. However, as the repair method needs to repair every infeasible solution into a feasible one, it needs to check the feasibility of every repaired solution and thus usually requires a lot of feasibility checks. As a result, the use of repair methods is not a good choice when the feasible regions of the problem are very small or the feasibility checking is very costly.

Alternatively, the authors in [15] employed a simple ranking scheme proposed in [54].


In this scheme, an individual is ranked separately with respect to its objective function value, its sum of constraint violations and its number of violated constraints. The sum of all three ranks for an individual is used for comparison when there exist both feasible and infeasible individuals in the population. Through the use of this scheme, infeasible solutions with a small degree of constraint violation and a small objective function value can be retained. To maintain population diversity, the authors in [15] also

proposed to switch between a global search operator and a local search one according to

whether the diversity degree is larger than a threshold. In [16], this work was further

improved by using a more complex function of the three ranks to evaluate individuals and

employing Shannon’s index of diversity as a factor to balance the influence of global-best

and local-best search directions.
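The triple-rank comparison described above can be sketched as follows in Python. The scheme ranks each individual separately by objective value, total constraint violation and number of violated constraints (all to be minimised) and sums the three ranks; breaking ties by index is a simplification made purely for illustration.

def rank(values):
    """Rank position (0 = best) of each value; smaller value means better rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = pos                  # ties are broken by index for simplicity
    return ranks

def triple_rank(objectives, violation_sums, violation_counts):
    r_obj = rank(objectives)
    r_sum = rank(violation_sums)
    r_cnt = rank(violation_counts)
    return [a + b + c for a, b, c in zip(r_obj, r_sum, r_cnt)]

# Smaller summed rank means a better individual under this scheme.
print(triple_rank([0.1, 0.9, 0.5], [0.05, 0.0, 0.0], [1, 0, 0]))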

As mentioned in Section 2.3.1, niching EAs have been widely applied to track moving

optima in addressing DOPs [69, 12, 86, 100]. However, little work had been done along the direction of tracking moving feasible regions before 2016. To locate and track feasible regions, this thesis proposes a speciation-based method, called speciated evolution with local search (SELS) [78]. The SELS method utilises deterministic crowding [80] to make comparisons among similar individuals and assortative mating [28] to induce speciation. In this way, good solutions can be maintained in different feasible regions

and thus feasible regions can be located and tracked. More details can be found in

Chapter 4. Later, the authors in [13] specifically used multiple sub-populations to locate

and track multiple feasible regions. The whole population is divided into multiple sub-

populations which are evolved simultaneously. The authors also employed a gradient-

based repair method to accelerate the location of feasible regions and adaptive local

search to exploit found feasible regions.

Memory Approaches

The authors in [107] first proposed to use a memory approach for DCOPs and modified

the abstract memory method to make it fit for DCOPs. In this work, two memories were


built to represent the distribution of previously good individuals and feasible regions.

When using these two memories, two schemes, blending and censoring, were considered. In the blending scheme, some solutions are generated from each of the two memories and all of them are introduced into the population. In the censoring scheme, more solutions than needed are generated from the memory of good individuals and are then censored by the memory of feasible regions, which selects the solutions with the largest probability of being feasible. Only the selected solutions are introduced into the

population. The experimental results in [107] showed that the modified memory approach

with the censoring scheme can help to improve the performance for DCOPs with dynamic

fitness functions and static constraints.

To address DCOPs, the authors in [13] also introduced a memory strategy to track previously found feasible regions. This strategy maintains two memories. One memory archives previously found local optima and the other saves previously found best solutions. Both memories are divided into species. In the re-initialisation process, a random particle from each species is retrieved so as to guarantee the diversity of the retrieved individuals.

Prediction Approaches

In [37], the authors combined the infeasibility driven evolutionary algorithm (IDEA) with

a prediction method to solve DCOPs. The IDEA is an algorithm designed to deal with

static constrained optimisation problems. To approach the constraint boundary from both

feasible and infeasible sides, IDEA maintains a certain fraction of infeasible solutions in the population by selecting the best part of both the feasible solutions and the infeasible solutions to enter the next generation. The prediction method employed in [37] assumes that the change patterns of the optima locations can be fitted by an autoregressive model. Under this assumption, the AutoRegressive Integrated Moving Average (ARIMA) method was applied to predict the location of future optima based on the best individuals obtained from previous generations. Solutions generated from the prediction method are used

to replace individuals in the current population. This work was further improved in [38]


by reducing the use of memory and using a new anticipation mechanism.

Different from predicting locations of future optima, the work in [13] introduced a

species-based prediction method to predict the locations of future feasible regions. Each

species is considered as a feasible region and the prediction method uses the species seed

as a representative of the species. Based on the species seeds from past generations, this

work aims to predict locations of multiple feasible regions in parallel under the assumption

that each feasible region moves linearly. Concretely, suppose that Mem(t − 1) denotes

the species seeds at time t− 1 and Mem(t) denotes the species seeds at time t. For each

cm in Mem(t), this method considers the nearest particle pm in Mem(t − 1) to cm as coming from the same moving feasible region. Then, for each pair <pm, cm>, the predicted location of that feasible region at time t + 1 is obtained by linear extrapolation as cm + (cm − pm). These predicted locations are used to replace the individuals in the original population once a change is detected.
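Under the linear-motion assumption, this species-based prediction step can be sketched as follows in Python. The Euclidean distance used for matching seeds is an illustrative assumption.

import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def predict_feasible_regions(mem_prev, mem_curr):
    """Predict seed locations at time t+1 from seeds at times t-1 and t."""
    predictions = []
    for cm in mem_curr:
        pm = min(mem_prev, key=lambda p: dist(p, cm))   # matched past seed
        # linear extrapolation: next = current + (current - previous)
        predictions.append([c + (c - p) for c, p in zip(cm, pm)])
    return predictions

# A region that moved from (0, 0) to (1, 0) is predicted to reach (2, 0).
print(predict_feasible_regions([[0.0, 0.0]], [[1.0, 0.0]]))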

2.6.3 Discussions

As introduced above, some dynamic constrained optimisation methods have already been developed for DCOPs. However, the use of repair methods to accept diversified infeasible solutions usually needs a lot of feasibility checks. The use of constraint handling techniques that accept both feasible and infeasible solutions might not react quickly when the global optimum moves to another disconnected feasible region, especially when the feasible region in which the new best solution is located is far away. Memory approaches and prediction approaches are not appropriate when the DCOPs are neither cyclic nor predictable. Therefore, efficient dynamic constrained optimisation methods are still required for DCOPs.

Furthermore, in the field of dynamic optimisation, DOPs with very fast changes are

the most challenging problems. When the changes happen very fast, the existing diversity-driven approaches, memory approaches and prediction approaches cannot work well. The diversity-driven approaches do not have enough time to find a satisfactory solution. As


the time for optimisation is very short, the memory approaches cannot obtain optimal solutions to archive, and thus the prediction approaches cannot obtain useful samples for accurate prediction. However, none of the above-mentioned methods aims to deal with fast-changing DCOPs.

2.7 Chapter Summary

In this chapter, the background knowledge of EAs and multi-species EAs has been given. Three types of multi-species EAs that are employed in this thesis have been introduced in detail, together with the types of optimisation problems each of them aims to solve. The existing co-evolution issues, including individual evaluation and communication frequency in CooC, have been discussed. The research gaps in addressing CE problems with EAs and in existing dynamic constrained optimisation methods have been analysed.


CHAPTER 3

COOPERATIVE CO-EVOLUTION FOR DESIGN OPTIMISATION IN CONCURRENT ENGINEERING

In Chapter 2, different types of multi-species EAs (i.e. multi-population EAs, cooperative

co-evolution (CooC), and competitive co-evolution (ComC)) and the types of optimisation

problems they are suitable to solve were reviewed. In this chapter, we present two new

concurrent design methods based on parallel CooC to address the two types of concurrent

engineering (CE) problems defined in Section 2.1 in Chapter 2. Before introducing the

proposed co-evolutionary concurrent design methods, in Section 3.1, we first introduce

the motivation of this work. Then, Section 3.2 will detail the proposed co-evolutionary

methods. In Section 3.3, the comparison results between the proposed methods and

existing methods will be presented on the universal electric motor (UEM) design problems

and a commonly used MDO test problem. Then, the best communication frequencies

between different sub-populations in the proposed methods with/without communication

costs will be studied. Section 3.4 will describe a self-adaptive method for adapting the

best communication frequencies and report the comparison results. Finally, Section 3.5

will summarise this chapter and give conclusions.


3.1 Motivation

As a well-known engineering practice, concurrent engineering (CE) considers all elements

involved in a product’s life cycle (from functionality, producibility to maintenance issues,

and finally disposal and recycling) from the early stages of product development, and

advocates executing all design tasks in parallel or with some overlaps [63, 130]. This

contrasts with traditional sequential engineering that conducts different tasks separately

and at different times. Consequently, in CE, errors can be discovered earlier when the

design is still flexible, and laborious re-designs can thus be avoided [2, 131].

However, CE also increases the complexity of the design problem. Since it considers all

parts or all aspects of the product design simultaneously, there exist two types of complex

optimisation problems in the field of CE. The first type of design optimisation problem

involves many design variables and the second type requires different disciplinary expertise

to solve. To enable concurrent design, decomposition-based optimisation strategies are

preferred in CE. The original problem is usually divided into small sub-problems so that

each sub-problem can be solved individually and simultaneously.

In the literature, there are three popular decomposition methods in CE, i.e., product

decomposition, process decomposition and problem decomposition [64]. Product decom-

position depends on the predefined product structure, and problem decomposition is based

on the dependencies between design parameters. Process decomposition is often applied in

multi-disciplinary design optimization (MDO) problems where different disciplines share

design variables and are usually coupled by coupling variables. To retain disciplinary

autonomy, process decomposition methods break up the design problem according to

engineering disciplines.

As discussed in Section 2.5 in Chapter 2, CE could benefit from evolutionary compu-

tation techniques in many aspects. For instance, when CE involves sub-problems with

non-convex or non-differentiable objective functions, EAs can be used as the sub-problem

solver in the framework of collaborative optimisation [33, 132] and ATC [89, 133]. More-

over, the divide-and-conquer idea behind CE is essentially similar to that of CooC which


decomposes the decision variables into several disjoint sub-components and evolves each

sub-component with an EA. Thus, CooC can be directly applied to CE problems in which

decomposition is based on the predefined product structure and there are no shared design

variables between different sub-problems. In the literature, CooC has also been employed

in the field of CE to deal with MDO problems [17, 18, 90, 127].

However, some issues arise when using CooC to address CE problems. As discussed in

Section 1.3.1 in Chapter 1, first, CooC can easily converge to mediocre stable states [49].

Although researchers have proposed different individual evaluation methods to alleviate

this situation as reviewed in Section 2.4.1, the existing methods are computationally

inefficient, which restricts the application of CooC in CE. Second, how to handle the shared design variables when applying CooC to MDO problems is not well dealt with. Third, the communication frequency between different sub-populations can have a big effect on the performance of CooC, but few studies have been conducted to answer how to set the communication frequency in CooC, especially when

communication incurs a cost in a concurrent design environment.

Motivated by the potential of CooC for CE, this chapter aims to investigate CooC-based design optimisation methods in the context of CE. Concretely, this chapter tries to address the above-mentioned issues by using CooC to solve the two types of CE problems defined in Section 2.1.1 in Chapter 2, and to answer the questions posed in Section 1.3.1 in Chapter 1: how to better evaluate individuals in each sub-population when using CooC to address the first type of CE problem, how to better handle the shared design variables when using CooC to address the second type of CE problem, and how often different sub-populations should communicate with each other in CooC-based concurrent design optimisation.

To answer these questions, for the first type of CE problems, in which the decomposition of the product is assumed to be known a priori, this chapter will present a new novelty-driven

CooC which uses a new and computationally efficient novelty calculation method, and

employs a stochastic selection process with an adaptive probability to adjust the trade-off


between novelty and fitness. For the second type of CE problems (i.e. quasi-separable

MDO problems), instead of evolving the shared variables in a single sub-problem, this

chapter will introduce a novel co-evolutionary concurrent design method in which the

shared variables are handled through the use of duplicates and consistency constraints,

and the stochastic ranking method [111] with an adaptive probability is employed to deal

with the constraints. The performance of the two proposed methods is demonstrated

through comparisons to the method that optimises all parts (disciplines) as a whole as

well as other state-of-the-art methods on universal electric motor (UEM) design prob-

lems and a commonly used MDO test problem. Additionally, this chapter will introduce

a self-adaptive method to adapt the communication frequency during the optimisation

process after studying how the communication frequency among sub-populations affects

the performance of the proposed CooC methods.

3.2 The Proposed Methods

The two proposed co-evolutionary concurrent methods are based on the framework of

parallel CooC. Fig. 3.1 shows a parallel CooC optimisation process. In parallel CooC, all

sub-populations are evolved simultaneously. The optimisation process of CooC is divided

into cycles. At the end of each cycle, different sub-populations communicate with each

other through collaborative individuals in their current populations, which are used to

evaluate the fitness of the individuals in the other sub-populations in the next cycle.

In the literature, there exist different methods to select collaborative individuals [119], and the most commonly used method is to choose the best individual in the current population. In Fig. 3.1, the decision variables are decomposed into two sub-components (x1 and x2), f denotes the objective function, and each of t1, t2, t3 denotes a communication time point between the two sub-populations (i.e., Pop1 and Pop2).

In the following part of this section, we will first introduce a new novelty-driven CooC

method with stochastic selection (NDCC-SS for short) for the mono-disciplinary design


Figure 3.1: Parallel cooperative co-evolutionary optimisation process.

problems in Eq.(2.1), and then a novel co-evolutionary concurrent design method (CCDM

for short) for the quasi-separable MDO problems in Eq.(2.2).

3.2.1 A New Novelty-Driven Cooperative Co-evolution with Stochastic Selection (NDCC-SS)

As reviewed in Section 2.4.1 in Chapter 2, novelty-driven CooC selects individuals based

not only on the fitness but also on the novelty score of individuals. In the proposed

NDCC-SS method, we hope to prevent CooC from converging to suboptimal states by

maintaining the diversity of the selected individuals in the selection process because the

search direction of CooC depends on the selected individuals. Motivated by this, we

use a new novelty metric in NDCC-SS that measures individuals’ novelty score based

only on the individuals that have already been selected for generating offspring. Therefore,

novelty calculation and EA selection are conducted together in NDCC-SS. For the sake

of computational efficiency, the novelty score of one individual, x, is evaluated as follows:

\[
nov(\mathbf{x}) = dist\!\left(\mathbf{x}, \; \frac{1}{k} \sum_{i=1}^{k} \mathbf{x}_s^i \right) \qquad (3.1)
\]

where x_s^i denotes the i-th already selected individual and k denotes the number of individuals already selected. Moreover, binary tournament selection is employed and


Algorithm 5 describes the selection process in NDCC-SS. Note that the cost of the novelty calculation process in Algorithm 5 increases linearly with the population size, giving a time complexity of O(NP · complexity(dist)) (NP denotes the population size). Thus, this new novelty calculation is more computationally efficient than the method in Eq. (2.8), which has a complexity of O(NP² · complexity(dist) + NP² log(NP)) even without considering archived individuals.

Algorithm 5 Novelty Calculation and Binary Tournament Selection

1: Select the best individual in the current population as x_s^1
2: Set c = x_s^1 and k = 1
3: while k ≤ NP − 1 do
4:   Randomly pick two individuals from the current population, x1 and x2
5:   Calculate their fitness and set the novelty of xi (i = 1, 2) as dist(xi, c)
6:   Set k = k + 1 and select the winner of x1 and x2 as x_s^k
7:   Set c = (c · (k − 1) + x_s^k)/k
8: end while
9: Let all x_s^i (i = 1, 2, ..., NP) enter the next generation

In addition, we use a stochastic selection process with an adaptive probability to make

a trade-off between novelty and fitness objectives. Algorithm 6 shows the comparison

process between two individuals, x1 and x2. The novelty and fitness score of xi(i = 1, 2)

is nov(xi) and f(xi), respectively. The parameter pn denotes the probability of selecting

individuals based on their novelty scores, and rand(0, 1) is a random number generated

from a uniform distribution on [0, 1]. The value of pn in the new method is linearly

decreased from p0 to pf within a specified number of generations and is kept constant as

pf after that. In detail, p_n^G is calculated as follows:

\[
p_n^G =
\begin{cases}
p_0 - (p_0 - p_f) \cdot \dfrac{G}{r \cdot MaxGen} & \text{if } G \le r \cdot MaxGen \\
p_f & \text{otherwise}
\end{cases}
\qquad (3.2)
\]

where G is the current number of generations, r is a real number (0 ≤ r ≤ 1), and

MaxGen denotes the maximum number of generations.

Through the use of stochastic selection with an adaptive probability, NDCC-SS has


Algorithm 6 Comparison(x1, x2, f(x1), nov(x1), f(x2), nov(x2), pn)

1: if rand(0, 1) < pn then
2:   if nov(x1) > nov(x2) then
3:     x1 wins
4:   else
5:     x2 wins
6:   end if
7: else
8:   if f(x1) > f(x2) then
9:     x1 wins
10:  else
11:    x2 wins
12:  end if
13: end if

two advantages in balancing the novelty and fitness objectives compared with the weighted

sum method and multi-objective method as introduced in Section 2.4.1 in Chapter 2. One

is that the stochastic selection provides a direct and explicit scheme for determining how many comparisons are dominated by the novelty or fitness objective, while the importance of

novelty or fitness is unclear in the weighted sum method. The other is that our adaptive

scheme changes the emphasis on the two objectives as the evolution proceeds. Through

the linear decrease of pn, it is expected that the new method will focus more on exploration

in the early stage of search and on exploitation in the later stage.
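The following Python sketch combines Algorithms 5 and 6 with the schedule of Eq. (3.2) into a single selection step, assuming fitness is to be maximised. The Euclidean distance and the values of p0, pf and r are illustrative assumptions rather than prescribed settings.

import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def p_n(gen, max_gen, p0=0.9, pf=0.1, r=0.5):
    """Eq. (3.2): linear decrease of the novelty-selection probability."""
    cut = r * max_gen
    if cut == 0 or gen > cut:
        return pf
    return p0 - (p0 - pf) * gen / cut

def ndcc_ss_select(pop, fitness, pn):
    """Return NP individuals selected by novelty/fitness binary tournaments."""
    best = max(range(len(pop)), key=lambda i: fitness[i])
    selected = [pop[best]]                       # the best individual always survives
    centre = list(pop[best])                     # running mean of selected individuals
    while len(selected) < len(pop):
        i, j = random.randrange(len(pop)), random.randrange(len(pop))
        if random.random() < pn:                 # compare on novelty (Eq. 3.1)
            winner = pop[i] if dist(pop[i], centre) > dist(pop[j], centre) else pop[j]
        else:                                    # compare on fitness
            winner = pop[i] if fitness[i] > fitness[j] else pop[j]
        selected.append(winner)
        k = len(selected)
        centre = [(c * (k - 1) + w) / k for c, w in zip(centre, winner)]
    return selected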

To address the mono-disciplinary design problems as defined in Eq. (2.1) in Chapter 2,

the parallel CooC framework in Fig. 3.1 with this new novelty-driven method is used in

our work. Assume a product has two parts (x1 and x2). In the beginning, an initial

sub-population for each of x1 and x2 is randomly generated. At each communication step

(e.g., t1, t2, t3), the two sub-populations exchange the best individuals in their current populations. Then, x1 is optimised using an EA for a fixed number of generations (until the next communication step) using the new x2 in fitness evaluation, while x2 is simultaneously optimised using an EA with the new x1 in fitness evaluation. In each

EA optimisation, individuals in each sub-population are selected based on either novelty

(calculated using Eq. (3.1)) or fitness scores according to Algorithm 6.


3.2.2 A Novel Co-evolutionary Concurrent Design Method (CCDM) for Quasi-Separable MDO Problems

For quasi-separable MDO problems as defined in Eq. (2.2) in Chapter 2, the original

problem is decomposed along disciplines and duplicate variables are introduced to deal

with the common design variables z in CCDM. The design is assumed to involve two

disciplines, each with its own local design parameters x_i (i = 1, 2). Through the

use of duplicate variables, z1 and z2, the original problem in Eq. (2.2) can be decomposed

as follows:

1. Sub-problem 1:
\[
\min_{\mathbf{x}_1, \mathbf{z}^1} \; f_1(\mathbf{x}_1, \mathbf{z}^1) \quad \text{subject to: } \mathbf{z}^1 = \mathbf{z}^2 \qquad (3.3)
\]

2. Sub-problem 2:
\[
\min_{\mathbf{x}_2, \mathbf{z}^2} \; f_2(\mathbf{x}_2, \mathbf{z}^2) \quad \text{subject to: } \mathbf{z}^2 = \mathbf{z}^1 \qquad (3.4)
\]

After the decomposition, CCDM optimises each sub-problem in a separate sub-population with a specified EA, simultaneously, as shown in Fig. 3.1. At each communication step, the sub-populations exchange their best design information (i.e., z^i, i = 1, 2) in the current population in order to evaluate their separate designs.

To deal with the equality constraints in Eqs.(3.3) and (3.4), CCDM transforms them

into an inequality constraint as follows:

\[
\frac{\sum_{k=1}^{n_c} |z_k^1 - z_k^2|}{n_c} \le \delta \qquad (3.5)
\]

where n_c is the number of common variables, and z_k^1 and z_k^2 are the k-th elements of z^1 and z^2, respectively; δ is the tolerance parameter, and an adaptive setting [81] is applied here. In


this adaptive setting, δ is adapted by using the following expression:

\[
\delta(t + 1) = \frac{\delta(t)}{\delta_{\mathrm{dec}}} \qquad (3.6)
\]

where δ(0) is set as the median of the equality constraint violations (i.e., \(\sum_{k=1}^{n_c} |z_k^1 - z_k^2| / n_c\)) over all individuals in the initial population, and δ_dec is the rate of decay, which is set in such a way that the δ value decreases to a final value δ_f after a specified number of generations and is kept at δ_f after that.

To deal with the generated inequality constraints and other existing inequality con-

straints, a state-of-the-art constraint handling method, the stochastic ranking method

[111], is used in CCDM. In the stochastic ranking method, when making comparisons among feasible individuals, the individual with the better objective value is selected. Otherwise, the comparison is made based on the objective values of the individuals with probability Pf and based on the constraint violation values with probability (1 − Pf).

The constraint violation for the inequality constraint in Eq. (3.5) is defined as:

\[
\phi(\mathbf{z}) =
\begin{cases}
0 & \text{if } \dfrac{\sum_{k=1}^{n_c} |z_k^1 - z_k^2|}{n_c} \le \delta \\[2ex]
\dfrac{\sum_{k=1}^{n_c} |z_k^1 - z_k^2|}{n_c} - \delta & \text{otherwise}
\end{cases}
\qquad (3.7)
\]

In our work, the probability parameter, Pf , is decreased linearly from Pf = 0.475 in the

initial generation to Pf = 0.25 in the final generation. After the optimisation, the mean

values of z1 and z2 in the best solutions are used as the final values of z in CCDM.
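The constraint handling used in CCDM can be sketched as follows in Python: the first function measures the consistency violation of Eq. (3.5) in the spirit of Eq. (3.7), i.e. the mean deviation between the duplicated shared variables in excess of the tolerance δ, and the second performs a single stochastic-ranking-style comparison. The parameter values are illustrative assumptions.

import random

def consistency_violation(z1, z2, delta):
    """Mean absolute deviation between duplicates, in excess of the tolerance delta."""
    mean_dev = sum(abs(a - b) for a, b in zip(z1, z2)) / len(z1)
    return 0.0 if mean_dev <= delta else mean_dev - delta

def sr_better(obj_a, viol_a, obj_b, viol_b, pf=0.45):
    """True if A should be ranked before B (objective to be minimised)."""
    if viol_a == 0.0 and viol_b == 0.0:          # both feasible: compare objectives
        return obj_a < obj_b
    if random.random() < pf:                     # compare objectives anyway
        return obj_a < obj_b
    return viol_a < viol_b                       # otherwise compare violations

# Example: two candidate duplicates of the shared variables with delta = 0.05.
print(consistency_violation([1.0, 2.0], [1.1, 2.2], delta=0.05))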

In both NDCC-SS and CCDM, the communication interval denotes the interval be-

tween the generation numbers of two sequential communications in the parallel CooC as

shown in Fig. 3.1. There will be more than two sub-populations if more than two parts

(disciplines) exist in the design, but the whole framework remains unchanged.


3.3 Experimental Studies

3.3.1 Case Studies

To evaluate the efficacy of the two proposed methods, universal electric motor (UEM)

design problems and a geometric programming problem are used as case studies. The

UEM design problem is a widely used benchmark problem in the field of product family

design [116, 117, 59]. In this study, two scenarios of the UEM design problems are

considered.

The Baseline UEM Design Problem

One UEM has 8 design parameters: Nc, Ns, Awf, Awa, I, ro, t, L [43]. The description of

each parameter can be found in [117]. The design of an electric motor can be formulated

as a two-objective constrained optimisation problem, given as follows:

\[
\begin{aligned}
\min_{x} \;& \{\mathrm{Mass}(x), -\eta(x)\} \\
\text{where: } & x = \{N_c, N_s, A_{wf}, A_{wa}, I, r_o, t, L\} \\
\text{subject to: } & H(x) \le 5000 \;[\mathrm{Amp \cdot turns/m}] \\
& r_o / t \ge 1 \\
& \mathrm{Mass}(x) \le 2 \;[\mathrm{kg}] \\
& \eta(x) \ge 15\% \\
& P(x) = 300 \;[\mathrm{W}] \\
& T(x) = a \;[\mathrm{Nm}]
\end{aligned} \tag{3.8}
\]

where Mass(x) and η(x) denote the weight and efficiency calculation functions, respectively. In Eq. (3.8), a denotes the users' requirement on the torque output T(x), which is a real number specified a priori. In this chapter, for each design x, the penalty with


respect to the constraints in Eq. (3.8) is calculated and added to {Mass(x), −η(x)}. The resulting {Mass(x), −η(x)} are used as the final objective values of the design. Details

about the penalty calculation and the formulations of H(x), Mass(x), η(x), P (x) and

T (x) can be found in the Appendix.

For the UEM design problem, if the preference between Mass(x) and −η(x) is known,

a weighted sum of these two objective functions can be defined as the following expression:

\[
f_{weighted}(x) = w_1 \, (1 - \eta(x)) + w_2 \, \mathrm{Mass}_{normalised}(x) \tag{3.9}
\]

where Massnormalised represents the normalised weight derived by dividing the original

weight by the maximum allowable weight: Massmax = 2 [kg]. w1 and w2 are weight

coefficients that are assumed to be equal (i.e., w1 = w2 = 0.5) in [3]. Through the

weighted sum, the UEM design problem is transformed into a single-objective optimisation

problem.

Scenario 1: Single Motor Design

This scenario aims to design a motor with respect to one torque requirement a (Eq.

(3.8)). We assume for this scenario that the 8 design parameters are decomposed into two groups, {Nc, Awf, I, t} and {Ns, Awa, ro, L}, in the beginning. This scenario is used as a

case study for the mono-disciplinary design problems in this chapter. For this scenario,

the single-objective case is considered. In this case, we use the weighted sum function in

Eq. (3.9) as the goal function, and set both w1 and w2 to 0.5. The range for each design

parameter of each UEM motor is set the same as in [117], which is given in Table 3.1.

Two design problems are considered by using different a values in this chapter. One is

a = 0.3, and the other is a = 0.5. When applying the proposed NDCC-SS to this scenario,

each group of the design variables of a motor is optimised simultaneously with a different

sub-population.


Table 3.1: Range setting for motor parameters

Parameter   Range
Nc          [100, 1500]
Ns          [1, 500]
Awf         [0.01, 1.00] (mm2)
Awa         [0.01, 1.00] (mm2)
I           [0.1, 6.0] (Amp)
ro          [1, 10] (cm)
t           [0.5, 100] (mm)
L           [0.1, 20] (cm)

Scenario 2: Overlapping Motors Design

In this scenario, two or more overlapping motors need to be designed, each with a different

torque requirement (i.e. a in Eq. (3.8)). Here, overlapping motors refers to different

motors sharing some design parameters. This scenario captures some of the important

characteristics of designing a product family that has common components in order to reduce production costs. In the literature, the authors in [126] considered designing multiple aircraft, each with a different mission but sharing the same design of the wing.

In this scenario, the overall goal is:

\[
\min_{x, z} \sum_{i=1}^{m} f_{weighted,i}(x_i, z), \quad \text{i.e.,} \quad
\min_{x, z} \sum_{i=1}^{m} \left( w_1 (1 - \eta(x_i, z)) + w_2 \, \mathrm{Mass}_{normalised}(x_i, z) \right) \tag{3.10}
\]

where m is the number of motors, xi denotes the local design parameters of the i-th motor,

and z denotes the common design parameters. In this scenario, we consider designing 2

motors with different a values in Eq. (3.8) and assume they have 2 parameters (t, L) in

common. Three different test problems are studied by using different a values. For the

first problem, the a values of the two motors are set to 0.1 and 0.125, respectively. For the

second problem, they are set to 0.1 and 0.3, respectively. For the third problem, they are

0.05 and 0.5, respectively. This scenario is used as a case study for the quasi-separable

MDO problem and the proposed CCDM method.


A Geometric Programming Problem

The original geometric programming problem [9] is formulated as follows:

\[
\begin{aligned}
\min_{z_i \ge 0,\; i=1,2,\ldots,14} \;& z_1^2 + z_2^2 \\
\text{subject to: } & z_3^{-2} + z_4^2 - z_5^2 \le 0 \\
& z_5^2 + z_6^{-2} - z_7^2 \le 0 \\
& z_8^2 + z_9^2 - z_{11}^2 \le 0 \\
& z_8^{-2} + z_{10}^2 - z_{11}^2 \le 0 \\
& z_{11}^2 + z_{12}^{-2} - z_{13}^2 \le 0 \\
& z_{11}^2 + z_{12}^2 - z_{14}^2 \le 0 \\
& z_1^2 - z_3^2 - z_4^{-2} - z_5^2 = 0 \\
& z_2^2 - z_5^2 - z_6^2 - z_7^2 = 0 \\
& z_3^2 - z_8^2 - z_9^{-2} - z_{10}^{-2} - z_{11}^2 = 0 \\
& z_6^2 - z_{11}^2 - z_{12}^2 - z_{13}^2 - z_{14}^2 = 0
\end{aligned} \tag{3.11}
\]


By using the equality constraints, this problem can be transformed into the following

formulation:

\[
\begin{aligned}
\min_{z_i \ge 0,\; i=4,5,7,8,\ldots,14} \;& \sum_{i=1}^{2} f_i(x_i, z) \\
\text{where: } & x_1 = \{z_4, z_8, z_9, z_{10}\}, \quad x_2 = \{z_7, z_{12}, z_{13}, z_{14}\}, \quad z = \{z_5, z_{11}\} \\
& f_1(x_1, z) = z_4^{-2} + z_5^2 + z_8^2 + z_9^{-2} + z_{10}^{-2} + z_{11}^2 \\
& f_2(x_2, z) = z_5^2 + z_7^2 + z_{11}^2 + z_{12}^2 + z_{13}^2 + z_{14}^2 \\
\text{subject to: } & (z_8^2 + z_9^{-2} + z_{10}^{-2} + z_{11}^2)^{-1} + z_4^2 - z_5^2 \le 0 \\
& z_5^2 + (z_{11}^2 + z_{12}^2 + z_{13}^2 + z_{14}^2)^{-1} - z_7^2 \le 0 \\
& z_8^2 + z_9^2 - z_{11}^2 \le 0 \\
& z_8^{-2} + z_{10}^2 - z_{11}^2 \le 0 \\
& z_{11}^2 + z_{12}^{-2} - z_{13}^2 \le 0 \\
& z_{11}^2 + z_{12}^2 - z_{14}^2 \le 0
\end{aligned} \tag{3.12}
\]

It can be seen from Eq. (3.12) that the original problem is transformed into a constrained

quasi-separable MDO problem with common variables (z5, z11). This problem is used as

another case study for the quasi-separable MDO problems and the proposed CCDM. It is

decomposed into two sub-problems with duplicate variables ((z5,1, z5,2) and (z11,1, z11,2))

as follows:


1. sub-problem 1
\[
\begin{aligned}
\min_{z_i \ge 0,\; i=4,8,9,10;\; z_{5,1} \ge 0;\; z_{11,1} \ge 0} \;& z_4^{-2} + z_{5,1}^2 + z_8^2 + z_9^{-2} + z_{10}^{-2} + z_{11,1}^2 \\
\text{subject to: } & (z_8^2 + z_9^{-2} + z_{10}^{-2} + z_{11,1}^2)^{-1} + z_4^2 - z_{5,1}^2 \le 0 \\
& z_8^2 + z_9^2 - z_{11,1}^2 \le 0 \\
& z_8^{-2} + z_{10}^2 - z_{11,1}^2 \le 0 \\
& z_{5,1} - z_{5,2} = 0 \\
& z_{11,1} - z_{11,2} = 0
\end{aligned} \tag{3.13}
\]

2. sub-problem 2
\[
\begin{aligned}
\min_{z_i \ge 0,\; i=7,12,13,14;\; z_{5,2} \ge 0;\; z_{11,2} \ge 0} \;& z_{5,2}^2 + z_7^2 + z_{11,2}^2 + z_{12}^2 + z_{13}^2 + z_{14}^2 \\
\text{subject to: } & z_{5,2}^2 + (z_{11,2}^2 + z_{12}^2 + z_{13}^2 + z_{14}^2)^{-1} - z_7^2 \le 0 \\
& z_{11,2}^2 + z_{12}^{-2} - z_{13}^2 \le 0 \\
& z_{11,2}^2 + z_{12}^2 - z_{14}^2 \le 0 \\
& z_{5,1} - z_{5,2} = 0 \\
& z_{11,1} - z_{11,2} = 0
\end{aligned} \tag{3.14}
\]

3.3.2 Overview on Compared Algorithms

In the experiments, our proposed methods, NDCC-SS and CCDM, are evaluated on the

aforementioned case studies, and then comparisons are made between these two methods

and some existing methods in terms of the quality of the best solution obtained with a

fixed number of function evaluations.


Existing methods used for comparison on Scenario 1 of the UEM design problem

include:

• CooC: the original CooC that evaluates individuals based only on their fitness.

• Novelty-driven CooC with the weighted sum method (NDCC-WS) [26]: this method

uses a linearly weighted sum of fitness and novelty as given in Eq. (2.9) to evaluate

individuals. The novelty metric proposed in [66] was used to calculate the novelty

scores of individuals. As given in Eq. (2.8), it calculates the novelty of an individual based on its nearest neighbours in the current population and, optionally, past individuals saved in an archive. In the experiments, two different settings for the novelty calculation are applied. One calculates novelty based only on the individuals in the population. The other is based not only on the individuals in the population but also on an archive that saves previous individuals. When using the archive, λ randomly

selected individuals in the population are added to the archive at every generation.

In the experiments, novelty computation is done with k = 15 and λ = 6 according

to [26, 48]. Also, three different settings of the weight ρ in Eq. (2.9) in Chapter 2

are considered: ρ = 0.25, ρ = 0.5, and ρ = 0.75. In total, there are 6 different set-

tings of NDCC-WS used for comparison, i.e., NDCC-WS(0.25), NDCC-WS(0.50),

NDCC-WS(0.75), NDCC-WS-a(0.25), NDCC-WS-a(0.50), NDCC-WS-a(0.75) (‘a’

means with an archive, 0.25, 0.50 and 0.75 mean ρ = 0.25, ρ = 0.5 and ρ = 0.75,

respectively).

• Novelty-driven CooC with the multi-objective trade-off method (NDCC-MO) [49]:

this method considers the fitness and novelty score of individuals as two objectives,

and uses a multi-objective evolutionary algorithm, NSGA-II [31], to evolve individuals. Both settings, with and without an archive, are considered. They are denoted

as NDCC-MO and NDCC-MO-a (‘a’ means with an archive), respectively. When

implementing NDCC-MO and NDCC-MO-a in the experiments, the binary tourna-

ment selection is based on the domination between two individuals. In the selection


process, the one that dominates the other is selected. If neither is dominated by the

other, one of the two individuals will be chosen randomly, as sketched below.
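The following minimal sketch shows one way such a dominance-based binary tournament over the fitness and novelty scores could look; the dictionary fields and the higher-is-better convention for both scores are assumptions for illustration (in the thesis setting the fitness of a minimisation objective would first be negated).

```python
import random

def dominates(a, b):
    """True if a Pareto-dominates b on (fitness, novelty); both assumed to be maximised."""
    no_worse = a["fitness"] >= b["fitness"] and a["novelty"] >= b["novelty"]
    better = a["fitness"] > b["fitness"] or a["novelty"] > b["novelty"]
    return no_worse and better

def binary_tournament(pop):
    """Dominance-based binary tournament in the spirit of NDCC-MO (sketch)."""
    a, b = random.sample(pop, 2)
    if dominates(a, b):
        return a
    if dominates(b, a):
        return b
    return random.choice([a, b])   # neither dominates: pick one at random
```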

For Scenario 2, the proposed CCDM is compared with the baseline method that op-

timises all decision variables as a whole without any decomposition (i.e., addressing the

optimisation problem in Eq. (3.10) directly). This is to study whether the decomposi-

tion and coordination of the common variables that are used in CCDM really work. We

also implemented the COSMOS method (collaborative optimisation strategy for multi-

objective systems) [106] that uses nested decomposition on Scenario 2. The resulting

method is named collaborative optimisation strategy for single-objective system (COS-

SOS) in this chapter. COSSOS and CCDM are also compared on Scenario 2. For the

geometric programming problem, the solution quality obtained by CCDM is compared

with the result achieved by two existing methods:

• AAO (All-At-Once): it optimises all decision variables at once [60],

• ATC-MO: the ATC method with the MO formulation [49].

Table 3.2 summarises the compared methods on each test scenario considered in the

experimental study. For all the methods that use the parallel CooC framework, different

sub-populations communicate with each other at every generation in the implementation.

Table 3.2: A summary of compared methods on the test problems; here 'a' means with an archive, and 0.25, 0.50 and 0.75 mean ρ = 0.25, ρ = 0.5 and ρ = 0.75, respectively.

Problem                  Method
Scenario 1               CooC, NDCC-WS(0.25), NDCC-WS(0.50), NDCC-WS(0.75),
                         NDCC-WS-a(0.25), NDCC-WS-a(0.50), NDCC-WS-a(0.75),
                         NDCC-MO, NDCC-MO-a, NDCC-SS
Scenario 2               CCDM, Baseline Method, COSSOS
Geometric programming    CCDM, AAO, ATC-MO

For all the experiments, except for AAO and ATC-MO for which the original results

in [60] and [49] are used for comparison, we use a real-coded genetic algorithm with


binary tournament selection and standard operators for simulated binary crossover and

polynomial mutation [30] as the underlying EA. The crossover probability is set to 0.9

and the mutation probability is set to 1/n (where n is the number of decision variables).

The distribution indices [30] for the crossover and mutation operators are set to 15 and

20, respectively. Note that the same underlying EA is used in all implementations to allow for a fair comparison.
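For concreteness, the sketch below registers comparable operators using the DEAP library; the use of DEAP itself, the variable bounds low/up, and the problem size are illustrative assumptions and not the thesis implementation.

```python
import random
from deap import base, creator, tools

N_VARS = 8                                     # e.g. the 8 UEM design parameters
low, up = [0.0] * N_VARS, [1.0] * N_VARS       # assumed normalised bounds

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr", random.random)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr, n=N_VARS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

# SBX crossover (distribution index 15) applied with probability 0.9,
# polynomial mutation (distribution index 20) with per-gene probability 1/n,
# and binary tournament selection, matching the settings quoted above.
toolbox.register("mate", tools.cxSimulatedBinaryBounded, eta=15, low=low, up=up)
toolbox.register("mutate", tools.mutPolynomialBounded, eta=20, low=low, up=up,
                 indpb=1.0 / N_VARS)
toolbox.register("select", tools.selTournament, tournsize=2)
```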

3.3.3 Comparison Results

Scenario 1: Single Motor Design

For this scenario, the proposed NDCC-SS is compared to CooC, NDCC-WS(0.25), NDCC-

WS(0.50), NDCC-WS(0.75), NDCC-WS-a(0.25), NDCC-WS-a(0.50), NDCC-WS-a(0.75),

NDCC-MO and NDCC-MO-a on the two single motor design problems. For all these 10

methods, the population size for each sub-population was set to 50, and the maximum

number of function evaluations was set to 80000. The parameters of NDCC-SS, p0, pf

and r, were set to 0.45, 0 and 0.4, respectively. Each method was run 100 times independently. Then, we used the Wilcoxon rank-sum test at a significance level of 0.05 to make comparisons between NDCC-SS and each of the other methods.

Table 3.3 gives the means and standard deviations of the best function values over

100 runs obtained by each method as well as the statistical comparison results between

NDCC-SS and each of the other methods on the two test problems. It can be seen from

Table 3.3 that the proposed NDCC-SS performed the best among all algorithms on both

test problems. The evolutionary curves of CooC, NDCC-WS-a(0.25), NDCC-MO, and

NDCC-SS on the single motor design problems with a = 0.3 and a = 0.5 are given in

Figs. 3.2 and 3.3, respectively.

To investigate whether both the new novelty calculation and the adaptive stochas-

tic selection process contribute to the good performance of NDCC-SS, we furthermore

conducted comparative experiments. In the first experiment, we fixed the probability


Table 3.3: Comparison between CooC, NDCC-WS, NDCC-MO and NDCC-SS for single motor design problems. The minus sign (−) denotes the method in the corresponding row is statistically worse than NDCC-SS on the test problem in the corresponding column. The best results are marked in bold.

Method              a = 0.3                   a = 0.5
CooC                4.90e−01±1.41e−01 −       1.87e+01±6.00e+01 −
NDCC-WS(0.25)       8.73e−01±2.72e+00 −       4.56e+00±1.22e+01 −
NDCC-WS(0.50)       6.67e−01±9.93e−01 −       4.26e+00±1.37e+01 −
NDCC-WS(0.75)       2.93e+00±1.97e+01 −       6.35e+00±2.14e+01 −
NDCC-WS-a(0.25)     5.08e−01±5.45e−01 −       1.00e+00±1.86e+00 −
NDCC-WS-a(0.50)     1.14e+00±1.40e+00 −       1.56e+00±3.94e+01 −
NDCC-WS-a(0.75)     1.20e+01±2.36e+01 −       5.97e+01±9.01e+01 −
NDCC-MO             5.01e−01±4.96e−02 −       1.94e+00±3.55e+00 −
NDCC-MO-a           5.35e−01±6.93e−02 −       1.92e+00±4.38e+00 −
NDCC-SS             4.43e−01±3.59e−02         5.66e−01±3.72e−02

[Figure: evolution curves of the best objective value against the number of FEs for CC (CooC), NDCC-WS-a(0.25), NDCC-MO and NDCC-SS.]

Figure 3.2: Evolution curves of CooC, NDCC-WS-a(0.25), NDCC-MO, and NDCC-SS on the single objective case (a = 0.3) in Scenario 1.


[Figure: evolution curves of the best objective value against the number of FEs for CC (CooC), NDCC-WS-a(0.25), NDCC-MO and NDCC-SS.]

Figure 3.3: Evolution curves of CooC, NDCC-WS-a(0.25), NDCC-MO, and NDCC-SS on the single objective case (a = 0.5) in Scenario 1.

parameter pn in the stochastic selection in NDCC-SS and conducted experiments on the

single motor design problems. Three values for pn were used, which are 0.25, 0.50, and

0.75. We also replaced the adaptive selection process in NDCC-SS with the multi-objective

trade-off method and tested the performance of the resulting method on the single motor

design problems. Table 3.4 summarises the statistical comparison results between NDCC-SS and these methods. The minus sign (−) in Table 3.4 denotes that the method in the

corresponding row is statistically worse than NDCC-SS on the test problem in the corre-

sponding column. From Table 3.4, we can observe that NDCC-SS performed better than

any other method. This demonstrates the advantage of the use of the adaptive strategy

in NDCC-SS.

Table 3.4: Comparison between NDCC-SS and its variants with fixed pn = 0.25, 0.50, 0.75 or with the multi-objective trade-off on the single motor design problems. The best results are marked in bold.

Method             a = 0.3                   a = 0.5
pn = 0.25          5.13e−01±4.45e−02 −       6.14e−01±4.64e−02 −
pn = 0.50          5.41e−01±4.68e−02 −       6.48e−01±5.29e−02 −
pn = 0.75          1.54e+00±2.05e+00 −       7.73e+00±1.54e+01 −
Multi-objective    5.29e−01±4.14e−02 −       6.63e−01±1.41e−01 −
NDCC-SS            4.43e−01±3.59e−02         5.66e−01±3.72e−02


In the second experiment, we replaced the proposed novelty calculation in NDCC-SS

by the novelty calculation proposed in [66] and tested the performance of the resulting

method on single motor design problems. In the experiments, we considered two settings

of archive in the novelty calculation, with and without an archive, and used four settings

of p0, pf and r values in the adaptive stochastic selection. Table 3.5 summarises the

statistical comparison results between NDCC-SS and these methods. The minus sign

(−) in Table 3.5 denotes the method in the corresponding row is statistically worse than

NDCC-SS on the test problem in the corresponding column. It can be seen from Table 3.5

that NDCC-SS outperformed the methods that used the existing novelty calculation. This demonstrates that the new novelty calculation strategy has better performance in addition to computational efficiency. This might be because the new novelty calculation strategy helps maintain better diversity among the selected individuals than the existing strategy: the new novelty calculation is based only on the individuals that have already been selected in the selection process of CooC, while the existing one calculates individuals' novelty based on the whole population before selection. As the selected individuals decide the search direction of CooC, the new novelty strategy may help preserve more diversity in the population, which is good for preventing CooC from converging to suboptimal states.

Table 3.5: Comparison between NDCC-SS and the methods that used the novelty calculation in [66]. The best results are marked in bold.

Method (archive, (p0, pf, r))    a = 0.3                   a = 0.5
No (0.5, 0.25, 0.4)              1.32e+00±4.58e+00 −       3.96e+00±8.72e+00 −
No (0.5, 0.25, 0.8)              5.42e−01±3.41e−01 −       4.98e+00±1.16e+01 −
No (0.25, 0, 0.4)                5.45e−01±7.14e−01 −       1.57e+01±3.18e+01 −
No (0.25, 0, 0.8)                1.36e+00±7.09e+00 −       1.34e+01±3.53e+01 −
Archive (0.5, 0.25, 0.4)         4.98e−01±2.22e−01 −       1.15e+00±3.01e+00 −
Archive (0.5, 0.25, 0.8)         4.93e−01±1.07e−01 −       1.16e+00±2.33e+00 −
Archive (0.25, 0, 0.4)           5.77e−01±6.02e−01 −       1.66e+01±3.06e+01 −
Archive (0.25, 0, 0.8)           4.86e−01±1.06e−01 −       7.58e+00±1.95e+01 −
NDCC-SS                          4.43e−01±3.59e−02         5.66e−01±3.72e−02

In short, the additional comparative experimental results above showed the efficacy


of both the new novelty calculation and the adaptive stochastic selection in NDCC-SS.

This is the reason why NDCC-SS performed well on single motor design problems, as

shown in Table 3.3. Note that the new novelty calculation might not be the optimal novelty calculation method, as it also takes computational efficiency into account. Further work on developing better novelty-driven CooC algorithms is still worthwhile.

Scenario 2: Overlapping Motors Design

For this scenario, the proposed CCDM is compared to the baseline method that puts all

design variables in a single optimisation process on the three overlapping motor design

problems. For the baseline method, the population size was set to 100 in the experiments.

For CCDM, the population size was set to 50. The maximum number of function eval-

uations was set to 32000 for the baseline method. While the baseline method needs to

evaluate both f1 and f2 in a function evaluation on the overlapping motor design prob-

lems with two motors, CCDM only needs to evaluate one fi (i = 1, 2). Therefore, the

maximum number of function evaluations was set to 32000*2 for CCDM. The parameter

δf in CCDM was set to 0.005, and the ratio of the specified number of generations to the

maximum number of generations was set to 0.8.

We also compare CCDM with the COSSOS method. For the COSSOS method,

the population size for the supervisor-level optimisation was set to 20, and for each

disciplinary-level optimisation, it was set to 50. As the COSSOS method also needs

to evaluate only one fi (i = 1, 2) in a function evaluation in one disciplinary-level opti-

misation, the maximum number of function evaluations was set to 32000*2 for COSSOS.

Besides, the maximum number of function evaluations for each disciplinary-level opti-

misation in COSSOS was set to 800 in our experiments. We independently ran each

experiment for 100 times, and then used Wilcoxon rank-sum test at a significance level of

0.05 to conduct comparisons.

Table 3.6 presents the means and standard deviations of the best function values over

100 runs obtained by the three methods. The statistical comparison results between the


Table 3.6: Comparison between the baseline method, COSSOS and CCDM for overlapping motors design. The minus sign (−) and the approximation sign (≈) denote that the compared method is statistically worse than, and similar to, CCDM, respectively. The best results are marked in bold.

a values                   Baseline Method           COSSOS method             CCDM
a1 = 0.10, a2 = 0.125      3.69e+00±1.61e+01 −       2.13e+00±6.63e+00 −       6.87e−01±3.42e−01
a1 = 0.10, a2 = 0.30       4.30e+00±1.66e+01 −       3.68e+00±6.81e+00 −       2.47e+00±1.48e+01
a1 = 0.05, a2 = 0.50       1.57e+01±3.46e+01 ≈       1.51e+01±2.67e+01 −       1.51e+01±1.02e+02

two compared methods and CCDM are also given in this table. It can be seen from this

table that both decomposition-based methods (i.e., CCDM and COSSOS) outperformed

the baseline method. Between the two decomposition methods, the proposed CCDM

achieved better solutions on all test problems.

The Geometric Programming Problem

For the geometric programming problem, the best solutions obtained by the all-at-once

method (AAO) in [60] and the ATC with multi-objective formulation (ATC-MO) in [89]

are: (z1, z2, z3, z4, z5, z6, z7, z8, z9, z10, z11, z12, z13, z14)=(2.84, 3.09, 2.36, 0.76, 0.87,

2.81, 0.94, 0.97, 0.87, 0.8, 1.3, 0.84, 1.76, 1.55) and (2.77, 3.14, 2.28, 0.76, 0.88, 2.86, 0.94,

0.96, 0.95, 0.85, 1.35, 0.84, 1.79, 1.58), respectively. Table 3.7 shows the corresponding f

values and the constraint violation values of g and h constraints for these two solutions.

Table 3.7: Comparison between CCDM, AAO and ATC with the MO formulation on the geometric programming problem. The best result is marked in bold.

       AAO [60]    ATC-MO [89]    CCDM
f      17.6137     17.5325        17.3364
g1     0.0002      0              0.0505
g2     0           0.0131         0.0473
g3     0.0078      0.0016         0.0393
g4     0.0128      0              0.0462
g5     0.0096      0.0356         0.0318
g6     0           0.0317         0.0193
h1     0.0078      0.0312         0
h2     0.0115      0.0220         0
h3     0.0550      0.0378         0
h4     0.0004      0.0490         0


It can be seen that both AAO and ATC-MO are unable to satisfy all constraints and

the maximum violation tolerance for each constraint is 0.055 (see h3 in the column of

AAO). In our experiments, CCDM was implemented with the same setting (0.055) for

the maximum constraint violation tolerance. The population size for CCDM was set to

100, and the maximum number of function evaluations was set to 10000. The parameter

δf was set to 0.01 and the ratio of the specified number of generations to the maximum

number of generations was set to 0.3 for CCDM.

The best solution achieved by CCDM over 100 runs on this test problem was recorded.

It is: (z1, z2, z3, z4, z5, z6, z7, z8, z9, z10, z11, z12, z13, z14)=(2.88, 3.00, 2.40, 0.73, 0.81,

2.77, 0.86, 0.99, 0.81, 0.76, 1.26, 0.97, 1.62, 1.59). Its f value and the constraint violation

values of g and h constraints are given in Table 3.7. It can be seen that CCDM found a

solution with better f value compared with AAO and ATC-MO.

3.3.4 The Effect of Communication Frequency

In this section, the best communication frequency among sub-populations in the proposed

methods, with and without communication costs, is studied experimentally in the two

scenarios of motor design problems and the geometric programming problem.

Best Communication Frequency without Communication Cost

In the first part, we study in both NDCC-SS and CCDM the best communication frequen-

cies that obtain solutions of highest quality within a fixed number of function evaluations

without considering communication costs. We ran NDCC-SS and CCDM on the test

problems with each communication interval independently for 100 times and recorded the

function values of the best solutions obtained using each communication interval over 100

runs.

As we wish to know which communication frequencies lead to significantly good per-

formance, the Kruskal-Wallis test and a post-hoc test proposed by Conover and Iman


[24, 101] were applied with a significance level at 0.05. The Kruskal-Wallis test is a non-

parametric multiple comparison test that compares more than two populations based on

random samples by using rank sums. In our study, the Kruskal-Wallis test was applied on

each test problem. The function value of the best solution achieved in one run was con-

sidered as a sample, and the 100 samples obtained using one communication frequency

over 100 runs were considered as a population. The null hypothesis (H0) we used in

the Kruskal-Wallis test states that all communication frequencies achieve the equivalent

performance. If the null hypothesis is rejected, the communication frequency with the

minimum rank sum will be selected as the control frequency, and the post-hoc test will be

used to perform a comparison between the control frequency and every other communica-

tion frequency. To control the family-wise error rate (i.e., the probability of making one or

more false discoveries when performing multiple hypothesis tests [34]), we further applied

Finner’s adjustment of p-values [39] in this multiple hypothesis testing. Any communica-

tion frequency whose performance is not significantly different from the control frequency

will be recorded as the best communication frequency along with the control frequency.
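A minimal sketch of this selection procedure is given below, using SciPy's Kruskal-Wallis test. For brevity, the Conover-Iman post-hoc test is replaced here by pairwise rank-sum tests against the control frequency, and Finner's step-down adjustment is coded directly; the data layout (a dict mapping each interval to its 100 best function values) is an assumption.

```python
import numpy as np
from scipy import stats

def finner_adjust(p_values):
    """Finner's step-down adjustment: p_adj(i) = max_{j<=i} [1 - (1 - p_(j))**(m/j)]."""
    p_values = np.asarray(p_values, dtype=float)
    m = len(p_values)
    order = np.argsort(p_values)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order, start=1):
        running_max = max(running_max, 1.0 - (1.0 - p_values[idx]) ** (m / rank))
        adjusted[idx] = min(running_max, 1.0)
    return adjusted

def best_intervals(results, alpha=0.05):
    """results: dict {interval: list of 100 best function values (minimisation, equal sizes)}."""
    intervals = list(results)
    samples = [results[p] for p in intervals]
    if stats.kruskal(*samples).pvalue >= alpha:
        return intervals                               # H0 not rejected: all equivalent
    # control frequency: the one with the minimum rank sum (best average rank)
    ranks = stats.rankdata(np.concatenate(samples)).reshape(len(intervals), -1)
    control = int(np.argmin(ranks.sum(axis=1)))
    # pairwise comparisons against the control, with Finner-adjusted p-values
    others = [i for i in range(len(intervals)) if i != control]
    pvals = [stats.ranksums(samples[control], samples[i]).pvalue for i in others]
    adj = finner_adjust(pvals)
    return [intervals[control]] + [intervals[i] for i, p in zip(others, adj) if p >= alpha]
```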

The parameter settings of NDCC-SS and CCDM are the same as those in Section 3.3.3

except for the setting of communication frequencies. In this study, we considered 12

different communication intervals for the two scenarios of motor design problems and

12 different communication intervals for the geometric programming problem.

They are 1, 2, 4, 5, 8, 10, 16, 20, 32, 40, 80, 160 and 1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 80, 100,

respectively. Here, each number denotes the difference between the generation numbers

of two sequential communications in the co-evolutionary process.

Table 3.8 gives the best communication intervals for NDCC-SS on the single motor

design problems. It shows that it is best to exchange information among sub-populations

every 1 or 2 generations when using NDCC-SS to solve the single motor design problems.

Table 3.9 presents the best communication intervals for CCDM on the overlapping

motor design problems and the geometric programming problem. As it can be seen in Ta-

ble 3.9, it is best to make sub-populations communicate with each other every generation


Table 3.8: Best communication intervals of NDCC-SS on scenario 1 without communication cost

a value     Best communication intervals
a = 0.3     1, 2
a = 0.5     1, 2

Table 3.9: Best communication intervals of CCDM on scenario 2 and the geometric programming problem without communication cost

Problem                    Best communication intervals
a1 = 0.10, a2 = 0.125      1
a1 = 0.10, a2 = 0.3        1, 2
a1 = 0.05, a2 = 0.50       1, 2
Geometric programming      1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 80, 100

on the first design problem, every 1 or 2 generations on the second and third problems.

For the geometric programming problem, all communication intervals performed comparably well.

Our general conclusion based on Tables 3.8 and 3.9 is that sub-populations should

communicate with each other as frequently as possible when applying the proposed meth-

ods to solve the test design problems without considering the communication cost.

Best Communication Frequency with Communication Cost

In the second part, we investigate the best communication frequency considering differ-

ent communication costs. The communication cost is considered as a fixed number of

function evaluations in our study. We varied the communication cost from 0 ∗ NP to

40 ∗NP (NP denotes population size) function evaluations for all test problems, and car-

ried out experiments using each communication interval from 1 to 20 generations for every

communication cost. The other experimental settings are the same as in Section 3.3.3.

We recorded the function values of the best solutions achieved by every communication

interval over 100 runs. Then, the Kruskal-Wallis test with Conover’s and Iman’s post-

hoc test as well as Finner’s adjustment of p-values was used to give the best-performing

communication frequency.


Tables 3.10 and 3.11 summarise the best communication intervals for NDCC-SS on

the single motor design problems, and CCDM on the overlapping motor design problems

and the geometric programming problem, respectively. A general observation from Tables

3.10 and 3.11 is that the best communication frequency varies as the design problem and

the communication cost vary.

When comparing Tables 3.10-3.11 with Tables 3.8-3.9, we can see that the advantage

of frequent communication was affected by the existence of communication cost. That is,

one-generation communication is no longer always the best option when the communica-

tion cost is considered. This is because more frequent communication will cause more

time to be spent on communications rather than on optimisation. However, the situation

is different for each problem. It can be seen from Table 3.10 that one generation is always

among the best communication intervals for the design problem with a = 0.5 even when

the communication cost is increased to 40 ∗ NP function evaluations. The reason might

be that the correlation between the decomposed sub-components requires the corresponding

sub-populations to communicate with each other frequently even at a high cost.

Table 3.10: Best communication intervals of NDCC-SS on scenario 1 with different communication costs

Communication cost    a = 0.3       a = 0.5
2 ∗ NP                2-8           1-4
4 ∗ NP                1-20          1-8, 10-11
6 ∗ NP                1-20          1-20
8 ∗ NP                1-20          1-20
10 ∗ NP               3-20          1-20
12 ∗ NP               1-20          1-20
14 ∗ NP               2, 5-20       1-20
16 ∗ NP               4-20          1-20
18 ∗ NP               1-20          1-20
20 ∗ NP               3-20          1-20
40 ∗ NP               1-20          1-20

Comparing each column in Tables 3.10-3.11, we can observe that larger communication

intervals are more likely to be better when the communication cost becomes larger. Note

that sometimes there is the trend that higher communication frequencies become better


Table 3.11: Best communication intervals of CCDM on scenario 2 and the geometric programming problem with different communication costs

Cost       a1 = 0.10, a2 = 0.125     a1 = 0.10, a2 = 0.30      a1 = 0.05, a2 = 0.50      Geometric programming
2 ∗ NP     2-5                       2-5                       1-5                       1-20
4 ∗ NP     2-7                       2-7                       4-5                       1-20
6 ∗ NP     3-7                       3-7                       3-5                       1-20
8 ∗ NP     4-8                       2-8                       2, 4-9                    1-20
10 ∗ NP    2-9, 11                   2-8                       3-7, 9                    1-20
12 ∗ NP    3-11                      3-6                       4-8, 10                   1-20
14 ∗ NP    4-6, 9-10                 2-7, 9-10                 3-8, 12-13, 20            1-20
16 ∗ NP    3-9, 18, 20               5, 7, 8, 19, 20           3-11, 16, 18-20           1-20
18 ∗ NP    8, 17, 20                 3, 5, 20                  3-5, 7-10, 12, 14-20      1-20
20 ∗ NP    5-9, 12, 14, 16-17, 20    3-8, 10, 12, 13, 15-20    4-7, 12, 14, 16-20        1-20
40 ∗ NP    18-20                     17-20                     16-20                     3-6, 8-20

again as communication cost becomes larger (e.g. one-generation communication is no

longer the best at the cost of 16 ∗ NP but becomes best again at the cost of 18 ∗ NP in

Table 3.10 where a = 0.3). As we use a fixed maximum number of function evaluations

for different communication costs, the available number of function evaluations to do

optimisation is different for different communication costs. Furthermore, the difference

of the available function evaluations between different communication intervals becomes

smaller as the communication cost increases. That means, the effect of communication

cost on frequent communication becomes smaller as the communication cost increases.

Consequently, frequent communication can become better again as the communication

cost increases.

3.4 Communication Frequency Self-Adaptation

As the best communication frequencies for both proposed methods vary with the problem

and communication cost, it is not a trivial task to find a good communication frequency for

a new problem. Motivated by this, we propose a communication frequency self-adaptation


method (CFS) to automatically adjust a good communication frequency during the con-

current design process of NDCC-SS and CCDM.

3.4.1 The Proposed Self-Adaptive Method

The CFS method is inspired by the adaptation method proposed in [53], which is used

to adapt the population size of an EA. The algorithm framework of the CFS method is

given in Algorithm 7.

Algorithm 7 Communication Frequency Self-adaptation (CFS)

1: Randomly generate a communication interval p0
2: Generate p1 and p2 that satisfy p1 < p0 < p2
3: while the stopping criterion is not met do
4:   Use each of p0, p1, p2 for a specified number of function evaluations
5:   Calculate the performance improvement of each of p0, p1, p2
6:   if p1 achieves the largest improvement then
7:     set p′1 = max(⌊p1/2⌋, pL), p′0 = max(p1, p′1 + 1), p′2 = max(⌊(p1 + p0)/2⌋, p′0 + 1)
8:   else if p2 achieves the largest improvement then
9:     set p′2 = min(p2 ∗ 2, pU), p′0 = min(p2, p′2 − 1), p′1 = min(⌈(p0 + p2)/2⌉, p′0 − 1)
10:  else if p0 achieves the largest improvement then
11:    set p′0 = p0, p′1 = min(⌈(p0 + p1)/2⌉, p′0 − 1), p′2 = max(⌊(p0 + p2)/2⌋, p′0 + 1)
12:  else if all pi (i = 0, 1, 2) achieve the same performance improvement then
13:    set p′0 = p0, p′1 = max(2 ∗ p1 − p0, pL), p′2 = min(2 ∗ p2 − p0, pU)
14:  else if no improvement is achieved by any pi (i = 0, 1, 2) then
15:    set p′1 = max(⌊p1/4⌋, pL), p′0 = max(⌊p1/2⌋, p′1 + 1), p′2 = max(⌊(p′0 + p1)/2⌋, p′0 + 1)
16:  end if
17:  Set p0 = p′0, p1 = p′1, and p2 = p′2
18: end while

In the beginning, the CFS method randomly initialises a communication interval value

p0 from the pre-specified interval range [pL, pU]. Then, two neighbours p1, p2 in different

directions are generated for p0. That is, p1 < p0 < p2. The whole design process is divided

into cycles in CFS. In each cycle, each of p0,p1,p2 is used as the communication interval

for a specified number of function evaluations. In this process, when pi (i = 0, 1, 2) is in

turn, sub-populations in both NDCC-SS and CCDM communicate with each other every

pi generations, and then the obtained improvement is calculated and recorded. After this

process, the value of p0 is set to the pi (i = 0, 1, 2) that obtained the largest improvement.


Then, two new neighbours p1, p2 in different directions are generated for p0 and the next cycle begins. The update of p0 and the re-generation of the two neighbours p1, p2 for

the updated p0 are based on the following greedy rules (for clarity, the new value of pi

(i = 0, 1, 2) is denoted by p′i (i = 0, 1, 2)):

• If p1 achieves the largest improvement, then let p′1 = max(⌊p1/2⌋, pL), p′0 = max(p1, p′1 + 1), and p′2 = max(⌊(p1 + p0)/2⌋, p′0 + 1).

• If p2 achieves the largest improvement, then let p′2 = min(p2 ∗ 2, pU), p′0 = min(p2, p′2 − 1), and p′1 = min(⌈(p0 + p2)/2⌉, p′0 − 1).

• If p0 has the largest improvement, then let p′0 = p0, p′1 = min(⌈(p0 + p1)/2⌉, p′0 − 1), and p′2 = max(⌊(p0 + p2)/2⌋, p′0 + 1).

• If all pi (i = 0, 1, 2) achieve the same improvement, then let p′0 = p0, p′1 = max(2 ∗ p1 − p0, pL), and p′2 = min(2 ∗ p2 − p0, pU). This is to enlarge the search region.

• If no improvement is achieved by any pi (i = 0, 1, 2), then let p′1 = max(⌊p1/4⌋, pL), p′0 = max(⌊p1/2⌋, p′1 + 1), and p′2 = max(⌊(p′0 + p1)/2⌋, p′0 + 1) to make sub-populations communicate more frequently to obtain improvement.

Note that each of p′1, p′0, and p′2 needs to be kept as an integer and the relationship of

p′1 < p′0 < p′2 needs to be maintained during this process.

The degree of improvement of the performance is measured by the ratio of the im-

provement over either the best objective function value or constraint violation value.

Furthermore, the improvement is normalised according to the efforts (the number of

function evaluations) spent on it. For NDCC-SS on the single motor design problem, the

improvement for each pi (i = 0, 1, 2) is defined as follows:

\[
I = \frac{f^{t}_{best} - f^{t+1}_{best}}{f^{t}_{best}\,(\mathrm{Evals}^{t+1} - \mathrm{Evals}^{t})} \tag{3.15}
\]

where f^j_best (j = t, t + 1) is the best objective function value at time j, and Evals^j is the

number of function evaluations spent until time j. For CCDM on the overlapping motor


design problems, if the best solution satisfies the constraints, the improvement calculation

is the same as in Eq. (3.15). Otherwise, the improvement for each pi (i = 0, 1, 2) is

calculated as the ratio of improvement on the constraint violation value normalised by

the used number of function evaluations.
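A small sketch of this improvement measure (Eq. (3.15)) and its constraint-violation counterpart is given below; the function names, dictionary fields, and the exact form of the violation-based variant are assumptions for illustration.

```python
def improvement(f_best_prev, f_best_now, evals_prev, evals_now):
    """Normalised objective improvement of Eq. (3.15) for a minimisation problem."""
    return (f_best_prev - f_best_now) / (f_best_prev * (evals_now - evals_prev))

def improvement_ccdm(best_prev, best_now, evals_prev, evals_now):
    """CCDM variant (sketch): use the objective when the best solution is feasible,
    otherwise the analogous ratio on the constraint violation value."""
    if best_now["violation"] == 0:
        return improvement(best_prev["f"], best_now["f"], evals_prev, evals_now)
    return (best_prev["violation"] - best_now["violation"]) / (
        best_prev["violation"] * (evals_now - evals_prev))
```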

Note that it is not easy to determine how many function evaluations to run for each

pi in one cycle. First, we cannot have a fixed value for all pi, as such a value would need to be a multiple of every (pi ∗ popnum ∗ NP + cost), where popnum is the number of sub-populations, NP is the size of the sub-populations, and cost is the communication cost. Such a fixed value would be too large and, thus, the number of cycles would be too small for effective self-adaptation. Second, without a fixed number of function evaluations for all pi, the method could be biased. For example, if p0 and p1 always receive fewer function evaluations than p2 does in every cycle, the method might perform badly on

problems that prefer small communication intervals. To minimise the bias, we run p2 for (p2 ∗ popnum ∗ NP + cost) function evaluations, and run each pi (i = 0, 1) for a randomly selected number of function evaluations between ⌊(p2 ∗ popnum ∗ NP + cost)/(pi ∗ popnum ∗ NP + cost)⌋ ∗ (pi ∗ popnum ∗ NP + cost) and ⌈(p2 ∗ popnum ∗ NP + cost)/(pi ∗ popnum ∗ NP + cost)⌉ ∗ (pi ∗ popnum ∗ NP + cost) in each cycle.
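The following sketch computes such a per-cycle evaluation budget; interpreting the "between" as a random choice of either whole-block multiple, and the helper name cycle_budget, are assumptions for illustration.

```python
import math
import random

def cycle_budget(p_i, p2, popnum, NP, cost):
    """Function evaluations to spend on interval p_i in one CFS cycle (sketch).

    p2 receives exactly one block of (p2 * popnum * NP + cost) evaluations; a smaller
    interval receives a randomly chosen whole number of its own blocks so that its
    budget brackets that of p2, reducing the bias against small intervals.
    """
    block_p2 = p2 * popnum * NP + cost
    block_pi = p_i * popnum * NP + cost
    if p_i == p2:
        return block_p2
    lower = math.floor(block_p2 / block_pi) * block_pi
    upper = math.ceil(block_p2 / block_pi) * block_pi
    return random.choice([lower, upper])
```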

3.4.2 Experimental Results

In this section, we assume a fixed communication cost and study the performance of the

proposed CFS method by comparing it with a random method and a method of constant

communication frequency in both NDCC-SS and CCDM. In the random method, at each

communication step, it randomly decides the next communication time from the specified

communication interval range. In the fixed communication method, a fixed communication

interval of 1 generation is used.

In the experiments, we set the communication cost to popnum ∗NP and the communi-

cation interval range to [1, 50]. The maximum number of function evaluations (MaxFEs)

was set to 2 ∗ 105, and the specified number of generations in both NDCC-SS and CCDM


was set to ⌊MaxFEs/(popnum ∗ NP + cost)⌋. Other settings are the same as in Section 3.3.3.

Tables 3.12 and 3.13 summarise the means and standard deviations of the function val-

ues of the best solutions obtained over 100 runs by using each of the random method, the

fixed method and the adaptive method in NDCC-SS and CCDM. We used the Wilcoxon

rank-sum test at a significance level of 0.05 to make comparisons between the CFS method

and the two compared methods over each test problem, and the statistical test results

are also given in Tables 3.12 and 3.13. The signs ‘≈’ and ‘−’ in Tables 3.12 and 3.13

denote the compared method is similar to and significantly worse than the proposed CFS

method, respectively.

According to the statistical test results, the CFS method performed similarly to the

random method on the geometric programming problem in CCDM, and performed better

than it on the single motor design problems in NDCC-SS and on the overlapping motors

design problems in CCDM.

When compared to the fixed method, the CFS method performed better on one test

problem and similarly on the other test problem in NDCC-SS. When applied in CCDM,

the CFS method and the fixed method obtained similar results on the geometric program-

ming problem. However, the CFS method performed better than the fixed method on the

other test problems.

Table 3.12: Comparison between the CFS method and the compared methods for NDCC-SS on scenario 1. The best results are marked in bold.

Problem    Random method             Fixed method              CFS method
a = 0.3    1.34e+00±5.71e+00 −       4.87e−01±4.13e−02 −       4.54e−01±4.77e−02
a = 0.5    1.65e+01±3.62e+01 −       5.80e−01±3.94e−02 ≈       2.97e+00±1.33e+01

Table 3.13: Comparison between the CFS method and the compared methods for CCDM on scenario 2 and the geometric programming problem. The best results are marked in bold.

Problem                   Random method             Fixed method              CFS method
a1 = 0.10, a2 = 0.125     1.47e+00±3.83e+00 −       1.02e+00±2.04e+00 −       1.01e+00±1.89e+00
a1 = 0.10, a2 = 0.3       4.09e+00±1.42e+01 −       1.77e+01±1.17e+02 −       9.99e−01±2.38e+00
a1 = 0.05, a2 = 0.5       3.10e+01±2.59e+02 −       1.26e+01±6.42e+01 −       2.36e+00±1.33e+01
Geometric programming     4.20e+01±2.43e+01 ≈       4.97e+01±2.25e+01 ≈       4.78e+01±2.54e+01


According to these comparison results, we can see clearly that the CFS method is

an effective method for both NDCC-SS and CCDM when information about how to set

the communication frequency is unavailable beforehand. The results also show that all

of the random method, the fixed method, and the CFS method performed similarly on

the geometric programming problem. This is because the use of different communication

intervals led to similar results on this problem. The CFS method cannot bring any

performance improvement on such problems.

3.5 Chapter Summary

This chapter developed efficient CooC-based design optimisation methods to address two

types of CE problems and to answer questions of how to better evaluate individuals in

each sub-population when using CooC to address the first type of CE problem, how to

better handle the shared design variables when using CooC to address the second type of

CE problem, and how often different sub-populations should communicate with each other

in the CooC-based concurrent design optimisation.

As an answer to the first question, this chapter proposed a new novelty-driven CooC

method, NDCC-SS, to address the first type of CE problems. Novelty-driven CooC eval-

uates individuals based not only on their fitness but also on their novelty. In NDCC-SS,

a computationally efficient novelty calculation was used and a stochastic selection process

with an adaptive probability was used to adjust a good trade-off between novelty and

fitness when making comparisons between individuals. The experimental results on the

UEM design problems have shown that the new novelty calculation strategy along with the

adaptive trade-off strategy provides a good choice to evaluate individuals and compare

them.

As an answer to the second question, this chapter proposed the CCDM method, which

used duplicate variables and the stochastic ranking method to deal with the common

variables for quasi-separable MDO problems. The superior performance of CCDM has


been demonstrated through the experimental comparison with other MDO methods on

the UEM design problems and a general MDO problem.

As an answer to the third question, this chapter studied the performance effect of

the communication frequency on NDCC-SS and CCDM. By using different problems and

communication costs, the experimental results showed that the best frequency varies as

the problem and communication cost change. Motivated by this, a self-adaptive method,

CFS, was developed to adjust a good communication frequency automatically during

the concurrent design process. The experimental studies showed that CFS is a very

competitive communication strategy for both NDCC-SS and CCDM.


CHAPTER 4

A NEW SPECIATION METHOD FOR DYNAMIC CONSTRAINED OPTIMISATION

The CE problems considered in Chapter 3 are static. In the field of CE, changes in the de-

sign objectives or constraints usually happen suddenly due to the varying customer needs,

and fast response to the changes is required [110, 130]. Besides, such dynamic constrained

optimisation problems (DCOPs) also widely exist in other fields due to the dynamic envi-

ronments, such as vehicle routing [44], load balancing [134], service composition [57], etc.

This chapter focuses on how to address DCOPs efficiently.

In Chapter 3, we assumed that how to decompose the two types of CE problems is known beforehand, and based on this we studied how to better co-evolve the decomposed sub-

problems. However, in many real-world applications, we may not have any prior knowledge

about how to decompose the problem, e.g., how to decompose a DCOP. This chapter

aims to address this issue on DCOPs through the use of niching EAs. Before giving the

details about the proposed method in Section 4.2, the motivation of this work will be first

elaborated in Section 4.1. In Section 4.3, experimental studies to validate the efficacy of

the proposed method will be presented. Finally, Section 4.4 will summarise this chapter

and give conclusions.


4.1 Motivation

In the real world, many optimisation problems are changing over time due to dynamic

environments [25], [92]. These problems require an optimisation algorithm to quickly find

the new optimum once the problem changes [93]. As a class of nature-inspired optimisation

methods, evolutionary algorithms (EAs) promise a potentially good adaptation capability

to changing environments, and thus have been widely studied in the field of dynamic op-

timisation (DO). Many evolutionary DO approaches have been developed, which include

maintaining/introducing diversity strategies, memory approaches, prediction approaches,

multi-population approaches, and so on [93].

As revealed in [94, 93], most existing studies of DO focus on unconstrained DOPs or

DCOPs with box constraints, and few consider DCOPs that involve linear or nonlinear

constraints despite their high popularity in real-world applications. In a DCOP, a change

may occur in either constraints or objective functions or both. Therefore, DCOPs have

some specific characteristics compared to unconstrained or bound-constrained DOPs. For

a DCOP, the distribution of infeasible/feasible solutions might change, and the global

optima might move from one feasible region to another disconnected feasible region, or

appear in a new region without changing the current optima due to the dynamics of

environments [94, 92].

The specific characteristics of DCOPs pose difficulties for some existing DO strategies

and constraint handling (CH) techniques in addressing them. As discussed in [92] and

[94], maintaining/introducing diversity methods such as random-immigrants (RI) [50]

and hyper-mutation (HyperM) [22] become less effective on DCOPs than on DOPs, when

combined with a penalty function that prefers feasible solutions to infeasible ones. The

reasons are as follows. Although random individuals are generated and introduced into

the current population after a change happens, as most of them are likely infeasible, they

will be abandoned by the penalty function which prefers feasible solutions. As a result,

the diversity of the population cannot be maintained or increased. However, without enough population diversity, the RI and HyperM methods cannot deal well with the situation


in which the global optimum moves from one feasible region to another disconnected

feasible region, as this situation needs an infeasible path from the previous optimum to

the current optimum. Moreover, some adaptive/self-adaptive CH techniques also face

challenges in solving DCOPs as they need knowledge of the problem that is unavailable

in a dynamic environment or historical information that might be outdated once the

problem changes [94].

To address these difficulties, the authors in [94] suggested using constraint handling

techniques that can accept diversified infeasible solutions or tracking moving feasible

regions when dealing with DCOPs. Researchers have also carried out some work following

this suggestion. As introduced in Section 2.6.2 in Chapter 2, to allow diversified infeasible

solutions distributed in the whole search space, the authors in [92, 97, 5] applied the

repair method [112] to handle constraints along with RI/HyperM to deal with changes.

The repair method repairs every infeasible solution to a feasible one, and uses the fitness

of the feasible solution to evaluate it. Consequently, infeasible solutions that can make

a good feasible solution will be kept in the population and thus the population diversity

can be maintained. However, these methods need a lot of feasibility checks, and thus cannot be applied to DCOPs in which the ratio of feasible solutions is very low and a feasibility check is computationally costly.

A simple ranking scheme in [54] was applied to handle constraints in [15]. To main-

tain population diversity, the method monitored the population diversity and switched

between a global search and a local search operator according to whether the diversity

degree is larger than a threshold. As a result, this method depends highly on the

setting of the threshold. To avoid the setting of the threshold, the authors in [16] used

the Shannon’s index of diversity as a factor to balance the influence of the global-best and

local-best search directions. However, the population tends to converge in this method,

and it might be ineffective to make the partially converged population re-diversify to

track the switched global optimum.

The authors in [4] employed simple feasibility rules [85] as the CH strategy along


with RI and combined differential evolution (DE) variants to introduce diversity after

each change. However, this method may not maintain diversity well during the run as

infeasible solutions are still likely to be abandoned by feasibility rules. In addition to DO strategies that introduce/maintain diversity, other DO strategies such as memory and prediction methods have also been combined with CH techniques to deal with DCOPs. The study in [107] adapted the abstract memory method, and the infeasibility driven evolutionary algorithm (IDEA) was combined with a prediction method (to predict the future optima)

in [37, 38] to handle DCOPs. However, both memory and prediction methods are only

applicable to particular dynamic problems (i.e., cyclic and predictable dynamic problems,

respectively).

None of the aforementioned methods were developed along the direction of tracking

moving feasible regions. However, in the field of unconstrained DOPs or DCOPs with

bounded constraints, using niching EAs to locate and track moving optima has become

one of the most important techniques [69, 12, 86, 100]. Motivated by the capability of

niching EAs in solving unconstrained DOPs or DCOPs with bounded constraints, this

chapter aims to apply niching methods to address DCOPs and study the question of

whether niching methods can help better solve DCOPs. As introduced in Section 2.3.1 in

Chapter 2, niching EAs can locate multiple optima by explicitly or implicitly maintaining

multiple sub-populations to search different promising regions of the decision space. In

this chapter, niching EAs are employed to locate and track multiple feasible regions.

Research questions of how to locate multiple feasible regions in a DCOP and how to track

moving feasible regions in a DCOP are studied. As niching methods focus on exploration,

newly changed optima might not be found quickly as promising regions are not exploited

sufficiently. Therefore, the question of how to make a good trade-off between exploration

and exploitation is also investigated.

To answer these questions, this chapter introduces a new speciation-based method,

called speciated evolution with local search (SELS), to address the challenges of DCOPs.

To locate multiple feasible regions, SELS employs deterministic crowding (DC) [80] and


assortative mating (AM) [28] to induce speciation in the population. AM and DC perform recombination and comparisons between similar individuals, respectively. Moreover, SELS

uses the simple and parameter-free feasibility rules [85] which prefer feasible solutions

to infeasible solutions to deal with constraints. Consequently, good solutions can be

maintained in different feasible regions. To track moving feasible regions, SELS adds

random immigrants into the population to introduce diversity once a change is detected.

Furthermore, to make a good trade-off between exploration and exploitation, a local

search strategy is employed in SELS to promote exploitation of the promising regions to

quickly find the changed optimum. Finally, the performance of SELS is demonstrated on

11 DCOP benchmark test functions.

4.2 The Proposed SELS Method

Algorithm 8 presents the framework of SELS. It begins with a randomly generated popula-

tion of candidate solutions and utilises a genetic algorithm (GA) to evolve this population.

At every generation, some individuals in the population will be re-evaluated to detect

changes. If a change happens, the whole population will be re-evaluated and some

random immigrants will be introduced. Otherwise, the whole population will be evolved

using DC, AM and local search operations. The parameter lsnum in Algorithm 8 denotes

the maximum number of fitness evaluations permitted for each local search. Note that

NP needs to be set to an even number.

4.2.1 Deterministic Crowding (DC) and Assortative Mating (AM)

In this work, the DC method [80] is chosen among existing niching techniques. Details

about DC can be found in Algorithm 2 in Section 2.3.1 in Chapter 2. The DC method

pairs all population elements randomly and generates two offspring for each pair based on

EA operators. Selection is then operated on each group of two parents and two offspring,

and a similarity measure is used to decide which offspring competes against which parent.


Algorithm 8 The Framework of SELS

1: Evaluate a randomly generated population P = {xi | i = 1, 2, ..., NP}, NP is even.
2: while computational resources are not used up do
3:   repeat
4:     Randomly select an unpaired individual x from the population
5:     Calculate the Euclidean distance between x and each other unpaired individual
6:     Pair x with the individual that has the smallest positive Euclidean distance to x
7:   until all individuals in the population are paired
8:   for i ← 1, 2, ..., NP do
9:     if i or i + 1 is one detection index then
10:      Re-evaluate xi or xi+1
11:    end if
12:    if the change is detected then
13:      Re-evaluate solutions in P and introduce diversity, go to Step 20
14:    else
15:      Generate two offspring o1, o2 from xi and xi+1 with crossover and mutation
16:      Do deterministic crowding(xi, xi+1, o1, o2) according to Algorithm 2 in Section 2.3.1 in Chapter 2
17:      Set i = i + 2
18:    end if
19:  end for
20:  Do local search on the best solution in P for lsnum fitness evaluations
21: end while

The offspring will replace the compared parent and enter the next generation if it is fitter.

Compared to other niching techniques, DC does not need any parameter setting. This is

why DC is chosen in SELS.

As DC can maintain good solutions on different peaks or in different feasible regions,

intuitively, we would not like to conduct crossover between solutions on different peaks or

in different feasible regions as doing this will likely generate solutions in the valley or infea-

sible regions. To avoid this, the AM method [28] is used, which mates individuals with the

most similar non-identical partner in the population. By performing crossover between individuals in proximity, species will be automatically generated, and the exploitation of

the corresponding search area will also be enhanced.

In both DC and AM, the similarity between two individuals, xi and xj, is measured

by the Euclidean distance between them. The smaller the distance between them is, the

more similar they are. Eq. (4.1) shows the calculation process of the Euclidean distance


between xi and xj:

\[
\mathrm{Dist}(x_i, x_j) = \sqrt{\sum_{k=1}^{n} (x_{i,k} - x_{j,k})^2} \tag{4.1}
\]

where n represents the dimension of xi and xj.
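To make the deterministic crowding replacement step concrete, the sketch below pairs each offspring with the closer parent by the Euclidean distance of Eq. (4.1) and keeps the fitter of each pair; the comparator `better` is assumed to be the feasibility-rules comparison described in the next subsection, and the data layout is illustrative.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance of Eq. (4.1)."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def deterministic_crowding(p1, p2, o1, o2, better):
    """One DC replacement step (sketch): each offspring competes with the most
    similar parent, and the winner of each pairing survives.

    p1, p2, o1, o2: decision vectors; better(a, b) returns True if a beats b.
    """
    if euclidean(p1, o1) + euclidean(p2, o2) <= euclidean(p1, o2) + euclidean(p2, o1):
        pairs = [(p1, o1), (p2, o2)]
    else:
        pairs = [(p1, o2), (p2, o1)]
    return [o if better(o, p) else p for p, o in pairs]
```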

4.2.2 Feasibility Rules

To deal with constraints and make comparisons between individuals, one of the most

popular constraint handling techniques, feasibility rules [85], is used in SELS to determine

the fitter one in each pair of parent and offspring in the deterministic crowding. Compared

to other constraint handling techniques, the method of feasibility rules is very simple and

parameter-free. It makes comparison between two individuals according to the following

three selection criteria:

1. If both of the individuals are feasible solutions, the one with the higher fitness value

is selected.

2. If both of the individuals are infeasible solutions, the one with the lower sum of

constraint violation wins.

3. If one of the individuals is feasible and the other is infeasible, the feasible individual

is preferred.
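For illustration, these three criteria can be expressed as a small Python sketch (not the thesis code; here the objective is assumed to be minimised, as in the G24 benchmark used later, and constraints of the form gi(x) ≤ 0 are assumed):

def total_violation(g_values):
    # Sum of constraint violations for constraints of the form g_i(x) <= 0.
    return sum(max(0.0, g) for g in g_values)

def is_better(f_a, g_a, f_b, g_b):
    # Feasibility rules: (1) two feasible solutions are compared by objective
    # value (minimisation assumed here), (2) two infeasible solutions by total
    # violation, (3) a feasible solution always beats an infeasible one.
    va, vb = total_violation(g_a), total_violation(g_b)
    if va == 0.0 and vb == 0.0:
        return f_a < f_b
    if va > 0.0 and vb > 0.0:
        return va < vb
    return va == 0.0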

4.2.3 Local Search (LS) Strategy

In this work, the local search framework, local evolutionary search enhancement by ran-

dom memorizing (LESRM) [125], is applied to the best solution (xbest) in the current

population at each generation. LESRM applies an EA that adaptively adjusts its mu-

tation step size to evolve the current individual and adjusts the search direction based

on previously encountered individuals. Thus, it can perform efficient exploitation in the area where the best solution is located. In this work, we instantiate LESRM with a (1+1)-


evolutionary strategy (ES). Algorithm 9 gives the pseudo-code for LESRM which uses a

memory Memory to archive the previous best solutions.

Algorithm 9 Evolutionary Search in LESRM in SELS (xbest)

1: Set Memory = ∅
2: Set the beam search factor b = 2
3: Find the closest non-identical solution xnear in the current population to xbest
4: Set δ = Dist(xnear, xbest)
5: Set local search generation counter lsg = 0
6: repeat
7:   repeat
8:     Set lsg = lsg + 1
9:     Generate an offspring xoffspring = xbest + δ ∗ randn(1, D)
10:    if mod(lsg, period) == 0 then
11:      Calculate the success ratio of δ
12:      Set δ = δ ∗ 2 if the success ratio of δ is larger than 0.5
13:      Set δ = δ/2 if the success ratio of δ is less than 0.5
14:    end if
15:  until xoffspring is fitter than xbest
16:  Update the Memory with xbest
17:  Set xbest = xoffspring
18:  if Memory is not empty then
19:    Randomly draw a solution mr from the Memory
20:    Calculate the direction s from mr to xbest, s = (xbest − mr)/||xbest − mr||
21:    Set ba = 1
22:    repeat
23:      ba = ba ∗ b
24:      xnew = xbest + s ∗ ba ∗ δ
25:      if xnew is fitter than xbest then
26:        Update the Memory with xbest
27:        Set xbest = xnew
28:      end if
29:    until xbest is fitter than xnew
30:  end if
31:  Update the Memory with xnew
32: until lsnum function evaluations are used up

At each iteration in the local search, LESRM first uses ES to evolve the current

individual until a better individual is generated (see Steps 7-15). During the evolutionary

process of ES, the mutation step size δ is adjusted based on its success ratio. The success

ratio of δ is calculated as the ratio of the offspring generated using δ that are better

than the current individual among all the offspring generated using δ in a period. If the


success ratio is bigger than 0.5, then δ is doubled to extend the search range (see Step

12). Otherwise, δ is set to δ/2 to narrow the search range (see Step 13). In this work,

period is set to 2.

Then, the current individual is replaced by the better generated offspring (see Step

17). After this, LESRM randomly selects one individual from the memory Memory , and

calculates the direction from this randomly selected individual to the current individual

(see Steps 19-20). The calculated direction is then used to generate new individuals until

the newly generated individual is worse than the current individual (see Steps 22-29). In

this process, if the newly generated individual is better than the current individual, the

current individual will be replaced by the new individual (see Steps 25-27).

The above process is iterated until lsnum function evaluations are used up. The memory

depth is set to dm = 2 ∗ n ∗ n2 ∗ n3 (n denotes the dimension), following [125]. Once the memory reaches its maximum depth, the oldest individuals are the first to be replaced by newly added individuals.
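As an illustration of Steps 7-15 of Algorithm 9, the following Python sketch shows the success-ratio-based step-size adaptation (illustrative only; evaluate and fitter are assumed stand-ins for the fitness evaluation and the feasibility-rule comparison, and period = 2 as above):

import numpy as np

def es_inner_loop(x_best, delta, evaluate, fitter, period=2, max_evals=16):
    # Mutate x_best with step size delta until a fitter offspring is found.
    # Every `period` trials the success ratio of delta is checked:
    # > 0.5 doubles delta, < 0.5 halves it. Bounded here by max_evals.
    successes, trials, evals = 0, 0, 0
    f_best = evaluate(x_best)
    while evals < max_evals:
        offspring = x_best + delta * np.random.randn(x_best.size)
        f_off = evaluate(offspring)
        evals, trials = evals + 1, trials + 1
        improved = fitter(f_off, f_best)
        if improved:
            successes += 1
        if trials % period == 0:
            ratio = successes / period
            if ratio > 0.5:
                delta *= 2.0      # extend the search range
            elif ratio < 0.5:
                delta /= 2.0      # narrow the search range
            successes = 0
        if improved:
            return offspring, delta, evals
    return x_best, delta, evals   # no improvement within the budget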

4.2.4 Change Detection and Diversity Introduction

To detect a change in time, at every generation SELS re-evaluates the (NP/k)-th, (2 ∗ NP/k)-th, ..., ((k − 1) ∗ NP/k)-th, and NP-th individuals in the population. Here, k

is the number of individuals for detection and NP is the size of population. Once the

fitness or the constraint violation values of the detection individual changes, the whole

population will be re-evaluated. After this, to introduce diversity into the population, NI

individuals are first randomly generated. They are then used to randomly replace some

individuals in the population except the best individual to do partial re-initialisation. In

this work, NI is set according to the severity of change as follows. Here, for each pair

of individuals xi and xj in the population, Compare(xi,xj) is used to denote whether

xi is better than xj. If xi is better than or equal to xj according to the feasibility

rules, Compare(xi,xj) is equal to 1. Otherwise, Compare(xi,xj) is equal to 0. After

the re-evaluation, if Compare(xi,xj) changes, then this is recorded as one reverse order.


The degree of change severity after re-evaluation is estimated as the ratio of the number of reverse orders to the number of all pairs of individuals in the population. Finally, NI is set to max(⌈reverse_ratio ∗ NP⌉, 2).
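A compact way to express this estimate is sketched below in Python (illustrative; better_before and better_after are assumed to hold the pairwise Compare values recorded before and after re-evaluation):

import math

def num_immigrants(better_before, better_after, NP):
    # Change severity = fraction of individual pairs whose Compare outcome is
    # reversed after re-evaluation; NI = max(ceil(ratio * NP), 2).
    reversed_pairs = total_pairs = 0
    for i in range(NP):
        for j in range(i + 1, NP):
            total_pairs += 1
            if better_before[i][j] != better_after[i][j]:
                reversed_pairs += 1
    ratio = reversed_pairs / total_pairs
    return max(math.ceil(ratio * NP), 2)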

4.3 Experimental Studies

4.3.1 Experimental Setup

To assess the efficacy of SELS, we conducted experiments on 11 DCOP benchmark test

functions proposed in [94]. There are actually 18 test functions in total in the benchmark.

However, only 11 of them satisfy the definition of DCOPs given in Eq. (2.3). The 11 test

functions are G24-l(dF,fC), G24-2(dF,fC), G24-3(dF,dC), G24-3b(dF,dC), G24-4(dF,dC), G24-5(dF,dC), G24-6a(dF,fC), G24-6c(dF,fC), G24-6d(dF,fC), G24-7(fF,dC), and G24-8b(dF,fC). Here, ‘dF’, ‘fF’, ‘dC’ and ‘fC’ mean dynamic objective function,

fixed objective function, dynamic constraint functions, and fixed constraint functions,

respectively.

All of the 11 test functions have two decision variables, x1 and x2, whose ranges are [0, 3] and [0, 4], respectively. They were developed based on two basic objective

functions and six constraint functions. The two basic objective functions are:

f1 = −(X1 + X2)
f2 = −3 exp(−sqrt(X1^2 + X2^2))
(4.2)


and the six constraint functions are:

g1 = −2Y1^4 + 8Y1^3 − 8Y1^2 + Y2 − 2
g2 = −4Y1^4 + 32Y1^3 − 88Y1^2 + 96Y1 + Y2 − 36
g3 = 2Y1 + 3Y2 − 9
g4 = −1 if (0 ≤ Y1 ≤ 1) or (2 ≤ Y1 ≤ 3); 1 otherwise
g5 = −1 if (0 ≤ Y1 ≤ 0.5) or (2 ≤ Y1 ≤ 2.5); 1 otherwise
g6 = −1 if [(0 ≤ Y1 ≤ 1) or (2 ≤ Y2 ≤ 3)] or (2 ≤ Y1 ≤ 3); 1 otherwise
(4.3)

where Xi (i = 1, 2) and Yi (i = 1, 2) satisfy:

Xi(x, t) = pi(t)(xi + qi(t))

Yi(x, t) = ri(t)(xi + si(t))

(4.4)

The parameters pi(t), qi(t), ri(t), si(t) are the environmental parameters that change as time t goes by.

The definitions of the 11 test functions based on the two basic objective functions and

six constraint functions are as follows:

1. G24-l(dF,fC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.5)


The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-l(dF,fC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1, si(t) = 0, i = 1, 2

(4.6)

where k denotes the change severity. Different change severities can be obtained by setting k to different values.

2. G24-2(dF,fC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.7)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-2(dF,fC) are:

if (t mod 2 = 0):
  p1(t) = sin(kπt/2 + π/2)
  p2(t) = p2(t − 1) if t > 0, p2(0) = 0 if t = 0
if (t mod 2 ≠ 0):
  p1(t) = sin(kπt/2 + π/2)
  p2(t) = sin(kπ(t − 1)/2 + π/2)
qi(t) = 0, ri(t) = 1, si(t) = 0, i = 1, 2
(4.8)

3. G24-3(dF,dC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.9)


The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-3(dF,dC) are:

pi(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1 (i = 1, 2), s1(t) = 0, s2(t) = 2 − t ∗ (x2^max − x2^min)/S

(4.10)

where S denotes the change severity. Different change severities can be obtained by setting S to different values. x2^max and x2^min denote the upper and lower bounds of x2, respectively.

4. G24-3b(dF,dC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.11)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-3b(dF,dC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1 (i = 1, 2), s1(t) = 0, s2(t) = 2 − t ∗ (x2^max − x2^min)/S

(4.12)

5. G24-4(dF,dC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.13)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-4(dF,dC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1 (i = 1, 2), s1(t) = 0, s2(t) = t ∗ (x2^max − x2^min)/S

(4.14)


6. G24-5(dF,dC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.15)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-5(dF,dC) are:

if (t mod 2 = 0):
  p1(t) = sin(kπt/2 + π/2)
  p2(t) = p2(t − 1) if t > 0, p2(0) = 0 if t = 0
if (t mod 2 ≠ 0):
  p1(t) = sin(kπt/2 + π/2)
  p2(t) = sin(kπ(t − 1)/2 + π/2)
qi(t) = 0, ri(t) = 1 (i = 1, 2), s1(t) = 0, s2(t) = t ∗ (x2^max − x2^min)/S
(4.16)

7. G24-6a(dF,fC):

min_x f1(x)
subject to: g3(x) ≤ 0, g6(x) ≤ 0
(4.17)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-6a(dF,fC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1, si(t) = 0, i = 1, 2

(4.18)

8. G24-6c(dF,fC):

min_x f1(x)
subject to: g3(x) ≤ 0, g4(x) ≤ 0
(4.19)


and the dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-6c(dF,fC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1, si(t) = 0, i = 1, 2

(4.20)

9. G24-6d(dF,fC):

min_x f1(x)
subject to: g5(x) ≤ 0, g6(x) ≤ 0
(4.21)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-6d(dF,fC) are:

p1(t) = sin(kπt+ π/2), p2(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1, si(t) = 0, i = 1, 2

(4.22)

10. G24-7(fF,dC):

min_x f1(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.23)

The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-7(fF,dC) are:

pi(t) = 1, qi(t) = 0, i = 1, 2

ri(t) = 1 (i = 1, 2), s1(t) = 0, s2(t) = t ∗ (x2^max − x2^min)/S

(4.24)

11. G24-8b(dF,fC):

min_x f2(x)
subject to: g1(x) ≤ 0, g2(x) ≤ 0
(4.25)


The dynamic changes of pi(t), qi(t), ri(t), si(t) in G24-8b(dF,fC) are:

pi(t) = 1, q1(t) = −(c1 + ra ∗ cos(kπt)), q2(t) = −(c2 + ra ∗ sin(kπt))

ri(t) = 1, si(t) = 0, i = 1, 2

(4.26)

where c1 = 1.470561702, c2 = 3.442094786232 and ra = 0.858958496.
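For illustration, the first test function can be written down directly from Eqs. (4.2)-(4.6). The Python sketch below (not the benchmark implementation; variable names are ours) evaluates G24-l(dF,fC) at time t with change severity k:

import math

def g24_1(x, t, k=0.5):
    # G24-l(dF,fC): only p1 varies with time (Eq. 4.6); constraints are fixed.
    p1 = math.sin(k * math.pi * t + math.pi / 2)
    X1, X2 = p1 * x[0], x[1]            # Xi = pi(t) * (xi + qi(t)), qi(t) = 0
    Y1, Y2 = x[0], x[1]                 # Yi = ri(t) * (xi + si(t)), ri = 1, si = 0
    f = -(X1 + X2)                      # objective f1, to be minimised
    g1 = -2 * Y1**4 + 8 * Y1**3 - 8 * Y1**2 + Y2 - 2
    g2 = -4 * Y1**4 + 32 * Y1**3 - 88 * Y1**2 + 96 * Y1 + Y2 - 36
    return f, [g1, g2]                  # feasible iff g1 <= 0 and g2 <= 0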

In the experiments, for all test functions, the number of changes is set to 12 (i.e., t

changes from 0 to 11), the change severity is medium (i.e., k = 0.5, and S = 20), and the

change frequency is set to 500, 1000, 2000 fitness evaluations (FEs), respectively.

To evaluate the efficacy of SELS, 6 state-of-the-art algorithms were used for com-

parisons. They are dGArepairRIGA [92], dGArepairHyperM [92], GSA + Repair [96], DDECV + Repair [5], DDECV [4] and EBBPSO-T [16]. Note that the first four of the compared algorithms use a repair scheme, which requires many feasibility checks whose cost is ignored in their experiments. To make a fair comparison, we run a variant of SELS that evaluates only the feasibility of infeasible solutions and does not count these checks towards the number of used fitness evaluations. The resulting algorithm is denoted as eSELS and is compared to the four repair-based algorithms. When compared to DDECV and EBBPSO-T, SELS evaluates both the feasibility and the objective function value of every individual, whether it is feasible or infeasible, which is the same as what DDECV and EBBPSO-T do.

To evaluate the performance of each algorithm, the modified offline error [92] averaged

at every function evaluation is used. The modified offline error averaged at every function

evaluation is defined as follows:

EMO = (1 / num_of_eval) ∑_{j=1}^{num_of_eval} eMO(j) (4.27)

where num of eval denotes the maximum number of function evaluations, and eMO(j)

denotes the error of the best feasible solution obtained at j-th evaluation. If there are no

feasible solutions at the j-th evaluation, the worst possible value that a feasible solution

can have will be taken. The error value of a feasible solution means the difference between


its function value and the best possible value that a feasible solution can have. The smaller

the modified offline error is, the better the algorithm performs. Here, the best possible

value and the worst possible value that a feasible solution can have were approximated

by experiments for each test function.
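As a small illustration (Python; the function and argument names are ours), the measure can be computed from a per-evaluation record as follows, charging the worst possible feasible error whenever no feasible solution has been found yet:

def modified_offline_error(errors_per_eval, worst_feasible_error):
    # errors_per_eval[j] is the error of the best feasible solution found at
    # evaluation j, or None if no feasible solution exists at that point.
    total = sum(e if e is not None else worst_feasible_error
                for e in errors_per_eval)
    return total / len(errors_per_eval)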

In the experiments, for both SELS and eSELS, intermediate crossover with pc = 1.0,

and Gaussian mutation with pm = 1/D and scale = 0.1 are used, respectively. In the

mutation, at least one variable is mutated every time. The number of LS objective

function evaluations (lsnum) is set to 16, and 4 individuals are used for change detection

every generation. Each algorithm is run 50 times on each test function.

4.3.2 Comparison Results with Existing Algorithms on 1000 FEs

Table 4.1 summarises the mean and standard deviation of the modified offline error values

over 50 runs obtained by DDECV, EBBPSO-T and SELS as well as the performance

rank of each algorithm on each test function (in case of ties, average ranks are assigned)

based on the mean values. We applied the Friedman test followed by Holm's post-hoc procedure [32], which is used for multiple comparisons of algorithms, to investigate whether SELS performed best on the set of test functions. The analysis shows that SELS

has a significant improvement over DDECV and EBBPSO-T at the confidence level of

0.05. For test functions G24-3 and G24-7, SELS performed worse than EBBPSO-T. This

is because the global optima of these two functions do not move to another feasible region but stay in the same feasible region as time goes by.

Table 4.2 gives the performance rank of eSELS and the other 4 repair algorithms

on each test function. We also applied the Friedman test followed by Holm's post-hoc procedure [32] to do a multiple-problem comparison among the 5 algorithms. The

statistical test results show that eSELS performed significantly better than any other

algorithm on the test function set at the confidence level of 0.05.


Table 4.1: Comparison results between DDECV, EBBPSO-T and SELS based on exper-imental results of DDECV and EBBPSO-T in their original papers [4] and [16], respec-tively. The best result obtained on each function is marked in bold.

Func      DDECV[rank]         EBBPSO-T[rank]      SELS[rank]
G24-l     0.109±0.033[3]      0.084±0.041[2]      0.025±0.008[1]
G24-2     0.126±0.030[2]      0.136±0.013[3]      0.050±0.015[1]
G24-3     0.057±0.018[3]      0.032±0.005[1]      0.044±0.022[2]
G24-3b    0.134±0.033[3]      0.104±0.015[2]      0.052±0.018[1]
G24-4     0.131±0.032[2]      0.138±0.022[3]      0.082±0.021[1]
G24-5     0.126±0.030[2.5]    0.126±0.019[2.5]    0.054±0.014[1]
G24-6a    0.215±0.067[3]      0.116±0.099[2]      0.055±0.009[1]
G24-6c    0.128±0.025[2]      0.251±0.061[3]      0.052±0.008[1]
G24-6d    0.288±0.055[2]      0.312±0.203[3]      0.041±0.007[1]
G24-7     0.106±0.022[3]      0.045±0.009[1]      0.087±0.016[2]
G24-8b    0.151±0.058[2]      0.312±0.086[3]      0.055±0.022[1]

Table 4.2: Comparison results among eSELS and the other 4 repair algorithms based onthe experimental results in their original papers.

Func      dRepairRIGA[rank]   dRepairHyperM[rank]   GSA+Repair[rank]   DDECV+Repair[rank]   eSELS[rank]
G24-l     0.082±0.015[3]      0.093±0.023[4]        0.132±0.015[5]     0.061±0.010[2]       0.013±0.007[1]
G24-2     0.162±0.021[3]      0.171±0.026[4]        0.182±0.019[5]     0.062±0.006[2]       0.030±0.008[1]
G24-3     0.029±0.004[4]      0.027±0.005[2]        0.028±0.004[3]     0.046±0.006[5]       0.018±0.004[1]
G24-3b    0.058±0.007[2]      0.071±0.014[3]        0.076±0.009[4]     0.084±0.006[5]       0.021±0.004[1]
G24-4     0.140±0.028[5]      0.059±0.010[2]        0.073±0.012[3]     0.088±0.011[4]       0.036±0.009[1]
G24-5     0.152±0.017[4]      0.131±0.019[3]        0.153±0.013[5]     0.078±0.008[2]       0.027±0.024[1]
G24-6a    0.366±0.033[5]      0.358±0.049[4]        0.033±0.003[1]     0.036±0.005[2]       0.038±0.006[3]
G24-6c    0.323±0.037[4]      0.326±0.047[5]        0.045±0.004[3]     0.041±0.010[2]       0.040±0.007[1]
G24-6d    0.315±0.029[5]      0.286±0.035[4]        0.037±0.007[2]     0.079±0.006[3]       0.029±0.004[1]
G24-7     0.154±0.031[5]      0.067±0.014[3]        0.018±0.002[1]     0.107±0.011[4]       0.035±0.045[2]
G24-8b    0.341±0.053[5]      0.257±0.042[4]        0.192±0.034[3]     0.074±0.025[2]       0.025±0.006[1]


4.3.3 Comparison Results on 500 FEs and 2000 FEs

In the experiments, we also tested the performance of SELS and eSELS on the 11 test

functions with the change frequency of 500 FEs and 2000 FEs. As only DDECV and

DDECV + Repair have complete results on the change frequency of 500 FEs and 2000

FEs, the performance of SELS and eSELS is compared to that of DDECV and DDECV

+ Repair, respectively.

Tables 4.3 and 4.4 show the mean and standard deviation of the modified offline

error values over 50 runs obtained by SELS, eSELS, DDECV and DDECV + Repair on

each test function under the change frequencies of 500 FEs and 2000 FEs. The better

results obtained on each test function are marked in bold. It can be seen that SELS

and eSELS performed better than DDECV and DDECV + Repair on more than 9 of 11

test functions, respectively. This demonstrates the robustness of SELS and eSELS to the

change frequency of DCOPs.

Table 4.3: Comparison results between DDECV and SELS based on experimental resultsof DDECV in the original paper [4] under change frequency of 500 FEs and 2000 FEs.The better results obtained on each function are marked in bold.

Func                 DDECV (500 FEs)    SELS (500 FEs)     DDECV (2000 FEs)   SELS (2000 FEs)
G24-l(dF,fC)         0.227±0.067        0.068±0.028        0.066±0.018        0.011±0.003
G24-2(dF,fC)         0.162±0.032        0.095±0.018        0.071±0.016        0.025±0.009
G24-3(dF,dC)         0.087±0.024        0.101±0.018        0.032±0.008        0.024±0.009
G24-3b(dF,dC)        0.225±0.070        0.119±0.032        0.078±0.015        0.023±0.005
G24-4(dF,dC)         0.233±0.081        0.143±0.031        0.073±0.014        0.053±0.013
G24-5(dF,dC)         0.195±0.033        0.093±0.015        0.081±0.011        0.033±0.009
G24-6a(2DR,hard)     0.267±0.114        0.110±0.014        0.103±0.032        0.030±0.003
G24-6c(2DR,easy)     0.173±0.048        0.112±0.019        0.063±0.013        0.030±0.004
G24-6d(2DR,hard)     0.414±0.083        0.081±0.014        0.139±0.027        0.021±0.004
G24-7(fF,dC)         0.156±0.038        0.132±0.020        0.062±0.014        0.047±0.009
G24-8b(fC,OICB)      0.332±0.108        0.113±0.030        0.078±0.032        0.025±0.016

4.3.4 The Performance Effect of Change Severity and lsnum

To evaluate the performance of SELS on different changes, we also conducted experiments

on small change severity (i.e., k = 1.0 and S = 10) and large severity (i.e., k = 0.25


Table 4.4: Comparison results between DDECV+Repair and eSELS based on experimen-tal results of DDECV+repair in the original paper [5] under change frequency of 500 FEsand 2000 FEs. The better results obtained on each function are marked in bold.

Func                 DDECV+Repair (500 FEs)   eSELS (500 FEs)   DDECV+Repair (2000 FEs)   eSELS (2000 FEs)
G24-l(dF,fC)         0.117±0.015              0.032±0.015       0.036±0.010               0.005±0.002
G24-2(dF,fC)         0.137±0.008              0.070±0.018       0.035±0.007               0.018±0.006
G24-3(dF,dC)         0.062±0.008              0.041±0.015       0.036±0.002               0.009±0.003
G24-3b(dF,dC)        0.147±0.012              0.050±0.014       0.063±0.009               0.010±0.003
G24-4(dF,dC)         0.143±0.012              0.100±0.090       0.057±0.005               0.021±0.004
G24-5(dF,dC)         0.140±0.014              0.094±0.078       0.041±0.004               0.016±0.005
G24-6a(2DR,hard)     0.083±0.016              0.075±0.012       0.020±0.004               0.020±0.003
G24-6c(2DR,easy)     0.090±0.011              0.084±0.013       0.022±0.004               0.021±0.004
G24-6d(2DR,hard)     0.196±0.015              0.058±0.009       0.044±0.003               0.015±0.002
G24-7(fF,dC)         0.134±0.023              0.193±0.164       0.057±0.005               0.020±0.004
G24-8b(fC,OICB)      0.189±0.033              0.052±0.014       0.041±0.012               0.018±0.006

and S = 50). Figure 4.1 gives the evolutionary curves of the normalised offline error

differences between medium and small severity, and between large and small severity

on two representative functions, G24-2 and G24-8b. The X axis denotes the number of

objective function evaluations, and the Y axis denotes the difference of the normalised

offline error at each evaluation, which is normalised on each problem in one test function.

In general, we found that SELS performed best on small severity, second best on medium,

and worst on large severity.

[Figure 4.1: two panels, (a) G24-2 and (b) G24-8b; X axis: the number of FEs; Y axis: difference of normalised offline error; curves: large−medium and medium−small severity.]

Figure 4.1: The evolutionary difference curves of SELS between different change severity

We also conducted experiments to study the sensitivity of SELS to the parameter

lsnum (the maximum fitness evaluations permitted for each local search). Six settings of


lsnum (4, 8, 12, 16, 20, 24) were tested and compared on the 11 test functions. The mean

and standard deviation of the modified offline error obtained with each setting on the 11

test functions are shown in Table 4.5. It can be seen from Table 4.5 that the performance

of SELS depends on the value of lsnum . The first 9 test functions prefer lsnum = 16, while

the last two test functions prefer lsnum = 24.

Table 4.5: Comparison results among different lsnum . The best results achieved on eachtest function are marked in bold.

lsnum   G24-l                 G24-2                 G24-3                 G24-3b
4       3.68e−02±1.20e−02     7.36e−02±1.43e−02     7.67e−02±1.51e−02     9.02e−02±1.60e−02
8       2.92e−02±8.70e−03     5.68e−02±1.19e−02     5.52e−02±1.37e−02     6.11e−02±1.32e−02
12      2.73e−02±8.63e−03     5.16e−02±1.23e−02     4.90e−02±1.80e−02     5.49e−02±1.47e−02
16      2.49e−02±8.16e−03     5.00e−02±1.50e−02     4.41e−02±2.17e−02     5.19e−02±1.79e−02
20      2.97e−02±9.04e−03     4.60e−02±1.04e−02     4.07e−02±1.34e−02     5.08e−02±1.47e−02
24      2.55e−02±9.59e−03     4.91e−02±1.47e−02     4.08e−02±1.45e−02     4.94e−02±1.61e−02

lsnum   G24-4                 G24-5                 G24-6a                G24-6c
4       1.20e−01±2.19e−02     8.09e−02±1.40e−02     9.36e−02±1.11e−02     9.17e−02±1.19e−02
8       9.18e−02±1.69e−02     6.08e−02±1.15e−02     6.44e−02±8.29e−03     6.27e−02±9.16e−03
12      9.84e−02±1.82e−02     5.61e−02±1.07e−02     7.49e−02±9.73e−03     7.50e−02±1.15e−02
16      8.17e−02±2.09e−02     5.41e−02±1.35e−02     5.46e−02±9.09e−03     5.22e−02±7.57e−03
20      1.03e−01±1.93e−02     5.51e−02±1.24e−02     8.15e−02±1.23e−02     7.83e−02±1.22e−02
24      1.11e−01±2.26e−02     5.68e−02±1.22e−02     8.72e−02±8.05e−03     8.90e−02±9.31e−03

lsnum   G24-6d                G24-7                 G24-8b
4       7.15e−02±1.02e−02     1.08e−01±1.54e−02     1.40e−01±1.72e−02
8       5.01e−02±9.34e−03     8.21e−02±1.13e−02     8.12e−02±1.55e−02
12      5.50e−02±7.62e−03     8.90e−02±1.05e−02     6.13e−02±1.67e−02
16      4.14e−02±6.68e−03     8.65e−02±1.61e−02     5.53e−02±2.18e−02
20      5.85e−02±9.10e−03     9.06e−02±1.15e−02     4.46e−02±1.30e−02
24      6.59e−02±1.09e−02     7.03e−02±1.12e−02     4.32e−02±1.80e−02

4.3.5 The Performance Effect of AM and LS

We further conducted experiments to check whether the use of AM and LS really helps in

SELS, and comparisons were made among (1) SELS without AM or LS (SELS-am-ls for

short), (2) SELS without LS (SELS-ls for short), and (3) SELS.

Table 4.6 summarises the mean and standard deviation of the modified offline error

for each of them along with the comparison results. The Wilcoxon rank-sum test at a

confidence level of 0.05 was applied to compare them on each test function. It can be seen


from Table 4.6 that SELS-ls overall outperformed SELS-am-ls, and the local search further improved SELS-ls. This demonstrates the benefits of using AM and LS.

Table 4.6: Comparison results among SELS-am-ls, SELS-ls, and SELS based on experimental results obtained on the DCOP test functions. Here, +, −, and ≈ denote whether one algorithm is better, worse or equal to another according to the Wilcoxon rank-sum test with a level of 0.05.

Func      SELS-am-ls       SELS-ls vs SELS-am-ls       SELS vs SELS-ls
G24-l     0.133±0.037      0.069±0.019 +               0.025±0.008 +
G24-2     0.186±0.017      0.121±0.021 +               0.050±0.015 +
G24-3     0.124±0.038      0.118±0.025 ≈               0.044±0.022 +
G24-3b    0.233±0.039      0.144±0.021 +               0.052±0.018 +
G24-4     0.199±0.027      0.171±0.030 +               0.082±0.021 +
G24-5     0.162±0.022      0.117±0.017 +               0.054±0.014 +
G24-6a    0.318±0.040      0.163±0.020 +               0.055±0.009 +
G24-6c    0.284±0.030      0.152±0.017 +               0.052±0.008 +
G24-6d    0.198±0.033      0.128±0.023 +               0.041±0.007 +
G24-7     0.121±0.017      0.138±0.018 −               0.087±0.016 +
G24-8b    0.387±0.044      0.242±0.031 +               0.055±0.022 +

4.4 Chapter Summary

This chapter applied niching methods to address DCOPs and study the question of

whether niching methods can help better solve DCOPs. To answer this question, this

chapter introduced a novel speciation-based method, SELS, to deal with DCOPs. SELS

employs deterministic crowding (DC) [80], assortative mating (AM) [28], diversity intro-

ducing and feasibility rules to locate and track multiple feasible regions. Additionally,

SELS uses a local search strategy to promote exploitation of the promising regions to

quickly find the new optimum. The experimental studies in Section 4.3 demonstrated

that SELS generally outperformed the state-of-the-art algorithms on a DCOP benchmark.

The performance effects of AM and LS on DCOPs were also studied by experiments and

the experimental results validated the benefits of using AM and LS in SELS.


CHAPTER 5

COMPETITIVE CO-EVOLUTION FOR FAST-CHANGING DYNAMIC CONSTRAINED

OPTIMISATION

The previous chapter aims to address DCOPs through the use of niching EAs which

decompose the problem based on the search space. In this chapter, we focus on fast-

changing DCOPs and propose to deal with them from a different perspective. The

proposed method decomposes a DCOP into two antagonistic sub-components, environ-

ment and solution, and applies competitive co-evolution (ComC) to co-evolve the two

sub-components. The goal is to find a set of solutions in an offline way that are good

for sub-problems in a DCOP. After this, the solution set is used to optimise the corre-

sponding DCOP online by using the solution set for initialisation each time a change is

detected. During the online optimisation, the solution set is updated according to the en-

countered environments. In Section 5.1, the motivation for this work is given. Section 5.2

will detail the proposed method. In Section 5.3, the experiments on dynamic constrained

optimisation benchmark problems will be presented. Finally, Section 5.4 will conclude

this chapter.


5.1 Motivation

DCOPs are a sub-category of dynamic optimisation problems (DOPs), for which an opti-

miser is expected to efficiently identify the new optimum once the problem has changed.

To address DCOPs, researchers have proposed some methods mainly by making modifi-

cations to existing dynamic optimisation methods. Concretely, most of the modifications

are made on the diversity-driven approaches. The authors in [96] employed the repair

method to accept diversified infeasible individuals. To achieve a good trade-off between

global search and local search, a diversity-based balancing strategy was proposed in [16].

In the previous chapter, we applied a speciation method to maintain good solutions in

different feasible regions during the optimisation process. The authors in [13] proposed

to locate and track feasible regions by combining a multi-population approach and a

gradient-based repair method.

Being able to efficiently react to the change is one of the most desirable features of

an algorithm for DOPs. In the context of DCOPs, this might be even more important

since in a real-world application, getting a feasible solution in a hard time budget might

be more important than finding the global optimum. For example, a dynamic service

composition process must satisfy users’ requirements within very short time limits or it

becomes impractical [122]. In such cases, people might even be willing to sacrifice the

solution quality (in terms of the objective function value) for efficiency. However, as

discussed in Section 1.3.3, none of the above-mentioned methods for DCOPs can well

deal with these scenarios, i.e., DCOPs with very fast changes.

In previous studies on DOPs and DCOPs, it is usually implicitly assumed that the

investigated algorithm is applied to DOPs or DCOPs from scratch, i.e., their initial so-

lutions are randomly generated. In contrast, in most real-world scenarios, one may have a fairly long time to improve a system (and the algorithm behind it) before putting it into

full use. For example, a company may spend months to polish its backend system before

launching a new service composition system, and once the system is put online, it may

need to react to changes on a daily basis.


Inspired by this, suppose we can maintain a set of solutions, finding them before

going online and updating them online, and for any possible change induced by the envi-

ronments, one of the solutions can be efficiently modified to get a good solution. Then, the

DOP or DCOP in the online phase would not be as challenging as we expected. However,

this issue was largely overlooked in the previous studies on DOPs and DCOPs, and thus

motivated the research questions concerned in this chapter, namely, whether identifying

a set of promising solutions offline and updating them online would be beneficial to solve

DCOPs online and how to achieve such solutions, as posed in Section 1.3.3.

The first issue is non-trivial. Ideally, it would be very good if the optimal solution

for each sub-problem of a DCOP can be found beforehand. A sub-problem for a DCOP

denotes the optimisation under a fixed environment, which is a static problem. How-

ever, the environmental parameter value of each sub-problem is hard to know in advance. Instead, one could construct the set by finding the optimal solution with respect to every possible environmental parameter value, but this process is computationally intractable.

An alternative method is to sample the environments. To have a solution set of good

coverage on all environments, the sampled environment each time should be the one that

challenges the current solution set most. Moreover, the solution set needs to be updated

to conquer the newly sampled environments. The relationship between the solution set

and environments in this way is similar to the relationship between the host population

and the parasite population in competitive co-evolution (ComC). Motivated by this, we propose in this chapter to employ ComC to search for a good solution set.

After finding a set of good solutions, the online optimisation will be started. Specifi-

cally, the found set will be used as initial solutions once a change is detected, local search

strategy will be applied on the set to do further optimisation, and meanwhile the solu-

tion set is updated. The proposed method is called competitive co-evolution for dynamic

constrained optimisation (CCDO). In the following sections, the CCDO method will be

detailed first. Then, the experimental studies will be presented to answer the question of

whether identifying a set of promising solutions in advance would be beneficial to DCOPs


and the question of whether decomposing a DCOP into a solution population and an environment population and co-evolving them using ComC can find a better solution set, both of which are posed in Section 1.3.3.

5.2 The Proposed Approach

5.2.1 General Framework

Algorithm 10 gives the general framework for CCDO. ComC is applied to find a solution

set for the sub-problems in a DCOP in an offline way. After finding a set of good solutions,

the online optimisation will be started. Specifically, the found set will be used as initial

solutions once a change is detected, local search strategy will be applied on the set to do

further optimisation. In the following sub-sections, the two parts will be detailed.

Algorithm 10 General Framework for CCDO

Require:
  The search space for the solution: R^Dx
  The range for the environmental parameters: R^Da
  The maximum number of generations: Gmax
  The number of individuals to detect changes: detectk
  The memory to store locally best solutions: membest
1: Do competitive co-evolutionary search based on R^Dx and R^Da for Gmax generations
2: Archive solution set S found by ComC according to Algorithm 13
3: Initialise a population pop based on S
4: Evaluate pop with the current f
5: while stopping criteria is not satisfied do
6:   Do local search on pop according to Algorithm 14
7:   Re-evaluate detectk sentinel solutions to detect changes
8:   if change is detected then
9:     Update the solution set S
10:    Re-initialise the population according to Algorithm 15
11:  end if
12: end while


5.2.2 Competitive Co-evolutionary Search for Solution Set

The framework of ComC can be found in Algorithm 4 in Chapter 2. When employing

ComC to search the solution set, the host population is used to represent the solution set

and the parasite population is used to represent the environmental parameters. The host

population is denoted as SP (solution population) and the parasite population is denoted

as EP (environment population). Figure 5.1 shows the representation for SP and EP.

[Figure 5.1: SP is represented as a set of solution vectors x1, x2, ..., xm and EP as a set of environmental parameter vectors α1, α2, ..., αn, with arrows indicating which population is tested against which.]

Figure 5.1: Representation for SP and EP

In SP, each individual is denoted as xi(i = 1, 2, ...,m). In EP, each individual is de-

noted as αj(j = 1, 2, ..., n). Each xi represents a candidate solution for the problem. Each

αj represents a fixed environment and the values of αj denote the values of environmental

parameters. The problem under the environment of αj is a static problem. The sizes of

SP and EP are recorded as m and n, respectively. Each xi is a candidate solution for the

dynamic problem under the environment of αj. The evolution of SP is to obtain better

and better solutions for individuals in EP and the evolution of EP is to obtain more and

more challenging environments for individuals in SP. The evaluation of each individual

in SP (EP) depends on the individuals in EP (SP). The lines with an arrow are used to

show such a testing and being tested relationship between SP and EP.

As discussed in Section 5.1, we would like to find a solution set that has a good overall

coverage on all environments. Thus, solutions in SP should not only conquer more and

more challenging environments in EP but also maintain good performance on previously

conquered environments. To keep good performance of SP on previously well addressed


environments, we introduce an external archive (denoted as EAr) to store individuals in

EP that SP performs well on in previous generations. When evaluating SP, both individuals

in EP and EAr are considered.

Evolution of SP

The evolution of SP is to obtain a good solution set for a dynamic problem under envi-

ronments represented by individuals in EP and EAr. Thus, the fitness of an individual in

SP depends on how much it can help SP in solving individuals in EP and EAr. In this

work, the fitness of xi in SP is defined as the drop of the performance of SP on EP and

EAr after deleting xi from SP.

Considering a dynamic constrained optimisation problem defined as in Eq.(2.3), f(x,α)

denotes the objective function under the environment of α. Here, the performance of SP

on the dynamic problem under the environment α is denoted as F (SP ,α), which is de-

fined as the fitness of xbest that performs best among all xi in SP under the environment

of α. If xbest is feasible, then F (SP ,α) = f(xbest ,α); otherwise, F (SP ,α) is set to the

sum of constraint violation values of xbest on α.

Considering the problem scaling, we define the fitness of xi in SP as the number of α

in EP and EAr on which the performance changes if deleting xi from SP. That is:

Fit(xi, SP, EP ∪ EAr) = ∑_{α ∈ EP ∪ EAr} [F(SP, α) ≠ F(SP/xi, α)] (5.1)

When comparing two individuals in SP that have the same Fit(xi, SP ,EP ∪ EAr), one

individual is randomly selected.
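A direct reading of Eq. (5.1) can be sketched in Python as follows (illustrative only; F_of is assumed to implement F(SP, α) as defined above):

def sp_fitness(i, SP, environments, F_of):
    # Number of environments in EP ∪ EAr whose best-of-population performance
    # changes when SP[i] is removed from the solution population (Eq. 5.1).
    reduced = SP[:i] + SP[i + 1:]
    return sum(1 for alpha in environments
               if F_of(SP, alpha) != F_of(reduced, alpha))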

Algorithm 11 shows the evolutionary process of SP for one generation. At every

generation g, two individuals x1 and x2 are randomly selected from SP at the current

generation (i.e. SP g). Then, a new individual xnew is generated by doing evolutionary

operation on x1 and x2 and added to SP g to get SP ′g. After this, the fitness of all


individuals in SP ′g will be evaluated using the fitness function defined in Eq. (5.1). The

worst individual xworst that has the smallest fitness will be deleted from SP ′g and the

remaining individuals in SP g will go to next generation as SP g+1. If more than one

individual has the smallest fitness, xworst is selected randomly from them.

Algorithm 11 SP Evolution for One Generation

1: Evaluate the fitness of each x in SP_g based on EP_g ∪ EAr
2: Randomly select x1 and x2 from SP_g
3: Generate xnew based on x1 and x2 using mutation and crossover
4: Set SP′_g = SP_g ∪ {xnew}
5: Evaluate the fitness of each x in SP′_g
6: Set SP_{g+1} = SP′_g / {xworst}

Evolution of EP

The evolution of EP is to obtain more and more challenging environments for individuals in

SP. Thus, if individuals in SP perform better on a dynamic problem under the environment

α in EP, then the environment α is less challenging and the fitness of α should be lower.

To evaluate the fitness of an individual αj in EP, we randomly generate a set of solutions

(denoted as I) and use the improvement obtained by SP over I on αj to evaluate it.

According to whether the best solutions for αj in I and SP are feasible and whether an

improvement is observed, an individual αj can be categorised into the following four

cases:

1. The best solution for αj in SP is infeasible. In this case, the farther the best solution

is from the feasible boundary, the more challenging αj is. Thus, the fitness of αj is

defined as the sum of constraint violation values of the best solution.

2. The best solution for αj in SP is feasible, but it is worse than the best solution

for αj in I. In this case, the improvement obtained by I over SP on αj is used to

evaluate it. That is,

Fit(αj) = (min_{x∈SP} F(x, αj) − min_{x∈I} F(x, αj)) / max(|min_{x∈I} F(x, αj)|, |min_{x∈SP} F(x, αj)|) (5.2)


The larger the improvement is, the more challenging αj is.

3. The best solution for αj in SP is feasible and the best solution in I is infeasible. In

this case, the improvement is hard to calculate. In this work, the fitness of αj is

defined as the function value of the best solution in SP for αj (i.e. minx∈SPF (x, αj)).

4. Both the best solutions for αj in SP and I are feasible. αj is considered less

challenging if a larger improvement is obtained by SP over I. Thus, the fitness of

αj is defined as the negative of the improvement obtained by SP over I on αj. That

is,

Fit(αj) = (min_{x∈SP} F(x, αj) − min_{x∈I} F(x, αj)) / max(|min_{x∈I} F(x, αj)|, |min_{x∈SP} F(x, αj)|) (5.3)

For each case, the larger the fitness is, the more challenging αj is. When comparing

individuals belonging to different cases, an individual that belongs to case 1 is considered

more challenging than an individual that belongs to case 2, an individual belonging to case

2 is considered more challenging than an individual belonging to case 3 and an individual

belonging to case 4. When comparing an individual belonging to case 3 and an individual

belonging to case 4, a random one is selected.
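The four cases can be combined into one comparison key, as in the following Python sketch (illustrative; F_of(pop, α) returns the best performance of a population on α and best_is_feasible reports whether that best solution is feasible, both assumed helpers):

def ep_challenge(alpha, SP, I, F_of, best_is_feasible):
    # Returns (case, value); comparisons then follow the ordering in the text:
    # case 1 > case 2 > cases 3 and 4, with ties between 3 and 4 broken at random.
    f_sp, f_i = F_of(SP, alpha), F_of(I, alpha)
    if not best_is_feasible(SP, alpha):              # case 1: value is a violation sum
        return 1, f_sp
    denom = max(abs(f_i), abs(f_sp)) or 1.0          # guard against division by zero
    if best_is_feasible(I, alpha) and f_sp > f_i:    # case 2: SP feasible but worse than I
        return 2, (f_sp - f_i) / denom
    if not best_is_feasible(I, alpha):               # case 3: only SP's best is feasible
        return 3, f_sp
    return 4, (f_sp - f_i) / denom                   # case 4: both feasible (Eq. 5.3)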

Algorithm 12 gives the evolutionary process of EP for one generation. At every gener-

ation g, a new population EPnew is generated based on EP g using evolutionary operation.

Then, EP g and EPnew are combined to obtain EP ′g and the fitness of individuals in EP ′g

will be evaluated with the above-mentioned method. After this, EP g and EPnew will be

compared using a pair-wise comparison. All winners will enter next generation as EP g+1

and all losers will be used to update the external archive EAr. Here, the losers are the environments on which SP performs better. As SP is expected to perform well under all environments,

we would like to maintain the performance of SP on these losers while improving SP to

conquer the winners. Thus, all losers are used to update the archive which is used to

evaluate SP together with the winners.


Algorithm 12 EP Evolution for One Generation

1: Generate a new population EPnew based on EP_g using crossover and mutation
2: Set EP′_g = EP_g ∪ EPnew
3: Evaluate each α in EP′_g
4: for i = 1, 2, ..., size of EP do
5:   if EP_g^i is more challenging than EPnew^i then
6:     EP_{g+1}^i = EP_g^i
7:     EPworse^i = EPnew^i
8:   else
9:     EP_{g+1}^i = EPnew^i
10:    EPworse^i = EP_g^i
11:  end if
12: end for
13: Use EPworse to update the archive EAr

To Maintain and Update External Archive

The external archive EAr is to store previously good environments. In the evolution of

EP, the individuals that do not enter next generation will be saved into EAr as they are

less challenging. At the beginning, all α individuals that do not enter next generation

will be saved directly into EAr. When the number of added individuals reaches the size

of EAr, each new individual α will be added according to the following replacement rules:

1. Set EAr ′ = EAr ∪α,

2. Calculate the diversity contribution of each individual in EAr ′,

3. Delete the individual that has the least contribution.

The replacement rules delete the individuals that contribute least to the diversity of

the archive. This is to maintain the diversity of the archive. In this work, the diversity

contribution of each individual is calculated as the reduction of the diversity of the archive

if deleting this individual. The diversity of the archive is calculated as the mean value

of the difference between each pair of individuals in the archive. The difference of two

individuals αi and αj is calculated as follows:

Diff (αi,αj) = −Spearman(Ranks(SP ,αi),Ranks(SP ,αj)) (5.4)


where Spearman denotes the Spearman’s rank correlation coefficient or Spearman’s rho

[114] between Ranks(SP ,αi) and Ranks(SP ,αj). Ranks(SP ,αi)/Ranks(SP ,αj) denotes

the performance rank vector of all individuals in SP on αi/αj. If xi is the individual in

SP that performs best on α, then its rank is 1. If it performs second best, its rank is 2,

and so on.
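Assuming SciPy is available, the difference measure of Eq. (5.4) can be sketched in Python as follows (score(x, α) is an illustrative stand-in for the performance of a single solution on one environment, lower being better):

import numpy as np
from scipy.stats import spearmanr

def rank_vector(SP, alpha, score):
    # Rank of each solution in SP on environment alpha: the best gets rank 1.
    order = np.argsort([score(x, alpha) for x in SP])
    ranks = np.empty(len(SP), dtype=int)
    ranks[order] = np.arange(1, len(SP) + 1)
    return ranks

def env_difference(SP, alpha_i, alpha_j, score):
    # Eq. (5.4): negative Spearman correlation of the two rank vectors, so two
    # environments that rank SP in the same way are considered similar.
    rho, _ = spearmanr(rank_vector(SP, alpha_i, score),
                       rank_vector(SP, alpha_j, score))
    return -rho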

Competitive Co-evolutionary Search Algorithm

Algorithm 13 details the evolutionary process to search the solution set by ComC. In this

algorithm, Dx and Da are the dimension of the solution space and environment space for

a DCOP, respectively. The solution population obtained in the final generation is used as

the solution set for the dynamic problem, which will later be used in the online optimisation phase to address that problem.

Algorithm 13 Competitive Co-evolutionary Search Algorithm

Require:
  The search space for the solution: R^Dx
  The range for the environmental parameters: R^Da
  The maximum number of generations: Gmax
  The size of the initial solution set I: isize
  The maximum number of generations to evolve SP_g: esp
Ensure:
  The final solution population, SP_Gmax
1: Set g = 0
2: Initialise solution population SP_0 based on R^Dx
3: Initialise environment population EP_0 based on R^Da
4: Set EAr = ∅
5: while g ≤ Gmax do
6:   Evolve SP_g for esp generations to get SP_{g+1} based on EP_g ∪ EAr according to Algorithm 11
7:   Randomly sample a solution set I of size isize
8:   Evolve EP_g for one generation based on SP_{g+1} and I according to Algorithm 12
9:   Update the archive EAr
10:  Set g = g + 1
11: end while
12: Output SP_Gmax


5.2.3 Online Optimisation

When the solution set is found by ComC, it will be used to optimise the correspond-

ing DCOP online. Concretely, the found set will be used as the initial individuals at

the beginning and each time a change is detected. During the interval between two

consecutive changes, local search operations will be conducted on the individuals in the

population to address the current sub-problem.

Local Search Strategy

In this work, we use sequential quadratic programming (SQP) [11] to do local search

in the online optimisation. SQP is an iterative method which deals with a constrained

nonlinear optimisation problem by solving a sequence of relatively simple optimisation

sub-problems. Suppose a constrained nonlinear optimisation problem has the following

formulation:

minimise_x f(x)
s.t. gi(x) ≤ 0, i = 1, 2, ..., k
     hj(x) = 0, j = 1, 2, ..., l
(5.5)

where x = [x1, x2, ..., xDx ]T and Dx is the dimension of the decision space. Then, we can

get the Lagrangian function for this problem as:

L(x,λ,µ) = f(x) + λT ∗G(x)− µT ∗H(x) (5.6)

where G(x) = {gi(x) | i = 1, 2, ..., k}, H(x) = {hj(x) | j = 1, 2, ..., l}, and λ = [λ1, λ2, ..., λk]T and µ = [µ1, µ2, ..., µl]T are vectors of Lagrangian multipliers.

Suppose the solution point at the k-th iteration is xk, then the search direction dk =


[d1, d2, ..., dDx ]T is a solution to the quadratic programming sub-problem as follows:

minimise_{dk}  f(xk) + ∇f(xk)^T dk + (1/2) dk^T ∇²xxL(x, λ, µ) dk
s.t.  gi(xk) + ∇gi(xk)^T dk ≤ 0, i = 1, 2, ..., k
      hj(xk) + ∇hj(xk)^T dk = 0, j = 1, 2, ..., l
(5.7)

Here, each of ∇f(xk), ∇gi(xk), and ∇hj(xk) is estimated by the forward difference for-

mula as follows:

∇f(xk) = (1/η) [f(xk + e1) − f(xk), f(xk + e2) − f(xk), ..., f(xk + eDx) − f(xk)]^T
∇gi(xk) = (1/η) [gi(xk + e1) − gi(xk), gi(xk + e2) − gi(xk), ..., gi(xk + eDx) − gi(xk)]^T
∇hj(xk) = (1/η) [hj(xk + e1) − hj(xk), hj(xk + e2) − hj(xk), ..., hj(xk + eDx) − hj(xk)]^T
(5.8)

where η is a very small positive value (set as 1.49e-8 in this work) and ei is a vector

in which the i-th element is η and the other elements are all 0. The term ∇²xxL(x,λ,µ) is a

positive definite approximation of the Hessian matrix of L(x,λ,µ) on x. It is initialised

as a positive definite matrix, and updated based on ∇f(x), ∇gi(x), and ∇hj(x) using

Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [1].

After dk is found, xk+1 is set to (xk + dk) and the (k+1)-th iteration starts. The

starting point for the SQP iteration at the beginning is the solution on which local search

is conducted. To implement SQP, the fmincon function from the Matlab optimisation toolbox was used in this chapter. More implementation details can be found in [1].
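The forward-difference estimate of Eq. (5.8) is straightforward to express; the Python sketch below only illustrates the formula (the actual implementation in this work relies on Matlab's fmincon):

import numpy as np

def forward_difference_gradient(func, x, eta=1.49e-8):
    # Estimate the gradient of func at x: perturb one coordinate at a time by
    # eta (the vector e_i of Eq. 5.8) and divide the change in func by eta.
    x = np.asarray(x, dtype=float)
    f0 = func(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eta
        grad[i] = (func(x + e) - f0) / eta
    return grad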

At every generation, each individual in the population can undergo local search. How-

ever, doing local search on similar individuals cannot bring improvements but costs more

computational resources. To avoid conducting local search on similar individuals, indi-

viduals that have undergone local search are recorded in the memory, memls. At every

generation, if the distance between one individual and the nearest individual in the memls

is less than 0.01, local search will not be conducted on this individual. This strategy is

similar to the adaptive local search strategy used in [13]. At the end of every generation,


individuals whose distance to the nearest individual in the memls is less than 0.01 will be

replaced by random individuals.

In addition to the SQP operation, the Gaussian mutation operator is also used to

mutate every individual. The mutant individual will replace the current individual if it is

better. Algorithm 14 shows the details of the local search process.

Algorithm 14 Local Search Process

1: for each solution xi in the population pop_g do
2:   Re-evaluate xi to detect whether a change happens
3:   if a change is detected then
4:     Re-initialise the population according to Algorithm 15
5:   else
6:     if the distance between xi and memls is larger than 1e-2 then
7:       Do SQP local search on xi to get xbest_i
8:       Archive xi into memls
9:       Archive xbest_i into membest
10:      Set xi = xbest_i
11:    end if
12:    Generate x′_i = xi + δ ∗ randn(1, Dx)
13:    if f(x′_i) is better than f(xi) then
14:      Set xi = x′_i
15:    end if
16:  end if
17: end for

Change Detection

At each generation, individuals in the current population will be re-evaluated to detect

whether a change happens before local search operation. In some situations, the popu-

lation might converge and this detection strategy cannot detect changes that happen in

other regions. Considering this, we generate several random individuals and use them as

sentinels. After each local search operation, the randomly generated individuals will also

be re-evaluated to check whether there is a change.


Update Mechanism

As the environments of a DCOP that do appear in the online optimisation are of top con-

cern, the solution set obtained by ComC should be updated according to the encountered

environments before being used as initial solutions. One direct way to implement this is to

include every newly encountered environment into the environment population EP or the

archive EAr and run ComC online to update the solution set S. Each time a change is

detected, the current solutions in the solution population SP are picked out for initialisa-

tion. However, running ComC online will cost extra computing resources. Considering

this, we use a very simple mechanism to update the solution set S. That is, the best

solution found in last environment will be added to S when a change is detected.

Re-initialisation Process

To deal with the DCOPs in which the optimum switches between feasible regions, the

optimal solutions obtained by last local search operations are used as the initial individuals

together with the solution set S when a change is detected. Algorithm 15 shows the details

of the re-initialisation process.

Algorithm 15 Population Re-initialisation Process

Require:
  The memory to store locally best solutions: membest
  The solution set S found by ComC
1: Set pop_initial = S ∪ membest
2: Set pop = ∅
3: for each xi in pop_initial do
4:   if the distance between xi and pop is larger than 1e-2 then
5:     pop = pop ∪ {xi}
6:   end if
7: end for
8: Do local search on pop according to Algorithm 14


5.3 Experimental Studies

To evaluate whether identifying a set of promising solutions offline would be beneficial to

DCOPs, a DCOP benchmark was used to test the performance of CCDO and comparisons

were made between it and two state-of-the-art dynamic constrained optimisation methods.

At the cost of efforts spent in the competitive co-evolutionary search process, we expect

CCDO to show a fast adaptation to DCOPs with fast changes. Therefore, different

change frequencies of DCOPs were considered to test the performance of CCDO in the

experimental studies.

5.3.1 Benchmark Problems

In the literature, there exist two suites of DCOP benchmark test functions, one was

proposed in [94] and the other was proposed in [13]. The latter one contains test functions

that have smaller feasible regions. However, DCOPs with small feasible regions will prefer

methods that deal with constraints using a repair mechanism.

Considering this, only the DCOP test problems in the former suite were used in this

study. In the experiments, 9 DCOP benchmark test functions proposed in [94] were used.

They are G24-l (dF,fC), G24-2 (dF,fC), G24-3 (dF,dC), G24-3b (dF,dC), G24-4 (dF,dC),

G24-5 (dF,dC), G24-6a (dF,fC), G24-6c (dF,fC), G24-7 (fF,dC). Here, ‘dF’, ‘fF’, ‘dC’,

and ‘fC’ mean dynamic objective function, fixed objective function, dynamic constraint

functions, and fixed constraint functions, respectively. All of them have the following

form:

min_x f(x, α(t))
subject to: gi(x, α(t)) ≤ 0, i = 1, 2, ..., k
(5.9)

where x denotes the decision variable vector, k denotes the number of constraints, α(t) is

the vector of environmental parameters. They vary at a certain frequency as time goes by.

Note that we assume in this chapter that the variation range of α(t) is known beforehand.


Further details about them can be found in Section 4.3.1 of Chapter 4.

In the experiments, the change frequency for each test function is set to 100, 500 and

1000 objective function evaluations, respectively. The change severity is set to medium

(i.e., k = 0.5, and S = 20). The environmental parameters for the 9 DCOP test functions

are the variables in them that change as time goes by. The details of the environmental

parameters and their ranges in each test function are given in Table 5.1. The range for

each environmental parameter is set as the interval between the minimum value and the

maximum value that the parameter can have when the number of problem changes is set

to 12.

Table 5.1: Environmental parameters for each test function

Function           Parameters    Ranges                      Solution Size
G24-l (dF,fC)      p1            [-1,1]                      2
G24-2 (dF,fC)      p1, p2        [-1,1], [-1,1]              5
G24-6a (dF,fC)     p1            [-1,1]                      2
G24-6c (dF,fC)     p1            [-1,1]                      2
G24-3 (dF,dC)      s2            [-0.2,2]                    many
G24-3b (dF,dC)     p1, s2        [-1,1], [-0.2,2]            many
G24-4 (dF,dC)      p1, s2        [-1,1], [0,2.2]             many
G24-5 (dF,dC)      p1, p2, s2    [-1,1], [-1,1], [0,2.2]     many
G24-7 (fF,dC)      s2            [0,2.2]                     many

When the value of an environmental parameter changes, the optimal solution might or might not change. As the environmental parameters vary within their ranges, an ideal solution set is one that contains an optimal solution for the sub-problem under any

environment. Table 5.1 also gives the smallest size that an ideal solution set can have for

each test function in the column of ‘Solution size’. According to this, the 9 test functions

can be classified into two groups. The first group includes G24-l (dF,fC), G24-2 (dF,fC),

G24-6a (dF,fC), G24-6c (dF,fC), for which the smallest ideal solution set has a limited

size. The second group includes G24-3 (dF,dC), G24-3b (dF,dC), G24-4 (dF,dC), G24-5

(dF,dC), and G24-7 (fF,dC), for which the smallest ideal solution set

has an unlimited size.


5.3.2 Overview on Compared Algorithms

The proposed CCDO method was compared with two state-of-the-art algorithms, SELS

[78] and LTFR-DSPSO [13]. Different from SELS, LTFR-DSPSO applied a gradient-

based repair method [21] to handle the constraints. Moreover, all constraint calculation

costs of LTFR-DSPSO were ignored in [13]. To make a fair comparison, the CCDO

method was modified by using the same gradient-based repair method when compared to

LTFR-DSPSO. The gradient-based repair process is as follows:

Suppose a constrained nonlinear optimisation problem has the same formulation as in

Eq. (5.5). According to first-order Taylor expansion, we can have at the point x0:

gi(x) ≈ gi(x0) + ∇Tgi(x0)(x − x0), i = 1, 2, ..., k
hj(x) ≈ hj(x0) + ∇Thj(x0)(x − x0), j = 1, 2, ..., l
(5.10)

where ∇gi and ∇hj are estimated as in Eq. (5.8).

Let GH(x) = [g1(x), g2(x), ..., gk(x), h1(x), h2(x), ..., hl(x)]T , and

JGH = [∇Tg1(x0);∇Tg2(x0); ...;∇Tgk(x0);∇Th1(x0);∇Th2(x0); ...;∇Thl(x0)]. According

to Eq. (5.10), we can have:

GH(x) ≈ GH(x0) + JGH ∗ (x− x0) (5.11)

Let ∆gi(x) = max(0, gi(x)) and ∆hj(x) = max(0, hj(x)). Then,

∆GH(x) = [∆g1(x), ...,∆gk(x),∆h1(x), ...,∆hl(x)] is the constraint violation vector of

x.

Suppose x0 is infeasible. To repair x0 into a feasible solution x1 (i.e., ∆GH(x1) = 0), according to Eq. (5.11) we require:

−∆GH(x0) = JGH ∗ (x1 − x0) (5.12)


Then, we can get:

x1 = x0 − JGH^−1 ∗ ∆GH(x0) (5.13)

When the matrix JGH is not invertible, the inverse matrix JGH^−1 is approximated by the

Moore-Penrose inverse in the experiments.
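A single repair step following Eqs. (5.11)-(5.13) can be sketched as below (Python with NumPy; constraint_values and constraint_jacobian are illustrative callbacks returning the stacked constraint values and their gradient rows). In practice such a step would typically be repeated until the violations vanish or an iteration budget is exhausted.

import numpy as np

def gradient_repair_step(x, constraint_values, constraint_jacobian):
    # Move x so that the linearised constraint violations vanish (Eq. 5.13),
    # using the Moore-Penrose pseudo-inverse when J_GH is not invertible.
    x = np.asarray(x, dtype=float)
    gh = np.asarray(constraint_values(x), dtype=float)
    delta_gh = np.maximum(gh, 0.0)          # violation vector ΔGH(x)
    if not delta_gh.any():
        return x                            # already feasible, nothing to repair
    J = np.asarray(constraint_jacobian(x), dtype=float)
    return x - np.linalg.pinv(J) @ delta_gh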

In this CCDO+Repair method, only feasible solutions and the sentinel solutions will

be evaluated with the objective function. For each infeasible solution, a repair operation

will be conducted. The repair-based local search process in CCDO+Repair is presented

in Algorithm 16. If the solution obtained from local search is still infeasible, the repair

operation is also conducted on this solution.

Algorithm 16 Repair-based Local Search Process

1:  for each solution x_i in the population pop_g do
2:      Re-evaluate the feasibility of x_i to detect whether a change has happened
3:      if a change is detected then
4:          Re-initialise the population according to Algorithm 15
5:      else
6:          if not isfeasible(x_i) then
7:              Repair x_i to x_i^repair
8:              if x_i^repair is better than x_i then
9:                  Set x_i = x_i^repair
10:             end if
11:         end if
12:         if the distance between x_i and mem_ls is larger than 1e-2 then
13:             Do SQP local search on x_i to get x_i^best
14:             Archive x_i into mem_ls
15:             Archive x_i^best into mem_best
16:             if not isfeasible(x_i^best) then
17:                 Repair x_i^best to x_i^repair
18:                 if x_i^repair is better than x_i^best then
19:                     Set x_i^best = x_i^repair
20:                 end if
21:             end if
22:             Set x_i = x_i^best
23:         end if
24:         Generate x'_i = x_i + δ * randn(1, D_x)
25:         if f(x'_i) is better than f(x_i) then
26:             Set x_i = x'_i
27:         end if
28:     end if
29: end for


5.3.3 Performance Measures

In their original papers [78, 13], different performance measures were used to assess the efficacy of the SELS method and the LTFR-DSPSO method. Accordingly, different performance measures were also applied in this chapter for the two comparisons.

When comparing the CCDO with SELS, the modified offline error averaged at every

function evaluation is used to evaluate the performance of each algorithm. The modified

offline error averaged at every function evaluation is defined as follows:

EMO = (1 / num_of_eval) * Σ_{j=1}^{num_of_eval} eMO(j)        (5.14)

where num_of_eval denotes the maximum number of function evaluations, and eMO(j) denotes the error of the best feasible solution obtained at the j-th evaluation. If there is no feasible solution at the j-th evaluation, the worst possible value that a feasible solution can have is taken. The error value of a feasible solution is the difference between its function value and the best possible value that a feasible solution can have. The smaller the modified offline error, the better the algorithm performs.

When comparing CCDO+Repair with LTFR-DSPSO, the modified offline error av-

eraged at every generation is used to evaluate the performance of each algorithm. The

modified offline error averaged at every generation is defined as follows:

EMO = (1 / num_of_gen) * Σ_{j=1}^{num_of_gen} eMO(j)        (5.15)

where num_of_gen denotes the maximum number of generations, and eMO(j) denotes the error of the best feasible solution obtained at the j-th generation. If there is no feasible solution at the j-th generation, the worst possible value that a feasible solution can have is taken.
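As a concrete illustration, the modified offline error of Eqs. (5.14) and (5.15) can be computed from a recorded history of best-feasible errors (one entry per evaluation or per generation). The Python sketch below is illustrative only; it assumes that a None entry marks a point where no feasible solution has been found yet and that worst_error is the worst possible error a feasible solution can have. Neither name comes from the thesis.

def modified_offline_error(best_feasible_errors, worst_error):
    """Average of e_MO(j) over all recorded points, as in Eqs. (5.14)/(5.15).

    best_feasible_errors: list with one entry per evaluation (or generation);
        each entry is the error of the best feasible solution found so far,
        or None if no feasible solution exists at that point.
    worst_error: worst possible error a feasible solution can have, used as
        the penalised value when no feasible solution exists.
    """
    errors = [worst_error if e is None else e for e in best_feasible_errors]
    return sum(errors) / len(errors)

# Example: 5 evaluations, the first two without any feasible solution.
history = [None, None, 0.8, 0.3, 0.1]
print(modified_offline_error(history, worst_error=2.0))  # -> 1.04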


5.3.4 Parameter Settings

Comparison with SELS

When compared with SELS, the number of changes is set to 12, the same as in [78]. The parameter settings for the competitive co-evolutionary search process and for the online optimisation are given in Table 5.2 and Table 5.3, respectively. The parameter values for SELS were set as in its original paper [78]. For SQP, all parameters other than those mentioned in Table 5.3 were left at their default values.

Table 5.2: Parameter settings for the co-evolutionary search process when comparing CCDO with SELS

Parameter      Value
Gmax           50
SP size        10
EAr size       10
EP size        10
isize          5
esp            50
SP evolution   Gaussian mutation with scale = 0.1, rate = 0.5; intermediate crossover with rate = 0.5
EP evolution   Gaussian mutation with scale = 0.05, rate = 0.5

Table 5.3: Parameter settings for online optimisation when comparing CCDO with SELS

Parameter   Value
detectk     4
SQP         ConstraintTolerance = 0; HonorBounds = true; MaxFunctionEvaluations = 20
Mutation    Gaussian mutation with scale = 0.1


Comparison with LTFR-DSPSO

When compared to LTFR-DSPSO, the number of changes is set to 10, which is the same

as in [13]. The parameter settings for the competitive co-evolutionary search process and for the online optimisation are given in Table 5.4 and Table 5.5, respectively. For SQP, all parameters other than those mentioned in Table 5.5 were left at their default values. The experimental results of LTFR-DSPSO reported in [13] were used for comparison.

Table 5.4: Parameter settings for the co-evolutionary search process when comparing the proposed CCDO method with LTFR-DSPSO

Parameter      Value
Gmax           50
SP size        20
EAr size       20
EP size        20
isize          5
esp            50
SP evolution   Gaussian mutation with scale = 0.1, rate = 0.5; intermediate crossover with rate = 0.5
EP evolution   Gaussian mutation with scale = 0.05, rate = 0.5

Table 5.5: Parameter settings for online optimisation when comparing the proposed CCDO method with LTFR-DSPSO

Parameter   Value
detectk     4
SQP         ConstraintTolerance = 0; HonorBounds = true; MaxFunctionEvaluations = 50
Mutation    Gaussian mutation with scale = 0.1

5.3.5 Experimental Results

Before comparing the CCDO method and each of SELS and LTFR-DSPSO on the test-

ing DCOPs, we first conducted experiments to check whether the evolution of the environment


population in the co-evolutionary search process can help find a set of solutions with a

better coverage.

The Effectiveness of Environmental Evolution

In this experiment, we randomly generated the same number of environments at the

beginning but fixed them during the co-evolutionary search process in Algorithm 13. The

solution set found by this fixed process is compared to the solution set found by the co-

evolutionary search process. The parameter setting for this experiment is the same as in Table 5.2.

To evaluate the performance of a solution set, 50 sub-problems with randomly generated environmental parameter values were used as test sub-problems for each test function. The performance of the solution set on each sub-problem equals the error value of the best solution in the set for this sub-problem. The error value of a feasible solution is the difference between its function value and the best possible value that a feasible solution can have. When the solution set contains no feasible solution for a sub-problem, the worst possible objective function value that a feasible solution can have on this sub-problem is taken. The average performance of the solution set on these 50 sub-problems over 50 runs was used for comparison. The Wilcoxon rank-sum test at a significance level of 0.05 was used to compare the random method with fixed environments and the co-evolutionary method on each test function.
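The evaluation protocol above can be summarised in a short sketch. The Python code below is an illustrative reconstruction rather than the experiment code: each sub-problem is assumed to expose an objective f, a feasibility test is_feasible, the best objective value best_value a feasible solution can reach, and the worst feasible value worst_value; these attribute names are assumptions made for the sketch. The final comparison between the two methods could then use scipy.stats.ranksums.

from statistics import mean
from scipy.stats import ranksums

def evaluate_solution_set(solutions, subproblems):
    """Average best error of a solution set over a list of sub-problems.

    Each sub-problem is assumed to provide f(x), is_feasible(x), best_value
    (best objective a feasible solution can reach) and worst_value (worst
    objective a feasible solution can have) -- illustrative assumptions.
    """
    per_subproblem_errors = []
    for sp in subproblems:
        feasible_values = [sp.f(x) for x in solutions if sp.is_feasible(x)]
        if feasible_values:
            # error of the best member of the set on this sub-problem
            best = min(feasible_values)            # assuming minimisation
            per_subproblem_errors.append(best - sp.best_value)
        else:
            # no feasible member: fall back to the worst feasible value
            per_subproblem_errors.append(sp.worst_value - sp.best_value)
    return mean(per_subproblem_errors)

# Comparing the two methods over 50 independent runs (illustrative):
# scores_fixed = [evaluate_solution_set(s, subproblems) for s in runs_fixed]
# scores_coev  = [evaluate_solution_set(s, subproblems) for s in runs_coev]
# stat, p_value = ranksums(scores_fixed, scores_coev)  # test at the 0.05 level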

Table 5.6 summarises the mean and standard deviation of the average best error

obtained on 50 sub-problems over 50 runs for each test function. The better results are

marked in bold. It can be seen from Table 5.6 that the evolution of environments can

help obtain a competitive or better solution set on the second group of test functions in

which the smallest ideal solution set has an unlimited size. However, on the first group of test functions, in which the smallest ideal solution set has a limited size, the random method with fixed environments performed better. The reasons are analysed as follows.

For the first group of test problems, the coverage on all possible sub-problems can


be easily achieved by random environment sampling, as the smallest ideal solution set has a limited size. By comparison, the evolution of environments cannot bring much improvement in coverage but needs more function evaluations. Consequently, the co-evolutionary method cannot perform as well as the one without the evolution of environments.

For the second group of test problems, although the evolution of environments also cost more function evaluations, it is hard for a few randomly generated environments to achieve a good coverage of this kind of problem. In this case, the evolution of the environment population takes effect, and thus the co-evolutionary method shows some advantage on this group of test problems.

Table 5.6: Comparison results between the co-evolutionary method and the random method with fixed environments. The better result obtained on each test function is marked in bold.

Func      Solution Size    Fixed Environments       Co-evolutionary
G24-l     2                4.06e-02 ± 6.25e-02      7.98e-02 ± 1.15e-01
G24-2     5                5.19e-02 ± 8.57e-02      6.50e-02 ± 8.64e-02
G24-6a    2                4.52e-02 ± 1.90e-01      7.46e-02 ± 2.63e-01
G24-6c    2                4.45e-02 ± 4.51e-02      3.38e-02 ± 3.21e-02
G24-3     many             6.46e-01 ± 5.47e-01      2.79e-01 ± 3.14e-01
G24-3b    many             6.03e-01 ± 2.04e-01      5.85e-01 ± 1.58e-01
G24-4     many             6.40e-01 ± 2.14e-01      6.04e-01 ± 1.75e-01
G24-5     many             2.86e-01 ± 1.25e-01      2.81e-01 ± 9.12e-02
G24-7     many             4.11e-01 ± 1.97e-01      2.50e-01 ± 6.42e-02

Comparison with SELS

Table 5.7 summarises the mean and standard deviation of the modified offline error over

50 runs obtained by the proposed CCDO method and SELS under the change frequency

of 1000 FEs. We applied the Wilcoxon rank-sum test at a significance level of 0.05 to

test whether the CCDO method performed significantly better than SELS on each test

function. In Table 5.7, the better result obtained on each test function is marked in bold.

Figs. 5.2 and 5.3 plot the evolutionary curves for both CCDO and SELS.


Table 5.7: Comparison results between SELS and the CCDO method under the change frequency of 1000 FEs. The better result on each test function is marked in bold.

Func              SELS                    CCDO
G24-l (dF,fC)     2.49e-02 ± 8.16e-03     9.38e-03 ± 7.65e-03
G24-2 (dF,fC)     5.00e-02 ± 1.50e-02     1.53e-02 ± 6.13e-03
G24-3 (fF,dC)     4.41e-02 ± 2.17e-02     1.51e-02 ± 7.36e-03
G24-3b (dF,dC)    5.19e-02 ± 1.79e-02     3.86e-02 ± 2.76e-02
G24-4 (dF,dC)     8.17e-02 ± 2.09e-02     1.21e-01 ± 2.97e-02
G24-5 (dF,dC)     5.41e-02 ± 1.35e-02     8.25e-02 ± 2.08e-02
G24-6a (dF,fC)    5.46e-02 ± 9.09e-03     3.33e-02 ± 7.69e-03
G24-6c (dF,fC)    5.22e-02 ± 7.57e-03     3.07e-02 ± 7.23e-03
G24-7 (fF,dC)     8.65e-02 ± 1.61e-02     8.26e-02 ± 1.34e-02

[Figure: best error versus the number of FEs for SELS and CCDO; panels (a) G24-l, (b) G24-2, (c) G24-3.]

Figure 5.2: Evolutionary curves of SELS and the CCDO method on the first 3 functions


[Figure: best error versus the number of FEs for SELS and CCDO; panels (a) G24-3b, (b) G24-4, (c) G24-5, (d) G24-6a, (e) G24-6c, (f) G24-7.]

Figure 5.3: Evolutionary curves of SELS and the CCDO on the last 6 test functions


It can be seen from Table 5.7 that the CCDO method achieved better results than SELS on all but two test functions, G24-4 (dF,dC) and G24-5 (dF,dC). The evolutionary curves in Figs. 5.2 and 5.3 show that the better results achieved on the other 7 test functions are attributable to CCDO's ability to react quickly once a change is detected. The CCDO method did not perform as well on G24-4 (dF,dC) and G24-5 (dF,dC) because the solution sets found for them by the competitive co-evolutionary process are not good enough. In future work, the performance can be further enhanced by an improved co-evolutionary search process. Moreover, random individuals can be introduced during initialisation in case the solution set found is not good enough.

Comparison with LTFR-DSPSO

Table 5.8 summarises the mean and standard deviation of the modified offline error over

50 runs obtained by the proposed CCDO+Repair and LTFR-DSPSO under the change

frequency of 1000 FEs. In Table 5.8, the better result obtained on each test function is

marked in bold. It can be seen from Table 5.8 that CCDO+Repair achieved better results than LTFR-DSPSO on all but three test functions: G24-4 (dF,dC), G24-6a (dF,fC) and G24-6c (dF,fC).

Table 5.8: Comparison results between the LTFR-DSPSO method and the CCDO+Repair method under the change frequency of 1000 FEs. The better result is marked in bold.

Func              LTFR-DSPSO              CCDO+Repair
G24-l (dF,fC)     6.09e-06 ± 4.24e-05     7.09e-08 ± 1.96e-07
G24-2 (dF,fC)     8.54e-04 ± 3.38e-03     6.15e-04 ± 3.07e-03
G24-3 (fF,dC)     3.64e-05 ± 1.13e-04     7.87e-11 ± 3.04e-10
G24-3b (dF,dC)    3.82e-05 ± 1.17e-04     7.80e-08 ± 3.78e-07
G24-4 (dF,dC)     5.45e-06 ± 3.79e-05     1.81e-02 ± 9.07e-02
G24-5 (dF,dC)     7.00e-05 ± 4.89e-04     9.92e-07 ± 4.94e-06
G24-6a (dF,fC)    2.21e-18 ± 9.28e-18     5.91e-04 ± 9.80e-04
G24-6c (dF,fC)    1.91e-18 ± 8.82e-18     6.39e-04 ± 1.35e-03
G24-7 (fF,dC)     5.11e-06 ± 3.55e-05     1.28e-13 ± 1.17e-13


Comparison Results under Change Frequency of 500 FEs and 100 FEs

To test whether CCDO can adapt quickly to fast changes, experiments were conducted under two other change frequencies in addition to 1000 FEs to investigate their effect on the performance of the compared algorithms. In the experiments, the change frequency was set to 100 FEs and 500 FEs, respectively. A frequency of 100 FEs means that changes happen very fast. The proposed CCDO method was tested on the 9 test functions under each change frequency. As no results were reported for LTFR-DSPSO under the change frequency of 100 FEs, CCDO was compared only with SELS in this experiment. Tables 5.9 and 5.10 present the comparison results between SELS and CCDO under change frequencies of 500 FEs and 100 FEs, respectively.

Table 5.9: Comparison results between SELS and the CCDO method under the change frequency of 500 FEs. The better result obtained on each test function is marked in bold.

Func              SELS                    CCDO
G24-l (dF,fC)     6.18e-02 ± 1.65e-02     2.42e-02 ± 4.11e-02
G24-2 (dF,fC)     8.98e-02 ± 1.48e-02     3.14e-02 ± 1.48e-02
G24-3 (fF,dC)     8.75e-02 ± 2.94e-02     3.00e-02 ± 1.67e-02
G24-3b (dF,dC)    1.12e-01 ± 2.86e-02     9.73e-02 ± 7.24e-02
G24-4 (dF,dC)     1.52e-01 ± 3.70e-02     2.34e-01 ± 9.20e-02
G24-5 (dF,dC)     9.90e-02 ± 1.54e-02     1.51e-01 ± 4.08e-02
G24-6a (dF,fC)    1.08e-01 ± 1.45e-02     6.54e-02 ± 1.30e-02
G24-6c (dF,fC)    1.09e-01 ± 1.74e-02     6.28e-02 ± 1.43e-02
G24-7 (fF,dC)     1.36e-01 ± 2.02e-02     1.48e-01 ± 2.68e-02

Comparing Table 5.7 with Table 5.9 shows that the advantage of CCDO over SELS does not change much when the change frequency decreases from 1000 FEs to 500 FEs. However, when the change frequency drops further to 100 FEs, comparing Table 5.7 or Table 5.9 with Table 5.10 shows that the advantage of CCDO over SELS becomes more pronounced: on all 9 test functions, CCDO obtained better results than SELS. This comparison demonstrates the potential advantage of CCDO, namely fast adaptation to fast changes. As none of the existing algorithms can perform well on DCOPs with very


fast changes, CCDO offers a direction to optimise such problems.

Table 5.10: Comparison results between SELS and the CCDO method under the change frequency of 100 FEs. The better result obtained on each test function is marked in bold.

Func              SELS                    CCDO
G24-l (dF,fC)     4.05e-01 ± 9.64e-02     1.10e-01 ± 1.08e-01
G24-2 (dF,fC)     3.96e-01 ± 5.56e-02     1.42e-01 ± 4.23e-02
G24-3 (fF,dC)     4.87e-01 ± 1.35e-01     1.51e-01 ± 6.97e-02
G24-3b (dF,dC)    5.43e-01 ± 8.92e-02     4.91e-01 ± 1.72e-01
G24-4 (dF,dC)     8.54e-01 ± 1.29e-01     7.49e-01 ± 1.23e-01
G24-5 (dF,dC)     5.25e-01 ± 8.99e-02     4.73e-01 ± 1.12e-01
G24-6a (dF,fC)    5.53e-01 ± 9.16e-02     2.64e-01 ± 9.16e-02
G24-6c (dF,fC)    5.10e-01 ± 7.52e-02     3.05e-01 ± 9.40e-02
G24-7 (fF,dC)     9.43e-01 ± 2.05e-01     6.43e-01 ± 1.10e-01

5.4 Chapter Summary

This chapter aimed to investigate whether identifying a set of promising solutions in

advance would be beneficial to DCOPs and how to achieve such solutions. To answer these

two questions, we proposed a new dynamic constrained optimisation approach, CCDO,

which maintains a solution set obtained through ComC at the beginning, and conducts online local search using this set as initial solutions once a change has been detected. During the online optimisation process, the solution set is updated with the best solutions obtained in the environments encountered.

To verify the efficacy of the proposed CCDO method, 9 DCOP benchmark test func-

tions were used and two state-of-the-art optimisation methods, SELS and LTFR-DSPSO,

were used for comparison. Experimental studies have shown that the CCDO method

generally performed better than SELS and LTFR-DSPSO on the 9 test functions. This

demonstrates that identifying a set of promising solutions in advance is more beneficial

to DCOPs than searching with randomly generated initial solutions. We also conducted

further experiments to check the effectiveness of the environmental evolution in the co-


evolutionary search process. The experimental results demonstrated that, compared with the random method with fixed environments, the evolution of environments is better suited to problems in which the smallest ideal solution set has an unlimited size.

In the field of dynamic optimisation, DOPs with very fast changes are the most chal-

lenging problems. When changes happen very fast, the existing diversity-driven approaches, memory approaches and prediction approaches cannot work well. The diversity-driven approaches do not have enough time to find a satisfactory solution. As the time for optimisation is generally very short, the memory approaches have no optimal solutions to archive and the prediction approaches have no useful samples for accurate prediction. However, the proposed CCDO offers a way to address fast-changing DCOPs by identifying a set of promising solutions beforehand, which shortens the time needed to find a satisfactory solution after each change. The experimental results under different change frequencies also demonstrated that CCDO performed especially well under very fast changes.


CHAPTER 6

CONCLUSIONS AND FUTURE WORK

The thesis at least partially answers the research questions in Sections 1.3.1-1.3.3. This

chapter summarises the contributions of the thesis and gives directions for future work

related to the work presented in this thesis.

6.1 Contributions

This thesis is dedicated to developing specific and efficient multi-species EAs with good de-

composition and collaboration strategies to address different complex optimisation prob-

lems. The main contributions of the thesis are as follows.

6.1.1 Novel Collaboration Strategies When Using CooC to Solve CE Problems

In Chapter 3, two novel collaboration strategies were developed when using cooperative

co-evolution (CooC) to address two typical types of CE problems. The first CE problem is

the design of a product that has several different parts, and the other is the quasi-separable

multidisciplinary design optimisation (MDO) problem in which different disciplines share

some design variables. For both design problems, the decomposition is assumed to be

known beforehand. The first CE problem is decomposed based on the product structure

and the second CE problem is decomposed based on the discipline.


For the first problem, a new novelty-driven CooC algorithm, NDCC-SS, was proposed

to evolve the decomposed sub-problems. In NDCC-SS, a computationally efficient novelty

calculation is used, and a stochastic selection strategy with an adaptive probability is employed to decide whether to evaluate individuals based on their fitness or their novelty. The

efficacy of the proposed novelty-driven CooC algorithm was studied on the single universal

electric motor (UEM) design problem. It was shown by experiments that NDCC-SS helped

obtain designs of higher quality compared to normal CooC and other novelty-driven CooC

methods.

For the second problem, a novel CooC-based concurrent design method, CCDM, was

proposed to evolve the decomposed sub-problems. The CCDM method handles the shared

variables through the use of duplicates and consistency constraints, in which the stochastic

ranking method with an adaptive probability was employed to deal with the constraints.

It achieved designs of higher quality in comparison to other MDO methods on a general

MDO problem and the design of multiple UEMs that have common design variables.

Additionally, Chapter 3 systematically investigated how the communication frequency

among sub-populations affects the performance of NDCC-SS and CCDM. By using dif-

ferent problems and communication costs, the experimental results showed that the best

frequency varies as the problem and communication cost change. Motivated by this, a

self-adaptive method, CFS, was developed to adjust a good communication frequency au-

tomatically during the concurrent design process. The experimental studies showed that

CFS is a very competitive communication strategy for both NDCC-SS and CCDM.

6.1.2 A New Dynamic Handling Technique based on Speciation Methods for DCOPs

In Chapter 4, a novel speciation-based method, called speciated evolution with local

search (SELS), was proposed to address DCOPs by locating and tracking feasible regions.

DCOPs have specific characteristics like the switch of global optima between discon-

nected feasible regions that do not exist in dynamic optimisation problems with bounded


constraints or without constraints. This poses difficulties for some existing dynamic op-

timisation strategies.

To address DCOPs, SELS uses speciation methods to make comparisons and does

crossover between similar individuals. In this way, individuals can automatically search different feasible regions in the search space, and good solutions can be maintained

in different feasible regions. This can help SELS react quickly when the global optimal

solution switches to another feasible region. Additionally, SELS uses a local search strat-

egy to promote exploitation of the promising regions to quickly find a new optimum. The

experimental studies in Chapter 4 demonstrated that SELS generally reacts faster to en-

vironmental changes when compared to the state-of-the-art algorithms on a benchmark

set of DCOPs.

6.1.3 A New Dynamic Handling Technique based on ComC for Fast-Changing DCOPs

In the field of dynamic optimisation, DOPs with very fast changes are the most challenging

problems. When changes happen very fast, the existing diversity-driven approaches, memory approaches and prediction approaches cannot work well. The diversity-driven approaches do not have enough time to find a satisfactory solution. As the time for optimisation is generally very short, the memory approaches have no optimal solutions to archive and the prediction approaches have no useful samples for accurate prediction.

Considering these, in Chapter 5, a novel dynamic handling strategy for fast-changing

DCOPs, CCDO, was proposed, which uses competitive co-evolution (ComC) to find a

set of good solutions beforehand and then uses the solution set for initialisation once an

environmental change happens. In CCDO, a DCOP is first decomposed into the solution

space and environment space. Then, the ComC method is applied to co-evolve them to

find a set of promising solutions for all environments. The CCDO method offers a way

to address fast-changing DCOPs by identifying a set of promising solutions beforehand,


which can shorten the time needed to find a satisfactory solution under each change. The

experimental results under different change frequencies demonstrated that CCDO reacts

faster than other methods for DCOPs especially under very fast changes.

6.2 Future Work

In this section, we would like to point out some possible research directions in the future.

• The work in Chapter 3 only considered UEM design problems and a general MDO

problem. In future work, the two proposed CooC-based concurrent optimisation

methods will be tested on more benchmark and real-world problems. The experi-

mental studies in Chapter 3 only considered single objective optimisation. In reality,

many CE problems are multi-objective design optimisation problems. When devel-

oping methods for multi-objective design problems, a different novelty calculation

method and common variable handling technique might be required. This can be

another direction of future work.

• The work in Chapter 4 only tested the proposed SELS method on 11 DCOP bench-

mark functions. In future work, real-world DCOPs will be considered. In the proposed SELS method, the choice and length of the local search strategy, as well as which individuals to apply local search to, can have a big effect on the performance of SELS. Future studies will be conducted along this direction. Moreover, other multi-species EAs can also be applied to better address DCOPs. As the Euclidean distance might not work well when the number of decision parameters increases, other distance metrics that work well in a high-dimensional space should be applied or developed.

• The work in Chapter 5 is only the first step to validate whether the proposed

CCDO approach is able to react quickly to environmental changes. In future work,

the CCDO method will be tested on more dynamic test problems. Note that the


CCDO method can be generally applied to dynamic optimisation problems (dynamic

constraint satisfaction problems, dynamic optimisation without constraints or with

bounded constraints). As the cost of the co-evolutionary set search process was ignored in the experiments, in future work the performance of CCDO will be evaluated with the co-evolutionary search cost counted in. Moreover, the competitive co-evolutionary search process will be improved to achieve a better coverage. It should be noted that every component of the CCDO method (SP evaluation, EP evaluation, re-initialisation, local search, online update, etc.) can be replaced by better mechanisms.

• Some open issues also arise from Chapter 5. First, for a problem in which the

smallest ideal solution set has an unlimited size, whether there exists a limited

solution set on which conducting local search can find the optimal solution for the

sub-problem under any environment needs some theoretical investigation. Second,

the issue of a suitable set size in the proposed dynamic optimisation framework needs

to be studied. Third, finding a set of good solutions for a class of sub-problems is

similar to a set covering problem. For such a problem, whether and when ComC is

the most effective method needs further investigation.


APPENDIX

THE BASELINE UEM DESIGN PROBLEM FORMULATION

This appendix gives the details of calculating H(x), Mass(x), η(x), P(x), T(x) and the penalty with respect to the 8 design parameters Nc, Ns, Awf, Awa, I, ro, t, L [117].

Table 1 gives the values of some constants used in the calculation.

Table 1: Values of constants

Constant   Value                Constant   Value
V          115 [V]              ρ          1.69 × 10^-8 [Ω·m]
lgap       0.0007 [m]           pfield     2
ρsteel     7850 [kg/m^3]        ρcopper    8960 [kg/m^3]
µair       1                    π          3.14159

Magnetizing Intensity: H(x)

H = 2 * Ns * I / (lc + lr + 2 * lgap)        (1)

where:

lc = π * (2 * ro + t) / 2
lr = 2 * (ro - t - lgap)


Weight: Mass(x)

Mass = Mstator + Marmature + Mwindings        (2)

where:

Mstator = π * (ro^2 - (ro - t)^2) * L * ρsteel
Marmature = π * (ro - t - lgap)^2 * L * ρsteel
Mwindings = ((2 * L + 4 * (ro - t - lgap)) * Nc * Awa + (2 * L + 4 * (ro - t)) * 2 * Ns * Awf) * ρcopper

Power: P(x)

P = Pin - Plosses = V * I - Plosses        (3)

where:

Plosses = Pbrush + Pcopper
Pbrush = 2 * I
Pcopper = I^2 * (Ra + Rs)
Ra = ρ * (2 * L + 4 * (ro - t - lgap)) * Nc / Awa
Rs = ρ * pfield * (2 * L + 4 * (ro - t)) * Ns / Awf

Efficiency: η(x)

η = P / Pin = P / (V * I)        (4)

Torque: T(x)

T = K * φ * I = (Nc / π) * φ * I        (5)


where:

φ = S / R
S = Ns * I
R = Rs + Rr + 2 * Ra
Rs = lc / (2 * µsteel * µ0 * t * L)
Rr = lr / (µsteel * µ0 * lr * L)
Ra = lgap / (µair * µ0 * lr * L)

and

µsteel = -0.22791 * H^2 + 52.411 * H + 3115.8,   if H ≤ 220
       = 11633.5 - 1486.33 * ln(H),              if 220 < H ≤ 1000
       = 1000,                                   if H > 1000
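For readers who wish to experiment with this model, the quantities defined by Eqs. (1)-(5) can be transcribed directly into code. The Python sketch below is illustrative only and is not the original implementation; it assumes the standard vacuum permeability µ0 = 4π × 10^-7 (not listed in Table 1), and it renames the magnetic-circuit reluctances (rel_s, rel_r, rel_a) to avoid a clash with the winding resistances Ra and Rs of Eq. (3), which reuse the same symbols in the appendix.

import math

# Constants from Table 1
V = 115.0               # supply voltage [V]
RHO = 1.69e-8           # copper resistivity [Ohm*m]
L_GAP = 0.0007          # air-gap length [m]
P_FIELD = 2             # pfield
RHO_STEEL = 7850.0      # [kg/m^3]
RHO_COPPER = 8960.0     # [kg/m^3]
MU_AIR = 1.0
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (assumed; not in Table 1)

def mu_steel(H):
    """Piecewise relative permeability of steel as a function of H."""
    if H <= 220:
        return -0.22791 * H**2 + 52.411 * H + 3115.8
    if H <= 1000:
        return 11633.5 - 1486.33 * math.log(H)
    return 1000.0

def uem_quantities(Nc, Ns, Awf, Awa, I, ro, t, L):
    """Eqs. (1)-(5): H, Mass, P, eta and T for one design (illustrative sketch)."""
    lc = math.pi * (2 * ro + t) / 2
    lr = 2 * (ro - t - L_GAP)

    # Eq. (1): magnetizing intensity
    H = 2 * Ns * I / (lc + lr + 2 * L_GAP)

    # Eq. (2): mass of stator, armature and windings
    m_stator = math.pi * (ro**2 - (ro - t)**2) * L * RHO_STEEL
    m_armature = math.pi * (ro - t - L_GAP)**2 * L * RHO_STEEL
    m_windings = ((2 * L + 4 * (ro - t - L_GAP)) * Nc * Awa
                  + (2 * L + 4 * (ro - t)) * 2 * Ns * Awf) * RHO_COPPER
    mass = m_stator + m_armature + m_windings

    # Eq. (3): power = input power minus brush and copper losses
    Ra = RHO * (2 * L + 4 * (ro - t - L_GAP)) * Nc / Awa     # armature resistance
    Rs = RHO * P_FIELD * (2 * L + 4 * (ro - t)) * Ns / Awf   # field resistance
    P = V * I - (2 * I + I**2 * (Ra + Rs))

    # Eq. (4): efficiency
    eta = P / (V * I)

    # Eq. (5): torque via the magnetic circuit (reluctances, not resistances)
    mu_s = mu_steel(H)
    rel_s = lc / (2 * mu_s * MU_0 * t * L)
    rel_r = lr / (mu_s * MU_0 * lr * L)
    rel_a = L_GAP / (MU_AIR * MU_0 * lr * L)
    phi = Ns * I / (rel_s + rel_r + 2 * rel_a)
    T = (Nc / math.pi) * phi * I

    return H, mass, P, eta, T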

Penalty

The penalty is calculated as the sum of the penalties with respect to each constraint in
Eq. 3.8. Details are given as follows:

penalty = pH + prt + pM + pη + pP + pT        (6)

where:

pH = 0,                               if H ≤ 5000 [A·turns/m]
   = 1.0 + ((H - 5000) * 0.01)^2,     otherwise

prt = 0,                              if ro ≥ t
    = 1.0 + (t - ro)^2,               otherwise


pM = 0,                               if Mass ≤ 2 [kg]
   = 1.0 + ((Mass - 2) * 100)^2,      otherwise

pη = 0,                               if η ≥ 0.15
   = 1.0 + ((0.15 - η) * 100)^2,      otherwise

pP = 0,                               if |P - 300 [W]| ≤ 5 [W]
   = ((|300 - P| - 5) * 0.1)^2,       otherwise

and

pT = 0,                               if |T - a| ≤ 0.02 * T
   = ((|T - a| - 0.02 * T) * 1000)^2, otherwise
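A direct transcription of the penalty terms in Eq. (6) is sketched below, reusing the quantities computed by the uem_quantities sketch above. The torque target a is kept as an argument because the appendix leaves it symbolic; this is an illustrative sketch rather than the original code.

def uem_penalty(H, mass, P, eta, T, ro, t, a):
    """Sum of the constraint penalties of Eq. (6) for one UEM design.

    `a` is the torque target appearing in pT; the appendix keeps it symbolic,
    so it is passed in as an argument here.
    """
    p_H   = 0.0 if H <= 5000 else 1.0 + ((H - 5000) * 0.01) ** 2
    p_rt  = 0.0 if ro >= t else 1.0 + (t - ro) ** 2
    p_M   = 0.0 if mass <= 2 else 1.0 + ((mass - 2) * 100) ** 2
    p_eta = 0.0 if eta >= 0.15 else 1.0 + ((0.15 - eta) * 100) ** 2
    p_P   = 0.0 if abs(P - 300) <= 5 else ((abs(300 - P) - 5) * 0.1) ** 2
    p_T   = 0.0 if abs(T - a) <= 0.02 * T else ((abs(T - a) - 0.02 * T) * 1000) ** 2
    return p_H + p_rt + p_M + p_eta + p_P + p_T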


LIST OF REFERENCES

[1] MATLAB. (2017b). Constrained nonlinear optimization algo-rithms. Optimization Toolbox Documentation (online). Available:https://cn.mathworks.com/help/optim/ug/constrained-nonlinear-optimization-algorithms.html, 2017.

[2] R. Addo-Tenkorang. Concurrent engineering: A review literature report. In Proceed-ings of the World Congress on Engineering and Computer Science, pages 1074–1080.IEEE, 2011.

[3] Satish VK Akundi, Timothy W Simpson, and Patrick M Reed. Multi-objectivedesign optimization for product platform and product family design using geneticalgorithms. In ASME 2005 International Design Engineering Technical Confer-ences and Computers and Information in Engineering Conference, pages 999–1008.American Society of Mechanical Engineers, 2005.

[4] Maria-Yaneli Ameca-Alducin, Efren Mezura-Montes, and Nicandro Cruz-Ramirez.Differential evolution with combined variants for dynamic constrained optimization.In 2014 IEEE Congress on Evolutionary Computation (CEC’14), pages 975–982.IEEE, 2014.

[5] Marıa-Yaneli Ameca-Alducin, Efren Mezura-Montes, and Nicandro Cruz-Ramırez.A repair method for differential evolution with combined variants to solve dynamicconstrained optimization problems. In Proceedings of the Annual Conference onGenetic and Evolutionary Computation (GECCO’15), pages 241–248. ACM, 2015.

[6] Luis Miguel Antonio and Carlos A Coello Coello. Use of cooperative coevolution forsolving large scale multiobjective optimization problems. In Proceedings of the 2013IEEE Congress on Evolutionary Computation (CEC’13), pages 2758–2765. IEEE,2013.

[7] C. Aranha and H. Iba. The memetic tree-based genetic algorithm and its applicationto portfolio optimization. Memetic Computing, 1(2):139–151, 2009.


[8] Thomas Back and Hans-Paul Schwefel. An overview of evolutionary algorithms forparameter optimization. Evolutionary Computation, 1(1):1–23, 1993.

[9] Charles-S Beightler, Donald-T Phillips, Ron-S Dembo, G-V Reklaitis, and R-E-DWoolsey. Applied geometric programming. John Wiley & Sons, Inc., Distributed byInstrument Society of America, 1975.

[10] Hans-Georg Beyer and Hans-Paul Schwefel. Evolution strategies–a comprehensiveintroduction. Natural Computing, 1(1):3–52, 2002.

[11] Paul T Boggs and Jon W Tolle. Sequential quadratic programming. Acta numerica,4:1–51, 1995.

[12] Jurgen Branke, Thomas Kaußler, Christian Smidt, and Hartmut Schmeck. A multi-population approach to dynamic optimization problems. In Evolutionary Design andManufacture, pages 299–307. Springer, 2000.

[13] Chenyang Bu, Wenjian Luo, and Lihua Yue. Continuous dynamic constrained optimization with ensemble of locating and tracking feasible regions strategies. IEEE Transactions on Evolutionary Computation, 21(1):14–33, 2017.

[14] Anthony Bucci and Jordan B Pollack. On identifying global optima in cooperativecoevolution. In Proceedings of the 7th Annual Conference on Genetic and Evolu-tionary Computation (GECCO’05), pages 539–544. ACM, 2005.

[15] Mario Campos and Renato Krohling. Bare bones particle swarm with scale mixturesof gaussians for dynamic constrained optimization. In Proceedings of the 2009 IEEECongress on Evolutionary Computation (CEC’14), pages 202–209. IEEE, 2014.

[16] Mauro Campos and Renato A Krohling. Entropy-based bare bones particle swarmfor dynamic constrained optimization. Knowledge-Based Systems, 97:203–223, 2016.

[17] Qifeng Chen and Jinhai Dai. Distributed coevolutionary multidisciplinary designoptimization: A flexible approach. In The 9th AIAA/ISSMO Symposium on Mul-tidisciplinary Analysis and Optimization, 4-6 September 2002, Altanta, Georgia,2002.

[18] Qifeng Chen and Jinhai Dai. A distributed multi-objective evolutionary algorithmand application in missile design. In The 2002 International Conference on Controland Automation (ICCA’02), pages 197–197. IEEE, 2002.


[19] Siang Y Chong, Mei K Tan, and Jonathon David White. Observing the evolutionof neural networks learning to play the game of othello. IEEE Transactions onEvolutionary Computation, 9(3):240–251, 2005.

[20] Siang Yew Chong and Xin Yao. Behavioral diversity, choices and noise in the iteratedprisoner’s dilemma. IEEE Transactions on Evolutionary Computation, 9(6):540–551, 2005.

[21] Piya Chootinan and Anthony Chen. Constraint handling in genetic algorithms using a gradient-based repair method. Computers and Operations Research, 33(8):2263–2281, 2006.

[22] Helen G Cobb. An investigation into the use of hypermutation as an adaptiveoperator in genetic algorithms having continuous, time-dependent nonstationaryenvironments. Technical report, DTIC Document, 1990.

[23] A Colorni, M Dorigo, and V Maniezzo. Distributed optimization by ant colonies.In Proceedings of the 1st European Conference on Artificial Life (ECAL’91), pages134–142, Amsterdam, The Netherlands, 1991. Elsevier.

[24] W J Conover and R L Iman. On multiple-comparisons procedures. informal report.Technical report, Los Alamos Scientific Laboratory., 1979.

[25] Carlos Cruz, Juan R Gonzalez, and David A Pelta. Optimization in dynamicenvironments: a survey on problems, methods and measures. Soft Computing,15(7):1427–1448, 2011.

[26] Giuseppe Cuccu and Faustino Gomez. When novelty is not enough. In Euro-pean Conference on the Applications of Evolutionary Computation, pages 234–243.Springer, 2011.

[27] Paul J Darwen and Xin Yao. Does extra genetic diversity maintain escalation in aco-evolutionary arms race. In International Journal of Knowledge-Based IntelligentEngineering Systems, volume 4, pages 191–200. Citeseer, 2000.

[28] Susmita De, Sankar K Pal, and Ashish Ghosh. Genotypic and phenotypic assortativemating in genetic algorithm. Information Sciences, 105(1):209–226, 1998.

[29] Kenneth Alan De Jong. Analysis of the behavior of a class of genetic adaptivesystems. PhD thesis, University of Michigan, 1975.


[30] Kalyanmoy Deb and Ram B Agrawal. Simulated binary crossover for continuoussearch space. Complex Systems, 9(3):1–15, 1994.

[31] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fastand elitist multiobjective genetic algorithm: Nsga-ii. IEEE Transactions on Evolu-tionary Computation, 6(2):182–197, 2002.

[32] Janez Demsar. Statistical comparisons of classifiers over multiple data sets. TheJournal of Machine Learning Research, 7:1–30, 2006.

[33] Philippe Depince, Benoıt Guedas, and Jerome Picard. Multidisciplinary and multi-objective optimization: Comparison of several methods. In 7th World Congress onStructural and Multidisciplinary Optimization, 2007.

[34] Alex Dmitrienko, Ajit C Tamhane, and Frank Bretz. Multiple testing problems inpharmaceutical statistics. CRC press, 2009.

[35] Russell Eberhart and James Kennedy. A new optimizer using particle swarm theory.In Proceedings of the Sixth International Symposium on Micro Machine and HumanScience (MHS’95), pages 39–43, Piscataway, NJ, USA, 1995. IEEE.

[36] Agoston E Eiben, James E Smith, et al. Introduction to Evolutionary Computing.Springer-Verlag, Berlin/Heidelberg, Germany, 2003.

[37] Patryk Filipiak and Piotr Lipinski. Infeasibility driven evolutionary algorithm withfeed-forward prediction strategy for dynamic constrained optimization problems. InApplications of Evolutionary Computation, pages 817–828. Springer, 2014.

[38] Patryk Filipiak and Piotr Lipinski. Making idea-arima efficient in dynamic con-strained optimization problems. In Applications of Evolutionary Computation, pages882–893. Springer, 2015.

[39] H Finner. On a monotonicity problem in step-down multiple test procedures. Jour-nal of the American Statistical Association, 88(423):920–923, 1993.

[40] David B Fogel. Evolutionary Computation: the Fossil Record. Wiley-IEEE Press,New York, USA, 1998.

[41] David B Fogel. Introduction to evolutionary computation. Evolutionary Computa-tion, 1:1–3, 2000.


[42] Lawrence J Fogel, Alvin J Owens, and Michael J Walsh. Artificial Intelligencethrough Simulated Evolution. John Wiley & Sons, Chichester, WS, UK, 1966.

[43] Timo Friedrich and Stefan Menzel. A cascaded evolutionary multi-objective op-timization for solving the unbiased universal electric motor family problem. InProceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC’14),pages 3184–3191. IEEE, 2014.

[44] Gianpaolo Ghiani, Francesca Guerriero, Gilbert Laporte, and Roberto Musmanno.Real-time vehicle routing: Solution concepts, algorithms and parallel computingstrategies. European Journal of Operational Research, 151(1):1–11, 2003.

[45] Alessandro Giassi, F Bennis, and J-J Maisonneuve. Multidisciplinary design opti-misation and robust design approaches applied to concurrent design. Structural andMultidisciplinary Optimization, 28(5):356–371, 2004.

[46] David E Goldberg, Jon Richardson, et al. Genetic algorithms with sharing formultimodal function optimization. In Genetic Algorithms and Their Applications:Proceedings of the Second International Conference on Genetic Algorithms, pages41–49. Hillsdale, NJ: Lawrence Erlbaum, 1987.

[47] Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen. Avoiding convergencein cooperative coevolution with novelty search. In Proceedings of the 2014 interna-tional conference on Autonomous agents and multi-agent systems, pages 1149–1156.International Foundation for Autonomous Agents and Multiagent Systems, 2014.

[48] Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen. Devising effectivenovelty search algorithms: a comprehensive empirical study. In Proceedings of the2015 Annual Conference on Genetic and Evolutionary Computation (GECCO’15),pages 943–950. ACM, 2015.

[49] Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen. Novelty-driven co-operative coevolution. Evolutionary computation, 25(2):275–307, 2017.

[50] John J Grefenstette et al. Genetic algorithms for changing environments. In Pro-ceedings of the 2nd Conference on Parallel Problem Solving from Nature–PPSN II,volume 2, pages 137–144, 1992.

[51] S.M.K Hasan, R. Sarker, D. Essam, and D. Cornforth. Memetic algorithms forsolving job-shop scheduling problems. Memetic Computing, 1(1):69–83, 2009.


[52] W Daniel Hillis. Co-evolving parasites improve simulated evolution as an optimiza-tion procedure. Physica D: Nonlinear Phenomena, 42(1-3):228–234, 1990.

[53] Robert Hinterding, Zbigniew Michalewicz, and Thomas C Peachey. Self-adaptivegenetic algorithm for numeric functions. In Proceedings of the 4th Conference onParallel Problem Solving from Nature–PPSN IV, pages 420–429. Springer, 1996.

[54] Pei Yee Ho and Kazuyuki Shimizu. Evolutionary constrained optimization usingan addition of ranking method and a percentage-based tolerance value adjustmentscheme. Information Sciences, 177(14):2985–3004, 2007.

[55] H Holland John. Adaptation in Natural and Artificial Systems: an IntroductoryAnalysis with Applications to Biology, Control, and Artificial Intelligence. Univer-sity of Michigan Press, Ann Arbor, Michigan, USA, 1975.

[56] Colin G Johnson and Juan Jesus Romero Cardalda. Genetic algorithms in visualart and music. Leonardo, 35(2):175–184, 2002.

[57] Swaroop Kalasapur, Mohan Kumar, and Behrooz A Shirazi. Dynamic service com-position in pervasive computing. IEEE Transactions on Parallel and DistributedSystems, 18(7):907–918, 2007.

[58] Namwoo Kang, Michael Kokkolaras, and Panos Y Papalambros. Solving multiob-jective optimization problems using quasi-separable mdo formulations and analyti-cal target cascading. Structural and Multidisciplinary Optimization, 50(5):849–859,2014.

[59] Aida Khajavirad, Jeremy J Michalek, and Timothy W Simpson. An efficient de-composed multiobjective genetic algorithm for solving the joint product platformselection and product family design problem with generalized commonality. Struc-tural and Multidisciplinary Optimization, 39(2):187–201, 2009.

[60] Hyung Min Kim, Nestor F Michelena, Panos Y Papalambros, and Tao Jiang. Targetcascading in optimal system design. Journal of Mechanical Design, 125(3):474–480,2003.

[61] John R Koza. Genetic programming as a means for programming computers bynatural selection. Statistics and Computing, 4(2):87–112, 1994.


[62] Viswanathan Krishnan, Steven D Eppinger, and Daniel E Whitney. A model-based framework to overlap product development activities. Management Science,43(4):437–451, 1997.

[63] Andrew Kusiak. Concurrent engineering: automation, tools, and techniques. JohnWiley & Sons, 1993.

[64] Andrew Kusiak and Nick Larson. Decomposition and representation methods inmechanical design. Journal of Vibration and Acoustics, 117(B):17–24, 1995.

[65] Eric-Wubbo Lameijer, Thomas Back, Joost N Kok, and Ad P Ijzerman. Evolution-ary algorithms in drug design. Natural Computing, 4(3):177–243, 2005.

[66] Joel Lehman and Kenneth O Stanley. Exploiting open-endedness to solve problemsthrough the search for novelty. In Artificial Life, pages 329–336, 2008.

[67] Joel Lehman and Kenneth O Stanley. Efficiently evolving programs through thesearch for novelty. In Proceedings of the 12th Annual Conference on Genetic andEvolutionary Computation (GECCO’10), pages 837–844. ACM, 2010.

[68] Joel Lehman and Kenneth O Stanley. Abandoning objectives: Evolution throughthe search for novelty alone. Evolutionary Computation, 19(2):189–223, 2011.

[69] Changhe Li, Trung Thanh Nguyen, Ming Yang, Shengxiang Yang, and Sanyou Zeng.Multi-population methods in unconstrained continuous dynamic environments: Thechallenges. Information Sciences, 296:95–118, 2015.

[70] Jian-Ping Li, Marton E Balazs, Geoffrey T Parks, and P John Clarkson. A speciesconserving genetic algorithm for multimodal function optimization. EvolutionaryComputation, 10(3):207–234, 2002.

[71] Xiaodong Li. Adaptively choosing neighbourhood bests using species in a particleswarm optimizer for multimodal function optimization. In Proceedings of the AnnualConference on Genetic and Evolutionary Computation (GECCO’04), pages 105–116. Springer, 2004.

[72] Xiaodong Li, Michael G Epitropakis, Kalyanmoy Deb, and Andries Engelbrecht.Seeking multiple solutions: an updated survey on niching methods and their appli-cations. IEEE Transactions on Evolutionary Computation, 21(4):518–538, 2017.


[73] Antonios Liapis, Georgios N Yannakakis, and Julian Togelius. Constrained noveltysearch: A study on game content generation. Evolutionary Computation, 23(1):101–129, 2015.

[74] Jun Lin, Yanjun Qian, Wentian Cui, and Zhanli Miao. Overlapping and communi-cation policies in product development. European Journal of Operational Research,201(3):737–750, 2010.

[75] Christoph H Loch and Christian Terwiesch. Communication and uncertainty inconcurrent engineering. Management Science, 44(8):1032–1048, 1998.

[76] Xiaofen Lu, Stefan Menzel, Ke Tang, and Xin Yao. The performance effects ofinteraction frequency in parallel cooperative coevolution. In Simulated Evolutionand Learning, pages 82–93. Springer, 2014.

[77] Xiaofen Lu, Ke Tang, Bernhard Sendhoff, and Xin Yao. A review of concurrentoptimisation methods. International Journal of Bio-Inspired Computation, 6(1):22–31, 2014.

[78] Xiaofen Lu, Ke Tang, and Xin Yao. Speciated evolutionary algorithm for dynamic constrained optimisation. In International Conference on Parallel Problem Solving from Nature (PPSN XIV), pages 203–213. Springer, 2016.

[79] Sedigheh Mahdavi, Shahryar Rahnamayan, and Mohammad Ebrahim Shiri. Incre-mental cooperative coevolution for large-scale global optimization. Soft Computing,22(6):2045–2064, 2018.

[80] Samir W Mahfoud. Niching methods for genetic algorithms. Urbana, 51(95001):62–94, 1995.

[81] Rammohan Mallipeddi and Ponnuthurai N Suganthan. Ensemble of constraint han-dling techniques. IEEE Transactions on Evolutionary Computation, 14(4):561–579,2010.

[82] Joaquim RRA Martins and Andrew B Lambe. Multidisciplinary design optimiza-tion: a survey of architectures. American Institute of Aeronautics and Astronautics(AIAA) Journal, 51(9):2049–2075, 2013.

[83] Yi Mei, Xiaodong Li, and Xin Yao. Cooperative co-evolution with route distancegrouping for large-scale capacitated arc routing problems. IEEE Transactions onEvolutionary Computation, 18(3):435–449, 2014.


[84] Ole J Mengshoel and David E Goldberg. Probabilistic crowding: Deterministiccrowding with probabilistic replacement. In Proceedings of the Annual Conferenceon Genetic and Evolutionary Computation (GECCO’99), pages 409–416, 1999.

[85] Efren Mezura-Montes, Carlos A Coello Coello, and Edy I Tun-Morales. Simplefeasibility rules and differential evolution for constrained optimization. In MICAI2004: Advances in Artificial Intelligence, pages 707–716. Springer, 2004.

[86] Brad L Miller and Michael J Shaw. Genetic algorithms with dynamic niche sharingfor multimodal function optimization. In Proceedings of the 1996 IEEE InternationalConference on Evolutionary Computation, pages 786–791. IEEE, 1996.

[87] Angel Kuri Morales and Carlos Villegas Quezada. A universal eclectic genetic al-gorithm for constrained optimization. In Proceedings of the 6th European Congresson Intelligent Techniques and Soft Computing, volume 1, pages 518–522, 1998.

[88] Jean-Baptiste Mouret. Novelty-based multiobjectivization. In New Horizons inEvolutionary Robotics, pages 139–154. Springer, 2011.

[89] F Moussouni, S Kreuawan, S Brisset, F Gillon, P Brochet, and L Nicod. Analyticaltarget cascading for optimal design of railway traction system. In InternationalConference on Engineering Optimization, pages 1–10, 2008.

[90] Prasanth B Nair and Andy J Keane. Coevolutionary genetic adaptation–a newparadigm for distributed multidisciplinary design optimization. In The 40thAIAA/ASME/ASCE/AHS/ASC Structures Structural Dynamics and MaterialsConf., St. Louis, MO, AIAA Paper, pages 1428–1437. Citeseer, 1999.

[91] Prasanth B Nair and Andy J Keane. Coevolutionary architecture for distributedoptimization of complex coupled systems. American Institute of Aeronautics andAstronautics (AIAA) Journal, 40(7):1434–1443, 2002.

[92] Trung Thanh Nguyen. Continuous dynamic optimisation using evolutionary algo-rithms. PhD thesis, University of Birmingham, 2011.

[93] Trung Thanh Nguyen, Shengxiang Yang, and Juergen Branke. Evolutionary dy-namic optimization: A survey of the state of the art. Swarm and EvolutionaryComputation, 6:1–24, 2012.

142

Page 155: MULTI-SPECIES EVOLUTIONARY ALGORITHMS FOR COMPLEX

[94] Trung Thanh Nguyen and Xin Yao. Continuous dynamic constrained optimization–the challenges. IEEE Transactions on Evolutionary Computation, 16(6):769–786,2012.

[95] Mohammad Nabi Omidvar, Xiaodong Li, Zhenyu Yang, and Xin Yao. Cooperativeco-evolution for large scale optimization through more frequent random grouping.In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC’10),pages 1–8. IEEE, 2010.

[96] Kunal Pal, Chiranjib Saha, and Swagatam Das. Differential evolution and offspringrepair method based dynamic constrained optimization. In Swarm, Evolutionary,and Memetic Computing, pages 298–309. Springer, 2013.

[97] Kunal Pal, Chiranjib Saha, Swagatam Das, Carlos Coello, et al. Dynamic con-strained optimization with offspring repair based gravitational search algorithm. InProceedings of the 2009 IEEE Congress on Evolutionary Computation (CEC’13),pages 2414–2421. IEEE, 2013.

[98] Liviu Panait. Theoretical convergence guarantees for cooperative coevolutionaryalgorithms. Evolutionary Computation, 18(4):581–615, 2010.

[99] Liviu Panait, R Paul Wiegand, and Sean Luke. A visual demonstration of conver-gence properties of cooperative coevolution. In International Conference on ParallelProblem Solving from Nature, pages 892–901. Springer, 2004.

[100] Daniel Parrott and Xiaodong Li. Locating and tracking multiple dynamic optimaby a particle swarm model using speciation. IEEE Transactions on EvolutionaryComputation, 10(4):440–458, 2006.

[101] Thorsten Pohlert. The pairwise multiple comparison of mean ranks package (pm-cmr). R package, pages 2004–2006, 2014.

[102] Riccardo Poli, James Kennedy, and Tim Blackwell. Particle swarm optimization.Swarm Intelligence, 1(1):33–57, 2007.

[103] Elena Popovici and Kenneth De Jong. The effects of interaction frequency on theoptimization performance of cooperative coevolution. In Proceedings of the 8thAnnual Conference on Genetic and Evolutionary Computation (GECCO’06), pages353–360. ACM, 2006.


[104] Mitchell A Potter and Kenneth A De Jong. A cooperative coevolutionary approachto function optimization. In Proceedings of the 3rd Conference on Parallel ProblemSolving from Nature–PPSN III, pages 249–257. Springer, 1994.

[105] Mitchell A Potter and Kenneth A De Jong. Cooperative coevolution: An architec-ture for evolving coadapted subcomponents. Evolutionary Computation, 8(1):1–29,2000.

[106] Sebastien Rabeau, Philippe Depince, and Fouad Bennis. Collaborative optimiza-tion of complex systems: a multidisciplinary approach. International Journal onInteractive Design and Manufacturing (IJIDeM), 1(4):209–218, 2007.

[107] Hendrik Richter. Memory design for constrained dynamic optimization problems.In Applications of Evolutionary Computation, pages 552–561. Springer, 2010.

[108] Thomas A Roemer, Reza Ahmadi, and Robert H Wang. Time-cost trade-offs inoverlapped product development. Operations Research, 48(6):858–865, 2000.

[109] Christopher D Rosin and Richard K Belew. New methods for competitive coevolu-tion. Evolutionary Computation, 5(1):1–29, 1997.

[110] Kamel Rouibah and Kevin R Caskey. Change management in concurrent engineer-ing from a parameter perspective. Computers in industry, 50(1):15–34, 2003.

[111] Thomas P. Runarsson and Xin Yao. Stochastic ranking for constrained evolution-ary optimization. IEEE Transactions on Evolutionary Computation, 4(3):284–294,2000.

[112] Sancho Salcedo-Sanz. A survey of repair methods used as constraint handling tech-niques in evolutionary algorithms. Computer Science Review, 3(3):175–192, 2009.

[113] Bruno Sareni and Laurent Krahenbuhl. Fitness sharing and niching methods revis-ited. IEEE Transactions on Evolutionary Computation, 2(3):97–106, 1998.

[114] Friedrich Schmid and Rafael Schmidt. Multivariate conditional versions of spear-man’s rho and related measures of tail dependence. Journal of Multivariate Analysis,98(6):1123–1140, 2007.

[115] Dan Simon. Biogeography-based optimization. IEEE Transactions on EvolutionaryComputation, 12(6):702–713, 2008.


[116] Timothy W Simpson. Product platform design and customization: status andpromise. AI EDAM: Artificial Intelligence for Engineering Design, Analysis andManufacturing, 18(01):3–20, 2004.

[117] Timothy W Simpson, Jonathan R Maier, and Farrokh Mistree. Product platform design: method and application. Research in Engineering Design, 13(1):2–22, 2001.

[118] Gulshan Singh and Kalyanmoy Deb Dr. Comparison of multi-modal optimizationalgorithms based on evolutionary algorithms. In Proceedings of the 8th AnnualConference on Genetic and Evolutionary Computation, pages 1305–1312. ACM,2006.

[119] C Stoen. Various collaborator selection pressures for cooperative coevolution forclassification. In International Conference of Artificial Intelligence and Digital Com-munications (AIDC), 2006.

[120] Rainer Storn and Kenneth Price. Differential evolutiona simple and efficient adap-tive scheme for global optimization over continuous spaces. Technical report, Berke-ley, CA, USA, 1995.

[121] Rainer Storn and Kenneth Price. Differential evolution–a simple and efficient heuris-tic for global optimization over continuous spaces. Journal of Global Optimization,11(4):341–359, 1997.

[122] Vladimir Tosic, David Mennie, and Bernard Pagurek. Dynamic service compositionand its applicability to e-business software systems–the icaris experience. Object-Oriented Business Solutions ECOOP, 2000.

[123] Simon Tosserams, LF Pascal Etman, and JE Rooda. An augmented lagrangiandecomposition method for quasi-separable problems in mdo. Structural and Multi-disciplinary Optimization, 34(3):211–227, 2007.

[124] Nikos Tzevelekos, Michael Kokkolaras, Panos Y Papalambros, Martijn F Hulshof,LF Pascal Etman, and JE Rooda. An empirical local convergence study of alter-native coordination schemes in analytical target cascading. In Proceedings of the5th World Congress on Structural and Multidisciplinary Optimization, pages 19–23,2003.

[125] Hans-Michael Voigt and Jan Matti Lange. Local evolutionary search enhancementby random memorizing. In Proceedings of the 1998 IEEE International Conferenceon Computational Intelligence, pages 547–552. IEEE, 1998.


[126] Karen Willcox and Sean Wakayama. Simultaneous optimization of a multiple-aircraft family. Journal of Aircraft, 40(4):616–622, 2003.

[127] Yonggang Xing and Shuo Tang. A distributed coevolutionary multidisciplinarydesign optimization algorithm. In The 2010 International Joint Conference onComputational Science and Optimization (CSO), volume 2, pages 77–80. IEEE,2010.

[128] Peng Yang, Ke Tang, and Xiaofen Lu. Improving estimation of distribution algo-rithm on multimodal problems by detecting promising areas. IEEE Transactionson Cybernetics, 45(8):1438–1449, 2015.

[129] Zhenyu Yang, Ke Tang, and Xin Yao. Large scale evolutionary optimization usingcooperative coevolution. Information Sciences, 178(15):2985–2999, 2008.

[130] Ali Yassine and Dan Braha. Complex concurrent engineering and the design struc-ture matrix method. Concurrent Engineering, 11(3):165–176, 2003.

[131] Ali A Yassine, Ramavarapu S Sreenivas, and Jian Zhu. Managing the exchange ofinformation in product development. European Journal of Operational Research,184(1):311–326, 2008.

[132] Parviz M Zadeh, Vassili V Toropov, and Alastair S Wood. Metamodel-based col-laborative optimization framework. Structural and Multidisciplinary Optimization,38(2):103–115, 2009.

[133] Xiaoling Zhang, Hong-Zhong Huang, Zhonglai Wang, Yu Liu, and Yanfeng Li. Apareto set coordination method for analytical target cascading. Concurrent Engi-neering, 21(4):286–295, 2013.

[134] Albert Y. Zomaya and Yee-Hwei Teh. Observations on using genetic algorithms fordynamic load-balancing. IEEE Transactions on Parallel and Distributed Systems,12(9):899–911, 2001.
