

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 1, FEBRUARY 2006 141

Classification of Adaptive Memetic Algorithms: A Comparative Study

Yew-Soon Ong, Meng-Hiot Lim, Ning Zhu, and Kok-Wai Wong

Abstract—Adaptation of parameters and operators represents one of the most important and promising recent areas of research in evolutionary computation; it is a form of designing self-configuring algorithms that acclimatize to suit the problem in hand. Here, our interest is in a recent breed of hybrid evolutionary algorithms typically known as adaptive memetic algorithms (MAs). One unique feature of adaptive MAs is the choice of local search methods or memes, and recent studies have shown that this choice significantly affects the performance of problem searches. In this paper, we present a classification of memes adaptation in adaptive MAs on the basis of the mechanism used and the level of historical knowledge on the memes employed. Then the asymptotic convergence properties of the adaptive MAs considered are analyzed according to the classification. Subsequently, empirical studies on representatives of adaptive MAs for different type-level meme adaptations using continuous benchmark problems indicate that global-level adaptive MAs exhibit better search performances. Finally, we conclude with some promising research directions in the area.

Index Terms—Adaptation, evolutionary algorithm, memetic algorithm, optimization.

I. INTRODUCTION

IN problems characterized by many local optima, traditional local optimization techniques tend to fail in locating the global optimum. In these cases, modern stochastic techniques such as the genetic algorithm (GA) can be considered an efficient and interesting option [1], [2]. As with most search and optimization techniques, the GA includes a number of operational parameters whose values significantly influence the behavior of the algorithm on a given problem, and usually in unpredictable ways. Often, one would need to tune the parameters of the GA to enhance its performance. Over the last 20 years, a great deal of research effort has focused on adapting GA parameters automatically [3]–[5]. These include the mutation rate, crossover, and reproduction techniques, where promising results have been demonstrated. Surveys and classifications of adaptations in evolutionary computation are available in Hinterding et al. [6] and Eiben et al. [7].

Nevertheless, traditional GAs generally suffer from excessively slow convergence to locate a precise enough solution because of their failure to exploit local information. This often limits the practicality of GAs on many large-scale real world problems where computational time is a crucial consideration. Memetic algorithms (MAs) are population-based metaheuristic search approaches that have been receiving increasing attention in recent years. They are inspired by neo-Darwinian principles of natural evolution and Dawkins' notion of a meme, defined as a unit of cultural evolution that is capable of local refinement. Generally, an MA may be regarded as a marriage between a population-based global search and local improvement procedures. It has been shown to be successful and popular for solving optimization problems in many contexts [8]–[21]. In particular, once the technique has been properly developed, higher quality solutions can be attained much more efficiently. Nevertheless, one drawback of the MA is that in order for it to be useful on a problem instance, one often needs to carry out extensive tuning of the control parameters, for example, the selection of a problem-specific meme that suits the problem of interest [9]. The influence of the memes employed has been shown extensively in [10]–[21] to have a major impact on the search performance of MAs. These studies demonstrated that the search performance obtained by MAs is often better than that obtained by the GA alone, especially when prior knowledge on suitable problem-specific memes is available.

Manuscript received August 31, 2004; revised February 21, 2005. This work was supported by Singapore Technologies Dynamics and Nanyang Technological University. This paper was recommended by Editor E. Santos.

Y.-S. Ong and N. Zhu are with the School of Computer Engineering, Nanyang Technological University, Singapore 639798 (e-mail: [email protected]; [email protected]).

M.-H. Lim is with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (e-mail: [email protected]).

K.-W. Wong is with the School of Information Technology, Murdoch University, Perth, WA 6150, Australia (e-mail: [email protected]).

Digital Object Identifier 10.1109/TSMCB.2005.856143

In discrete combinatorial optimization research, Cowling et al. [11] coined the term "hyperheuristic" to describe the idea of fusing a number of different memes together, so that the actual meme applied may differ at each decision point, i.e., often at the chromosome/individual level. They describe the hyperheuristic as a heuristic to choose memes. The idea of using multimemes and adaptive choice of memes at each decision point was also proposed by Krasnogor et al. in [13], [14]. At about the same time, Smith also introduced the co-evolution of multiple memes in [15], [20]. These are some of the research groups that have been heavily involved in work relating to memes adaptation in MAs for combinatorial optimization problems. In the area of continuous optimization, Hart in his dissertation work [10], as well as some of our earlier research work in [16]–[18], has demonstrated that the choice of memes affects the performance of MAs significantly on a variety of benchmark problems of diverse properties. Ong and Keane [17] coined the term "meta-Lamarckian learning" to introduce the idea of adaptively choosing multiple memes during an MA search in the spirit of Lamarckian learning.

From our survey,¹ it is noted that there has been a lack of studies analyzing and comparing different adaptive MAs from the perspective of choosing memes. Our objective in this paper is to summarize the state of the art in the adaptation of the choice of memes in general nonlinear optimization. In particular, we conduct a classification of adapting the choice of memes in MAs based on the mechanism and the level of historical knowledge used. It is worth noting that such a classification would be informative to the evolutionary computation community, since researchers use the terms "meta-Lamarckian learning", "hyperheuristic", and "multimemes" arbitrarily when referring to memes adaptation in adaptive MAs. Based on the resultant classification, we conduct a systematic study on the adaptive MAs for different type-level adaptations in the taxonomy using continuous benchmark problems of diverse properties. Last but not least, we hope that the classification and results presented here will help promote greater research in adaptive MAs and assist in identifying new research directions.

¹ It is worth noting here that our focus is the class of adaptive MAs where the choice of memes is adapted during the evolutionary search.

1083-4419/$20.00 © 2006 IEEE

Authorized licensed use limited to: Murdoch University. Downloaded on October 27, 2009 at 05:06 from IEEE Xplore. Restrictions apply.

Fig. 1. Canonical MA pseudocode.

This paper is organized in the following manner. Section II presents the recent development of adaptive MAs in general nonlinear optimization. The proposed taxonomy or classification for adaptive MAs is then presented in Section III. Section IV presents the analyses of the global convergence properties of adaptive MAs by means of finite Markov chains. Section V summarizes the empirical studies on the assortment of adaptations using a variety of continuous parametric benchmark test functions. Finally, Section VI concludes this paper with some future research directions.

II. ADAPTIVE MAS FOR GENERAL NONLINEAR OPTIMIZATION

Optimization theory [22] is the study of the mathematical properties of optimization problems and the analysis of algorithms for their solutions. It deals with the problem of minimizing or maximizing a mathematical model of an objective function such as cost, fuel consumption, etc., subject to a set of constraints. In particular, we consider the general nonlinear programming problem of the form:

• a target objective function f(x) to be minimized or maximized;

• a set of variables x_i, i = 1, …, n, which affect the value of the objective function, where l_i ≤ x_i ≤ u_i, and l_i and u_i are the lower and upper bounds, respectively;

• a set of equality/inequality constraints that allows the unknowns to take on certain values but excludes others. For example, the constraints may take the form of g_j(x) ≤ 0, for j = 1, …, m, where m is the number of constraints.
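To make the formulation above concrete, the three pieces can be sketched in code. The function names, the quadratic objective, and the single constraint below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the general nonlinear programming problem:
# minimize f(x) subject to bounds l_i <= x_i <= u_i and g_j(x) <= 0.

def objective(x):
    # Example objective f(x): a simple quadratic to be minimized.
    return sum(xi ** 2 for xi in x)

def within_bounds(x, lower, upper):
    # Check l_i <= x_i <= u_i for every variable.
    return all(l <= xi <= u for xi, l, u in zip(x, lower, upper))

def feasible(x, constraints):
    # Inequality constraints g_j(x) <= 0, j = 1..m.
    return all(g(x) <= 0 for g in constraints)

lower, upper = [-5.0, -5.0], [5.0, 5.0]
constraints = [lambda x: x[0] + x[1] - 4.0]   # g_1(x) = x1 + x2 - 4 <= 0

x = [1.0, 2.0]
assert within_bounds(x, lower, upper) and feasible(x, constraints)
```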

A. Memetic Algorithms (MAs)

A GA is a computational model that mimics biological evolution, whereas an MA, in contrast, mimics cultural evolution [23]. Memes can be thought of as units of information that are replicated while people exchange ideas. In an MA, a population consists solely of local optimum solutions. The basic steps of a canonical MA for general nonlinear optimization based on the GA are outlined in Fig. 1.

B. Adaptive MAs

One unique feature of the adaptive MAs we consider here is the use of multiple memes in the search, where the decision on which meme to apply to an individual is made dynamically. This form of adaptive MA promotes both cooperation and competition among various problem-specific memes and favors neighborhood structures containing high quality solutions that may be arrived at with low computational effort. The adaptive MAs are outlined in Fig. 2. In the first step, the GA population may be initialized either randomly or using a design of experiments technique such as Latin hypercube sampling [24]. Subsequently, for each individual in the population, a meme is selected from a pool of memes considered in the search to conduct the local improvements. Different strategies may be employed to facilitate the decision making process [11]–[20]. For example, one may reward a meme based on its ability to perform local improvement and use this as a metric in the selection process [11], [12], [16]–[19]. After local improvement, the genotypes and/or phenotypes in the original population are replaced with the improved solution depending on the learning mechanism, i.e., Lamarckian or Baldwinian learning. Standard GA operators are then used to form the next population.
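As a rough sketch (not the authors' implementation), the loop just described, with a reward-based meme choice and Lamarckian replacement, might look like the following. The two hill-climbing memes, the improvement-based reward rule, and all parameter values are hypothetical.

```python
import random

def sphere(x):
    # Benchmark objective to minimize.
    return sum(xi ** 2 for xi in x)

def hill_climb(x, f, step, iters=20):
    # Generic local search meme: accept random perturbations that improve f.
    best = list(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if f(cand) < f(best):
            best = cand
    return best

def meme_small_step(x, f):   # fine-grained local refinement
    return hill_climb(x, f, step=0.01)

def meme_large_step(x, f):   # coarse local refinement
    return hill_climb(x, f, step=0.5)

def adaptive_ma(f, dim=3, pop_size=8, gens=20, seed=0):
    random.seed(seed)
    memes = [meme_small_step, meme_large_step]
    reward = [1.0] * len(memes)                      # running reward per meme
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        improved = []
        for ind in pop:
            # Select a meme in proportion to its accumulated reward.
            k = random.choices(range(len(memes)), weights=reward)[0]
            new = memes[k](ind, f)
            reward[k] += max(0.0, f(ind) - f(new))   # reward = improvement
            improved.append(new)                     # Lamarckian replacement
        improved.sort(key=f)
        parents = improved[: pop_size // 2]          # truncation selection
        pop = parents + [
            [a + random.gauss(0, 0.1) for a in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=f)

best = adaptive_ma(sphere)
```

The reward update here is one of the selection metrics the text mentions; Baldwinian learning would instead keep the original genotype and use only the improved fitness.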

1) Hyperheuristic Adaptive MAs: In the context of combinatorial optimization, Cowling et al. [11] introduced the term "hyperheuristic" as a strategy that manages the choice of which meme should be applied at any given time, depending upon the characteristics of the memes and the region of the solution space currently under exploration. With a hyperheuristic, multiple memes are considered in the evolutionary search. In their work, three different categories of hyperheuristics have been demonstrated for scheduling problems, namely: 1) random; 2) greedy; and 3) choice-function [11], [12], [19].

Fig. 2. General framework of memes adaptation in adaptive MAs.

Under the random category [11], the first strategy is Simplerandom. Here, a meme is selected randomly at each decision point. It is purely stochastic in nature, and the probability of choosing each meme is kept constant throughout the search. This strategy may be regarded as a datum with which other selection strategies may be compared. In Randomdescent, the choice of meme is initially decided randomly. Subsequently, this same meme is used repeatedly until no further local improvements can be found. This same process then repeats to consider all the other memes. Randompermdescent is similar to the Randomdescent strategy except that a random permutation of memes is fixed in advance, and when the application of a meme does not result in any improvement, the next meme in the permutation is used.
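The descent rules above can be expressed compactly. This sketch assumes memes are opaque tokens and that the caller reports the improvement achieved after each application; that interface is our invention for illustration.

```python
import random

class RandomPermDescent:
    """Sketch of Randompermdescent: fix a random permutation of memes in
    advance; keep the current meme while it improves, otherwise advance
    to the next meme in the permutation (wrapping around)."""

    def __init__(self, memes, rng=random):
        self.order = list(memes)
        rng.shuffle(self.order)
        self.idx = 0

    def choose(self):
        return self.order[self.idx]

    def report(self, improvement):
        # Qualitative feedback: only "improved or not" matters here.
        if improvement <= 0:
            self.idx = (self.idx + 1) % len(self.order)

random.seed(1)
sel = RandomPermDescent(["meme_a", "meme_b", "meme_c"])
first = sel.choose()
sel.report(0.5)                  # improvement found: keep the same meme
assert sel.choose() == first
sel.report(0.0)                  # no improvement: move to the next meme
assert sel.choose() != first
```

Randomdescent would be the same class with the permutation replaced by a fresh random draw whenever the current meme stalls.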

Unlike the random category, the greedy category [11] resembles a brute-force technique that experiments with every meme on each individual and chooses the meme that results in the biggest improvement. Since it is a brute-force method, the drawback of the greedy hyperheuristic is clearly its high computational cost.

In the choice-function category [11], [12], a choice function incorporating multiple metrics of goodness is used to assess how effective a meme is, based upon the current state of knowledge about the region of the solution space under exploration. The choice function F proposed in [11], [12] is composed of three components. The first component, f1, reflects the recent improvement made by a meme and expresses the idea that if a meme has recently performed well, it is likely to continue to be effective. The second component, f2, describes the improvement contributed by consecutive pairs of memes, and the last component, f3, records the period elapsed since a meme was last used. Five strategies were introduced for the choice-function category of hyperheuristics [11], [12]. In the Straightchoice strategy, the meme that yields the best F value is chosen at each decision point. In the second strategy, Rankedchoice, memes are ranked according to F, the top-ranking memes are tried individually, and only the meme that yields the largest improvement proceeds with Lamarckian learning. In the Roulettechoice strategy, a meme k is chosen with probability relative to the overall improvement, i.e., F(k) / Σ_j F(j), where the sum runs over all memes considered. The Decompchoice strategy considers each component of F, i.e., f1, f2, and f3, individually. In particular, the strategy experiments with each meme and records the best local improvement based on f1, f2, and f3 individually. Subsequently, the meme that results in the best improvement among those identified is used. This implies that up to four memes will be individually tested in the case when the highest-ranked performing memes are all different for F, f1, f2, and f3. Alternatively, using the choice function, a tabu list may also be created to narrow down the choice of memes at each decision point [19]. This is labeled here as the Tabu-search strategy.
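A hedged sketch of the choice-function idea follows: a score F combines recent performance, pairwise performance, and time since last use, and Roulettechoice samples memes in proportion to F. The weights and bookkeeping are illustrative, not the exact formulation of [11], [12].

```python
import random

def choice_value(f1, f2, f3, alpha=1.0, beta=1.0, delta=1.0):
    # F(k) = alpha*f1(k) + beta*f2(k) + delta*f3(k); weights are assumed.
    return alpha * f1 + beta * f2 + delta * f3

def roulette_choice(F_values, rng=random):
    # Roulettechoice: pick meme k with probability F(k) / sum_j F(j).
    total = sum(F_values)
    r = rng.uniform(0, total)
    acc = 0.0
    for k, F in enumerate(F_values):
        acc += F
        if r <= acc:
            return k
    return len(F_values) - 1

# Two memes: one recently effective, one long unused (higher f3).
F = [choice_value(3.0, 1.0, 0.5), choice_value(0.5, 0.2, 2.0)]
k = roulette_choice(F)
assert k in (0, 1)
```

Straightchoice would simply be `max(range(len(F)), key=F.__getitem__)` over the same scores.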

2) Multimemes and Co-Evolving MAs: Krasnogor also proposed a simple inheritance mechanism for discrete combinatorial search [13], [14]. Each individual is composed of its genetic material and its memetic material. The memetic material encoded into its genetic part specifies the meme that will be used to perform local search in the neighborhood of the solution. Smith also worked on co-evolving memetic algorithms that use similar mechanisms to govern the choice of memes represented in the form of rules [15], [20]. These are forms of self-adaptive MAs that evolve the genetic material and the choice of memes simultaneously during the search. A simple vertical inheritance mechanism, as used in general self-adaptive GAs and evolution strategies, is shown to provide a robust adaptation of behavior [13], [14]. The multimemes algorithm with the simple inheritance mechanism is outlined in Fig. 3.
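The vertical inheritance idea can be sketched as follows: each individual carries a meme index alongside its genome, a child inherits the meme of its fitter parent, and the meme occasionally mutates. The tuple layout and the innovation-rate parameter are hypothetical illustrations, not Krasnogor's exact encoding.

```python
import random

def inherit_meme(parent_a, parent_b, n_memes, innovation_rate=0.1, rng=random):
    # parent = (genome, meme_index, fitness); lower fitness is better here.
    fitter = parent_a if parent_a[2] <= parent_b[2] else parent_b
    meme = fitter[1]
    if rng.random() < innovation_rate:    # occasional meme mutation
        meme = rng.randrange(n_memes)
    return meme

a = ([0.1, 0.2], 1, 3.5)   # fitter parent carries meme 1
b = ([0.3, 0.1], 0, 7.2)
assert inherit_meme(a, b, n_memes=3, innovation_rate=0.0) == 1
```

Because the meme index travels through selection like any other gene, good memes spread through the population without an explicit reward bookkeeping step.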

3) Meta-Lamarckian Learning: In continuous nonlinear function optimization, Ong and Keane [17] studied meta-Lamarckian learning on a range of benchmark problems of diverse properties. Since the study on using multiple memes in an MA search concentrated on Lamarckian learning, it was termed meta-Lamarckian learning [17]. The main motivation of the work was to facilitate competition and cooperation among the multiple memes employed in the memetic search so as to solve a problem with greater effectiveness and efficiency. A Basic meta-Lamarckian learning strategy was proposed as the baseline algorithm, forming a datum against which other meta-Lamarckian learning strategies may be compared. This is similar to the Simplerandom strategy proposed in hyperheuristics, where no adaptation is used. It has the advantage of at least giving all the available memes being considered a chance to improve each chromosome throughout the MA search.

Fig. 3. Simple inheritance mechanism pseudocode.

Fig. 4. Outline of subproblem decomposition strategy.

Fig. 5. Outline of Biased Roulette Wheel strategy.

Further, two adaptive strategies were investigated in [16], [17]. The rationale behind the Sub-problem Decomposition strategy was to decompose the original search problem cost surface, which is often large and complex, into many sub-partitions dynamically, and to attempt to choose the most competitive meme for each sub-partition. To choose a suitable meme at each decision point, the strategy gathers knowledge about the ability of the memes to search a particular region of the search space from a database of past experiences archived during the initial EA search. The memes identified then form the candidate memes that compete, based on their rewards, to decide which meme will proceed with the local improvement. In this manner, it was shown that the strategy creates opportunities for joint operations between different memes in solving the problem as a whole, because the diverse memes help to improve the overall population based on their areas of specialization. Hence, Sub-problem Decomposition promotes both cooperation and competition among the memes in the memetic search. On the other hand, the Biased Roulette Wheel strategy [17] is similar to Roulettechoice [11]; nevertheless, they differ in the choice functions used. The pseudocodes for these two strategies are outlined in Figs. 4 and 5.
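One way to read the Sub-problem Decomposition idea in code: select the meme with the best accumulated reward among the k nearest archived points. The Euclidean metric, the archive layout, and the value of k are our illustrative choices, not the exact procedure of [16], [17].

```python
def euclid(a, b):
    # Euclidean distance between two points in the search space.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def choose_meme_by_neighbors(x, archive, n_memes, k=3):
    # archive: list of (point, meme_index, reward) from earlier search.
    nearest = sorted(archive, key=lambda rec: euclid(x, rec[0]))[:k]
    totals = [0.0] * n_memes
    for _, meme, reward in nearest:
        totals[meme] += reward          # aggregate reward per meme locally
    return max(range(n_memes), key=lambda m: totals[m])

archive = [([0.0, 0.0], 0, 2.0),
           ([0.1, 0.1], 0, 1.5),
           ([5.0, 5.0], 1, 9.0)]
# Near the origin, meme 0 has the better local track record.
assert choose_meme_by_neighbors([0.05, 0.05], archive, n_memes=2, k=2) == 0
```

The "sub-partition" emerges implicitly from the nearest-neighbor query: different regions of the space can favor different memes.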

III. CLASSIFICATION OF ADAPTIVE MAS

In this section, a classification of memes adaptation in adaptive MAs is presented. This classification is based on the mechanism of adaptation (adaptation type) [6], [7] and on the level at which the historical knowledge of the memes is used (adaptation level) in adapting the choice of memes in adaptive MAs. It is orthogonal and encompasses diverse forms of memes adaptation in adaptive MAs. The taxonomy of existing strategies of adaptive MAs based on adaptation type and level is depicted in Table I.

A. Adaptation Type

The classification of the adaptation type is made on the basis of the mechanism of the adaptation used in the choice of memes.

Authorized licensed use limited to: Murdoch University. Downloaded on October 27, 2009 at 05:06 from IEEE Xplore. Restrictions apply.

Page 5: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B … · 142 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 1, FEBRUARY 2006 Fig

ONG et al.: CLASSIFICATION OF ADAPTIVE MAs: A COMPARATIVE STUDY 145

TABLE I
A CLASSIFICATION OF MEMES ADAPTATION IN ADAPTIVE MAS

The shaded region indicates type/level combinations of adaptive MAs that do not exist in our classification.

In particular, attention is paid to the issue of whether or not feedback from the adaptive MA is used and how it is used. Here, feedback is defined as the improvement attained by the chosen meme on the chromosome searched.

1) Static: When no form of feedback is used during the evolutionary search, the adaptation is considered to be of static type. The Basic meta-Lamarckian learning strategy for meme selection [17] and the Simplerandom strategy [11] are simple random walks over the available memes every time a chromosome is to be locally improved. Since they do not make use of any feedback from the search, they are forms of static adaptation strategies in adaptive MAs.

2) Adaptive: Adaptive dynamic adaptation takes place when feedback from the MA search influences the choice of memes at each decision point. Here, we divide adaptive dynamic adaptation into qualitative and quantitative adaptation.

In qualitative adaptation, the exact value of the feedback is of little importance. Instead, the quality of a meme is sufficient. As long as the present meme generates improvement in the local learning process, it remains employed at the next decision point. Otherwise, a new meme is chosen and the process repeats until the stopping criteria are met. The Randomdescent, Randompermdescent, and Tabu-search strategies [11], [19] are forms of qualitative adaptation.

On the other hand, the Greedy, Straightchoice, Rankedchoice, Roulettechoice, Decompchoice, Biased Roulette Wheel, and Sub-Problem Decomposition strategies [11], [17] rely on the quantitative value of the feedback obtained on each individual's cultural evolution to decide on the choice of memes. They are thus considered forms of quantitative adaptation.

3) Self-Adaptive: Self-adaptive type adaptation employs the idea of evolution to implement the self-adaptation of memes. Naturally, both multimemes [13], [14] and co-evolution MAs [15], [20] are forms of self-adaptive adaptation, as the memetic representation of the memes is coded as part of the individual and undergoes standard evolution.

B. Adaptation Level

The adaptation level refers to the level of historical knowledge of the memes that is employed in the choice of memes. Here, the level of historical knowledge means the extent of past knowledge about the memes. The adaptation level is further divided into external, local, and global.

1) External: External-level adaptation refers to the case where no online knowledge about the memes is involved in the choice of memes. In many real world applications, the pool of problem-specific memes is usually selected from the past experiences of human experts. This is a formalization of the knowledge that domain experts possess about the behavior of the memes and the optimization problem in general. The Basic meta-Lamarckian learning and Simplerandom strategies [11], [17] are classified as forms of external-level adaptation since they make use of external knowledge from past experiences.

2) Local: In local-level adaptation, the decision making process on the choice of memes involves only part of the historical knowledge. The Greedy, Randomdescent, and Randompermdescent strategies [11] make decisions based on the improvement obtained in the present or immediately preceding cultural evolution; hence, they are strategies categorized under the local adaptive level. Nevertheless, it is worth noting that global-level adaptation may easily be derived when one considers all previously searched chromosomes.

The Sub-Problem Decomposition strategy, multimemes, and co-evolution MAs [13], [17] select a meme based on the knowledge gained from only the nearest individuals or parents among all that were searched previously. Hence, they are also considered strategies that practice local-level adaptation.

3) Global: Global-level adaptation takes place if the complete historical knowledge is used to decide on the choice of memes. The Straightchoice, Rankedchoice, Roulettechoice, Decompchoice, Biased Roulette Wheel, and Tabu-search strategies [11], [17], [19] are classified as forms of adaptation at the global adaptive level since they make complete use of the historical knowledge on the memes when deciding which memes to opt for.

IV. CONVERGENCE ANALYSIS OF ADAPTIVE MAS

In this section, we analyze the global convergence properties of adaptive MAs according to their level of adaptation described in Section III, i.e., external, local, or global, using the theory of Markov chains and extending previous efforts on the convergence analysis of genetic algorithms [25]–[29].

A. Finite Markov Chain

The Markov chain is a popular theory among the EA community as it offers an appropriate framework for analyzing discrete-time stochastic processes. To begin, we outline some basic definitions of Markov chains that are used in the analysis of the global convergence properties of adaptive MAs [30].

Definition 1: If the transition probability p_ij(t) is independent of time, i.e., p_ij(t) = p_ij for all t ≥ 0 and all states i, j ∈ S, the Markov chain is said to be homogeneous over the finite state space S.

Definition 2: A Markov chain is called irreducible if for all pairs of states (i, j) there exists an integer n such that p_ij^(n) > 0. Finite irreducible chains are always recurrent.

Definition 3: An irreducible chain is called aperiodic if for all pairs of states (i, j) there is an integer n_0 such that for all n ≥ n_0, the probability p_ij^(n) > 0.
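These definitions can be illustrated numerically: for a finite homogeneous chain, finding a power of the transition matrix whose entries are all positive certifies both irreducibility and aperiodicity (such a matrix is called primitive). The helper below is a plain illustration with made-up matrices, not part of the paper's analysis.

```python
def mat_mul(A, B):
    # Plain dense matrix product for small square matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def primitive(P, max_power=50):
    # True if some power P^n (n <= max_power) is entrywise positive.
    Q = P
    for _ in range(max_power):
        if all(q > 0 for row in Q for q in row):
            return True
        Q = mat_mul(Q, P)
    return False

P = [[0.0, 1.0],
     [0.5, 0.5]]
assert primitive(P)              # irreducible and aperiodic

P_reducible = [[1.0, 0.0],       # state 0 never reaches state 1
               [0.5, 0.5]]
assert not primitive(P_reducible)
```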




B. Markov Chain Analysis of Adaptive MAs

To model an adaptive MA, we define the states of a Markov chain. Let $S$ be the collection of length-$l$ binary strings; hence the number of possible strings is $2^l$. If $N$ is the size of the population pool, it is possible to show that $|E|$, the total number of population pools, or the number of Markov states in the adaptive MA, is given by

$$|E| = \binom{2^l + N - 1}{N}.$$
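The state count above is the number of multisets of $N$ strings drawn from the $2^l$ possible genotypes, which can be computed directly (a sketch; the function name is ours):

```python
from math import comb

def num_states(l, N):
    """Number of Markov states: multisets of N binary strings of length l,
    i.e. C(2^l + N - 1, N) by the stars-and-bars argument."""
    return comb(2 ** l + N - 1, N)

# e.g. strings of length 2 (4 genotypes) and a population of size 2:
assert num_states(2, 2) == 10  # C(5, 2) = 10
```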

Further, we model the adaptive MA as a discrete-time Markov chain $\{X_k,\ k \geq 0\}$ with a finite state space $E = \{e_1, e_2, \ldots, e_{|E|}\}$ and $k$th-step transition matrix $P(k) = [p_{ij}(k)]$, where

$$p_{ij}(k) = P(X_{k+1} = e_j \mid X_k = e_i) \qquad (1)$$

and initial probability distribution

$$p_i(0) = P(X_0 = e_i). \qquad (2)$$

The probabilistic changes in the adaptive MA population due to the evolutionary operators may be modeled using stochastic matrices $C$, $M$, and $S$ representing the crossover, mutation, and selection operations, respectively. Besides the standard evolutionary operators, the adaptive MA also refines each individual using different memes at each decision point. We model this process of local improvement as a transition matrix $L$, which represents the state transition matrix of each individual that undergoes Lamarckian learning. This implies that $L$ has at least one positive entry in each row. On the whole, the process of the adaptive MA is then modeled as a single transition matrix of size $|E| \times |E|$ given by

$$P = C \cdot M \cdot S \cdot L \qquad (3)$$

where the transition matrix $P$ incorporates all adaptive MA operators, including crossover, mutation, fitness-proportional selection, and the Lamarckian learning mechanism. In (3), the transition matrices $C$, $M$, and $S$ are independent of time. In contrast, $L$ may be dependent on or independent of time, depending on the adaptive MA strategy considered.
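The composition of operators into a single transition matrix can be illustrated numerically (an illustrative sketch with random stand-in operators, not the paper's actual matrices):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_stochastic(n):
    # A strictly positive row-stochastic matrix standing in for an operator.
    m = rng.random((n, n)) + 1e-6
    return m / m.sum(axis=1, keepdims=True)

n = 4  # toy number of Markov states |E|
C, M, S, L = (random_stochastic(n) for _ in range(4))

# One sweep of the adaptive MA: crossover, mutation, selection, then
# Lamarckian learning, composed into a single transition matrix.
P = C @ M @ S @ L

# The product of row-stochastic matrices is row-stochastic ...
assert np.allclose(P.sum(axis=1), 1.0)
# ... and since C M S is positive and L has a positive entry in each row,
# P is strictly positive (here all four factors happen to be positive).
assert (P > 0).all()
```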

We begin with local-level adaptive MAs, particularly the Randomdescent, Randompermdescent, and Sub-Problem Decomposition strategies. Since the choice of meme is made based on the immediately preceding or neighboring cultural evolution, $L$ is dependent on time and varies at each decision point. Hence, this form of adaptive MA may not possess any global convergence guarantee.

On the other hand, since the choice of meme in the Basic meta-Lamarckian strategy is random, while the Greedy strategy judges all available memes experimentally and always selects the best among them, the transition matrix $L$ is clearly time-homogeneous, i.e., independent of time. In addition, the strategies considered in the global-level adaptive MAs also converge with a probability of one [28]. When $k \to \infty$, it is feasible to assume that the probability that the most suitable meme is selected tends to 1, i.e., $L(k)$ converges to a constant matrix.

Theorem 1: This kind of adaptive MA is irreducible and aperiodic.

Proof: We know that $C \cdot M \cdot S$ is positive, as shown in [26]. Further, from the properties of $L$, it can easily be shown that $P$ is positive: since $L$ has at least one positive entry in each row and $C \cdot M \cdot S$ is positive, $P = C \cdot M \cdot S \cdot L$ is also strictly positive. Hence the chain is irreducible and aperiodic.

Theorem 2: There are only positive recurrent states in this kind of adaptive MA.

Proof: The entire state space is a closed (ergodic) set because the Markov chain is irreducible. Hence, the Markov chain must be composed of only positive recurrent states.

Theorem 3: This kind of adaptive MA possesses the property of global convergence.

Proof: Suppose $E_{opt} \subset E$ is the set of states containing the global optima. Because the chain is irreducible, aperiodic, and positive recurrent, as $t \to \infty$ the probability that all the points in the search space will be visited at least once approaches 1. Let $Z_t$ refer to the fittest individual found by the evolutionary search process up to time $t$. So, whatever the initial distribution is, it must hold that $\lim_{t \to \infty} P(Z_t \in E_{opt}) = 1$. Further, since the fittest solution is always tracked in practice, it follows from [26] that the search converges globally under an elitist selection mechanism.
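The convergence argument can be illustrated with a toy simulation (our own sketch, assuming a uniform, strictly positive kernel over a small state space; not the paper's experiment):

```python
import random

random.seed(7)
states = list(range(8))   # toy state space E, fitness(s) = s
optimum = 7               # E_opt = {7}

hits = 0
runs = 200
for _ in range(runs):
    z = 0                         # Z_0: fittest individual seen so far
    for _ in range(100):
        s = random.choice(states) # one step of a strictly positive kernel
        z = max(z, s)             # elitism: the best-so-far is never lost
    if z == optimum:
        hits += 1

# With a positive kernel, P(Z_t in E_opt) -> 1; empirically nearly every
# run reaches the optimum within 100 steps.
assert hits / runs > 0.99
```

The elitist tracker `z` is what makes the guarantee hold: the underlying chain keeps moving, but the best-so-far sequence is monotone and eventually absorbs at the optimum.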

V. EMPIRICAL STUDY ON BENCHMARKING PROBLEMS

In this section, we present an empirical study on the various adaptive MA strategies. In particular, the representative strategies from each category of adaptive MAs, as depicted in Table I, are compared with the canonical MAs. These adaptive MA strategies include the following.

1) External-Static: Basic meta-Lamarckian learning.
2) Local-Qualitative: Randomdescent, Randompermdescent.
3) Global-Qualitative: Tabu-search.
4) Global-Quantitative: Straightchoice, Roulettechoice.
5) Local-Quantitative: Sub-Problem Decomposition.
6) Local-Self-adaptive: Multimemes.

For the sake of brevity, these strategies are abbreviated in this section as S-E, QL1-L, QL2-L, QL3-G, QN1-G, QN2-G, QN3-L, and S-L, respectively. Further, considering that most existing efforts on adaptive MAs have focused on combinatorial optimization problems, the emphasis here is placed on continuous benchmark optimization problems.

A. Benchmark Problems for Function Optimization

Five commonly used continuous benchmark test functions are employed in this study [17], [31], [32]. They have diverse properties in terms of epistasis, multimodality, discontinuity, and constraint, as summarized in Table II.

B. Memes for Function Optimization

Various memes from the OPTIONS optimization package [33] were employed in the empirical studies. They comprise a variety of optimization methods from the Schwefel libraries [34] and a few others in the literature [35]–[40]. The eight memes used here are representatives of second-, first-, and




TABLE II: CLASSES OF BENCHMARK TEST FUNCTIONS CONSIDERED (PROPERTIES: EPISTASIS, MULTIMODALITY, DISCONTINUITY, CONSTRAINT)

TABLE III: LIST OF MEMES OR LOCAL SEARCH METHODS CONSIDERED

zeroth-order local search methods, and are listed in Table III together with their respective abbreviations used later in the paper.

C. Choice Function

The choice function employed in this study is based on that proposed in [11], which appears to be one of the most sophisticated in existence. It is used to select a meme and is defined by the effective choice function given by (4) and (5), shown at the bottom of the next page. Equation (4) records, for each meme, $f_1$, the feedback on the effectiveness of the meme; $f_2$, the feedback on the effectiveness of consecutive pairs of memes; and $f_3$, a term that facilitates diversity in meme selection (see [11] for details on the formulations of $f_1$, $f_2$, and $f_3$). In our empirical studies, the parameters of (4) and (5) are configured according to the suggested values in the literature [17], [18], [22]. The number of memes employed totals eight.
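A hedged sketch of such a choice-function-based selector follows (the component names f1, f2, f3 and the weights alpha, beta, delta follow the prose; the exact formulations of (4) and (5) are in [11], so this is an illustrative stand-in, not the authors' implementation):

```python
def choice_value(meme, prev_meme, f1, f2, f3, alpha, beta, delta):
    """F(m) = alpha*f1(m) + beta*f2(prev, m) + delta*f3(m):
    recent effectiveness of the meme, effectiveness of the consecutive
    meme pair, and a diversity term rewarding rarely used memes."""
    return alpha * f1[meme] + beta * f2[(prev_meme, meme)] + delta * f3[meme]

def select_meme(n_memes, prev_meme, f1, f2, f3, alpha, beta, delta):
    # Pick the meme maximizing the choice function at this decision point.
    scores = [choice_value(m, prev_meme, f1, f2, f3, alpha, beta, delta)
              for m in range(n_memes)]
    return max(range(n_memes), key=scores.__getitem__)
```

In use, the `f1`/`f2`/`f3` tables would be updated after every local search from the fitness improvement and the time since each meme was last called.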

D. Results for Benchmark Test Problems

To see how the choice of memes affects the performance and efficiency of the search, the eight different memes used to form the canonical MAs were employed on the benchmark problems. All results presented are averages of ten independent runs. Each run continues until the global optimum is found or a maximum of 40 000 trials (function evaluation calls) is reached, except for the Bump function, where a maximum of up to 100 000 trials was used. In each run, the control parameters used in solving the benchmark problems were set as follows: population size of 50, mutation rate of 0.1%, two-point crossover, 10-bit binary encoding, maximum local search length of 100

$$F(m) = \alpha f_1(m) + \beta f_2(m', m) + \delta f_3(m) \qquad (4)$$

(5)




TABLE IV: RESULTS FOR THE BENCHMARK TEST PROBLEMS

evaluations, and the probability of applying local search on a parent chromosome set to unity.
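For reference, the experimental settings just described can be collected into a single configuration sketch (the key names are our own; the values are taken from the text):

```python
# Control parameters of the empirical study, as stated in the text.
GA_CONFIG = {
    "population_size": 50,
    "mutation_rate": 0.001,            # 0.1%
    "crossover": "2-point",
    "encoding_bits_per_variable": 10,  # binary encoding
    "max_local_search_length": 100,    # evaluations per local search
    "p_local_search": 1.0,             # applied to every parent chromosome
    "max_trials": 40_000,              # 100_000 for the Bump function
    "independent_runs": 10,
}
```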

The results obtained from our studies on the benchmark test problems are presented in Table IV. In the case where an algorithm manages to locate the global optimum of a benchmark problem, the evaluation count presented indicates the effort taken to reach this optimal solution. Otherwise, the best fitness averaged over ten runs is presented. Further, the canonical MAs and adaptive MAs are ranked according to their ability to produce high-quality solutions on the benchmark problems under the specified computational budget. The search traces of the best performing canonical MA and adaptive MA, together with the worst performing canonical MA on each benchmark function, are also shown in Figs. 6–10. Note that in all the figures, results are plotted against the total number of function calls made by the combined genetic and local searches. The numerical results obtained are analyzed according to the following aspects.

• Robustness and Search Quality—the capability of the strategy to generate search performances that are competitive with or superior to the best canonical MA (from among the pool considered) on different problems, and the capability of the strategy to provide high-quality solutions.

• Computational Cost—the computational effort needed by the different adaptive MAs.

1) Robustness and Search Quality: From the results (see Table IV and Figs. 6–10), it is worth noting that the rankings of the adaptive MA strategies are relatively higher than those of most canonical MAs for the majority of the benchmark problems, implying that adaptive MAs generally outperform canonical

Fig. 6. Search traces for maximizing 20-dimensional Bump function.

MAs for these benchmark problems. This demonstrates the effectiveness of adaptive MAs in providing robust search performances, and suggests the basis for the increasing research interest in developing new adaptive MA strategies.

From the search traces, it is worth noting that the adaptive strategies gradually "learn" the characteristics of the problem, i.e., the strengths and weaknesses of the different memes in tackling the problem in hand, which explains why they are inferior to some canonical MAs during the early stages of the search. This is referred to as the learning stage in [17]. After this initial learning stage, most of the best adaptive MAs were observed to outperform even the best canonical MAs on each benchmark function (see Figs. 6–8). The only exception is the Sphere test function. While any adaptive MA takes effort to learn about




Fig. 7. Search traces for minimizing 10-dimensional Griewank function.

Fig. 8. Search traces for minimizing 20-dimensional Rastrigin function.

Fig. 9. Search traces for minimizing 30-dimensional Sphere function.

the different memes employed, the canonical MA using FL had already converged to the optimum of the unimodal quadratic Sphere function, since, as a quasi-Newton method, FL is capable of locating the global/local optimum of such a function rapidly. The reader is referred to [17] for greater details on such observations, where it was demonstrated that an increase in the number of memes employed in the adaptive MAs results in greater learning time.

Fig. 10. Search traces for minimizing 5-dimensional Step function.

TABLE V: NORMALIZED COMPUTATIONAL COST RELATIVE TO S-E ON BENCHMARK PROBLEMS

In general, all the adaptive MA strategies surveyed were capable of selecting memes that match the problem appropriately throughout the search, thus producing search performances that are competitive with or superior to the canonical MAs on the benchmark problems. It is notable that external-level adaptation fares relatively poorer than local- and global-level adaptation. This makes good sense, since external-level adaptation does not involve any "learning" of the characteristics of the problem during a search. In particular, global-level adaptation results in the best search performance among all forms of adaptation considered. This is evident in Table IV, where global-level adaptation was ranked the best among all adaptive MA strategies on all the benchmark problems considered.

2) Computational Cost: Among the adaptive MA strategies, the S-E MA incurs minimal extra computational cost over the canonical MA, since it does not make use of any historical knowledge but selects the meme randomly. For the sake of brevity, we compare the computational costs of the adaptive MA strategies relative to S-E and tabulate them in Table V.

Generally, the qualitative adaptive MAs, i.e., QL1-L, QL2-L, and QL3-G, and the self-adaptive MA, i.e., S-L, consume only slightly more computational cost than S-E. The extra effort is a result of the mechanisms used to choose a suitable meme from historical knowledge at each decision point. Note that QL3-G incurs slightly more computational cost than QL1-L and QL2-L




since greater effort is required by the former's tabu-search mechanism to select memes. Overall, since the quantitative adaptive MAs, i.e., QN1-G, QN2-G, and QN3-L, involve the use of choice functions in the process of selecting memes, they incur the highest computational costs in comparison. Furthermore, QN3-L is found to require more effort than the others, since it involves the additional distance-measure mechanism used in its strategy. Nevertheless, it is worth noting that, on the whole, the differences in computational costs between the various adaptive MA strategies may be considered relatively insignificant.

VI. CONCLUSION

The limited amount of theory and a priori knowledge currently available for the choice of memes to best suit a problem has paved the way for research on developing adaptive MAs to tackle general optimization problems in a robust manner. In this paper, a classification of adaptive MAs based on types and levels of adaptation has been compiled and presented. An attempt to analyze the global convergence properties of adaptive MAs according to their level of adaptation was also presented, followed by an empirical study of adaptive MAs according to their type-level adaptations. Numerical results obtained on representatives of adaptive MAs with different type-level adaptations, using a range of commonly used benchmark functions of diverse properties, indicate that the forms of adaptive MAs considered are capable of generating more robust search performances than their canonical MA counterparts. More importantly, adaptive MAs are shown to be capable of arriving at solution qualities superior to the best canonical MAs more efficiently. In addition, among the various categories of adaptive MAs, global-level adaptation appears to outperform the others considered.

Clearly, there is much ground for further research efforts to discover ever more successful adaptive MAs. From the investigations conducted, we believe there are strong motivations to warrant further research in the area of meme adaptation.

• The success of global-level adaptation schemes may be attributed to their ease of implementation and the uniformity of the local and global landscapes of the test problems considered. On the other hand, there have been insufficient research attempts on other forms of adaptation in MAs. This suggests a need for greater research effort on local-level and self-adaptive MA adaptations. Some of the experience gained from global-level adaptation may also apply to other forms of adaptive MAs. For example, besides the simple credit assignment mechanisms used in [13] and [20], more sophisticated mechanisms such as those in [11] and [12] may be tailored for self-adaptive MAs. Further, statistical measures may then be used to characterize fitness landscapes or neighborhood structures [41], [42] and the success of the memes on them. Subsequently, knowledge about the diverse neighborhood structures of the problem in hand may be gathered during the evolutionary search process, and the choice of memes is then achieved by matching memes to neighborhood structures.

• Thus far, little progress has been made in enhancing our understanding of the behavior of MAs from a theoretical point of view. It would be more meaningful to provide some transparency on the choice of memes during the adaptive MA search; see, for example, [20]. Greater efforts may be expended on discovering rules to enhance our understanding of when and why a particular meme, or a sequence of memes, should be used constructively, given a particular problem or landscape of known properties. Knowledge derived in this form would help fulfill the human-centered criterion. Besides, domain specialists can manually validate these rules and also use them to enhance their knowledge of the problem domain. This will pave the way for further theoretical developments of MAs and the design of successful novel memes.

• Most work on meme adaptation has concentrated on using the improvement in solution quality, relative to the effort incurred, to express the capability of memes to search on a problem. Nevertheless, more effective credit assignment mechanisms and rewards should be considered and explored.

• Besides the issue of the choice of memes in MAs, a number of other core issues affecting the performance of MAs, including the interval, duration, and intensity of local search, have been studied in recent years. Most of these are related to balancing the computational time between local and genetic search [21], [43]. While researchers often experiment with each issue separately, it would be worthwhile to explore how they may be used together to optimize the performance of MAs. For instance, given a fixed time budget, by monitoring the status of a search and the remaining time budget, one may use them as a basis for making decisions on the choice of memes and the local/global search ratio. This in turn helps to define the local search interval and duration online throughout the entire evolutionary search.

• Last but not least, it would be interesting to extend the efforts on the choice of memes in MAs to multicriteria, multiobjective, and constrained optimization problems, for example, by developing appropriate reward measures and credit assignment mechanisms.

ACKNOWLEDGMENT

The authors wish to thank the anonymous referees and editors for their constructive comments on an earlier draft of this paper. They would also like to thank the Intelligent Systems Centre of Nanyang Technological University for its support of this work.

REFERENCES

[1] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.

[2] L. O. Hall, B. Ozyur, and J. C. Bezdek, "The case for genetic algorithms in fuzzy clustering," in Proc. Int. Conf. Information Processing and Management of Uncertainty in Knowledge-Based Systems, Jul. 1998, pp. 288–295.

[3] J. J. Grefenstette, "Optimization of control parameters for genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, no. 1, pp. 122–128, Jan./Feb. 1986.

[4] M. Srinivas and L. M. Patnaik, "Adaptive probabilities of crossover and mutation in genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. 24, no. 4, pp. 17–26, Apr. 1994.




[5] Z. Bingul, A. Sekmen, and S. Z. Sabatto, "Evolutionary approach to multi-objective problems using adaptive genetic algorithms," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, vol. 3, Oct. 2000, pp. 1923–1927.

[6] R. Hinterding, Z. Michalewicz, and A. E. Eiben, "Adaptation in evolutionary computation: A survey," in Proc. IEEE Int. Conf. Evolutionary Computation. Piscataway, NJ: IEEE Press, Apr. 1997, pp. 65–69.

[7] A. E. Eiben, R. Hinterding, and Z. Michalewicz, "Parameter control in evolutionary algorithms," IEEE Trans. Evol. Comput., vol. 3, pp. 124–141, Jul. 1999.

[8] J. A. Miller, W. D. Potter, R. V. Gandham, and C. N. Lapena, "An evaluation of local improvement operators for genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. 23, pp. 1340–1351, Sep./Oct. 1993.

[9] A. Torn and A. Zilinskas, Global Optimization. Berlin, Germany: Springer-Verlag, 1989, vol. 350, Lecture Notes in Computer Science.

[10] W. E. Hart, "Adaptive Global Optimization With Local Search," Ph.D. dissertation, Univ. of California, San Diego, 1994.

[11] P. Cowling, G. Kendall, and E. Soubeiga, "A hyperheuristic approach to scheduling a sales summit," in PATAT 2000, Springer Lecture Notes in Computer Science, Konstanz, Germany, Aug. 2000, pp. 176–190.

[12] G. Kendall, P. Cowling, and E. Soubeiga, "Choice function and random hyperheuristics," in Proc. 4th Asia-Pacific Conf. Simulated Evolution and Learning, Singapore, Nov. 2002, pp. 667–671.

[13] N. Krasnogor, B. Blackburne, J. D. Hirst, and E. K. Burke, "Multimeme algorithms for the structure prediction and structure comparison of proteins," in Parallel Problem Solving From Nature, 2002, Lecture Notes in Computer Science.

[14] N. Krasnogor, "Studies on the Theory and Design Space of Memetic Algorithms," Ph.D. dissertation, Faculty of Comput., Math., and Eng., Univ. of the West of England, Bristol, U.K., 2002.

[15] J. E. Smith et al., "Co-evolution of memetic algorithms: Initial investigations," in Parallel Problem Solving From Nature—PPSN VII, G. Guervos et al., Eds. Berlin, Germany: Springer, 2002, vol. 2439, Lecture Notes in Computer Science, pp. 537–548.

[16] Y. S. Ong, "Artificial Intelligence Technologies in Complex Engineering Design," Ph.D. dissertation, Sch. Eng. Sci., Univ. Southampton, Southampton, U.K., 2002.

[17] Y. S. Ong and A. J. Keane, "Meta-Lamarckian learning in memetic algorithms," IEEE Trans. Evol. Comput., vol. 8, pp. 99–110, Apr. 2004.

[18] N. Zhu, Y. S. Ong, K. W. Wong, and K. T. Seow, "Using memetic algorithms for fuzzy modeling," Austral. J. Intell. Inform. Process. (Special Issue on Intelligent Technologies), vol. 8, no. 3, pp. 147–154, Dec. 2004.

[19] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu search hyperheuristic for timetabling and rostering," J. Heuristics, vol. 9, no. 6, 2003.

[20] J. E. Smith, "Co-evolving memetic algorithms: A learning approach to robust scalable optimization," in Proc. IEEE Congr. Evolutionary Computation. Piscataway, NJ: IEEE Press, 2003, vol. 1, pp. 498–505.

[21] H. Ishibuchi, T. Yoshida, and T. Murata, "Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling," IEEE Trans. Evol. Comput., vol. 7, pp. 204–223, Apr. 2003.

[22] B. S. Gottfried and J. Weisman, Introduction to Optimization Theory, ser. International Series in Industrial and Systems Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1973.

[23] R. Dawkins, The Selfish Gene. Oxford, U.K.: Oxford Univ. Press, 1976.

[24] M. D. McKay, R. J. Beckman, and W. J. Conover, "A comparison of three methods for selecting values of input variables in the analysis of output from a computer code," Technometrics, vol. 21, no. 2, pp. 239–245, 1979.

[25] A. E. Eiben, E. H. L. Aarts, and K. M. van Hee, "Global convergence of genetic algorithms: A Markov chain analysis," in Proc. 1st Int. Conf. Parallel Problem Solving from Nature (PPSN), 1991, pp. 4–12.

[26] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Trans. Neural Netw., vol. 5, pp. 96–101, 1994.

[27] J. Suzuki, "A Markov chain analysis on simple genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. 25, pp. 655–659, Apr. 1995.

[28] Y. J. Cao and Q. H. Wu, "Convergence analysis of adaptive genetic algorithms," in Proc. Genetic Algorithms in Engineering Systems Conf.: Innovations and Applications, Sep. 1997, pp. 85–89.

[29] B. Li and W. Jiang, "A novel stochastic optimization algorithm," IEEE Trans. Syst., Man, Cybern., vol. 30, pp. 193–198, Feb. 2000.

[30] M. Iosifescu, Finite Markov Processes and Their Application. New York: Wiley, 1980.

[31] C. A. Floudas, P. M. Pardalos, C. Adjiman, W. R. Esposito, Z. H. Gümüs, S. T. Harding, J. L. Klepeis, C. A. Meyer, and C. A. Schweiger, Handbook of Test Problems in Local and Global Optimization. Dordrecht, The Netherlands: Kluwer, 1999.

[32] A. O. Griewank, "Generalized descent for global optimization," J. Optim. Theory Applicat., vol. 34, pp. 11–39, 1981.

[33] A. J. Keane. (2002) The OPTIONS Design Exploration System User Guide and Reference Manual. [Online]. Available: http://www.soton.ac.uk/~ajk/options.ps

[34] H. P. Schwefel, Evolution and Optimum Seeking. New York: Wiley, 1995.

[35] L. Davis, "Bit climbing, representational bias, and test suite design," in Proc. 4th Int. Conf. Genetic Algorithms, 1991, pp. 18–23.

[36] W. H. Swann, A Report on the Development of a New Direct Searching Method of Optimization. Middlesbrough, U.K.: ICI Central Instrument Laboratory, 1964.

[37] R. Fletcher, "A new approach to variable metric algorithms," Comput. J., vol. 7, no. 4, pp. 303–307, 1964.

[38] M. J. D. Powell, "An efficient method for finding the minimum of a function of several variables without calculating derivatives," Comput. J., vol. 7, no. 4, pp. 303–307, 1964.

[39] R. Hooke and T. A. Jeeves, "Direct search solution of numerical and statistical problems," J. ACM, vol. 8, no. 2, pp. 212–229, Apr. 1961.

[40] J. A. Nelder and R. Mead, "A simplex method for function minimization," Comput. J., vol. 7, pp. 308–313, 1965.

[41] T. Jones, "Evolutionary Algorithms, Fitness Landscapes and Search," Ph.D. dissertation, Univ. of New Mexico, Albuquerque, NM, 1995.

[42] P. Merz and B. Freisleben, "Fitness landscapes and memetic algorithm design," in New Ideas in Optimization, D. Corne et al., Eds. New York: McGraw-Hill, 1999, pp. 245–260.

[43] N. K. Bambha, S. S. Bhattacharyya, J. Teich, and E. Zitzler, "Systematic integration of parameterized local search into evolutionary algorithms," IEEE Trans. Evol. Comput., vol. 8, pp. 137–154, 2004.

Yew-Soon Ong received the Bachelor's and Master's degrees in electrical and electronics engineering from Nanyang Technological University, Singapore, in 1998 and 1999, respectively, and the Ph.D. degree from the Computational Engineering and Design Center, University of Southampton, Southampton, U.K., in 2003.

He is currently an Assistant Professor with the School of Computer Engineering, Nanyang Technological University. His research interests lie in evolutionary computation, spanning design optimization, surrogate-assisted evolutionary algorithms, memetic algorithms, evolutionary computation in dynamic and uncertain environments, response surface methods for data modeling, and grid-based computing.

Meng-Hiot Lim received the B.Sc., M.Sc., and Ph.D. degrees from the University of South Carolina, Columbia.

He joined the faculty of the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 1989, where he is currently an Associate Professor. His research interests include computational intelligence, combinatorial optimization, web-based applications, evolvable hardware systems, reconfigurable circuits and architecture, computational finance, and graph theory. He has co-edited a volume on Recent Advances in Simulated Evolution and Learning published by World Scientific Publishing.

Dr. Lim is an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS,MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS.




Ning Zhu received the B.Eng. degree from the School of Computers, Wuhan University (WHU), Wuhan, China, in 2002.

He is currently a postgraduate student in the School of Computer Engineering, Nanyang Technological University, Singapore. His research interests include genetic algorithms and memetic algorithms.

Kok-Wai Wong received the B.Eng. (Hons.) and Ph.D. degrees from the School of Electrical and Computer Engineering, Curtin University of Technology, Western Australia, in 1994 and 2000, respectively.

He joined the School of Information Technology, Murdoch University, Perth, WA, Australia, as a Lecturer in 2005. Prior to this appointment, he was an Assistant Professor at the School of Computer Engineering, Nanyang Technological University, Singapore. His research interests include intelligent data analysis, fuzzy rule-based systems, computational intelligence, and game technology.
