A Survey of Parallel Genetic Algorithms

Erick Cantú-Paz

Department of Computer Science and Illinois Genetic Algorithms Laboratory
University of Illinois at Urbana-Champaign
[email protected]

ABSTRACT. Genetic algorithms (GAs) are powerful search techniques that are used successfully to solve problems in many different disciplines. Parallel GAs are particularly easy to implement and promise substantial gains in performance. As such, there has been extensive research in this field. This survey attempts to collect, organize, and present in a unified way some of the most representative publications on parallel genetic algorithms. To organize the literature, the paper presents a categorization of the techniques used to parallelize GAs, and shows examples of all of them. However, since the majority of the research in this field has concentrated on parallel GAs with multiple populations, the survey focuses on this type of algorithm. Also, the paper describes some of the most significant problems in modeling and designing multi-population parallel GAs and presents some recent advances.

KEYWORDS: parallel genetic algorithms, master-slave genetic algorithms, multiple demes, hierarchical genetic algorithms



1. Introduction

Genetic Algorithms (GAs) are efficient search methods based on principles of natural selection and genetics. They are being applied successfully to find acceptable solutions to problems in business, engineering, and science [GOL 94]. GAs are generally able to find good solutions in reasonable amounts of time, but as they are applied to harder and bigger problems there is an increase in the time required to find adequate solutions. As a consequence, there have been multiple efforts to make GAs faster, and one of the most promising choices is to use parallel implementations.

The objective of this paper is to collect, organize, and present some of the most relevant publications on parallel GAs. In order to organize the growing amount of literature in this field, the paper presents a categorization of the different types of parallel implementations of GAs. By far, the most popular parallel GAs consist of multiple populations that evolve separately most of the time and exchange individuals occasionally. This type of parallel GA is called a multi-deme, coarse-grained, or distributed GA, and this survey concentrates on this class of algorithm. However, this paper also describes the other major types of parallel GAs and briefly discusses some examples.

There are many examples in the literature where parallel GAs are applied to a particular problem, but the intention of this paper is not to enumerate all the instances where parallel GAs have been successful in finding good solutions, but instead to highlight those publications that have contributed to the growth of the field of parallel GAs in some way. The survey examines in more detail those publications that introduce something new or that attempt to explain why something works, but it also mentions a few of the application problems to show that parallel GAs are useful in the "real world" and are not just an academic curiosity.

This paper is organized as follows. The next section is a brief introduction to genetic algorithms, and Section 3 describes a categorization of parallel GAs. The review of parallel GAs begins in Section 4 with a presentation of early publications. Section 5 describes the master-slave method to parallelize GAs. Sections 6 and 7 examine the literature on parallel GAs with multiple populations, and Section 8 summarizes the research on fine-grained parallel GAs. Section 9 deals with hybrid methods to parallelize GAs. Next, Section 10 describes some open problems in modeling and design and some recent advances. The report ends with a summary and the conclusions of this project.

2. Genetic Algorithms — A Quick Introduction

This section provides basic background material on GAs. It defines some terms and describes how a simple GA works, but it is not a complete tutorial. Interested readers may consult the book by Goldberg [GOL 89a] for more detailed background information on GAs.

Genetic algorithms are stochastic search algorithms based on principles of natural selection and recombination. They attempt to find the optimal solution to the problem at hand by manipulating a population of candidate solutions. The population is evaluated, and the best solutions are selected to reproduce and mate to form the next generation. Over a number of generations, good traits dominate the population, resulting in an increase in the quality of the solutions.

The basic mechanism in GAs is Darwinian evolution: bad traits are eliminated from the population because they appear in individuals which do not survive the process of selection. The good traits survive and are mixed by recombination (mating) to form better individuals. Mutation also exists in GAs, but it is considered a secondary operator. Its function is to ensure that diversity is not lost in the population, so the GA can continue to explore.

The notion of 'good' traits is formalized with the concept of building blocks (BBs), which are string templates (schemata) that match a short portion of the individuals and act as a unit to influence the fitness of individuals. The prevailing theory suggests that GAs work by propagating BBs using selection and crossover. We follow Goldberg, Deb, and Thierens [GOL 93b], and restrict the notion of a building block to the shortest schemata that contribute to the global optimum. In this view, the juxtaposition of two BBs of order k at a particular string does not lead to a BB of order 2k, but instead to two separate BBs. Often, the individuals are composed of a binary string of a fixed length l, and thus GAs explore a search space formed by 2^l points. Initially, the population consists of points chosen randomly, unless there is a heuristic to generate good solutions for the domain. In the latter case, a portion of the population is still generated randomly to ensure that there is some diversity in the solutions.

The size of the population is important because it influences whether the GA can find good solutions and the time it takes to reach them [GOL 92, HAR 97]. If the population is too small, there might not be an adequate supply of BBs, and it will be difficult to identify good solutions. If the population is too big, the GA will waste computational resources processing unnecessary individuals. This balance between the quality of the solution and the time that a simple GA needs to find it also exists for parallel GAs, and later we shall see how it affects their design.

Each individual in the population has a fitness value. In general, the fitness is a payoff measure that depends on how well the individual solves the problem. In particular, GAs are often used as optimizers, and the fitness of an individual is the value of the objective function at the point represented by the binary string. Selection uses the fitness value to identify the individuals that will reproduce and mate to form the next generation.

Simple GAs use two operators loosely based on natural genetics to explore the search space: crossover and mutation. Crossover is the primary exploration mechanism in GAs. This operator takes two random individuals from those already selected to form the next generation and exchanges random substrings between them. As an example, consider strings s1 and s2 of length 8:

s1 = 0 1 1 0 | 1 1 1 1
s2 = 1 1 1 0 | 0 0 0 0

There are l - 1 possible crossover points in strings of length l. In the example, a single crossover point is chosen randomly at position 4 (as indicated by the symbol | above). Exchanging substrings around the crossover point results in two new strings for the next generation:

s1' = 0 1 1 0 0 0 0 0
s2' = 1 1 1 0 1 1 1 1

The recombination operation can take many forms. The example above used 1-point crossover, but it is possible to use 2-point, n-point, or uniform crossover. More crossover points result in a more exploratory search, but also increase the chance of destroying long BBs.
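The 1-point crossover described above can be sketched in a few lines. The following is a Python illustration (the paper itself contains no code; the function name and string representation are assumptions for this example):

```python
import random

def one_point_crossover(parent1: str, parent2: str) -> tuple:
    """Exchange the substrings to the right of a random cut point.

    Strings of length l admit l - 1 possible crossover points.
    """
    assert len(parent1) == len(parent2)
    point = random.randint(1, len(parent1) - 1)  # cut after `point` bits
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

# Reproducing the worked example with the cut fixed at position 4:
s1, s2 = "01101111", "11100000"
c1, c2 = s1[:4] + s2[4:], s2[:4] + s1[4:]
# c1 == "01100000" and c2 == "11101111", as in the text
```

Note that both children together contain exactly the bits of the two parents; crossover recombines material but creates no new alleles.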

Mutation is usually considered a secondary search operator. Its function is to restore diversity that may be lost from the repeated application of selection and crossover. This operator alters some random value within a string. For example, take string s3 = 1 1 0 1 1 and assume that position 3 is chosen randomly to mutate. The new string would be s3' = 1 1 1 1 1. As in Nature, the probability of applying mutation is very low in GAs, but the probability of crossover is usually high.

There are several ways to stop a GA. One method is to stop after a predetermined number of generations or function evaluations. Another is to stop when the average quality of the population does not improve after some number of generations. Another common alternative is to halt the GA when all the individuals are identical, which can only occur when mutation is not used.
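The pieces described in this section can be strung together in a minimal generational GA. The sketch below is a Python illustration under stated assumptions (binary tournament selection, 1-point crossover, per-bit mutation, and the first stopping criterion above); the names and parameter values are not from the paper:

```python
import random

def run_simple_ga(fitness, length=20, pop_size=50, generations=60,
                  p_cross=0.9, p_mut=0.01, seed=1):
    """A minimal generational GA over fixed-length bit strings.

    Selection: binary tournament.  Crossover: 1-point, applied with
    probability p_cross.  Mutation: per-bit flip with probability p_mut.
    Stops after a fixed number of generations.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]

        def tournament():
            a, b = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < p_cross:
                cut = rng.randint(1, length - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # XOR with a Boolean flips the bit with probability p_mut.
                next_pop.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

best = run_simple_ga(sum)  # OneMax: fitness is the number of ones
```

With these settings the population quickly becomes dominated by high-fitness strings on OneMax, illustrating how selection and crossover propagate good building blocks while mutation keeps some diversity.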

3. A Classification of Parallel GAs

The basic idea behind most parallel programs is to divide a task into chunks and to solve the chunks simultaneously using multiple processors. This divide-and-conquer approach can be applied to GAs in many different ways, and the literature contains many examples of successful parallel implementations. Some parallelization methods use a single population, while others divide the population into several relatively isolated subpopulations. Some methods can exploit massively parallel computer architectures, while others are better suited to multicomputers with fewer and more powerful processing elements. The classification of parallel GAs used in this review is similar to others [ADA 94, GOR 93, LIN 94], but it is extended to include one more category.

There are three main types of parallel GAs: (1) global single-population master-slave GAs, (2) single-population fine-grained GAs, and (3) multiple-population coarse-grained GAs. In a master-slave GA there is a single panmictic population (just as in a simple GA), but the evaluation of fitness is distributed among several processors (see Figure 1). Since in this type of parallel GA selection and crossover consider the entire population, it is also known as a global parallel GA. Fine-grained parallel GAs are suited for massively parallel computers and consist of one spatially-structured population. Selection and mating are restricted to a small neighborhood, but neighborhoods overlap, permitting some interaction among all the individuals (see Figure 2 for a schematic of this class of GAs). The ideal case is to have only one individual for every processing element available.


Figure 1. A schematic of a master-slave parallel GA. The master stores the population, executes GA operations, and distributes individuals to the slaves. The slaves only evaluate the fitness of the individuals.

Multiple-population (or multiple-deme) GAs are more sophisticated, as they consist of several subpopulations which exchange individuals occasionally (Figure 3 has a schematic). This exchange of individuals is called migration and, as we shall see in later sections, it is controlled by several parameters. Multiple-deme GAs are very popular, but they are also the class of parallel GAs that is most difficult to understand, because the effects of migration are not fully understood. Multiple-deme parallel GAs introduce fundamental changes in the operation of the GA and have a different behavior than simple GAs.

Multiple-deme parallel GAs are known by different names. Sometimes they are called "distributed" GAs, because they are usually implemented on distributed-memory MIMD computers. Since the computation-to-communication ratio is usually high, they are occasionally called coarse-grained GAs. Finally, multiple-deme GAs resemble the "island model" in Population Genetics, which considers relatively isolated demes, so these parallel GAs are also known as "island" parallel GAs.

Since the size of the demes is smaller than the population used by a serial GA, we would expect the parallel GA to converge faster. However, when we compare the performance of the serial and parallel algorithms, we must also consider the quality of the solutions found in each case. Therefore, while it is true that smaller demes converge faster, it is also true that the quality of the solution might be poorer. Section 11 presents a recent theory that predicts the quality of the solutions for some extreme cases of multi-deme parallel GAs, and allows us to make fair comparisons with serial GAs.

It is important to emphasize that while the master-slave parallelization method does not affect the behavior of the algorithm, the last two methods change the way the GA works. For example, in master-slave parallel GAs, selection takes into account the entire population, but in the other two types of parallel GAs, selection considers only a subset of individuals. Also, in the master-slave GA any two individuals in the population can mate (i.e., there is random mating), but in the other methods mating is restricted to a subset of individuals.

The final method to parallelize GAs combines multiple demes with master-slave or fine-grained GAs. We call this class of algorithms hierarchical parallel GAs, because at a higher level they are multiple-deme algorithms with single-population parallel GAs (either master-slave or fine-grained) at the lower level. A hierarchical parallel GA combines the benefits of its components, and it promises better performance than any of them alone. Section 10 reviews the results of some implementations of hierarchical parallel GAs.

Figure 2. A schematic of a fine-grained parallel GA. This class of parallel GAs has one spatially-distributed population, and it can be implemented very efficiently on massively parallel computers.

Figure 3. A schematic of a multiple-population parallel GA. Each process is a simple GA, and there is (infrequent) communication between the populations.

4. Early Studies

Despite all the growing attention that parallel computation has received in the last 15 years, the idea of building massively parallel computers is much older. For example, in the late 1950s John Holland proposed an architecture for parallel computers that could run an undetermined number of programs concurrently [HOL 59, HOL 60]. The hardware of Holland's computer was composed of relatively simple interconnected modules, and the programs running on these machines could modify themselves as they were executing, mainly to adapt to failures in the hardware. One of the applications that Holland had in mind for this computer was the simulation of the evolution of natural populations.

The "iterative circuit computers", as they were called, were never built, but some of the hardware of the massively parallel computers of the 1980s (e.g., the Connection Machine 1) uses the same modular concept as Holland's computers. The software, however, is not very flexible in the more recent computers.

An early study of how to parallelize genetic algorithms was conducted by Bethke [BET 76]. He described (global) parallel implementations of a conventional GA and of a GA with a generation gap (i.e., it only replaces a portion of its population in every generation), and analyzed the efficiency of both algorithms. His analysis showed that global parallel GAs may have an efficiency close to 100% on SIMD computers. He also analyzed gradient-based optimizers and identified some bottlenecks that limit their parallel efficiency.

Another early study on how to map genetic algorithms to existing computer architectures was made by Grefenstette [GRE 81]. He proposed four prototypes for parallel GAs; the first three are variations of the master-slave scheme, and the fourth is a multiple-population GA. In the first prototype, the master processor stores the population, selects the parents of the next generation, and applies crossover and mutation. The individuals are sent to slave processors to be evaluated and return to the master at the end of every generation. The second prototype is very similar to the first, but there is no clear division between generations: when any slave processor finishes evaluating an individual, it returns it to the master and receives another individual. This scheme eliminates the need to synchronize every generation, and it can maintain a high level of processor utilization, even if the slave processors operate at different speeds. In the third prototype the population is stored in shared memory, which can be accessed by the slaves that evaluate the individuals and apply genetic operators independently of each other.

Grefenstette's fourth prototype is a multiple-population GA where the best individuals of each processor are broadcast every generation to all the others. The complexity of multiple-deme parallel GAs was evident from this early proposal, and Grefenstette raised several "interesting questions" about the frequency of migration, the destination of the migrants (topology), and the effect of migration on preventing premature convergence. This is the only prototype that was implemented some time later [PET 87b, PET 87a], and we review some of the results in Section 6. Grefenstette also hinted at the possibility of combining the fourth prototype with any of the other three, thereby creating a hierarchical parallel GA.

5. Master-Slave Parallelization

This section reviews the master-slave (or global) parallelization method. The algorithm uses a single population, and the evaluation of the individuals and/or the application of genetic operators are done in parallel. As in the serial GA, each individual may compete and mate with any other (thus selection and mating are global). Global parallel GAs are usually implemented as master-slave programs, where the master stores the population and the slaves evaluate the fitness.

The most common operation that is parallelized is the evaluation of the individuals, because the fitness of an individual is independent of the rest of the population, and there is no need to communicate during this phase. The evaluation of individuals is parallelized by assigning a fraction of the population to each of the processors available. Communication occurs only as each slave receives its subset of individuals to evaluate and when the slaves return the fitness values.

If the algorithm stops and waits to receive the fitness values for the entire population before proceeding to the next generation, then the algorithm is synchronous. A synchronous master-slave GA has exactly the same properties as a simple GA, with speed being the only difference. However, it is also possible to implement an asynchronous master-slave GA where the algorithm does not stop to wait for any slow processors, but it does not work exactly like a simple GA. Most global parallel GA implementations are synchronous, and the rest of the paper assumes that global parallel GAs carry out the exact same search as simple GAs.

The global parallelization model does not assume anything about the underlying computer architecture, and it can be implemented efficiently on shared-memory and distributed-memory computers. On a shared-memory multiprocessor, the population could be stored in shared memory, and each processor can read the individuals assigned to it and write the evaluation results back without any conflicts.

On a distributed-memory computer, the population can be stored on one processor. This "master" processor would be responsible for explicitly sending the individuals to the other processors (the "slaves") for evaluation, collecting the results, and applying the genetic operators to produce the next generation. The number of individuals assigned to any processor may be constant, but in some cases (like in a multiuser environment where the utilization of processors is variable) it may be necessary to balance the computational load among the processors by using a dynamic scheduling algorithm (e.g., guided self-scheduling).

Fogarty and Huang [FOG 91] attempted to evolve a set of rules for a pole-balancing application which takes a considerable time to simulate. They used a network of transputers, which are microprocessors designed specifically for parallel computations. A transputer can connect directly to only four other transputers, and communication between arbitrary nodes is handled by retransmitting messages. This causes an overhead in communications, and in an attempt to minimize it, Fogarty and Huang connected the transputers in different topologies. They concluded that the configuration of the network did not make a significant difference. They obtained reasonable speed-ups, but identified the fast-growing communication overhead as an impediment to further improvements in speed.

Abramson and Abela [ABR 92] implemented a GA on a shared-memory computer (an Encore Multimax with 16 processors) to search for efficient timetables for schools. They reported limited speed-ups, and blamed a few sections of serial code on the critical path of the program for the results. Later, Abramson, Mills, and Perkins [ABR 93] added a distributed-memory machine (a Fujitsu AP1000 with 128 processors) to the experiments, changed the application to train timetables, and modified the code. This time, they reported significant (and almost identical) speed-ups for up to 16 processors on the two computers, but the speed-ups degraded significantly as more processors were added, mainly due to the increase in communications.

A more recent implementation of a global GA is the work by Hauser and Männer [HAU 94]. They used three different parallel computers, but only obtained good speed-ups on a NERV multiprocessor (a speed-up of 5 using 6 processors), which has a very low communication overhead. They blame the poor performance on the other systems they used (a SparcServer and a KSR1) on the inadequate scheduling of computation threads to processors by the system.

Another aspect of GAs that can be parallelized is the application of the genetic operators. Crossover and mutation can be parallelized using the same idea of partitioning the population and distributing the work among multiple processors. However, these operators are so simple that it is very likely that the time required to send individuals back and forth would offset any performance gains.

The communication overhead also obstructs the parallelization of selection, mainly because several forms of selection need information about the entire population and thus require some communication. Recently, Branke [BRA 97] parallelized different types of global selection on a 2-D grid of processors and showed that the algorithms are optimal for the topology they use (the algorithms take O(√n) time steps on a √n × √n grid).

In conclusion, master-slave parallel GAs are easy to implement, and they can be a very efficient method of parallelization when the evaluation needs considerable computation. Moreover, the method has the advantage of not altering the search behavior of the GA, so we can directly apply all the theory available for simple GAs.


6. Multiple-Deme Parallel GAs — The First Generation

The important characteristics of multiple-deme parallel GAs are the use of a few relatively large subpopulations and migration. Multiple-deme GAs are the most popular parallel method, and many papers have been written describing innumerable aspects and details of their implementation. Obviously, it is impossible to review all these papers, and therefore this section presents only the most influential ones and a few relevant examples of particular implementations and applications.

Probably the first systematic study of parallel GAs with multiple populations was Grosso's dissertation [GRO 85]. His objective was to simulate the interaction of several parallel subcomponents of an evolving population. Grosso simulated diploid individuals (so there were two subcomponents for each "gene"), and the population was divided into five demes. Each deme exchanged individuals with all the others at a fixed migration rate.

With controlled experiments, Grosso found that the improvement of the average population fitness was faster in the smaller demes than in a single large panmictic population. This confirms a long-held principle in Population Genetics: favorable traits spread faster when the demes are small than when the demes are large. However, he also observed that when the demes were isolated, the rapid rise in fitness stopped at a lower fitness value than with the large population. In other words, the quality of the solution found after convergence was worse in the isolated case than in the single population. With a low migration rate, the demes still behaved independently and explored different regions of the search space. The migrants did not have a significant effect on the receiving deme, and the quality of the solutions was similar to the case where the demes were isolated. However, at intermediate migration rates the divided population found solutions similar to those found in the panmictic population. These observations indicate that there is a critical migration rate below which the performance of the algorithm is obstructed by the isolation of the demes, and above which the partitioned population finds solutions of the same quality as the panmictic population.

In a similar parallel GA by Pettey, Leuze, and Grefenstette [PET 87a], a copy of the best individual found in each deme is sent to all its neighbors after every generation. The purpose of this constant communication was to ensure a good mixing of individuals. Like Grosso, the authors of this paper observed that parallel GAs with such a high level of communication found solutions of the same quality as a sequential GA with a single large panmictic population. These observations prompted other questions that are still unresolved today: (1) What is the level of communication necessary to make a parallel GA behave like a panmictic GA? (2) What is the cost of this communication? And (3), is the communication cost small enough to make this a viable alternative for the design of parallel GAs? More research is necessary to understand the effect of migration on the quality of the search in parallel GAs to be able to answer these questions.

It is interesting that such important observations were made so long ago, at the same time that other systematic studies of parallel GAs were under way. For example, Tanese [TAN 87] proposed a parallel GA with the demes connected on a 4-D hypercube topology. In Tanese's algorithm, migration occurred at fixed intervals between processors along one dimension of the hypercube. The migrants were chosen probabilistically from the best individuals in the subpopulation, and they replaced the worst individuals in the receiving deme. Tanese carried out three sets of experiments. In the first, the interval between migrations was set to 5 generations, and the number of processors varied. In tests with two migration rates and a varying number of processors, the parallel GA found results of the same quality as the serial GA. However, it is difficult to see from the experimental results whether the parallel GA found the solutions sooner than the serial GA, because the range of the times is too large. In the second set of experiments, Tanese varied the mutation and crossover rates in each deme, attempting to find parameter values to balance exploration and exploitation. The third set of experiments studied the effect of the exchange frequency on the search, and the results showed that migrating too frequently or too infrequently degraded the performance of the algorithm.

Also in this year, Cohoon, Hedge, Martin, and Richards noted certain similarities between the evolution of solutions in the parallel GA and the theory of punctuated equilibria [COH 87]. This theory was proposed by Eldredge and Gould to explain the missing links in the fossil record. It states that most of the time populations are in equilibrium (i.e., there are no significant changes in their genetic composition), but that changes in the environment can start a rapid evolutionary change. An important component of the environment of a population is its own composition, because individuals in a population must compete for resources with the other members. Therefore, the arrival of individuals from other populations can punctuate the equilibrium and trigger evolutionary changes. In their experiments with the parallel GA, Cohoon et al. noticed that there was relatively little change between migrations, but new solutions were found shortly after individuals were exchanged.

Cohoon et al. used a linear placement problem as a benchmark and experimented by connecting the demes using a mesh topology. However, they mentioned that the choice of topology is probably not very important for the performance of the parallel GA, as long as it has "high connectivity and a small diameter to ensure adequate 'mixing' as time progresses". Note that the mesh topology they chose is densely connected and also has a small diameter (O(√d), where d is the number of demes). This work was later extended by the same group using a VLSI application (a graph partitioning problem) on a 4-D hypercube topology [COH 91a, COH 91b]. As in the mesh topology, the nodes in the hypercube have four neighbors, but the diameter of the hypercube is shorter than that of the mesh (O(log d)).

Tanese continued her work with a very exhaustive experimental study on the frequency of migrations and the number of individuals exchanged each time [TAN 89a]. She compared the performance of a serial GA against parallel GAs with and without communication. All the algorithms had the same total population size (256 individuals) and executed for 500 generations. Tanese found that a multi-deme GA with no communication and n demes of x individuals each could find — at some point during the run — solutions of the same quality as a single serial GA with a population of nx individuals. However, the average quality of the final population was much lower in the parallel isolated GA. With migration, the final average quality increased significantly, and in some cases it was better than the final quality in a serial GA. However, this result should be interpreted carefully, because the serial GA did not converge fully at the end of the 500 generations and some further improvement was still possible. More details on the experiments and the conclusions derived from this study can be found in Tanese's dissertation [TAN 89b].

A recent paper by Belding [BEL 95] confirms Tanese's findings using Royal Road functions. Migrants were sent to a random destination, rather than to a neighbor in the hypercube topology, but the author claimed that experiments with a hypercube yielded similar results. In most cases, the global optimum was found more often when migration was used than in the completely isolated cases.

Even at this point, where we have reviewed only a handful of examples of multi-deme parallel GAs, we may hypothesize several reasons that make these algorithms so popular:

— Multiple-deme GAs seem like a simple extension of the serial GA. The recipe is simple: take a few conventional (serial) GAs, run each of them on a node of a parallel computer, and at some predetermined times exchange a few individuals.

— There is relatively little extra effort needed to convert a serial GA into a multiple-deme GA. Most of the program of the serial GA remains the same, and only a few subroutines need to be added to implement migration.

— Coarse-grained parallel computers are easily available, and even when they are not, it is easy to simulate one with a network of workstations or even on a single processor using free software (like MPI or PVM).

This section reviewed some of the papers that initiated the systematic research on parallel GAs with multiple populations, and that raised many of the important questions that this area still faces today. The next section reviews more papers on multi-deme parallel GAs that show how the field matured and began to explore other aspects of these algorithms.

7. Coarse-Grained Parallel GAs — The Second Generation

From the papers examined in the previous section, we can recognize that some very important issues are emerging. For example, parallel GAs are very promising in terms of the gains in performance. Also, parallel GAs are more complex than their serial counterparts. In particular, the migration of individuals from one deme to another is controlled by several parameters: (a) the topology that defines the connections between the subpopulations, (b) a migration rate that controls how many individuals migrate, and (c) a migration interval that affects the frequency of migrations.
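These three parameters can be made concrete in a short sketch. The Python below is an illustration under stated assumptions (the data layout, function name, and best-replace-worst policy are chosen for the example and are not taken from any particular surveyed system, although Tanese's algorithm used a similar replacement scheme):

```python
def migrate(demes, topology, migration_rate):
    """One migration event in a multi-deme GA.

    `demes` maps a deme id to its population, a list of (fitness,
    individual) pairs.  `topology` (parameter a) maps each deme to the
    neighbor it sends emigrants to; `migration_rate` (parameter b) is
    the number of migrants per event.  The best individuals emigrate and
    replace the worst in the receiving deme.  The migration interval
    (parameter c) is handled by the caller, e.g. by invoking migrate()
    every 5th generation.
    """
    # Pick emigrants from every deme before modifying any population.
    outgoing = {dest: sorted(demes[src], reverse=True)[:migration_rate]
                for src, dest in topology.items()}
    for dest, migrants in outgoing.items():
        # Keep all but the worst len(migrants) individuals, then add
        # the arriving migrants.
        keep = sorted(demes[dest], reverse=True)[:len(demes[dest]) - len(migrants)]
        demes[dest] = keep + migrants

# Parameter (a): three demes connected in a unidirectional ring.
ring = {0: 1, 1: 2, 2: 0}
```

Changing `ring` to a mesh or hypercube mapping, raising `migration_rate`, or calling `migrate` more often moves the algorithm along the spectrum between fully isolated demes and a nearly panmictic population.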

In the late 1980s and early 1990s, research on parallel GAs began to explore alternatives to make parallel GAs faster and to understand better how they worked. Around this time the first theoretical studies on parallel GAs began to appear, and the empirical research attempted to identify favorable parameters. This section reviews some of that early theoretical work and experimental studies on migration and topologies. Also in this period, more researchers began to use multiple-population GAs to solve application problems, and this section ends with a brief review of their work.

One of the directions in which the field matured is that parallel GAs began to be tested with very large and difficult test functions. A remarkable example is the work by Mühlenbein, Schomisch, and Born [MÜH 91b] who described a parallel GA that found the global optimum of several functions which are used as benchmarks for optimization algorithms. Later on, the functions used in this paper were adopted by other researchers as benchmarks for their own empirical work (e.g., [CHE 93, GOR 93]). Mühlenbein, Schomisch, and Born used a local optimizer in the algorithm, and although they designed an efficient and powerful program, it is not clear whether the results obtained can be credited to the distributed population or to the local optimizer.

Mühlenbein believes that parallel GAs can be useful in the study of evolution of natural populations [MÜH 89a, MÜH 89b, MÜH 92, MÜH 91a]. This view is shared by others who perceive that a partitioned population with an occasional exchange of individuals is a closer analogy to the natural process of evolution than a single large population. But, while it is true that in a parallel GA we can observe some phenomena with a direct counterpart in Nature (for example, a favorable change in a chromosome spreads faster in smaller demes than in large demes), the main intention of the majority of the researchers in this field is only to design faster search algorithms.

7.1. Early Theoretical Work

The conclusions of all the publications reviewed so far are based on experimental results, but after the first wave of publications describing successful implementations, a few theoretical studies appeared. A theoretical study of (parallel) GAs is necessary to achieve a deeper understanding of these algorithms, so that we can utilize them to their full potential.

Probably the first attempt to provide a theoretical foundation to the performance of a parallel GA was a paper by Pettey and Leuze [PET 89]. In this article they described a derivation of the schema theorem for parallel GAs where randomly selected individuals are broadcast every generation. Their calculations showed that the expected number of trials allocated to schemata can be bounded by an exponential function, just as for serial GAs.

A very important theoretical question is to determine if (and under what conditions) a parallel GA can find a better solution than a serial GA. Starkweather, Whitley, and Mathias [STA 91] made two important observations regarding this question. First, relatively isolated demes are likely to converge to different solutions, and migration and recombination may combine partial solutions. Starkweather et al. speculated that because of this a parallel GA with low migration may find better solutions for separable functions. Their second observation was that if the recombination of partial solutions results in individuals with lower fitness (perhaps because the function is not separable), then the serial GA might have an advantage.


7.2. Migration

A sign of maturity of the field is that the research started to focus on particular aspects of parallel GAs. This section reviews some publications that explore alternative migration schemes to try to make parallel GAs more efficient.

In the majority of multi-deme parallel GAs, migration is synchronous, which means that it occurs at predetermined constant intervals. Migration may also be asynchronous, so that the demes communicate only after some events occur. An example of asynchronous migration can be found in Grosso's dissertation, where he experimented with a "delayed" migration scheme in which migration was not enabled until the population was close to convergence.

Braun [BRA 90] used the same idea and presented an algorithm where migration occurred after the demes converged completely (the author used the term "degenerate") with the purpose of restoring diversity into the demes to prevent premature convergence to a low-quality solution. The same migration strategy was used later by Munetomo, Takai, and Sato [MUN 93], and recently Cantú-Paz and Goldberg [CAN 97b] presented theoretical models which predict the quality of the solutions when a fully connected topology is used.
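Convergence-triggered ("delayed") migration of the kind used by Grosso and Braun needs a convergence test. A minimal sketch, assuming allele-frequency diversity as the convergence measure and a made-up 5% threshold (the cited papers do not specify this exact test):

```python
def converged(deme, threshold=0.05):
    """Delayed-migration trigger: a deme counts as converged when its
    average per-locus diversity falls below `threshold`. The 2*p*(1-p)
    term is 0 when a locus is fixed and 0.5 when it is maximally mixed.
    (Measure and threshold are illustrative assumptions.)"""
    length = len(deme[0])
    diversity = 0.0
    for locus in range(length):
        p = sum(ind[locus] for ind in deme) / len(deme)
        diversity += 2 * p * (1 - p)
    return diversity / length < threshold

uniform = [[1, 0, 1]] * 10           # a fully converged deme
mixed = [[1, 0, 1], [0, 1, 0]] * 5   # a maximally diverse deme
```

In a delayed scheme, a deme would call `converged` each generation and only then request or accept migrants.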

There is another interesting question being raised here: when is the right time to migrate? It seems that if migration occurs too early in the run, the number of correct building blocks in the migrants may be too small to influence the search in the right direction, and expensive communication resources would be wasted.

A different approach to migration was developed by Marin, Trelles-Salazar, and Sandoval [MAR 94b]. They proposed a centralized scheme in which slave processors execute a GA on their population and periodically send their best partial results to a master process. Then, the master process chooses the fittest individuals found so far (by any processor) and broadcasts them to the slaves. Experimental results on a network of workstations showed that near-linear speed-ups can be obtained with a small number of nodes (around six), and the authors claim that this approach can scale up to larger networks because the communication is infrequent.
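The master's role in this centralized scheme can be sketched as a single step; the function name and the list-of-bitstrings representation of "partial results" are assumptions for the example:

```python
def master_step(partial_bests, fitness):
    """One round of a centralized scheme in the spirit of [MAR 94b]:
    the master receives each slave's best individual, picks the overall
    champion, and returns the copies to broadcast back (one per slave)."""
    champion = max(partial_bests, key=fitness)
    return [list(champion) for _ in partial_bests]

# Three slaves report their current bests; all receive the fittest one.
broadcast = master_step([[0, 1, 0], [1, 1, 0], [1, 0, 0]], fitness=sum)
```

Because only one individual per slave travels in each round, the communication volume stays small, which is consistent with the authors' scalability claim.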

7.3. Communication Topologies

A traditionally neglected component of parallel GAs is the topology of the interconnection between demes. The topology is an important factor in the performance of the parallel GA because it determines how fast (or how slow) a good solution disseminates to other demes. If the topology has a dense connectivity (or a short diameter, or both), good solutions will spread fast to all the demes and may quickly take over the population. On the other hand, if the topology is sparsely connected (or has a long diameter), solutions will spread more slowly and the demes will be more isolated from each other, permitting the appearance of different solutions. These solutions may come together at a later time and recombine to form potentially better individuals.
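The diameter argument above is easy to make concrete. A small sketch comparing the diameters of three 8-deme topologies (the deme count and the adjacency-dictionary encoding are illustrative assumptions):

```python
from collections import deque

def diameter(adj):
    """Diameter of a communication topology, computed as the largest
    breadth-first-search eccentricity over all nodes."""
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(n) for n in adj)

n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
full = {i: [j for j in range(n) if j != i] for i in range(n)}
hypercube = {i: [i ^ (1 << b) for b in range(3)] for i in range(n)}
```

With equal deme counts, a fully connected topology lets a good migrant reach every deme in one hop, a 3-D hypercube in at most three, and a ring in up to four, which is why the ring isolates demes longer.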

The communication topology is also important because it is a major factor in the cost of migration. For instance, a densely connected topology may promote a better mixing of individuals, but it also entails higher communication costs.

The general trend in multi-deme parallel GAs is to use static topologies that are specified at the beginning of the run and remain unchanged. Most implementations of parallel GAs with static topologies use the native topology of the computer available to the researchers. For example, implementations on hypercubes [COH 91a, TAN 89a, TAN 89b] and rings [GOR 93] are common.

A more recent empirical study showed that parallel GAs with dense topologies find the global solution using fewer function evaluations than GAs with sparsely connected ones [CAN 94]. This study considered fully connected topologies, 4-D hypercubes, a toroidal mesh, and unidirectional and bidirectional rings.

The other major choice is to use a dynamic topology. In this method, a deme is not restricted to communicate with some fixed set of demes; instead, the migrants are sent to demes that meet some criteria. The motivation behind dynamic topologies is to identify the demes where migrants are likely to produce some effect. Typically, the criteria used to choose a deme as a destination include measures of the diversity of the population [MUN 93] or a measure of the genotypic distance between the two populations [LIN 94] (or between some representative individuals of the populations, like the best).
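As an illustration of a dynamic-topology criterion, the following sketch picks a destination deme by the genotypic (Hamming) distance between representative individuals. The "send to the farthest deme" policy is an assumption made for the example, not a rule taken from the cited papers:

```python
def hamming(a, b):
    """Genotypic distance between two binary strings."""
    return sum(x != y for x, y in zip(a, b))

def pick_destination(source_best, other_bests):
    """Dynamic-topology heuristic: return the index of the deme whose
    representative (best) individual is genotypically farthest from our
    own best, on the assumption that it has the most to gain."""
    return max(range(len(other_bests)),
               key=lambda i: hamming(source_best, other_bests[i]))

# Our best is all ones; deme 1 (all zeros) is the farthest destination.
dest = pick_destination([1, 1, 1, 1],
                        [[1, 1, 1, 0], [0, 0, 0, 0], [1, 0, 1, 1]])
```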

7.4. Applications

Many application projects using multiple-population parallel GAs have been published. This section mentions only a few representative examples.

The graph partitioning problem has been a popular application of multiple-deme GAs (see for example the work by Bessière and Talbi [BÉS 91], Cohoon, Martin, and Richards [COH 91c], and Talbi and Bessière [TAL 91]). The work of Levine [LEV 94] on the set partitioning problem is outstanding. His algorithm found global solutions of problems with 36,699 and 43,749 integer variables. He found that performance, measured as the quality of the solution and the iteration on which it was found, increased as more demes were added. Also, Li and Mashford [LI 90] and Mühlenbein [MÜH 89b] found adequate solutions for the quadratic assignment problem, another combinatorial optimization application.

Coarse-grained parallel GAs have also been used to find solutions for the problem of distributing computing loads to the processing nodes of MIMD computers by Neuhaus [NEU 91], Muntean and Talbi [MUN 91], and Seredynski [SER 94].

Another challenging application is the synthesis of VLSI circuits. Davis, Liu, and Elias [DAV 94] used different types of parallel GAs to search for solutions for this problem. Two of the parallel GAs use multiple populations (four demes), and the other is a global GA that uses 16 worker processes to evaluate the population. In the first of the multiple-deme GAs the demes are isolated, and in the second the demes communicate with some (unspecified) programmable schedule. This paper is very interesting because it contains two uncommon elements: first, the parallel GAs were implemented using Linda, a parallel coordination language that is usually (mis)regarded as being slow; second, the speed-ups are measured using the times it takes to find a solution of a certain fixed quality by the serial and the parallel GAs. Computing the speed-up in this way gives a more significant measure of the performance of the parallel GAs than simply comparing the time it takes to run the GAs for a certain number of generations.

8. Non-Traditional GAs

Not all multiple-population parallel GAs use the traditional generational GA such as the one reviewed in Section 2. Some non-traditional GAs have also been parallelized and some improvements over their serial versions have been reported. For example, Maresky [MAR 94a] used Davidor's ECO model [DAV 91] in the nodes of his distributed algorithm, and explored different migration schemes with varying complexity. The simplest scheme was to do no migration at all, and just collect the results from each node at the end of a run. The more complex algorithms involved the migration of complete ECO demes. Several replacement policies were explored with only marginal benefits.

GENITOR is a steady-state GA where only a few individuals change in every generation. In the distributed version of GENITOR [WHI 90], individuals migrated at fixed intervals to neighbors, and replaced the worst individuals in the target deme. A significant improvement over the serial version was reported in the original paper and in later work by Gordon and Whitley [GOR 93].

There have also been efforts to parallelize messy GAs [GOL 89b]. Messy GAs have two phases. In the primordial phase the initial population is created using a partial enumeration, and it is reduced using tournament selection. Next, in the juxtapositional phase partial solutions found in the primordial phase are mixed together. Since the primordial phase dominates the execution time, Merkle and Lamont tried several data distribution strategies to speed up this phase [MER 93b], extending previous work by Dymek [DYM 92]. The results indicate that there were no significant differences in the quality of the solutions for the different distribution strategies, but there were significant gains in the execution time of the algorithm.

Merkle, Gates, Lamont, and Pachter parallelized a fast messy GA [GOL 93a] to search for optimal conformations of Met-Enkephalin molecules [MER 93a]. In this problem, the algorithm showed good speed-ups for up to 32 processors, but with more than 32 nodes the speed-up did not increase very fast.

Koza and Andre [KOZ 95] used a network of transputers to parallelize genetic programming. In their report they made an analysis of the computing and memory requirements of their algorithm and concluded that transputers were the most cost-effective solution to implement it. The 64 demes in their implementation were connected as a 2-D mesh and each deme had 500 individuals. The authors experimented with different migration rates, and reported superlinear speed-ups in the computational effort (not the total time) using moderate migration rates (around 8%).


9. Fine-Grained Parallel GAs

Fine-grained parallel GAs have only one population, but it has a spatial structure that limits the interactions between individuals. An individual can only compete and mate with its neighbors, but since the neighborhoods overlap, good solutions may disseminate across the entire population.
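The overlapping-neighborhood idea can be sketched on a toroidal 2-D grid; the grid size and radius below are illustrative assumptions, and actual fine-grained GAs differ in the exact neighborhood shape:

```python
def neighbors(x, y, width, height, radius=1):
    """Neighborhood of cell (x, y) on a toroidal 2-D grid; in a
    fine-grained GA, the individual at (x, y) competes and mates only
    with the individuals at these cells."""
    return [((x + dx) % width, (y + dy) % height)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if (dx, dy) != (0, 0)]

# A corner cell on a 5x5 torus has the usual 8 neighbors, with wraparound.
ns = neighbors(0, 0, 5, 5)
```

Because the neighborhood of (1, 0) overlaps with that of (0, 0), a good gene combination can diffuse cell by cell across the whole grid even though no individual ever interacts globally.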

Robertson [ROB 87] parallelized the genetic algorithm of a classifier system on a Connection Machine 1. He parallelized the selection of parents, the selection of classifiers to replace, mating, and crossover. The execution time of his implementation was independent of the number of classifiers (up to 16K, the number of processing elements in the CM-1).

ASPARAGOS was introduced by Gorges-Schleuter [GOR 89a, GOR 89b] and Mühlenbein [MÜH 89a, MÜH 89b]. ASPARAGOS used a population structure that resembles a ladder with the upper and lower ends tied together. ASPARAGOS was used to solve some difficult combinatorial optimization problems with great success. Later, a linear population structure was used [GOR 91], and different mating strategies were compared [GOR 92b].

ASPARAGOS uses a local hillclimbing algorithm to improve the fitness of the individuals in its population. This makes it difficult to isolate the contribution of the spatial structure of the population to the search quality. However, all the empirical results show that ASPARAGOS is a very effective optimization tool.

Manderick and Spiessens [MAN 89] implemented a fine-grained parallel GA with the population distributed on a 2-D grid. Selection and mating were only possible within a neighborhood, and the authors observed that the performance of the algorithm degraded as the size of the neighborhood increased. In the extreme, if the size of the neighborhood was big enough, this parallel GA was equivalent to a single panmictic population. The authors analyzed this algorithm theoretically in subsequent years [SPI 90, SPI 91], and their analysis showed that for a deme size s and a string length l, the time complexity of the algorithm was O(s + l) or O(s(log s + l)) time steps, depending on the selection scheme used.

Recently, Sarma and De Jong [SAR 96] analyzed the effects of the size and the shape of the neighborhood on the selection mechanism, and found that the ratio of the radius of the neighborhood to the radius of the whole grid is a critical parameter which determines the selection pressure over the whole population. Their analysis quantified how the time to propagate a good solution across the whole population changed with different neighborhood sizes.

It is common to place the individuals of a fine-grained parallel GA on a two-dimensional grid, because in many massively parallel computers the processing elements are connected with this topology. However, most of these computers also have a global router that can send messages to any processor in the network (at a higher cost), and other topologies may be simulated on top of the grid. Schwehm [SCH 92] compared the effect of using different structures on the population of fine-grained GAs. In this paper, a graph partitioning problem was used as a benchmark to test different population structures simulated on a MasPar MP-1 computer. The structures tested were a ring, a torus, a three-dimensional cube, a higher-dimensional hypercube, and a 10-D binary hypercube. The algorithm with the torus structure converged faster than the other algorithms, but there is no mention of the resulting solution quality.

Anderson and Ferris [AND 90] also tried different population structures and replacement algorithms. They experimented with two rings, a hypercube, two meshes, and an "island" structure where only one individual of each deme overlapped with other demes. They concluded that for the particular problem they were trying to solve (assembly line balancing) the ring and the "island" structures were the best. Baluja compared two variations on a linear structure and a 2-D mesh [BAL 92, BAL 93], and found that the mesh gave the best results on almost all the problems tested.

Of course, fine-grained parallel GAs have been used to solve some difficult application problems. A popular problem is two-dimensional bin packing, and satisfactory solutions have been found by Kröger, Schwenderling, and Vornberger [KRÖ 91, KRÖ 92, KRÖ 93]. The job shop scheduling problem is also very popular in the GA literature, and Tamaki and Nishikawa [TAM 92] successfully applied fine-grained GAs to find adequate solutions.

Shapiro and Naveta [SHA 94] used a fine-grained GA to predict the secondary structure of RNA. They used the native X-Net topology of the MasPar-2 computer to define the neighborhood of each individual. The X-Net topology connects a processing element to eight other processors. Different logical topologies were tested with the GA: all eight nearest neighbors, four nearest neighbors, and all neighbors within a specified distance d. The results for the distance-d scheme were the poorest, and the four-neighbor scheme consistently outperformed the topology with eight neighbors.

A few papers compare fine- and coarse-grained parallel GAs [BAL 93, GOR 93]. In these comparisons sometimes fine-grained GAs come ahead of the coarse-grained GAs, but sometimes it is just the opposite. The problem is that comparisons cannot be made in absolute terms; instead, we must make clear what we want to minimize (e.g., wall clock time) or maximize (e.g., the quality of the solution) and make the comparison based on our particular criteria.

Gordon, Whitley, and Böhm [GOR 92a] showed that the critical path of a fine-grained algorithm is shorter than that of a multiple-population GA. This means that if enough processors were available, massively parallel GAs would need less time to finish their execution, regardless of the population size. However, it is important to note that this was a theoretical study that did not include considerations such as the communications bandwidth or memory requirements. Gordon [GOR 94] also studied the spatial locality of memory references in different models of parallel GAs. This analysis is useful in determining the most adequate computing platform for each model.

10. Hierarchical Parallel Algorithms

A few researchers have tried to combine two of the methods to parallelize GAs, producing hierarchical parallel GAs. Some of these new hybrid algorithms add a new degree of complexity to the already complicated scene of parallel GAs, but other hybrids manage to keep the same complexity as one of their components.

Figure 4. This hierarchical GA combines a multi-deme GA (at the upper level) and a fine-grained GA (at the lower level).

Figure 5. A schematic of a hierarchical parallel GA. At the upper level this hybrid is a multi-deme parallel GA where each node is a master-slave GA.

Figure 6. This hybrid uses multi-deme GAs at both the upper and the lower levels. At the lower level the migration rate is higher and the communications topology is much denser than at the upper level.

When two methods of parallelizing GAs are combined they form a hierarchy. At the upper level most of the hybrid parallel GAs are multiple-population algorithms. Some hybrids have a fine-grained GA at the lower level (see Figure 4). For example, Gruau [GRU 94] invented a "mixed" parallel GA. In his algorithm, the population of each deme was placed on a 2-D grid, and the demes themselves were connected as a 2-D torus. Migration between demes occurred at regular intervals, and good results were reported for a novel neural network design and training application.

ASPARAGOS was updated recently [GOR 97], and its ladder structure was replaced by a ring, because the ring has a longer diameter and allows a better differentiation of the individuals. The new version (Asparagos96) maintains several subpopulations that are themselves structured as rings. Migration across subpopulations is possible, and when a deme converges it receives the best individual from another deme. After all the populations converge, Asparagos96 takes the best and second best individuals of each population, and uses them as the initial population for a final run.

Lin, Goodman, and Punch [LIN 97] also used a multi-population GA with spatially structured subpopulations. Interestingly, they also used a ring topology — which is very sparse and has a long diameter — to connect the subpopulations, but each deme was structured as a torus. The authors compared their hybrid against a simple GA, a fine-grained GA, two multiple-population GAs with a ring topology (one uses 5 demes and variable deme sizes, the other uses a constant deme size of 50 individuals and different numbers of demes), and another multi-deme GA with demes connected on a torus. Using a job shop scheduling problem as a benchmark, they noticed that the simple GA did not find solutions as good as the other methods, and that adding more demes to the multiple-population GAs improved the performance more than increasing the total population size. Their hybrid found better solutions overall.

Another type of hierarchical parallel GA uses a master-slave GA on each of the demes of a multi-population GA (see Figure 5). Migration occurs between demes, and the evaluation of the individuals is handled in parallel. This approach does not introduce new analytic problems, and it can be useful when working with complex applications with objective functions that need a considerable amount of computation time. Bianchini and Brown [BIA 93] presented an example of this method of hybridizing parallel GAs, and showed that it can find a solution of the same quality as a master-slave parallel GA or a multi-deme GA in less time.

Interestingly, a very similar concept was invented by Goldberg [GOL 89a] in the context of an object-oriented implementation of a "community model" parallel GA. In each "community" there are multiple houses where parents reproduce and the offspring are evaluated. Also, there are multiple communities and it is possible that individuals migrate to other places.

A third method of hybridizing parallel GAs is to use multi-deme GAs at both the upper and the lower levels (see Figure 6). The idea is to force panmictic mixing at the lower level by using a high migration rate and a dense topology, while a low migration rate is used at the high level [GOL 96]. The complexity of this hybrid would be equivalent to a multiple-population GA if we consider the groups of panmictic subpopulations as a single deme. This method has not been implemented yet.

Hierarchical implementations can reduce the execution time more than any of their components alone. For example, consider that in a particular domain the optimal speed-up of a master-slave GA is Sms, and the optimal speed-up of a multiple-deme GA is Smd. The overall speed-up of a hierarchical GA that combines these two methods would be Sms × Smd.

11. Recent Advancements

This section summarizes some recent advancements in the theoretical study of parallel GAs. First, the section presents a result on master-slave GAs, and then there is a short presentation of a deme sizing theory for multiple-population GAs.

An important observation on master-slave GAs is that as more processors are used, the time to evaluate the fitness of the population decreases. But at the same time, the cost of sending the individuals to the slaves increases. This tradeoff between diminishing computation times and increasing communication times entails that there is an optimal number of slaves that minimizes the total execution time. A recent study [CAN 97a] concluded that the optimal number of slaves is s* = √(n Tf / Tc), where n is the population size, Tf is the time it takes to do a single function evaluation, and Tc is the communications time. The optimal speed-up is s*/2.
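Under the closed form of [CAN 97a] cited above, the optimum is easy to compute; the numeric values of n, Tf, and Tc below are made up for illustration:

```python
from math import sqrt

def optimal_slaves(n, t_f, t_c):
    """Optimal number of slaves s* = sqrt(n * t_f / t_c) for a
    master-slave GA, where n is the population size, t_f the time of one
    fitness evaluation, and t_c the communication time per individual."""
    return sqrt(n * t_f / t_c)

# Illustrative numbers: 1024 individuals, evaluations 100x more
# expensive than communicating one individual.
s_star = optimal_slaves(1024, t_f=1.0, t_c=0.01)  # 320.0
speedup = s_star / 2                              # optimal speed-up s*/2
```

Note how the optimum grows with the cost of the objective function: expensive evaluations justify many slaves, while cheap ones make the communication overhead dominate quickly.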

A natural starting point in the investigation of multiple-population GAs is the size of the demes, because the size of the population is probably the parameter that most affects the solution quality of the GA [GOL 91, GOL 92, HAR 97]. Recently, Cantú-Paz and Goldberg [CAN 97b] extended a simple GA population sizing model to account for two bounding cases of coarse-grained GAs. The two bounding cases were a set of isolated demes and a set of fully connected demes. In the case of the connected demes, the migration rate is set to the highest possible value.


Figure 7. A deceptive 4-bit trap function of unitation. The horizontal axis is the number of bits set to 1 and the vertical axis is the fitness value. The difficulty of this function can be varied easily by using more bits in the basin of the deceptive optimum, or by reducing the fitness difference between the global and deceptive maxima. The first test problem was constructed by concatenating 20 copies of this function and the second test problem is formed with 20 copies of a similar deceptive function of 8 bits.

The population size is also the major factor determining the time that the GA needs to find the solution. Therefore, the deme sizing models may be used to predict the execution time of the parallel GA, and to compare it with the time needed by a serial GA to reach a solution of the same quality. Cantú-Paz and Goldberg [CAN 97c] integrated the deme sizing models with a model for the communications time, and predicted the expected parallel speed-ups for the two bounding cases.

There are three main conclusions from this theoretical analysis. First, the expected speed-up when the demes are isolated is not very significant (see Figures 7 and 8). Second, the speed-up is much better when the demes communicate. And finally, there is an optimal number of demes (and an associated deme size) that maximizes the speed-up (see Figure 9).

The idealized bounding models can be extended in several directions, for example to consider smaller migration rates or more sparsely connected topologies. The fact that there is an optimal number of demes limits the number of processors that can be used to reduce the execution time. Using more than the optimal number of demes is wasteful, and would result in a slower algorithm. Hierarchical parallel GAs can use more processors effectively and reduce the execution time more than a pure multiple-deme GA.

Parallel GAs are very complex and, of course, there are many problems that are still unresolved. A few examples are: (1) to determine the migration rate that makes distributed demes behave like a single panmictic population, (2) to determine an adequate communications topology that permits the mixing of good solutions, but that does not result in excessive communication costs, and (3) to find if there is an optimal number of demes that maximizes reliability.


Figure 8. Projected and experimental speed-ups for test functions with 20 copies of 4-bit and 8-bit trap functions using from 1 to 16 isolated demes. The thin lines show the experimental results and the thick lines are the theoretical predictions. The quality demanded was to find at least 16 copies correct. (Panels: (a) 4-bit trap; (b) 8-bit trap.)

12. Summary and Conclusions

This paper reviewed some of the most representative publications on parallel genetic algorithms. The review started by classifying the work in this field into four categories: global master-slave parallelization, fine-grained algorithms, multiple-deme algorithms, and hierarchical parallel GAs. Some of the most important contributions in each of these categories were analyzed, to try to identify the issues that affect the design and the implementation of each class of parallel GAs on existing parallel computers.

The research on parallel GAs is dominated by multiple-deme algorithms, and in consequence, this survey focused on them. The survey on multiple-population GAs revealed that there are several fundamental questions that remain unanswered many years after they were first identified. This class of parallel GAs is very complex, and its behavior is affected by many parameters. It seems that the only way to achieve a greater understanding of parallel GAs is to study individual facets independently, and we have seen that some of the most influential publications in parallel GAs concentrate on only one aspect (migration rates, communication topology, or deme size), either ignoring or making simplifying assumptions on the others.

We also reviewed publications on master-slave and fine-grained parallel GAs and realized that the combination of different parallelization strategies can result in faster algorithms. It is particularly important to consider the hybridization of parallel techniques in the light of recent results which predict the existence of an optimal number of demes.

Figure 9. Projected and experimental speed-ups for test functions with 20 copies of 4-bit and 8-bit trap functions using from 1 to 16 fully connected demes. The quality requirement was to find at least 16 correct copies. (Panels: (a) 4-bit trap; (b) 8-bit trap.)

As GAs are applied to larger and more difficult search problems, it becomes necessary to design faster algorithms that retain the capability of finding acceptable solutions. This survey has presented numerous examples that show that parallel GAs are capable of combining speed and efficacy, and that we are reaching a better understanding which should allow us to utilize them better in the future.

Acknowledgments

I wish to thank David E. Goldberg for his comments on an earlier version of this paper.

This study was sponsored by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under grant numbers F49620-94-1-0103, F49620-95-1-0338, and F49620-97-1-0050. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon.


The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government.

Erick Cantú-Paz was supported by a Fulbright-García Robles-CONACYT Fellowship.

References

[ABR 92] ABRAMSON D., ABELA J., « A parallel genetic algorithm for solving the school timetabling problem ». In Proceedings of the Fifteenth Australian Computer Science Conference (ACSC-15), vol. 14, p. 1–11, 1992.

[ABR 93] ABRAMSON D., MILLS G., PERKINS S., « Parallelisation of a genetic algorithm for the computation of efficient train schedules ». Proceedings of the 1993 Parallel Computing and Transputers Conference, p. 139–149, 1993.

[ADA 94] ADAMIDIS P., « Review of parallel genetic algorithms bibliography ». Tech. rep. version 1, Aristotle University of Thessaloniki, Thessaloniki, Greece, 1994.

[AND 90] ANDERSON E. J., FERRIS M. C., « A genetic algorithm for the assembly line balancing problem ». In Integer Programming and Combinatorial Optimization: Proceedings of a 1990 Conference Held at the University of Waterloo, p. 7–18, Waterloo, ON, 1990. University of Waterloo Press.

[BAL 92] BALUJA S., « A massively distributed parallel genetic algorithm (mdpGA) ». Tech. Rep. No. CMU-CS-92-196R, Carnegie Mellon University, Pittsburgh, PA, 1992.

[BAL 93] BALUJA S., « Structure and Performance of Fine-Grain Parallelism in Genetic Search ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 155–162, Morgan Kaufmann (San Mateo, CA), 1993.

[BEL 95] BELDING T. C., « The distributed genetic algorithm revisited ». In ESHELMAN L., Ed., Proceedings of the Sixth International Conference on Genetic Algorithms, p. 114–121, Morgan Kaufmann (San Francisco, CA), 1995.

[BÉS 91] BESSIÈRE P., TALBI E.-G., « A parallel genetic algorithm for the graph partitioning problem ». In ACM Int. Conf. on Supercomputing ICS91, Cologne, Germany, July 1991.

[BET 76] BETHKE A. D., « Comparison of Genetic Algorithms and Gradient-Based Optimizers on Parallel Processors: Efficiency of Use of Processing Capacity ». Tech. Rep. No. 197, University of Michigan, Logic of Computers Group, Ann Arbor, MI, 1976.

[BIA 93] BIANCHINI R., BROWN C. M., « Parallel genetic algorithms on distributed-memory architectures ». In ATKINS S., WAGNER A. S., Eds., Transputer Research and Applications 6, p. 67–82, IOS Press (Amsterdam), 1993.

[BRA 90] BRAUN H. C., « On Solving Travelling Salesman Problems by Genetic Algorithms ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 129–133, Springer-Verlag (Berlin), 1990.

[BRA 97] BRANKE J., ANDERSEN H. C., SCHMECK H., « Parallelising Global Selection in Evolutionary Algorithms ». Submitted to Journal of Parallel and Distributed Computing, January 1997.

[CAN 94] CANTÚ-PAZ E., MEJÍA-OLVERA M., « Experimental Results in Distributed Genetic Algorithms ». In International Symposium on Applied Corporate Computing, p. 99–108, Monterrey, Mexico, 1994.

[CAN 97a] CANTÚ-PAZ E., « Designing efficient master-slave parallel genetic algorithms ». IlliGAL Report No. 97004, University of Illinois at Urbana-Champaign, Illinois Genetic Algorithms Laboratory, Urbana, IL, 1997.


[CAN 97b] CANTÚ-PAZ E., GOLDBERG D. E., « Modeling Idealized Bounding Cases of Parallel Genetic Algorithms ». In KOZA J., DEB K., DORIGO M., FOGEL D., GARZON M., IBA H., RIOLO R., Eds., Genetic Programming 1997: Proceedings of the Second Annual Conference, Morgan Kaufmann (San Francisco, CA), 1997.

[CAN 97c] CANTÚ-PAZ E., GOLDBERG D. E., « Predicting speedups of idealized bounding cases of parallel genetic algorithms ». In BÄCK T., Ed., Proceedings of the Seventh International Conference on Genetic Algorithms, p. 113–121, Morgan Kaufmann (San Francisco), 1997.

[CHE 93] CHEN R.-J., MEYER R. R., YACKEL J., « A Genetic Algorithm for Diversity Minimization and Its Parallel Implementation ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 163–170, Morgan Kaufmann (San Mateo, CA), 1993.

[COH 87] COHOON J. P., HEGDE S. U., MARTIN W. N., RICHARDS D., « Punctuated equilibria: A parallel genetic algorithm ». In GREFENSTETTE J. J., Ed., Proceedings of the Second International Conference on Genetic Algorithms, p. 148–154, Lawrence Erlbaum Associates (Hillsdale, NJ), 1987.

[COH 91a] COHOON J. P., MARTIN W. N., RICHARDS D. S., « Genetic Algorithms and Punctuated Equilibria in VLSI ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 134–144, Springer-Verlag (Berlin), 1991.

[COH 91b] COHOON J. P., MARTIN W. N., RICHARDS D. S., « A multi-population genetic algorithm for solving the K-partition problem on hyper-cubes ». In BELEW R. K., BOOKER L. B., Eds., Proceedings of the Fourth International Conference on Genetic Algorithms, p. 244–248, Morgan Kaufmann (San Mateo, CA), 1991.

[COH 91c] COHOON J. P., MARTIN W. N., RICHARDS D. S., « A Multi-Population Genetic Algorithm for Solving the K-Partition Problem on Hyper-Cubes ». In BELEW R. K., BOOKER L. B., Eds., Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann (San Mateo, CA), 1991.

[DAV 91] DAVIDOR Y., « A naturally occurring niche and species phenomenon: The model and first results ». In BELEW R. K., BOOKER L. B., Eds., Proceedings of the Fourth International Conference on Genetic Algorithms, p. 257–263, Morgan Kaufmann (San Mateo, CA), 1991.

[DAV 94] DAVIS M., LIU L., ELIAS J. G., « VLSI circuit synthesis using a parallel genetic algorithm ». In Proceedings of the First IEEE Conference on Evolutionary Computation, vol. 1, p. 104–109, IEEE Press (Piscataway, NJ), 1994.

[DYM 92] DYMEK A., « An examination of hypercube implementations of genetic algorithms ». Unpublished master's thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, OH, 1992.

[FOG 91] FOGARTY T. C., HUANG R., « Implementing the Genetic Algorithm on Transputer Based Parallel Processing Systems ». Parallel Problem Solving from Nature, p. 145–149, 1991.

[GOL 89a] GOLDBERG D. E., Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading, MA, 1989.

[GOL 89b] GOLDBERG D. E., KORB B., DEB K., « Messy genetic algorithms: Motivation, analysis, and first results ». Complex Systems, vol. 3, n° 5, p. 493–530, 1989. (Also TCGA Report 89003).

[GOL 91] GOLDBERG D. E., DEB K., « A comparative analysis of selection schemes used in genetic algorithms ». Foundations of Genetic Algorithms, vol. 1, p. 69–93, 1991. (Also TCGA Report 90007).

[GOL 92] GOLDBERG D. E., DEB K., CLARK J. H., « Genetic algorithms, noise, and the sizing of populations ». Complex Systems, vol. 6, p. 333–362, 1992.


[GOL 93a] GOLDBERG D. E., DEB K., KARGUPTA H., HARIK G., « Rapid, accurate optimization of difficult problems using fast messy genetic algorithms ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 56–64, Morgan Kaufmann (San Mateo, CA), 1993.

[GOL 93b] GOLDBERG D. E., DEB K., THIERENS D., « Toward a better understanding of mixing in genetic algorithms ». Journal of the Society of Instrument and Control Engineers, vol. 32, n° 1, p. 10–16, 1993.

[GOL 94] GOLDBERG D. E., « Genetic and evolutionary algorithms come of age ». Communications of the ACM, vol. 37, n° 3, p. 113–119, 1994.

[GOL 96] GOLDBERG D. E., June 1996. Personal communication.

[Gor 89a] GORGES-SCHLEUTER M., ASPARAGOS: A population genetics approach to genetic algorithms. In VOIGT H.-M., MÜHLENBEIN H., SCHWEFEL H.-P., Eds., Evolution and Optimization '89, p. 86–94. Akademie-Verlag (Berlin), 1989.

[GOR 89b] GORGES-SCHLEUTER M., « ASPARAGOS: An asynchronous parallel genetic optimization strategy ». In SCHAFFER J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, p. 422–428, Morgan Kaufmann (San Mateo, CA), 1989.

[Gor 91] GORGES-SCHLEUTER M., « Explicit Parallelism of Genetic Algorithms through Population Structures ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 150–159, Springer-Verlag (Berlin), 1991.

[GOR 92a] GORDON V. S., WHITLEY D., BÖHM A. P. W., « Dataflow parallelism in genetic algorithms ». In MÄNNER R., MANDERICK B., Eds., Parallel Problem Solving from Nature, 2, p. 533–542, Elsevier Science (Amsterdam), 1992.

[Gor 92b] GORGES-SCHLEUTER M., « Comparison of local mating strategies in massively parallel genetic algorithms ». In MÄNNER R., MANDERICK B., Eds., Parallel Problem Solving from Nature, 2, p. 553–562, Elsevier Science (Amsterdam), 1992.

[GOR 93] GORDON V. S., WHITLEY D., « Serial and Parallel Genetic Algorithms as Function Optimizers ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 177–183, Morgan Kaufmann (San Mateo, CA), 1993.

[GOR 94] GORDON V. S., « Locality in genetic algorithms ». In Proceedings of the First IEEE Conference on Evolutionary Computation, vol. 1, p. 428–432, IEEE Press (Piscataway, NJ), 1994.

[GOR 97] GORGES-SCHLEUTER M., « Asparagos96 and the Traveling Salesman Problem ». In BÄCK T., Ed., Proceedings of the Fourth International Conference on Evolutionary Computation, p. 171–174, IEEE Press (Piscataway, NJ), 1997.

[GRE 81] GREFENSTETTE J. J., « Parallel adaptive algorithms for function optimization ». Tech. Rep. No. CS-81-19, Vanderbilt University, Computer Science Department, Nashville, TN, 1981.

[GRO 85] GROSSO P. B., « Computer simulations of genetic adaptation: Parallel subcomponent interaction in a multilocus model ». Unpublished doctoral dissertation, The University of Michigan, 1985. (University Microfilms No. 8520908).

[GRU 94] GRUAU F., « Neural network synthesis using cellular encoding and the genetic algorithm ». Unpublished doctoral dissertation, L'Université Claude Bernard-Lyon I, 1994.

[HAR 97] HARIK G., CANTÚ-PAZ E., GOLDBERG D. E., MILLER B. L., « The gambler's ruin problem, genetic algorithms, and the sizing of populations ». In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, p. 7–12, IEEE Press (Piscataway, NJ), 1997.

[HAU 94] HAUSER R., MÄNNER R., « Implementation of standard genetic algorithm on MIMD machines ». In DAVIDOR Y., SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, PPSN III, p. 504–513, Springer-Verlag (Berlin), 1994.

[HOL 59] HOLLAND J. H., « A universal computer capable of executing an arbitrary number of sub-programs simultaneously ». Proceedings of the 1959 Eastern Joint Computer Conference, p. 108–113, 1959.


[HOL 60] HOLLAND J. H., « Iterative circuit computers ». In Proceedings of the 1960 Western Joint Computer Conference, p. 259–265, 1960.

[KOZ 95] KOZA J. R., ANDRE D., « Parallel genetic programming on a network of transputers ». Tech. Rep. No. STAN-CS-TR-95-1542, Stanford University, Stanford, CA, 1995.

[KRÖ 91] KRÖGER B., SCHWENDERLING P., VORNBERGER O., « Parallel genetic packing of rectangles ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 160–164, Springer-Verlag (Berlin), 1991.

[KRÖ 92] KRÖGER B., SCHWENDERLING P., VORNBERGER O., « Massive parallel genetic packing ». Transputing in Numerical and Neural Network Applications, p. 214–230, 1992.

[KRÖ 93] KRÖGER B., SCHWENDERLING P., VORNBERGER O., Parallel genetic packing on transputers. In STENDER J., Ed., Parallel Genetic Algorithms: Theory and Applications, p. 151–185. IOS Press, Amsterdam, 1993.

[LEV 94] LEVINE D., « A parallel genetic algorithm for the set partitioning problem ». Tech. Rep. No. ANL-94/23, Argonne National Laboratory, Mathematics and Computer Science Division, Argonne, IL, 1994.

[LI 90] LI T., MASHFORD J., « A parallel genetic algorithm for quadratic assignment ». In AMMAR R. A., Ed., Proceedings of the ISMM International Conference. Parallel and Distributed Computing and Systems, p. 391–394, Acta Press (Anaheim, CA), 1990.

[LIN 94] LIN S.-C., PUNCH W., GOODMAN E., « Coarse-Grain Parallel Genetic Algorithms: Categorization and New Approach ». In Sixth IEEE Symposium on Parallel and Distributed Processing, IEEE Computer Society Press (Los Alamitos, CA), October 1994.

[LIN 97] LIN S.-H., GOODMAN E. D., PUNCH W. F., « Investigating Parallel Genetic Algorithms on Job Shop Scheduling Problem ». In ANGELINE P., REYNOLDS R., MCDONNELL J., EBERHART R., Eds., Sixth International Conference on Evolutionary Programming, p. 383–393, Springer-Verlag (Berlin), April 1997.

[MAN 89] MANDERICK B., SPIESSENS P., « Fine-grained parallel genetic algorithms ». In SCHAFFER J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, p. 428–433, Morgan Kaufmann (San Mateo, CA), 1989.

[MAR 94a] MARESKY J. G., « On effective communication in distributed genetic algorithms ». Master's thesis, Hebrew University of Jerusalem, Israel, 1994.

[MAR 94b] MARIN F. J., TRELLES-SALAZAR O., SANDOVAL F., « Genetic algorithms on LAN-message passing architectures using PVM: Application to the routing problem ». In DAVIDOR Y., SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, PPSN III, p. 534–543, Springer-Verlag (Berlin), 1994.

[MER 93a] MERKLE L., GATES G., LAMONT G., PACHTER R., « Application of the Parallel Fast Messy Genetic Algorithm to the Protein Folding Problem ». Tech. Rep., Air Force Institute of Technology, Wright-Patterson AFB, Ohio, 1993.

[MER 93b] MERKLE L. D., LAMONT G. B., « Comparison of parallel messy genetic algorithm data distribution strategies ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 191–198, Morgan Kaufmann (San Mateo, CA), 1993.

[MÜH 89a] MÜHLENBEIN H., Parallel genetic algorithms, population genetics, and combinatorial optimization. In VOIGT H.-M., MÜHLENBEIN H., SCHWEFEL H.-P., Eds., Evolution and Optimization '89, p. 79–85. Akademie-Verlag (Berlin), 1989.

[MÜH 89b] MÜHLENBEIN H., « Parallel genetic algorithms, population genetics and combinatorial optimization ». In SCHAFFER J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, p. 416–421, Morgan Kaufmann (San Mateo, CA), 1989.

[MÜH 91a] MÜHLENBEIN H., « Evolution in time and space - The parallel genetic algorithm ». In RAWLINS G. J. E., Ed., Foundations of Genetic Algorithms, p. 316–337, Morgan Kaufmann (San Mateo, CA), 1991.


[MÜH 91b] MÜHLENBEIN H., SCHOMISCH M., BORN J., « The Parallel Genetic Algorithm as Function Optimizer ». In BELEW R. K., BOOKER L. B., Eds., Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann (San Mateo, CA), 1991.

[MÜH 92] MÜHLENBEIN H., « Darwin's continent cycle theory and its simulation by the prisoner's dilemma ». In VARELA F. J., BOURGINE P., Eds., Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, p. 236–244, MIT Press (Cambridge, MA), 1992.

[MUN 91] MUNTEAN T., TALBI E.-G., « A Parallel Genetic Algorithm for process-processors mapping ». In DURAND M., DABAGHI E., Eds., Proceedings of the Second Symposium II. High Performance Computing, p. 71–82, Montpellier, France, October 7–9, 1991.

[MUN 93] MUNETOMO M., TAKAI Y., SATO Y., « An efficient migration scheme for subpopulation-based asynchronously parallel genetic algorithms ». In FORREST S., Ed., Proceedings of the Fifth International Conference on Genetic Algorithms, p. 649, Morgan Kaufmann (San Mateo, CA), 1993.

[NEU 91] NEUHAUS P., « Solving the Mapping Problem — Experiences with a Genetic Algorithm ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 170–175, Springer-Verlag (Berlin), 1991.

[PET 87a] PETTEY C. B., LEUZE M. R., GREFENSTETTE J. J., « A parallel genetic algorithm ». In GREFENSTETTE J. J., Ed., Proceedings of the Second International Conference on Genetic Algorithms, p. 155–161, Lawrence Erlbaum Associates (Hillsdale, NJ), 1987.

[PET 87b] PETTEY C. C., LEUZE M., GREFENSTETTE J. J., « Genetic algorithms on a hypercube multiprocessor ». Hypercube Multiprocessors 1987, p. 333–341, 1987.

[PET 89] PETTEY C. C., LEUZE M. R., « A theoretical investigation of a parallel genetic algorithm ». In SCHAFFER J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, p. 398–405, Morgan Kaufmann (San Mateo, CA), 1989.

[ROB 87] ROBERTSON G. G., « Parallel implementation of genetic algorithms in a classifier system ». In GREFENSTETTE J. J., Ed., Proceedings of the Second International Conference on Genetic Algorithms, p. 140–147, Lawrence Erlbaum Associates (Hillsdale, NJ), 1987.

[SAR 96] SARMA J., DE JONG K., « An analysis of the effects of neighborhood size and shape on local selection algorithms ». In Parallel Problem Solving from Nature IV, p. 236–244, Springer-Verlag (Berlin), 1996.

[SCH 92] SCHWEHM M., « Implementation of genetic algorithms on various interconnection networks ». In VALERO M., ONATE E., JANE M., LARRIBA J. L., SUAREZ B., Eds., Parallel Computing and Transputer Applications, p. 195–203, IOS Press (Amsterdam), 1992.

[SER 94] SEREDYNSKI F., « Dynamic mapping and load balancing with parallel genetic algorithms ». In Proceedings of the First IEEE Conference on Evolutionary Computation, vol. 2, p. 834–839, IEEE Press (Piscataway, NJ), 1994.

[SHA 94] SHAPIRO B., NAVETTA J., « A massively parallel genetic algorithm for RNA secondary structure prediction ». The Journal of Supercomputing, vol. 8, p. 195–207, 1994.

[SPI 90] SPIESSENS P., MANDERICK B., « A genetic algorithm for massively parallel computers ». In ECKMILLER R., HARTMANN G., HAUSKE G., Eds., Parallel Processing in Neural Systems and Computers, Düsseldorf, Germany, p. 31–36, North Holland (Amsterdam), 1990.

[SPI 91] SPIESSENS P., MANDERICK B., « A massively parallel genetic algorithm: Implementation and first analysis ». In BELEW R. K., BOOKER L. B., Eds., Proceedings of the Fourth International Conference on Genetic Algorithms, p. 279–286, Morgan Kaufmann (San Mateo, CA), 1991.


[STA 91] STARKWEATHER T., WHITLEY D., MATHIAS K., « Optimization Using Distributed Genetic Algorithms ». In SCHWEFEL H.-P., MÄNNER R., Eds., Parallel Problem Solving from Nature, p. 176–185, Springer-Verlag (Berlin), 1991.

[TAL 91] TALBI E.-G., BESSIÈRE P., « A Parallel Genetic Algorithm for the Graph Partitioning Problem ». In Proc. of the International Conference on Supercomputing, Cologne, June 1991.

[TAM 92] TAMAKI H., NISHIKAWA Y., « A paralleled genetic algorithm based on a neighborhood model and its application to the jobshop scheduling ». In MÄNNER R., MANDERICK B., Eds., Parallel Problem Solving from Nature, 2, p. 573–582, Elsevier Science (Amsterdam), 1992.

[TAN 87] TANESE R., « Parallel genetic algorithm for a hypercube ». In GREFENSTETTE J. J., Ed., Proceedings of the Second International Conference on Genetic Algorithms, p. 177–183, Lawrence Erlbaum Associates (Hillsdale, NJ), 1987.

[TAN 89a] TANESE R., « Distributed genetic algorithms ». In SCHAFFER J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, p. 434–439, Morgan Kaufmann (San Mateo, CA), 1989.

[TAN 89b] TANESE R., « Distributed genetic algorithms for function optimization ». Unpublished doctoral dissertation, University of Michigan, Ann Arbor, 1989.

[WHI 90] WHITLEY D., STARKWEATHER T., « Genitor II: A distributed genetic algorithm ». To appear in Journal of Experimental and Theoretical Artificial Intelligence, 1990.