
Controlled Observations of the Genetic Algorithm in a Changing Environment
Case Studies Using the Shaky Ladder Hyperplane-Defined Functions

William Rand
Computer Science and Engineering
Center for the Study of Complex Systems
[email protected]

Overview

● Introduction – Motivation, GA, Dynamic Environments, Framework, Measurements
● Shaky Ladder Hyperplane-Defined Functions – Description and Analysis
● Varying the Time between Changes – Performance, Satisficing and Diversity Results
● The Effect of Crossover
● Experiments with Self-Adaptation
● Future Work and Conclusions

Motivation

• Despite years of great research examining the GA, more work still needs to be done, especially within the realm of dynamic environments
• Approach
  – Applications: the GA works in many different environments, but yields only particular results
  – Theory: places many limitations on results
  – Middle ground: examine a realistic GA on a set of constructed test functions – systematic, controlled observation
• Benefits
  – Make recommendations to application practitioners
  – Provide guidance for theoretical work

What is a GA?

[Diagram: a population of bit-string individuals (e.g., Individual 970: 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1) cycling through three components: Population of Solutions → Evaluative Mechanism → Inheritance with Variation]

John Holland, Adaptation in Natural and Artificial Systems, 1975

Dynamic Environments

● GAs are a restricted model of evolution
● Evolution is an inherently dynamic system, yet researchers traditionally apply GAs to static problems
● If building blocks exist within a problem framework, the GA can recombine those solutions to solve problems that change in time
● Application examples include job scheduling, dynamic routing, and autonomous agent control
● What if we want to understand how the GA works in these environments?
● Applications are too complicated to comprehend all of the interactions; we need a test suite for systematic, controlled observation

Measures of Interest

• Performance – how well the system solves an objective function
  – Best Performance: average across runs of the fitness of the best individual
  – Average Performance: average across runs of the fitness of the average individual
• Satisficability – ability of the system to achieve a predetermined criterion
  – Best Satisficability: fraction of runs where the best solution exceeds a threshold
  – Average Satisficability: average across runs of the fraction of the population that exceeds the threshold
• Robustness – how indicative is current performance of future performance?
  – Best Robustness: fitness of the current best individual divided by the fitness of the previous generation's best individual
  – Average Robustness: current population average fitness divided by the average for the previous generation
• Diversity – measure of the variation of the genomes in the population
  – Best Diversity: average Hamming distance between the genomes of the best individuals across runs
  – Average Diversity: average across runs of the average Hamming distance of the whole population
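A minimal Python sketch (illustrative, not from the talk) of these measures for a single run; "population" is assumed to be a list of bit strings and "fitness" a function from string to number, with the cross-run averages above computed on top of these:

    from itertools import combinations

    def best_performance(population, fitness):
        # Fitness of the best individual in the current population.
        return max(fitness(ind) for ind in population)

    def avg_performance(population, fitness):
        # Average fitness of the whole population.
        return sum(fitness(ind) for ind in population) / len(population)

    def satisficability(population, fitness, threshold):
        # Fraction of the population whose fitness meets the threshold.
        return sum(fitness(ind) >= threshold for ind in population) / len(population)

    def robustness(current_avg, previous_avg):
        # Current generation's (best or average) fitness divided by the previous generation's.
        return current_avg / previous_avg

    def avg_diversity(population):
        # Average pairwise Hamming distance across the population.
        hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
        pairs = list(combinations(population, 2))
        return sum(hamming(a, b) for a, b in pairs) / len(pairs)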

Hyperplane Defined Functions

● HDFs were designed by John Holland to model the way the individuals in a GA search
● In HDFs, building blocks are described formally by schemata
● If the search space is binary strings, then schemata are ternary strings (0, 1, * = wildcard)
● Building blocks are schemata with a positive fitness contribution
● Building blocks are combined to create higher-level building blocks, and the individual is rewarded more for finding them
● Potholes are schemata with a negative fitness contribution

John Holland, "Cohort Genetic Algorithms, Building Blocks, and Hyperplane-Defined Functions", 2000

Shaky Ladder HDFs

● Shaky ladder HDFs place three restrictions on HDFs:
  1) Elementary schemata do not conflict with each other
  2) Potholes have limited costs
  3) The final schema is the union of the elementary schemata
● This guarantees that any string which matches the final schema is an optimally valued string
● Shaking the ladder involves changing the intermediate schemata

Three Variants

● Cliffs Variant - All three groups of the fixed schemata are used to create the intermediate schemata

● Smooth Variant - Only elementary schemata are used to create intermediate schemata

● Weight Variant - The weights of the ladder are shaken instead of the form

Analysis of the sl-hdfs

● The sl-hdfs were devised to resemble an environment with regularities and a fixed optimum
● There are two ways to show that they have these properties:
  1. Match the micro-level componentry
  2. Carry out macro-analysis
     • The standard technique is the autocorrelation of a random walk

Mutation Landscape

Crossover Landscape

Time Between Shakes Experiment

● Vary t_δ, the time between shakes
● See what effect this has on the performance of the best individual in the current generation

Population Size 1000

Crossover Rate 0.7

Mutation Rate 0.001

Generations 1800

Selection Type Tournament, Size 3

# of Elementary Schemata 50

String Length 500

# of Runs 30
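As a concrete illustration of this setup, a minimal outer-loop sketch in Python (assumed code, not the experiment's implementation): step_generation is taken to run one GA generation with the settings above and return the best fitness, and shake_ladder to regenerate the intermediate schemata.

    def run_with_shakes(step_generation, shake_ladder, t_delta, generations=1800):
        # Shake the ladder every t_delta generations; t_delta = None gives a static run.
        history = []
        for gen in range(generations):
            if t_delta and gen > 0 and gen % t_delta == 0:
                shake_ladder()                      # swap in new intermediate schemata
            history.append(step_generation())       # one generation; record best fitness
        return history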

Cliffs Variant

Smooth Variant

Weight Variant

Results of Experiment

● Large t_δ (approaching the static environment): performance improves over time, but premature convergence prevents finding optimal strings
● Small t_δ (frequent shakes): outperforms the static environment; the intermediate schemata provide little guidance; 19 runs find an optimal string
● Intermediate t_δ: best performance; the GA tracks and adapts to changes in the environment; the intermediate schemata provide guidance without becoming a dead end; 30 runs find optimal strings
● Smoother variants perform better early on, but then the lack of selection pressure prevents them from finding the global optimum

Comparison of Threshold Satisficability

Cliffs Variant Diversity Results

Smooth Variant Diversity Results

Weight Variant Diversity Results

Discussion of Diversity Results

● We initially thought diversity would decrease as the system converged, and that shakes would increase diversity as the GA explored the new solution space
● Instead, neutral mutations allow founders to diverge
● A ladder shake decreases diversity because it eliminates competitors to the new best subpopulations
● In the Smooth and Weight variants, diversity increases even more due to the lack of selection pressure
● In the Weight variant, diversity levels off as the population stabilizes around fit, but not optimal, individuals

Crossover Experiment

● The sl-hdfs are meant to explore the use of the GA in dynamic environments
● The GA's most important operator is crossover
● Therefore, if we turn crossover off, the GA should not be able to work as well in the sl-hdf environments
● That is exactly what happens
● Moreover, crossover has a greater effect on the GA operating in the Weight variant due to the short schemata
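For concreteness, a sketch of the one-point crossover operator assumed in these runs (it produces roughly 70% of each new population, per the GA settings listed elsewhere in the talk); illustrative Python, not the experimental code:

    import random

    def one_point_crossover(parent_a, parent_b):
        # Cut both parents at the same interior point and swap tails.
        point = random.randint(1, len(parent_a) - 1)
        child_a = parent_a[:point] + parent_b[point:]
        child_b = parent_b[:point] + parent_a[point:]
        return child_a, child_b

Turning crossover off in the experiment amounts to replacing this operator with cloning, so new individuals differ from their parents only by mutation.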

Cliffs Variant Crossover Results

Smooth Variant Crossover Results

Weight Variant Crossover Results

Self-Adaptation

● By controlling mutation we can control the balance between exploration and exploitation, which is especially useful in a dynamic environment
● Many techniques have been suggested: hypermutation, variable local search, and random immigrants
● Bäck proposed the use of self-adaptation in evolution strategies and later in GAs (1992)
● Self-adaptation encodes a mutation rate within the genome
● The mutation rate becomes an additional search problem for the GA to solve

[Diagram: Individual 972's bit-string genome split into a Mutation Rate segment and a Solution String segment]
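A minimal Python sketch of this encoding, assuming a simple linear decoding of the rate bits into a [min, max] range; the bit count and bounds mirror one row of the experiment table that follows, but the decoding scheme itself is an assumption:

    import random

    N_RATE_BITS = 15                      # leading bits that encode the mutation rate
    RATE_MIN, RATE_MAX = 0.0001, 0.1      # bounds as in experiments 5-7 below

    def decode_rate(genome):
        # Read the rate bits as an integer and scale it into [RATE_MIN, RATE_MAX].
        raw = int(genome[:N_RATE_BITS], 2)
        frac = raw / (2 ** N_RATE_BITS - 1)
        return RATE_MIN + frac * (RATE_MAX - RATE_MIN)

    def mutate(genome):
        # The individual's own decoded rate is applied to every bit,
        # including the rate bits themselves, so the rate also evolves.
        rate = decode_rate(genome)
        return "".join(("1" if bit == "0" else "0") if random.random() < rate else bit
                       for bit in genome)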

The Experiments

Experiment   Type & nBits   Min & Max     Seeded?   Cross?
1            Fixed          0.001         No        Yes
2            Fixed          0.0005        No        Yes
3            Fixed          0.0001        No        Yes
4            Self, 10       0.0, 1.0      No        Yes
5            Self, 15       0.0001, 0.1   No        Yes
6            Self, 15       0.0001, 0.1   Yes       Yes
7            Self, 15       0.0001, 0.1   Yes       No

Performance Results (1/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          Avg.     Std. Dev.   Avg.     Std. Dev.   Avg.     Std. Dev.
Fixed 0.001         .9538    .0680       1.00     0.00        1.00     0.00
Fixed 0.0005        .6320    .1534       .7473    .1155       .7892    .1230
Fixed 0.0001        .2340    .0373       .2242    .0439       .2548    .0573
Self 0-1            .1618    .0426       .1696    .0445       .1670    .0524
Self 0.0001-0.01    .3515    .0729       .3921    .1282       .3993    .0964
Seeded              .3412    .0506       .3690    .1018       .3974    .0949
No Crossover        .2791    .0596       .5648    .1506       .4242    .1526

Performance Results (2/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          900 Avg.   900 Std. Dev.   1801 Avg.   1801 Std. Dev.
Fixed 0.001         .7424      .1144           .8428       .0745
Fixed 0.0005        .5796      .1023           .6852       .1041
Fixed 0.0001        .2435      .0530           .2710       .0571
Self 0-1            .1675      .0542           .1693       .0585
Self 0.0001-0.01    .3710      .0828           .4375       .0969
Seeded              .3551      .1045           .4159       .1105
No Crossover        .3242      .1037           .3782       .1011

Mutation Rates

Discussion of Self-Adaptation Results

● Self-adaptation fails to improve the balance between exploration and exploitation
● "Tragedy of the Commons": it is to the benefit of each individual to have a low mutation rate, even though a higher average mutation rate is beneficial to the whole population
● Seeding with known "good" material does not always increase performance
● Some level of mutation is always good

Future Work

● Further exploration of the sl-hdf parameter space
● Schema analysis
  – Analysis of local optima
  – What levels are attained, and when?
● Analysis of the sl-hdf landscape
  – Population-based landscape analysis
  – Other dynamic analysis
  – Examination of
● Other mechanisms to augment the GA
  – Meta-GAs, hypermutation, multiploidy
● Co-evolution of sl-hdfs with solutions
● Combining GAs with ABMs to model ecosystems

Applications

● Other Areas of Evolutionary Computation
  – Coevolution
  – Other Evolutionary Algorithms
● Computational Economics
  – Market Behavior and Failures
  – Generalists vs. Specialists
● Autonomous Agents
  – Software agents, Robotics
● Evolutionary Biology
  – Phenotypic Plasticity, Baldwinian Learning

Conclusions

● Systematic, controlled observation allows us to gather regularities about an artificial system that are useful to both practitioners and theoreticians
● The sl-hdfs and the three variants presented provide a useful platform for exploring the GA in dynamic environments
● Standard autocorrelation fails to completely describe some landscapes and some kinds of dynamism
● Intermediate rates of change can provide a better environment by preventing premature convergence
● Self-adaptation is not always successful; sometimes it is better to explicitly control GA parameters

Acknowledgements

● John Holland

● Rick Riolo

● Scott Page, John Laird, and Martha Pollack

● Jürgen Branke

● CSCS

Carl Simon

Howard Oishi

Mita Gibson

Cosma Shalizi

Mike Charter & the admins

Lori Coleman

● Sluce Research Group

Dan Brown, Moira Zellner, Derek Robinson

● My Parents and Family

● RR-Group

Robert Lindsay

Ted Belding

Chien-Feng Huang

Lee Ann Fu

Boris Mitavskiy

Tom Bersano

Lee Newman

● SFI, CSSS, GWE

Floortje Alkemade

Lazlo Guylas

Andreas Pape

Kirsten Copren

Nic Geard

Igor Nikolic

Ed Venit

Santiago Jaramillo

Toby Elmhirst

Amy Perfors

Friends

Brooke Haueisen

Kat Riddle

Tami Ursem

Kevin Fan

Jay Blome

Chad Brick

Ben Larson

Mike Curtis

Beckie Curtis

Mike Geiger

Brenda Geiger

William Murphy

Katy Luchini

Dirk Colbry

And Many, Many More

Jason Atkins

Dmitri Dolgov

Anna Osepayshvili

Jeff Cox

Dave Orton

Bil Lusa

Dan Reeves

Jane Coggshall

Brian Magerko

Bryan Pardo

Stefan Nikles

Eric Schlegel

TJ Harpster

Kristin Chiboucas

Cibele Newman

Julia Clay

Bill Merrill

Eric Larson

Josh Estelle

● CWI

Han La Poutré

Tomas Klos

● EECS

William Birmingham

Greg Wakefield

CSEG

Any Questions?

My Contributions

● Shaky Ladder Hyperplane Defined Functions

– Three Variants
– Description of the Parameter Space to be explored

● Work on describing Systematic, Controlled Observation Framework

● Initial experiments on sl-hdf

● Crossover Results on variants of the hdf

● Autocorrelation analysis of the hdf and sl-hdf

● Exploration of Self-Adaptation in GAs and when it fails

● Suite of Metrics to better understand GAs in Dynamic Environments

● Proposals of how to extend the results to Coevolution

Motivation

● Despite decades of great research, more work needs to be done in understanding the GA

● Performance metrics are not enough to explain the behavior of the GA, but that is what is reported in most experiments

● What other measures could be used to describe the run of a GA in order to gain a fuller understanding of how the GA behaves?

● The goal is not to understand the landscape or to classify the performance of particular variations of the GA

● Rather the goal is to develop a suite of measures that help to understand the GA via systematic, controlled observations

Exploration vs. Exploitation

● A classic problem in optimization is how to maintain the balance between exploration and exploitation

● k-armed bandit problem

– If we are allowed a limited number of trials at a k-armed bandit, what is the best way to allocate those trials in order to maximize our overall utility?

– Given finite computing resources, what is the best way to allocate our computational power to maximize our results?

● Classical solution: allocate an exponentially increasing number of trials to the observed best arm, based on historic outcomes

Dubins, L.E. and Savage, L.J. (1965). How to Gamble If You Must. McGraw-Hill, New York. Republished as Inequalities for Stochastic Processes. Dover, New York (1976).

The Genetic Algorithm

1 Generate a population of solutions to the search problem at random

2 Evaluate this population

3 Sort the population based on performance

4 Select a part of the population to make a new population

5 Perform mutation and recombination to fill out the new population

6 Go to step 2 until time runs out or the performance criterion is met
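A compact Python sketch of these six steps using the parameter settings listed earlier (population 1000, string length 500, tournament size 3, 70% one-point crossover, mutation rate 0.001); this is illustrative code, not the experimental implementation:

    import random

    def genetic_algorithm(fitness, string_length=500, pop_size=1000, generations=1800,
                          crossover_rate=0.7, mutation_rate=0.001, tournament_size=3):
        # Step 1: random initial population of bit strings.
        pop = ["".join(random.choice("01") for _ in range(string_length))
               for _ in range(pop_size)]

        def tournament():
            # Steps 2-4: evaluate and select via a size-3 tournament.
            return max(random.sample(pop, tournament_size), key=fitness)

        def mutate(s):
            return "".join(("1" if c == "0" else "0")
                           if random.random() < mutation_rate else c for c in s)

        for _ in range(generations):
            new_pop = []
            while len(new_pop) < pop_size:
                # Step 5: ~70% of children from one-point crossover, the rest are clones.
                if random.random() < crossover_rate:
                    a, b = tournament(), tournament()
                    point = random.randint(1, string_length - 1)
                    child = a[:point] + b[point:]
                else:
                    child = tournament()
                new_pop.append(mutate(child))
            pop = new_pop       # Step 6: repeat until the generation budget is used.
        return max(pop, key=fitness)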

The Environments

● Static Environment – Hyperplane-defined function (hdf)

● Dynamic Environment – New hdf's are selected from an equivalence set at regular intervals

● Coevolving Environment – A separate problem-GA controls which hdf's the solution-GA faces every generation

Dynamics and the Bandit
(Like "Smoky and the Bandit", only without Jackie Gleason)

● Now what if the distributions underlying the various arms change in time?

● The balance between exploration and exploitation would also have to change in time

● This presentation will attempt to examine one way to do that, and why the mechanism presented fails

Qualities of Test Suites

● Whitley (1996)
  – Generated from elementary building blocks
  – Resistant to hillclimbing
  – Scalable in difficulty
  – Canonical in form
● Holland (2000)
  – Generated at random, but not reverse-engineerable
  – Landscape-like features
  – Include all finite functions in the limit

Building Blocks and GAs

● GAs combine building blocks to find the solution to a problem

● Different individuals in a GA have different building blocks; through crossover they merge

● This can be used to define any possible function

[Diagram: building blocks (Wheel, Engine) combining into a higher-level solution (Car)]

HDF Example

Building Block Set
b1 = 11****1 = 1
b2 = **00*** = 1
b3 = ****1** = 1
b4 = *****0* = 1
b12 = 1100**1 = 1
b23 = **001** = 1
b123 = 11001*1 = 1
b1234 = 1100101 = 1

Potholes
p12 = 110***1 = -0.5
p13 = 11**1*1 = -0.5
p2312 = 1*001*1 = -0.5

Sample Evaluations
f(100111) = b3 = 1
f(1111111) = b1 + b3 - p13 = 1.5
f(1000100) = b2 + b3 + b23 = 3
f(1100111) = b1 + b2 + b3 + b12 + b23 + b123 - p12 - p13 - p2312 = 4.5

[Diagram: building-block hierarchy b1, b2, b3, b4 → b12, b23 → b123 → b1234]

Hyperplane-defined Functions

● Defined over the set of all binary strings

● Create an elementary level building block set defined over the set of strings of the alphabet {0, 1, *}

● Create higher level building blocks by combining elementary building blocks

● Assign positive weights to all building blocks

● Create a set of potholes that incorporate parts of multiple elementary building blocks

● Assign the potholes negative weights

● A solution string matches a building block or a pothole if, at every defined position, it matches that character of the alphabet, or if the building block has a '*' at that location
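A small Python sketch of this matching rule and the resulting evaluation (illustrative, not the implementation used in the experiments):

    def matches(schema, string):
        # A string matches a schema if every defined position agrees; '*' matches anything.
        # Schema and string are assumed to be the same length.
        return all(s == "*" or s == c for s, c in zip(schema, string))

    def hdf_value(string, weighted_schemata):
        # Sum the positive building-block and negative pothole contributions
        # of every schema that the string matches.
        return sum(w for schema, w in weighted_schemata if matches(schema, string))

With the full building-block and pothole set from the earlier HDF Example (e.g. ("11****1", 1), ("**00***", 1), ..., ("110***1", -0.5), ...), hdf_value("1100111", ...) reproduces the 4.5 worked out there.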

Problems with the HDFs

● Problems with HDFs for systematic study in dynamic environments:
  – No way to determine the optimum value of a random HDF
  – No way to create new HDFs based on old ones
● Because of this, there is no way to specify a non-random dynamic HDF

Creating an sl-hdf

(1) Generate a set of e non-conflicting elementary schemata of order o (8) and string length l (500); set each fitness contribution u(s) (2)
(2) Combine all elementary schemata to create the highest-level schema, and set its fitness contribution (3)
(3) Create a pothole for each elementary schema by copying all of its defining bits, plus some bits from another elementary schema probabilistically (p = 0.5), and set its fitness contribution (-1)
(4) Generate intermediate schemata by combining random pairs of elementary schemata to create e/2 second-level schemata
(5) Repeat (4) for each level until the number of schemata to be generated for the next level is <= 1
(6) To generate a new sl-hdf from the same equivalence set, delete the previous intermediate schemata and repeat steps (4) and (5)
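An illustrative Python sketch of steps (1), (4), and (6) above; the function names and details (such as how pairs are formed) are assumptions rather than the actual construction code:

    import random

    def random_elementary_schemata(e=50, order=8, length=500):
        # Step (1): e non-conflicting elementary schemata, each with `order` defining bits.
        positions = random.sample(range(length), e * order)   # disjoint defining positions
        schemata = []
        for i in range(e):
            schema = ["*"] * length
            for p in positions[i * order:(i + 1) * order]:
                schema[p] = random.choice("01")
            schemata.append("".join(schema))
        return schemata

    def combine(a, b):
        # Union of two schemata whose defining bits never conflict.
        return "".join(x if x != "*" else y for x, y in zip(a, b))

    def shake_ladder(elementary):
        # Steps (4)-(6): rebuild the intermediate levels from random pairings each time,
        # leaving the elementary and highest-level schemata untouched.
        levels, current = [], elementary[:]
        while len(current) > 1:
            random.shuffle(current)
            current = [combine(current[i], current[i + 1])
                       for i in range(0, len(current) - 1, 2)]
            levels.append(current)
        return levels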

Mutation Blowup

Crossover Blowup

Measures of Interest

● Average Fitness – average performance of the system over time

● Robustness – ability of the system to maintain steady state performance

● Satisficability – ability of the system to maintain performance above a certain level

● Diversity – difference between solutions that the system is currently examining

Cliffs Variant Performance Results

Smooth Variant Performance Results

Weight Variant Performance Results

Cliffs Variant Robustness Results

Smooth Variant Robustness Results

Weight Variant Robustness Results

Cliffs Variant Satisficability Results

Smooth Variant Satisficability Results

Weight Variant Satisficability Results

Crossover Results Cliffs Variant

Crossover Results Smooth Variant

Crossover Results Weight Variant

Why Bäck’s Mechanism?

● Does not require external knowledge
● Allows the GA to choose any mutation rate
● Allows control between exploration and exploitation; it does not force one or the other
● First-order approximation of self-adaptive mutation mechanisms in haploid organisms
● Bäck showed self-adaptation to be successful

SA Results Smooth Variant (1/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          Avg.     Std. Dev.   Avg.     Std. Dev.   Avg.     Std. Dev.
Fixed 0.001         .9183    .0673       .9162    .0686       .9162    .0644
Fixed 0.0005        .6663    .0782       .6785    .0577       .6520    .0891
Fixed 0.0001        .3073    .0521       .3038    .0548       .2965    .0633
Self 0-1            .1836    .0495       .1932    .0558       .1785    .0509
Self 0.0001-0.01    .3932    .0686       .4293    .0574       .4461    .0807
Seeded              .4194    .0758       .4339    .0566       .4309    .0762
No Crossover        .4030    .0611       .3974    .0686       .3990    .0822

SA Results Smooth Variant (2/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          900 Avg.   900 Std. Dev.   1801 Avg.   1801 Std. Dev.
Fixed 0.001         .7597      .1029           .7681       .0621
Fixed 0.0005        .6098      .1011           .6517       .0577
Fixed 0.0001        .3086      .0442           .3164       .0506
Self 0-1            .1794      .0550           .1998       .0476
Self 0.0001-0.01    .4080      .0517           .4243       .0662
Seeded              .3738      .0734           .4185       .0673
No Crossover        .3588      .1002           .3775       .1017

Mutation Rates - Smooth Variant

SA Results Weight Variant (1/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          Avg.     Std. Dev.   Avg.     Std. Dev.   Avg.     Std. Dev.
Fixed 0.001         .8390    .0348       .8315    .0637       .8433    .0518
Fixed 0.0005        .7418    .0632       .7496    .0694       .7515    .0665
Fixed 0.0001        .4480    .0689       .4425    .0706       .4503    .0734
Self 0-1            .3348    .0685       .3522    .0802       .3502    .0990
Self 0.0001-0.01    .5997    .0987       .5830    .0758       .6046    .0727
Seeded              .5420    .0819       .5705    .0655       .5528    .0821
No Crossover        .4886    .1013       .5046    .0946       .5255    .0893

SA Results Weight Variant (2/2)
Generation 1800, Best Individual Over 30 Runs

Experiment          900 Avg.   900 Std. Dev.   1801 Avg.   1801 Std. Dev.
Fixed 0.001         .8350      .0601           .8188       .0584
Fixed 0.0005        .7597      .0814           .7513       .0773
Fixed 0.0001        .4585      .0699           .4446       .0755
Self 0-1            .3545      .1053           .3556       .0992
Self 0.0001-0.01    .6010      .0788           .6024       .0728
Seeded              .5515      .0731           .5368       .0623
No Crossover        .5368      .0940           .5308       .0918

Mutation Rates - Weight Variant

Discussion of Results

● Local optima drive mutation rates down
● It is hard to recover from low mutation rates
● Why was self-adaptation successful in Bäck’s experiments?
  – Most of his experiments were unimodal
  – Strong selection pressure
  – Bäck’s problems are amenable to hill-climbing

Additional Experiments

Mechanisms for increasing GA performance in dynamic environments:
● Adapted Evolution – an external function based on fitness or diversity controls evolutionary parameters
● Meta-GAs – one GA controls the evolutionary parameters for another GA
● Multiploidy – the use of dominant and recessive genes to maintain a memory of previous solutions
● Niching – increasing diversity by decreasing the fitness of similar solutions

Evolutionary Biology

● Coevolution and multiple species evolution
● Exploration versus Exploitation
● Generalists versus Specialists
● Phenotypic Plasticity
● Baldwinian Learning
● Evolution of Evolvability

Autonomous Agent Control

● How do you create an autonomous agent which can adapt to changes in its environment?

● What if other agents are coevolving and interfacing with your agent?

● Is it possible to automatically determine when to switch strategies?

● Examples: robot control, trading agents, personal assistant agents

Computational Economics

● Many of the same questions as Evolutionary Biology
  – Exploration vs. Exploitation
  – Generalists vs. Specialists
● Many of the same questions as Autonomous Agent Control
  – Coevolution of agents
  – When to switch strategies
● Market behavior and failures

Future and Other Work in Dynamic Environments

● Test suite development and the behavior of a simple GA in a dynamic environment (EvoStoc-2005)

● Diversity of solutions in dynamic environments (EvoDop-2005)

● Explore other ways to balance exploration and exploitation

– Hypermutation, Multiploidy, and Meta-GAs

● Schematic Analysis

● Analysis of the sl-hdf landscape

● Co-evolution of sl-hdf's with solutions

● Combining GAs with ABMs to model ecosystems

Standard Explorations

● Vary t_δ, the time between shakes
● See what effect this has on the performance of the best individuals
● In the past we explored a simple GA:
  – Fixed mutation of one bit out of a thousand (0.001)
  – One-point crossover creates 70% of the new population
  – Cloning creates the other 30%
  – Population size of 1000
  – Selection using a tournament of size 3


Exploration and Exploitation in Dynamic Environments

● An ideal system might not have the same behavior as a static system
  – Increase exploration during times of change
  – Increase exploitation during times of quiescence
● The mutation rate is one control of this behavior
● Thus a dynamic mutation rate might allow the system to better adapt to changes
● Many techniques: hypermutation, variable local search, and random immigrants

Additional Experiments

Mechanisms for increasing GA performance in dynamic environments:
● Individual Self-Adaptation – individuals can adjust their own mutation rates
● Adapted Evolution – an external function based on fitness or diversity controls evolutionary parameters
● Meta-GAs – one GA controls the evolutionary parameters for another GA
● Multiploidy – the use of dominant and recessive genes to maintain a memory of previous solutions
● Niching – increasing diversity by decreasing the fitness of similar solutions


Different Ways To Examine Behavior

● Extreme vs. Holistic behavior – the best / worst a system can do vs. the behavior of the whole population

● Within vs. Across Runs – Are we more interested in how well the system will do within a particular run or across many runs?

● Fitness vs. Composition related – Fitness is an indication of how well an individual is doing in the population, but one could also measure characteristics of the population that are not related to fitness

Discussion of Performance Results

● A GA operating on a regular changing landscape will initially underperform but will eventually outperform a GA operating on a static landscape

● Working Hypothesis: The static landscape results in premature convergence, whereas shaking the landscape forces the GA to explore multiple solution sub-spaces

● The average performance falls farther after a shake than the best performance; this is because the loss in best performance is mitigated by individuals that perform well in the new environment

Rand, W. and R. Riolo, “Shaky Ladders, Hyperplane-Defined Functions and Genetic Algorithms: Systematic Controlled Observation in Dynamic Environments”, EvoStoc-2005

Discussion of Satisficability Results

● Both the static environment and the regularly changing environment appear to operate in a similar fashion despite the better overall performance of the changing environment

● Working Hypothesis: most basic building blocks are found at roughly the same rate in both; the dynamic environment is better at finding intermediate building blocks

● Average Satisficability closely mirrors Best, despite the fact that Average is within instead of across runs

Discussion of Robustness Results

● The static environment consistently maintains robustness, except for a few deleterious mutations

● The robustness measure presented here indicates that changes in fitness are a good indicator of environmental change

● The greatest change in scores occurs in the middle generations, because the GA is concentrating on exploring intermediate schemata

Conclusions

● These measures help to provide a better understanding of how the GA works in dynamic environments
● By using these measurements in combination with each other, a greater understanding can be gained than by exploring any one of them individually
● This paper is one step toward understanding the behavior of GAs through systematic, controlled experiments

Future Work

● Further explorations of the parameter space of the sl-hdfs (# of elementary schemata, string length)

● Investigations into the difficulty level of the sl-hdf's

● Examining diversity of schemata present in the populations in each run