Game Programming (Game AI Technologies)
Spring 2011
What is AI?
■ Artificial Intelligence (AI)
 The computer simulation of intelligent behavior
■ What is intelligence?
• Behavior that exhibits great ability to adapt and solve complex problems
− May produce results that are completely unrealistic
• Behavior that is close to that of a human
− Humans are not always brilliant
■ Game AIs are not just "problem-solving robots" but lifelike entities
Roles of AI in Games
■ Opponents
■ Teammates
■ Strategic Opponents
■ Support Characters
■ Autonomous Characters
■ Commentators
■ Camera Control
■ Plot and Story Guides/Directors
AI Entity
■ AI entity
 Agent
• A virtual character in the game world
• Ex) enemies, NPCs, sidekicks, an animated cow in a field
• Structured in a way similar to our brain
 Abstract controller
• A structure quite similar to the agent, but each subsystem works on a higher level than an individual
• Ex) in a strategy game, the entity that acts as the master controller of the CPU side of the battle
Goals of AI action game opponent
■ Provide a challenging opponent
 Not always as challenging as a human (Ex: Quake monsters). In what ways should it be subhuman?
■ Not too challenging
 Should not be superhuman in accuracy, precision, sensing, …
■ Should not be too predictable
 Through randomness
 Through multiple, fine-grained responses
 Through adaptation and learning
Structure of an Intelligent Agent
■ Sensing: perceive features of the environment.
■ Thinking: decide what action to take to achieve its goals, given the current situation and its knowledge.
■ Acting: doing things in the world.
Thinking has to make up for limitations in sensing and acting.
The more accurate the models of sensing and acting, the more realistic the behavior.
Simple Behavior
■ Random motion
 Just roll the dice to pick when and which direction to move
■ Simple pattern
 Follow invisible tracks (Ex: Galaxians)
■ Tracking
 Pure pursuit
• Move toward the agent's current position
• Ex) heat-seeking missile
 Lead pursuit
• Move to a position in front of the agent
 Collision
• Move toward where the agent will be
■ Evasive: the opposite of any tracking
■ Delay in sensing gives different effects
Moving in the World: Path Following
■ Just try moving toward the goal
 [Figure: a straight Source → Goal line]
■ Problem
 [Figure: the straight Source → Goal line is blocked by an obstacle]
■ Solution: create avoidance regions
 [Figure: the Source → Goal path routed around the avoidance regions]
Path Finding
■ A* algorithm: finds a shortest path between two points on a map
• Start with the base node
• Expand nodes using the valid moves
• Each expanded node is assigned a "score"
• Iterate this process until the paths prove invalid or one of them reaches the target
■ The overall score for a state: f(node) = g(node) + h(node)
• f(node): the total score (the sum of g and h)
• g(node): the cost to get from the starting node to this node (measured); a component that accounts for our past behavior
• h(node): the estimated cost to get from this node to the goal (estimated); a component that estimates our future behavior
 The Manhattan distance (4-connected game world)
− Manhattan(p1, p2) = |p2.x − p1.x| + |p2.z − p1.z|
− Ex) p1 = (2,2), p2 = (1,0) → h = 3
 Euclidean distance (8-connected game world)
− distance(p1, p2) = sqrt((p1.x − p2.x)² + (p1.z − p2.z)²)
Game AI
■ Game AI and the A* algorithm
• Quite smart, but not very realistic
• Need to keep a reasonable degree of imperfection built into the AI design
 A balance between
• behavior that is highly evolved and sophisticated
• and behavior that is more or less human
 Game AIs are not just "problem-solving robots" but lifelike entities
Comparison between A* and a human path finder
Execution Flow of an AI Engine
Sense → Think → Act, in a loop embedded in the Environment
■ "Think" techniques:
• Finite-state machines
• Decision trees
• Neural nets
• Fuzzy logic
• Rule-based systems
• Planning systems
■ Sensing can be extremely expensive
■ Acting can be modulated by "think"
Structure of an AI System
■ Sensing the world (the slowest part of the AI)
 All AIs need to be aware of their surroundings
• and use that information in the reason/analysis phase
 What is sensed, and how, largely depends on the type of game
• Ex) an individual enemy in Quake
− Where is the player and where is he looking?
− What is the geometry of the surroundings?
− Which weapons am I using and which is he using?
• Ex) master controller in a strategy game (Age of Empires)
− What is the balance of power in each subarea of the map?
− How much of each type of resource do I have?
− What is the breakdown of unit types: infantry, cavalry, …?
− What is my status in terms of the technology tree?
− What is the geometry of the game world?
■ Memory
 Storing AI data is often complex
• The concepts being stored are not straightforward
 In an individual-level AI (the lesser problem)
• Store points and orientations
• Use numeric values to depict the "state" the AI is in
 In a master controller
• These data structures are nontrivial
• Ex) how do we store the balance of power, or a path?
• Case-by-case solutions
 Can it remember prior events? For how long? How does it forget?
■ Analysis/Reasoning core (= Think)
 Uses the sensory data and the memory to analyze the current configuration
 Makes a decision
• Can be slow or fast depending on the number of alternatives and the amount of sensory data to evaluate
• Slow process: chess playing
• Fast process: moving a character in Quake
■ Action/Output system
 Performs actions and responses
 Many games exaggerate the action system
• Personality and perceived intelligence are enhanced
− The character's intentions are obvious
− Personality is conveyed
• Ex) Super Mario Bros (creatures with similar AIs)
Game Programming (Action Oriented AI)
Spring 2011
Action Oriented AI
■ Action-oriented AI
 Overview of AI methods used in fast-paced action games
 Action: intelligent activity that involves changes of behavior at fast speed
• Locomotion behaviors
− Ex) a character runs in Mario
• Simple aggression or defense
− Ex) enemies shooting or ducking in Quake
Object Tracking
■ Object tracking
 Eye contact (aiming at a target)
• Given an orientation and a point in space, compute the best rotation to align the orientation with the point
 Solutions
• 2D: hemiplane test
• 3D: semispaces
Eye Contact: 2D Hemiplane Test
■ 2D hemiplane test
 Top-down view, overseeing the X,Z plane

point mypos;   // position of the AI
float myyaw;   // yaw angle in radians; top-down viewpoint assumed
point hispos;  // position of the moving target

 The line formed by mypos and myyaw:
X = mypos.x + cos(myyaw) * t
Z = mypos.z + sin(myyaw) * t

 Solving for t:
(X − mypos.x)/cos(myyaw) − (Z − mypos.z)/sin(myyaw) = 0

 Which side is the target on?
F(X,Z) = (X − mypos.x)/cos(myyaw) − (Z − mypos.z)/sin(myyaw)
F(X,Z) > 0 : it lies to one side of the line
F(X,Z) = 0 : it lies exactly on the line
F(X,Z) < 0 : it lies on the opposite side

 Cost: 3 subtractions, 2 divisions, 1 comparison
3D version: Semispaces
■ 3D version: semispaces
 Need to work with both the pitch and yaw angles
 The parametrization of a unit sphere:
x = cos(pitch) cos(yaw)
y = sin(pitch)
z = cos(pitch) sin(yaw)
 Use two planes to run both the left-right and the above-below test
 Which side?

if (vertplane.eval(target) > 0) yaw -= 0.01;
else yaw += 0.01;
if (horzplane.eval(target) > 0) pitch -= 0.01;
else pitch += 0.01;
Chasing
■ Chasing
 Moving forward while keeping eye contact with the target
■ Chase 2D, constant speed
 Success depends on the relationship between our speed, the target's speed, and our turning ability

void chase(point mypos, float myyaw, point hispos)
{
    reaim(mypos, myyaw, hispos);              // aiming...
    mypos.x = mypos.x + cos(myyaw) * speed;   // ...and advancing
    mypos.z = mypos.z + sin(myyaw) * speed;
}
■ Predictive chasing
 Do not aim at the target directly; try to anticipate his movement and guess his intentions
 Three-step approach (instead of aiming and advancing):
1. Calculate a projected position
2. Aim at that position
3. Advance

void chase(point mypos, float myyaw, point hispos, point prevpos)
{
    point vec = hispos - prevpos;      // vec is the 1-frame position difference
    vec = vec * N;                     // we project N frames into the future
    point futurepos = hispos + vec;    // and build the future projection
    reaim(mypos, myyaw, futurepos);
    mypos.x = mypos.x + cos(myyaw) * speed;
    mypos.z = mypos.z + sin(myyaw) * speed;
}
Evasion
■ Evasion
 The opposite of chasing: instead of trying to decrease the distance to the target, try to maximize it

void evade(point mypos, float myyaw, point hispos)
{
    reaim(mypos, myyaw, hispos);   // with the turn direction negated
    mypos.x = mypos.x + cos(myyaw) * speed;
    mypos.z = mypos.z + sin(myyaw) * speed;
}
Patrolling
■ Patrolling
 Store a set of waypoints that determine the path followed by the AI
 Two configurations
• Cyclic: W1 W2 W3 W4 W5 W1 W2 W3 W4 W5 …
• Ping-pong: W1 W2 W3 W4 W5 W4 W3 W2 W1 W2 …
■ Implementation: a two-state finite-state machine
• Advance toward the next waypoint
• Update the next waypoint
■ Adding a third state: chase behavior
• Triggered by using a view cone for the AI
• Ex) Commandos
Hiding and Taking Cover
■ Hiding and taking cover
 AIs run for cover and remain unnoticed by the player
 Taking cover
• Find a good hiding spot
• Move there quickly (= chase routine)
 Three data items needed
• Position and orientation of the player
• Our position and orientation
• The geometry of the level
 Actual algorithm
• Find the object closest to the AI's location
− using the scene graph
• Compute a hiding position behind that object
− Shoot one ray from the player position to the barycenter of the object
− Compute a point along the ray that is actually behind the object
• Ex) Medal of Honor
Shooting
■ Infinite-speed targeting
• Projectile speed is very high compared to the speed of the target (travel time is virtually zero)
• The weapon is aligned with the target at the moment of shooting
• Abusing an infinite-speed weapon (Ex: laser gun) can unbalance your game
 Solutions
− Make the firing rate very low
− Make the ammunition limited
− Make the weapon hard to get
■ Real-world targeting
 Shoots projectiles at a limited speed
 Shooting a fast-moving target is harder than shooting one that stands still
 Finite-speed devices (sniper-type AI)
• The still shooter
− Only shoots when the enemy has been standing still for a certain period of time
− Disadvantage: the shooter will have very few opportunities to actually shoot
• The tracker
− Shoots at moving targets
− Compute the distance from the sniper to the target
− Use the projectile velocity
− Predict where the target will be
 Single-shot firing devices
■ Machine guns
 Fast firing rates at the cost of inferior precision
 Historically targeted not people but areas
• Hardly ever moved
• Did not have a lot of autonomy
• Fired in short bursts
Putting It All Together
■ Putting it all together
 Blend these simple behaviors into an AI system
• Parallel automata
• AI synchronization
 Parallel automata
• The locomotion AI controls locomotion
− Chasing, evading, patrolling, hiding
• The gunner AI evaluates firing opportunities
− Targeting and shooting down enemies
 AI synchronization
• Using shared memory
• Ex) groups of enemies (Half-Life)
• Synchronization becomes more complex as interaction grows more sophisticated
− Better to use artificial-life techniques
Action Based Game
■ Platform games
 Platform / jump'n'run games
• Ex) Mario or Jak and Daxter
 AIs are not very complex
• Easily coded using finite-state machines
 Chasers
• Get activated whenever the player is within a certain range
 The turret
• A fixed entity that rotates and, when aiming at the player, shoots at him
 Examples
• Gorilla (in Jak and Daxter)
− Chases the player and kills on contact
− That alone makes the game too boring, so a chest-hitting routine is added
− Weak point: the player can attack the gorilla whenever he is hitting his chest
• Bosses
− Not much different from a basic chaser
− Complex, long-spanning choreographies
− Ex) Jak and Daxter: the boss flower is fixed to the ground and spawns small spiders that become chasers
■ Shooters
 A bit more complex than platformers, because the illusion of realistic combat must be conveyed
 Usually built around FSMs
 Need a logical way to lay out the scenario
• Use a graph structure with a graph node for every room/zone
 Group behavior (Ex: Half-Life)
■ Fighting games
 State machine
• States: attack, stand, retreat, advance, block, duck, jump
• Compute the distance to the player
• Decide which behavior to trigger
• Add a timer so the AI does not stay in the same state forever
 Predictive AI
• The enemy learns to detect action sequences as he fights us
• FSM plus a correlation approach
 Higher-difficulty enemies
• State-space search
− Build a graph with all the movement possibilities
− Search for the optimal move sequence
• Tabulated representation
− A table with simple attack moves and their associated damage
− Ex) distance = 3 meters; high kick, jump, punch; damage = 5
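A tabulated representation might look like the sketch below. The rows, move names, and damage values are made up for illustration, except the slide's own "3 meters / high kick, jump, punch / damage 5" example:

```c
#include <assert.h>
#include <string.h>

/* Each row pairs a distance band with an attack sequence and its damage. */
typedef struct {
    double max_distance;   /* row applies when the target is within this range */
    const char *sequence;  /* attack moves to trigger */
    int damage;
} move_row;

static const move_row table[] = {
    { 1.0, "punch",                  2 },
    { 2.0, "low kick",               3 },
    { 3.0, "high kick, jump, punch", 5 },
};

/* Pick the first (closest-band) row that covers the target distance. */
const move_row *pick_move(double distance) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (distance <= table[i].max_distance)
            return &table[i];
    return NULL;  /* out of range: the AI should advance instead of attacking */
}
```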
■ Racing titles
 Implemented with rule-based systems
• If we are behind another vehicle and moving faster → advance
• If we are in front of another vehicle and moving slower → block his way
• Else → follow the track
 Advanced behavior
• Use a prerecorded trajectory
• Plug-in: analyze the track and generate the ideal trajectory
Game AI (Finite State Machines)
Finite State Machines
■ Finite State Machines (FSMs)
 Also called deterministic finite automata (DFA), state machines, or automata
 Definition
• A set of states
− Represent the scenarios or configurations the AI can be immersed in
• A set of transitions
− Conditions that connect two states in a directed way
 Advantages
• Intuitive to understand, easy to code
• Perform well, and represent a broad range of behaviors
 Example
1. A dog is HUNGRY
2. If you give him a bone, he will not be hungry anymore
3. He'll be QUIET after eating the bone
4. And he'll become hungry after four hours of being quiet
 States (circles): 1 and 3; transitions (lines): 2 and 4

HUNGRY → QUIET on "give him a bone"
QUIET → HUNGRY on "after 4 hours"
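The dog FSM above is small enough to code directly. A minimal sketch:

```c
#include <assert.h>

/* Two states, two transitions, exactly as in the example. */
typedef enum { HUNGRY, QUIET } dog_state;
typedef enum { GIVE_BONE, FOUR_HOURS_PASS } dog_event;

dog_state step(dog_state s, dog_event e) {
    switch (s) {
    case HUNGRY: return (e == GIVE_BONE) ? QUIET : HUNGRY;
    case QUIET:  return (e == FOUR_HOURS_PASS) ? HUNGRY : QUIET;
    }
    return s;  /* unreachable; keeps some compilers quiet */
}
```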
Example FSM
 Events: E = Enemy Seen, S = Sound Heard, D = Die
 States (with their defining conditions): Spawn (D), Wander (-E,-S,-D), Attack (E,-D), Chase (S,-E,-D)
 [State diagram: transitions fire when these events change truth value, e.g. Wander → Attack on E, Wander → Chase on S, Attack → Wander on -E, Chase → Wander on -S, any state → Spawn on D]
 An action (callback) is performed when a transition occurs
 Problem: no transition from Attack to Chase
Example FSM – Better
 Events: E = Enemy Seen, S = Sound Heard, D = Die
 Fix: split Attack into Attack (E,-D,-S) and Attack-S (E,-D,S), so hearing a sound while attacking is represented and transitions between attacking and chasing exist
 [State diagram: Spawn (D), Wander (-E,-S,-D), Attack (E,-D,-S), Attack-S (E,-D,S), Chase (S,-E,-D)]
Example FSM with Retreat
 Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health
 [State diagram: Spawn (D), Wander (-E,-D,-S,-L), Wander-L (-E,-D,-S,L), Attack-E (E,-D,-S,-L), Attack-ES (E,-D,S,-L), Retreat-E (E,-D,-S,L), Retreat-S (-E,-D,S,L), Retreat-ES (E,-D,S,L), Chase (-E,-D,S,-L)]
 Each feature with N values can require N times as many states
Hierarchical FSM
■ Expand a state into its own FSM
 [Diagram: top-level FSM (Spawn, Wander, Attack, Chase, Die) with transitions on S/-S and E/-E; the Wander state expands into a sub-FSM: Start → Turn Right → Go-through Door → Pick-up Powerup]
Non-Deterministic Hierarchical FSM (Markov Model)
 [Diagram: top level runs Start → Wander → Attack → Die, returning to Wander on "no enemy"; the Attack state expands into a sub-FSM whose transitions carry probabilities (.3, .3, .4) among Approach, Aim & Jump & Shoot, Aim & Slide Left & Shoot, and Aim & Slide Right & Shoot]
FSM Evaluation
■ Advantages:
 Very fast: one array access
 Can be compiled into a compact data structure
• Dynamic memory: current state
• Static memory: state diagram (array implementation)
 Can create tools so non-programmers can build behavior
 Non-deterministic FSMs can make behavior unpredictable
■ Disadvantages:
 Number of states can grow very fast
• Exponentially with the number of events: s = 2^e
 Number of arcs can grow even faster: a = s²
 Hard to encode complex memories
Decision Trees
Classification Problems
■ Task: classify "objects" as one of a discrete set of "categories"
■ Input: a set of facts about the object to be classified
 Is today sunny, overcast, or rainy?
 Is the temperature today hot, mild, or cold?
 Is the humidity today high or normal?
■ Output: the category this object fits into
 Should I play tennis today or not?
 Put today into the play-tennis category or the no-tennis category
Example Problem
■ Classify a day as a suitable day to play tennis
 Facts about any given day include:
• Outlook = <Sunny, Overcast, Rain>
• Temperature = <Hot, Mild, Cool>
• Humidity = <High, Normal>
• Wind = <Weak, Strong>
 Output categories:
• PlayTennis = Yes
• PlayTennis = No
■ Outlook=Overcast, Temp=Mild, Humidity=Normal, Wind=Weak => PlayTennis=Yes
■ Outlook=Rain, Temp=Cool, Humidity=High, Wind=Strong => PlayTennis=No
■ Outlook=Sunny, Temp=Hot, Humidity=High, Wind=Weak => PlayTennis=No
Classifying with a Decision Tree
 [Tree: the root tests Outlook?
  Sunny → No
  Overcast → Temp? (Hot → No, Mild → Yes, Cool → Yes)
  Rain → Wind? (Weak → Yes, Strong → No)]
Decision Trees
■ Nodes represent attribute tests
 One child for each possible value of the attribute
■ Leaves represent classifications
■ Classify by descending from root to a leaf
 At the root, apply the attribute test associated with the root
 Descend the branch corresponding to the instance's value
 Repeat for the subtree rooted at the new node
 When a leaf is reached, return the classification of that leaf
■ A decision tree is a disjunction of conjunctions of constraints on the attribute values of an instance
Example FSM with Retreat (revisited)
 Events: E = Enemy, S = Sound, D = Die, L = Low Health
 [Same state diagram as the earlier retreat FSM]
 Each new feature can double the number of states
Decision Tree for Quake
■ Input sensors: E=<t,f>, L=<t,f>, S=<t,f>, D=<t,f>
■ Categories (actions): Attack, Retreat, Chase, Spawn, Wander
 [Tree: the root tests D?
  t → Spawn
  f → E?
   E? t → L? (t → Retreat, f → Attack)
   E? f → S? (t → L? (t → Retreat, f → Chase), f → Wander)]
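This tree is small enough to hard-code as nested tests. A sketch of the tree as reconstructed above, with the four sensors as plain booleans:

```c
#include <assert.h>
#include <string.h>

/* Walk the Quake decision tree: D, then E, then S, with L as the
   low-health tiebreaker on the attacking and chasing branches. */
const char *decide(int D, int E, int S, int L) {
    if (D) return "Spawn";
    if (E) return L ? "Retreat" : "Attack";
    if (S) return L ? "Retreat" : "Chase";
    return "Wander";
}
```

Note how the tree needs far fewer nodes than the equivalent flat FSM above, where each new feature can double the state count.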
Decision Tree Evaluation
■ Advantages
 Simpler, more compact representation
 Easy to create and understand
• Can also be represented as rules
 Decision trees can be learned
■ Disadvantages
 A decision tree engine requires more coding than an FSM
 Learning needs as many examples as possible
 Higher CPU cost
 Learned decision trees may contain errors
Rule-based Systems (Production Systems)
Rule Systems
■ Some behaviors are not easy to describe in terms of states and transitions
■ FSMs are well suited for behaviors that are
 Local in nature
• While we are in a certain state, only a few outcomes are possible
 Sequential in nature
• We carry out tasks after other tasks, depending on certain conditions
 Ex) virtual dog
• If there's a bone nearby and I'm hungry, I'll eat it
• If I'm hungry (but there is no bone around), I'll wander
• If I'm not hungry, but I'm sleepy, I will sleep
• If I'm not hungry and not sleepy, I will bark and walk
 As an FSM (states Eat, Wander, Sleep, Bark & Walk) this is not local: all states can yield any other state, and there are no visible sequences
 Prioritized, global behavior → rule systems
■ Rule systems
 Provide a global model of behavior; better when we need to model behavior based on guidelines
 A set of rules drives the AI's behavior
 Rule: condition → action
• LHS of the rule: condition
• RHS of the rule: action
 Ex) virtual dog
• (Hungry) && (Bone nearby) → Eat it
• (Hungry) && (No bone nearby) → Wander
• (Not hungry) && (Sleepy) → Sleep
• (Not hungry) && (Not sleepy) → Bark and walk
 Priority: a rule closer to the top of the rule list has precedence over a rule closer to the bottom
Complete Picture
 [Execution cycle: Sensors write changes to working memory → Match (find the rule instantiations that match working memory) → Conflict Resolution (select one rule) → Act (the selected rule fires, producing actions and further changes to working memory)]
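The match / conflict-resolution / act cycle can be sketched for the virtual dog rules. In this minimal version (struct and function names are my own), conflict resolution is simply priority order: the first matching rule in the list fires:

```c
#include <assert.h>
#include <string.h>

/* Working memory: the facts the sensors have written. */
typedef struct {
    int hungry, bone_nearby, sleepy;
} memory;

/* A rule: LHS condition over working memory, RHS action. */
typedef struct {
    int (*cond)(const memory *);
    const char *action;
} rule;

int r1(const memory *m) { return m->hungry && m->bone_nearby; }
int r2(const memory *m) { return m->hungry; }
int r3(const memory *m) { return !m->hungry && m->sleepy; }
int r4(const memory *m) { return !m->hungry && !m->sleepy; }

const rule rules[] = {
    { r1, "eat" }, { r2, "wander" }, { r3, "sleep" }, { r4, "bark and walk" },
};

/* Match in priority order; the first rule whose LHS holds fires. */
const char *act(const memory *m) {
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (rules[i].cond(m))
            return rules[i].action;
    return "idle";  /* no rule matched */
}
```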
 Ex) the AI for a soldier in a large squadron
• If in contact with an enemy → combat
• If an enemy is closer than 10 meters and I'm stronger than him → chase him
• If an enemy is closer than 10 meters → escape him
• If we have a command from our leader pending → execute it
• If a friendly soldier is fighting and I have a ranged weapon → shoot at the enemy
• Stay still (TRUE → stay still)
■ Coding an RS for action games
 A direct mapping of the rule set, in priority order, to an "if" tree (if-then-else statements):

if (contact with an enemy)
    combat;
else if (enemy closer than 10 meters) {
    if (stronger than him)
        chase him;
    else
        escape him;
}
else if (command from our leader pending)
    execute it;
else if (friendly soldier fighting && I have a ranged weapon)
    shoot at the enemy;
else
    stay still;
Rule-based System Evaluation
■ Advantages
 Corresponds to the way people often think of knowledge
 Very expressive
 Modular knowledge
• Easy to write and debug compared to decision trees
• More concise than an FSM
■ Disadvantages
 Can be memory intensive
 Can be computationally intensive
 Sometimes difficult to debug
Planning
Planning
■ FSMs and rule sets
 Very useful for modeling simple behavior
 What about more complex behaviors? Ex)
• Solve a puzzle (chess)
• Decide the weakest spot in a battlefield to attack
• Select the best route to attack N ground targets in a flight
• Trace a path from A to B with obstacles in between
 These problems require
• Thinking in more complex terms than numbers
• Planning long-term strategies
What is Planning?
■ Plan: a sequence of actions to get from the current situation to a goal situation
 Higher-level mission planning
 Path planning
■ Planning: generating a plan
 Initial state: the state the agent starts in or is currently in
 Goal test: is this state a goal state?
 Operators: every action the agent can perform
• Also need to know how each action changes the current state
Two Approaches
■ State-space search
 Search through the possible future states that can be reached by applying different sequences of operators
• Initial state = current state of the world
• Operators = actions that modify the world state
• Goal test = is this state a goal state?
■ Plan-space search
 Search through possible plans by applying operators that modify plans
• Initial state = empty plan (do nothing)
• Operators = add an action, remove an action, rearrange actions
• Goal test = does this plan achieve the goal?
 [Diagram contrasting state-space search and plan-space search]
Planning and Problem Solving
■ State-space search
 Exploring candidate transitions and analyzing their suitability for reaching our goals
 Search the state space to find ways to move from the initial conditions to our goals
 The key difference from an FSM: the state machine would have to be dynamic, not static
 What should I do? (candidate actions: Shoot?, Pickup health-pak?, Pickup Railgun?)
• Self.current-health = 20, Enemy.estimated-health = 50, Self.current-weapon = blaster
• Powerup.type = health-pak, Powerup.available = yes
• Powerup.type = Railgun, Powerup.available = yes
 One step: Pickup Railgun
• Self.current-health = 10, Enemy.estimated-health = 50, Self.current-weapon = Railgun
• Powerup.type = health-pak, Powerup.available = yes
• Powerup.type = Railgun, Powerup.available = no
 One step: Shoot
• Self.current-health = 10, Enemy.estimated-health = 40, Self.current-weapon = blaster
• Powerup.type = health-pak, Powerup.available = yes
• Powerup.type = Railgun, Powerup.available = yes
 One step: Pickup health-pak
• Self.current-health = 90, Enemy.estimated-health = 50, Self.current-weapon = blaster
• Powerup.type = health-pak, Powerup.available = no
• Powerup.type = Railgun, Powerup.available = yes
 Two steps: Pickup health-pak, then Shoot
• Self.current-health = 80, Enemy.estimated-health = 40, Self.current-weapon = blaster
• Powerup.type = health-pak, Powerup.available = no
• Powerup.type = Railgun, Powerup.available = yes
 Three-step look-ahead: Pickup health-pak, Pickup Railgun, then Shoot
• Self.current-health = 100, Enemy.estimated-health = 0, Self.current-weapon = Railgun
• Powerup.type = health-pak, Powerup.available = no
• Powerup.type = Railgun, Powerup.available = no
 Opponent: new problems (the enemy is choosing among the same candidate actions)
• Self.current-health = 20, Enemy.estimated-health = 50, Self.current-weapon = blaster, Enemy.current-weapon = blaster
• Powerup.type = health-pak, Powerup.available = yes
• Powerup.type = Railgun, Powerup.available = yes
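The look-ahead above can be sketched as a depth-limited state-space search. This is a toy version: the state fields, operator effects, and damage numbers are illustrative and only loosely follow the slide values:

```c
#include <assert.h>

/* State: our health, enemy health, whether we hold the Railgun. */
typedef struct { int self, enemy, railgun; } state;

enum { SHOOT, PICKUP_RAILGUN, NOPS };

/* Operators modify the world state; the enemy hits us each step. */
state apply(state s, int op) {
    s.self -= 10;                       /* return fire every step */
    if (op == SHOOT)
        s.enemy -= s.railgun ? 50 : 10; /* Railgun does far more damage */
    else if (op == PICKUP_RAILGUN)
        s.railgun = 1;
    return s;
}

/* Goal test: enemy down while we are still alive. */
int goal(state s) { return s.enemy <= 0 && s.self > 0; }

/* Depth-limited search over operator sequences; writes the plan into
   'plan' and returns its length, or -1 if no plan fits in 'depth'. */
int search(state s, int depth, int *plan) {
    if (goal(s)) return 0;
    if (depth == 0 || s.self <= 0) return -1;
    for (int op = 0; op < NOPS; op++) {
        int n = search(apply(s, op), depth - 1, plan + 1);
        if (n >= 0) { plan[0] = op; return n + 1; }
    }
    return -1;
}
```

From the slide's starting position (healthy, enemy at 50, no Railgun), a two-step search finds the pickup-then-shoot plan rather than plinking away with the blaster.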
Planning Evaluation
■ Advantages
 Less predictable behavior
 Can handle unexpected situations
■ Disadvantages
 Planning takes processor time
 Planning takes memory
 Needs simple but accurate internal representations
Biology-Inspired AI
■ Biology-inspired AI
 The previous AI algorithms are computational structures, not biological concepts
 These techniques have their roots not in computer science but in biology and neurology
■ Genetic programming
 Inspired by genetic theory (Mendel) and evolution theory (Charles Darwin)
• The fittest survive
 Each generation can create a new generation
• Crossover: the offspring's DNA code consists of a combination of the parents' respective DNA codes
• Mutation: a minor degree of random mutation
Genetic Operators
■ Crossover
 Select two points at random
 Swap genes between the two points
■ Mutate
 Small probability of randomly changing each part of a gene
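The two operators might be sketched as follows. The binary gene representation, chromosome length, and mutation rate are illustrative choices of mine:

```c
#include <assert.h>
#include <stdlib.h>

#define GENES 8

/* Two-point crossover: swap the genes in the segment [p1, p2)
   between the two parent chromosomes, in place. */
void crossover(int a[GENES], int b[GENES], int p1, int p2) {
    for (int i = p1; i < p2; i++) {
        int tmp = a[i];
        a[i] = b[i];
        b[i] = tmp;
    }
}

/* Mutation: flip each binary gene with a small probability. */
void mutate(int g[GENES], double rate) {
    for (int i = 0; i < GENES; i++)
        if ((double)rand() / RAND_MAX < rate)
            g[i] = !g[i];
}
```

In a full GA the two random cut points would be drawn per mating pair, and the fitness function would decide which offspring survive into the next generation.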
Example: Competition
■ Two virtual creatures compete to gain control of a single cube
 The cube is placed in the center of the world
 The creatures start at diametrically opposite positions (180° apart)
 They start on the ground plane, behind a diagonal plane
• The starting point is where the ground plane and the diagonal plane meet
 After a fixed time, the creature with more control over the cube wins
Example: Result
 [Images: competing with the same phenotype; getting close to the green box wins; evolving longer limbs; using a limb to keep the competitor away from the cube]
Genetic Algorithm Evaluation
■ Advantages
 Powerful optimization technique
 Can learn novel solutions
 No examples required to learn
■ Disadvantages
 The fitness function must be carefully chosen
 Evolution takes lots of processing
• Can't really run a GA during gameplay
 Solutions may or may not be understandable
Why aren't AIs better?
■ They don't have realistic models of sensing
■ Not enough processing time to plan ahead
■ The space of possible actions is too large to search efficiently (too high a branching factor)
■ A single evaluation function or predefined subgoals makes them predictable
 They only have one way of doing things
■ Too hard to encode all the relevant knowledge
■ Too hard to get them to learn
Reference
■ http://www.cis.cornell.edu/courses/cis3000/2011sp/top/lectures.php
■ http://www.cis.cornell.edu/courses/cis3000/2009sp/top/lectures.php
 AI Overview and State Machines; AI for Games
■ http://ai.eecs.umich.edu/soar/Classes/494/schedule.htm (recommended)