CS 416: Artificial Intelligence
Lecture 2
Agents
Review
We’ll study systems that act rationally
• They need not necessarily “think” or act like humans
• They need not “think” in rational ways
The domain of AI research changes over time
AI research draws from many fields
• Philosophy, psychology, neuroscience, mathematics, economics, mechanical engineering, linguistics
AI has had ups and downs since 1950
What is an agent?
Perception
• Sensors receive input from the environment
– Keyboard clicks
– Camera data
– Bump sensor
Action
• Actuators impact the environment
– Move a robotic arm
– Generate output for a computer display
Perception
Percept
• Perceptual inputs at an instant
• May include perception of internal state
Percept Sequence
• Complete history of all prior percepts
Do you need a percept sequence to play chess?
An agent as a function
Agent maps percept sequence to action
• Agent: f maps percept sequences to actions
– Set of all inputs known as state space
• Repeating loop: sense, then act
We must construct f( ), our agent
• It must act rationally
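The agent-as-function view and its repeating loop can be sketched in a few lines. This is a toy illustration, not the course's formal definition; the percepts and actions here are invented for the example.

```python
# Sketch of the agent-as-function view: f maps a percept sequence to an action.
# The percepts ("dirt", "low_battery") and actions are illustrative.

def agent_f(percept_sequence):
    """Map the complete history of percepts to an action."""
    latest = percept_sequence[-1]
    return "recharge" if latest == "low_battery" else "clean"

def run(environment_percepts):
    """The repeating loop: sense, append to the history, act."""
    history, actions = [], []
    for percept in environment_percepts:
        history.append(percept)          # the percept sequence grows each step
        actions.append(agent_f(history))
    return actions

print(run(["dirt", "dirt", "low_battery"]))  # ['clean', 'clean', 'recharge']
```

Note that `agent_f` here only looks at the latest percept; an agent that genuinely needs the sequence would use more of `history`.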
The agent’s environment
What is known about percepts?
• Quantity, range, certainty…
– If percepts are finite, could a table store the mapping?
What is known about the environment?
• Is f(a, e) a known, predictable function?
More on this later
Evaluating agent programs
We agree on what an agent must do
Can we evaluate its quality?
Performance Metrics
• Very important
• Frequently the hardest part of the research problem
• Design these to suit what you really want to happen
Performance vis-à-vis rationality
For each percept sequence, a rational agent should select an action that maximizes its performance measure
Example: autonomous vacuum cleaner
• What is the performance measure?
• Penalty for eating the cat? How much?
• Penalty for missing a spot?
• Reward for speed?
• Reward for conserving power?
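One way to make those bullet questions concrete is a weighted scoring function. The weights below are entirely illustrative; choosing them well is exactly the hard design problem the slide is pointing at.

```python
# Hypothetical performance measure for the vacuum-cleaner example.
# The weights are illustrative; picking them is the real design problem.

def performance(spots_cleaned, spots_missed, cats_eaten, seconds, joules):
    return (10 * spots_cleaned         # reward for cleaning
            - 5 * spots_missed         # penalty for missing a spot
            - 1_000_000 * cats_eaten   # eating the cat is very, very bad
            - 0.1 * seconds            # mild reward for speed
            - 0.01 * joules)           # mild reward for conserving power

score = performance(spots_cleaned=20, spots_missed=2, cats_eaten=0,
                    seconds=300, joules=500)
print(round(score, 2))  # roughly 155
```

Notice how the huge cat penalty dominates everything else: the metric encodes what you really want to happen, not just what is easy to count.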
Learning and Autonomy
Learning
• To update the agent function in light of the observed performance of percept-sequence/action pairs
– Does the agent control observations?
What parts of state space to explore?
Learn from trial and error
– How do observations affect the agent function?
Change internal variables that influence action selection
Adding intelligence to agent function
At design time
• Some agents are designed with a clear procedure to improve performance over time. Really the engineer’s intelligence.
– Camera-based user identification
At run-time
• Agent executes a complicated equation to map input to output
Between trials
• With experience, the agent changes its program (parameters)
How big is your percept?
Dung Beetle
• Almost no perception (percept)
– Rational agents fine-tune actions based on feedback
Sphex Wasp
• Has percepts, but lacks a percept sequence
– Rational agents change plans entirely when fine-tuning fails
A Dog
• Equipped with percepts and percept sequences
– Reacts to environment and can significantly alter behavior
Qualities of a task environment
Fully Observable
• Agent need not store any aspects of state
– The Brady Bunch as intelligent agents (lost in Hawaii)
– Volume of observables may be overwhelming
Partially Observable
• Some data is unavailable
– Maze
– Noisy sensors
Qualities of a task environment
Deterministic
• Always the same outcome for an environment/action pair
Stochastic
• Not always predictable – random
Partially Observable vs. Stochastic
• My cats think the world is stochastic (lack of perception)
• Physicists think the world is deterministic
Qualities of a task environment
Markovian
• Future environment depends only on current environment and action
Episodic
• Percept sequence can be segmented into independent temporal categories
– Behavior at a traffic light is independent of previous traffic
Sequential
• Current decision could affect all future decisions
Which is easiest to program?
Qualities of a task environment
Static
• Environment doesn’t change over time
– Crossword puzzle
Dynamic
• Environment changes over time
– Driving a car
Semi-dynamic
• Environment is static, but performance metrics are dynamic
– Drag racing (reward for reaching the finish line after 12 seconds is different from reward for reaching it after 14 seconds)
Qualities of a task environment
Discrete
• Values of a state-space feature (dimension) are constrained to distinct values from a finite set
– Blackjack: hand totals come from a finite set
Continuous
• Variable has infinite variation
– Antilock brakes: wheel speed varies continuously
– Are computers really continuous?
Qualities of a task environment
Towards a terse description of problem domains
• Environment: features, dimensionality, degrees of freedom
• Observable?
• Predictable?
• Dynamic?
• Continuous?
• Performance metric
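The terse-description checklist can be captured as a small record type. The field names and the two example domains below are illustrative, not a standard taxonomy from the textbook.

```python
# A sketch of the task-environment checklist as a data structure.
# Field names and example values are illustrative.

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    features: list            # environment dimensions / degrees of freedom
    fully_observable: bool
    deterministic: bool       # predictable outcome for each state/action pair
    static: bool
    discrete: bool
    performance_metric: str

crossword = TaskEnvironment(
    features=["grid", "clues"],
    fully_observable=True, deterministic=True, static=True, discrete=True,
    performance_metric="fraction of squares filled correctly")

taxi = TaskEnvironment(
    features=["position", "traffic", "passengers"],
    fully_observable=False, deterministic=False, static=False, discrete=False,
    performance_metric="profit, safety, legality, passenger comfort")

print(crossword.static, taxi.static)  # True False
```

Filling in such a record before writing any agent code forces the questions on this slide to be answered explicitly.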
Building Agent Programs
The table approach
• Build a table mapping states to actions
– Chess has 10^150 entries (there are ~10^80 atoms in the universe)
– I’ve said memory is free, but keep it within the confines of the boundable universe
• Still, tables have their place
Discuss four agent program principles
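For a tiny state space, the table approach is trivial. The two-square vacuum world below is a classic toy domain used for illustration; the state and action names are our own.

```python
# The table approach: an explicit (state -> action) lookup table.
# Feasible here; utterly infeasible for chess's ~10^150 states.

table = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "left",
}

def table_driven_agent(location, status):
    """Look the current state up in the table and return the action."""
    return table[(location, status)]

print(table_driven_agent("A", "dirty"))  # suck
```

The table is the whole agent: all the "intelligence" was spent by the designer when filling it in.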
Simple Reflex Agents
• Sense environment
• Match sensations with rules in a database
• Rule prescribes an action
Reflexes can be bad
• Don’t put your hands down when falling backwards!
Inaccurate information
• Misperception can trigger a reflex when inappropriate
But rule databases can be made large and complex
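The sense/match/act cycle can be sketched as an ordered list of condition-action rules; the first rule whose condition matches the current percept fires. The rules below are invented for the example.

```python
# Simple reflex agent: act on the CURRENT percept only, using an ordered
# rule database. Rule contents are illustrative.

rules = [
    (lambda p: p["bump"], "turn"),    # reflex: hit something, so turn
    (lambda p: p["dirty"], "suck"),   # reflex: dirt under the agent
    (lambda p: True, "forward"),      # default rule: keep moving
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(percept):
            return action

print(simple_reflex_agent({"bump": False, "dirty": True}))  # suck
```

Rule order matters (the bump reflex outranks cleaning), and a noisy `bump` sensor would trigger the turn reflex inappropriately, which is exactly the misperception hazard the slide mentions.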
Simple Reflex Agents w/ Incomplete Sensing
How can you react to things you cannot see?
• Vacuum cleaning the room w/o any sensors
• Vacuum cleaning the room w/ a bump sensor
• Vacuum cleaning the room w/ GPS and a perfect map of a static environment
Model-based Reflex Agents
So when you can’t see something, you model it!
• Create an internal variable to store your expectation of variables you can’t observe
• If I throw a ball to you and it falls short, do I know why?
– I don’t really know why…
Aerodynamics, mass, my energy levels…
– I do have a model
Ball falls short, throw harder
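The ball-throwing model can be sketched directly: one internal variable stands in for all the unobservable physics, and a crude update rule ("fell short, throw harder") adjusts it from feedback. The class, numbers, and update factors are all illustrative.

```python
# Model-based reflex sketch: we can't observe aerodynamics or mass, so we
# keep one internal state variable (effort) and update it from outcomes.

class Thrower:
    def __init__(self):
        self.effort = 1.0          # internal model state: how hard to throw

    def act(self):
        return self.effort         # throw with the current effort

    def observe(self, outcome):
        # The model: "fell short -> throw harder, went long -> ease off"
        if outcome == "short":
            self.effort *= 1.2
        elif outcome == "long":
            self.effort *= 0.9

t = Thrower()
t.observe("short")
t.observe("short")
print(round(t.act(), 2))  # 1.44
```

The point is that the model need not be physically correct, only useful: it compresses everything unobservable into one adjustable variable.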
Model-based Reflex Agents
Admit it, you can’t see and understand everything
Models are very important!
• We all use models to get through our lives
– Psychologists have many names for these context-sensitive models
• Agents need models too
Goal-based Agents
Overall goal is known, but lacking a moment-to-moment performance measure
• Don’t exactly know what the performance-maximizing action is at each step
Example:
• How to get from A to B?
– Current actions have future consequences
– Search and Planning are used to explore paths through state space from A to B
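The "explore paths through state space from A to B" idea is exactly what a search algorithm does. Here is a minimal breadth-first search over a toy graph (the graph and state names are made up for illustration); the course covers search properly in later lectures.

```python
# Goal-based behavior as search: explore paths through a toy state space
# from A to B. The graph is illustrative.

from collections import deque

graph = {"A": ["C", "D"], "C": ["B"], "D": ["C", "E"], "E": [], "B": []}

def bfs_path(start, goal):
    """Breadth-first search; returns the first (shortest) path found."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(bfs_path("A", "B"))  # ['A', 'C', 'B']
```

The agent's current action (move to C or to D) matters only because of its future consequences, which is why the whole path, not just the next step, is evaluated.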
Utility-based Agents
Goal-directed agents that have a utility function
• Function that maps internal and external states into a scalar
– The scalar is a number used to make moment-to-moment evaluations of candidate actions
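Utility-based selection can be sketched as: predict the state each candidate action leads to, score each predicted state with the utility function, and pick the argmax. The utility weights, actions, and transition model below are all invented for the example.

```python
# Utility-based agent sketch: score predicted next states with a scalar
# utility and pick the best action. All numbers are illustrative.

def utility(state):
    """Map a state to a scalar for moment-to-moment evaluation."""
    return 10 * state["progress"] - 2 * state["energy_used"]

def predict(state, action):
    """Assumed-known transition model, for the sake of the sketch."""
    if action == "sprint":
        return {"progress": state["progress"] + 3,
                "energy_used": state["energy_used"] + 8}
    return {"progress": state["progress"] + 1,
            "energy_used": state["energy_used"] + 1}

def choose(state, actions):
    return max(actions, key=lambda a: utility(predict(state, a)))

print(choose({"progress": 0, "energy_used": 0}, ["sprint", "walk"]))  # sprint
```

Unlike a bare goal, the scalar lets the agent trade off competing concerns (progress vs. energy) at every step.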
Learning Agents
Desirable to build a system that “figures it out”
• Generalizable
• Compensates for absence of designer knowledge
• Reusable
• Learning by example isn’t easy to accomplish
– What exercises do you do to learn?
– What outcomes do you observe?
– What inputs do you alter?
Learning Agents
Performance Element
• Selecting actions (this is the “agent” we’ve been discussing)
Problem Generator
• Provides suggestions for new tasks to explore state space
Critic
• Provides the learning element with feedback about progress (are we doing good things or should we try something else?)
Learning Element
• Making improvements (how the agent is changed based on experience)
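The four components can be wired together in a toy driving-style example: the performance element brakes above a threshold, the critic grades each decision against a hidden standard, and the learning element nudges the threshold after mistakes. Everything here, including the hidden 0.7 standard, is a stand-in invented for the sketch.

```python
# Toy learning agent wiring the four components together.
# The hidden "correct" braking point (0.7) stands in for real feedback.

import random
random.seed(0)  # deterministic exploration for the example

class LearningAgent:
    def __init__(self):
        self.threshold = 0.5                      # performance element's parameter

    def performance_element(self, percept):       # selects actions
        return "brake" if percept > self.threshold else "go"

    def critic(self, percept, action):            # feedback about progress
        correct = "brake" if percept > 0.7 else "go"
        return action == correct

    def learning_element(self, percept, action, ok):  # makes improvements
        if not ok:
            self.threshold += 0.05 if action == "brake" else -0.05

    def problem_generator(self):                  # suggests exploratory tasks
        return random.random()

agent = LearningAgent()
for _ in range(200):
    p = agent.problem_generator()
    a = agent.performance_element(p)
    agent.learning_element(p, a, agent.critic(p, a))
print(round(agent.threshold, 2))  # settles near 0.7
```

The division of labor is the point: the performance element acts, and the other three components exist only to change it over time.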
A taxi driver
Performance Element
• Knowledge of how to drive in traffic
Problem Generator
• Proposes new routes to try, to hopefully improve driving skills
Critic
• Observes tips from customers and horn honking from other cars
Learning Element
• Relates low tips to actions that may be the cause
Review
Outlined families of AI problems and solutions
I consider AI to be a problem of searching
• Countless things differentiate search problems
– Number of percepts, number of actions, amount of a priori knowledge, predictability of the world…
• Textbook is divided into sections based on these differences
Sections of book
• Problem solving: Searching through predictable, discrete environments
• Knowledge and Reasoning: Searching when a model of the world is known
– a leads to b and b leads to c… so go to a to reach c
• Planning: Refining search techniques to take advantage of domain knowledge
• Uncertainty: Using statistics and observations to collect knowledge
• Learning: Using observations to understand the way the world works and to act rationally within it