
1. Examples of using probabilistic ideas in robotics
2. Reverend Bayes and review of probabilistic ideas
3. Introduction to Bayesian AI
4. Simple example of state estimation – robot and door to pass
5. Simple example of modeling actions
6. Bayes filters
7. Probabilistic robotics

Used in Spring 2013

Probabilistic Robotics:

Sensing and Planning in Robotics

Examples of probabilistic ideas in robotics

Robotics Yesterday

Robotics Today

Robotics Tomorrow?

More like a human

1. Boolean logic and differential equations are the basis of classical robotics.

2. Probabilistic Bayesian methods are the mathematical foundation used throughout the humanities, and of future robotics.

What is robotics today?

1. Definition (Brady): Robotics is the intelligent connection of perception and action

• Trend toward service robots with human-like reasoning and emotions.

• Perception, action, reasoning, emotions – all need probability.

Trends in Robotics Research

Classical Robotics (mid-70's)
• exact models
• no sensing necessary

Reactive Paradigm (mid-80's)
• no models
• relies heavily on good sensing

Hybrids (since 90's)
• model-based at higher levels
• reactive at lower levels

Probabilistic Robotics (since mid-90's)
• seamless integration of models and sensing
• inaccurate models, inaccurate sensors

Robots are moving away from factory floors to entertainment, toys, personal service, medicine, surgery, industrial automation (mining, harvesting), and hazardous environments (space, underwater).

Examples of robots that need stochastic reasoning and stochastic learning

Entertainment Robots: Toys

Entertainment Robots: RoboCup

Mobile Robots as Museum Tour-Guides

Life is full of uncertainties

Tasks to be Solved by Robots
• Planning
• Perception
• Modeling
• Localization
• Interaction
• Acting
• Manipulation
• Cooperation
• Recognition of an environment that changes
• Recognition of human behavior
• Recognition of human gestures
• ...

Uncertainty is Inherent/Fundamental

• Uncertainty arises from four major factors:

1. The environment is stochastic, unpredictable
2. Robot actions are stochastic
3. Sensors are limited and noisy
4. Models are inaccurate, incomplete

Nature of Sensor Data

Odometry Data Range Data

Main probabilistic ideas in robotics

Probabilistic Robotics

Key idea:

Explicit representation of uncertainty

using the calculus of probability theory

• Perception = state estimation

• Action = utility optimization

Advantages and Pitfalls of probabilistic robotics

Advantages:
1. Can accommodate inaccurate models
2. Can accommodate imperfect sensors
3. Robust in real-world applications
4. Best known approach to many hard robotics problems

Pitfalls:
5. Computationally demanding
6. Relies on assumptions that may be false
7. Gives only approximate answers

Introduction to "Bayesian Artificial Intelligence"

• Reasoning under uncertainty
• Probabilities
• Bayesian approach
  – Bayes' Theorem
  – conditionalization
  – Bayesian decision theory

Reasoning under Uncertainty

• Uncertainty – the quality or state of being not clearly known; it distinguishes deductive knowledge from inductive belief

• Sources of uncertainty:
  – Ignorance
  – Complexity
  – Physical randomness
  – Vagueness

Reminder of Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y)    (posterior = likelihood × prior / evidence)

Normalization

Direct evaluation of the evidence P(y) can be avoided by normalizing over all x:

P(x | y) = η P(y | x) P(x),    where η = 1 / P(y) = 1 / Σ_x′ P(y | x′) P(x′)

Algorithm:

∀x: aux_{x|y} = P(y | x) P(x)    (likelihood × prior)
η = 1 / Σ_x aux_{x|y}
∀x: P(x | y) = η aux_{x|y}
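The normalization algorithm above can be sketched in a few lines. This is a minimal illustration; the three states and their numbers are hypothetical, not from the lecture.

```python
def bayes_normalized(prior, likelihood):
    """prior: {x: P(x)}; likelihood: {x: P(y|x)} for a fixed observation y.
    Returns the posterior {x: P(x|y)} without evaluating P(y) directly."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y} = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                       # eta = 1 / P(y)
    return {x: eta * a for x, a in aux.items()}         # P(x|y) = eta * aux_{x|y}

posterior = bayes_normalized(
    prior={"a": 0.5, "b": 0.3, "c": 0.2},
    likelihood={"a": 0.1, "b": 0.7, "c": 0.2},
)
# The posterior sums to 1; state "b" dominates because of its high likelihood.
```

Note that η is computed from the same products aux_{x|y} that form the posterior, so the evidence never has to be modeled separately.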

Conditional knowledge has many applications

1. Total probability (integrating out background knowledge z):

P(x | y) = ∫ P(x | y, z) P(z | y) dz

2. Bayes rule with background knowledge z:

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

See the law of total probability earlier.

Examples

I will present many examples of using Bayes probability in mobile robots.

Simple Example of State Estimation

The door opening problem

Simple Example of State Estimation

• Suppose a robot obtains measurement z
• What is P(open | z)?

What is the probability that the door is open if the measurement is z?

P(open | z) = P(z | open) P(open) / P(z)

Causal vs. Diagnostic Reasoning

• P(open|z) is diagnostic.

• P(z|open) is causal.

• Often causal knowledge is easier to obtain.

• Bayes rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)

How do we obtain P(z | open)? Count frequencies: we open the door, repeatedly measure z, and count how often each measurement occurs.

Examples of calculating probabilities for door opening problem

• P(z | open) = 0.6   P(z | ¬open) = 0.3
• P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67

• z raises the probability that the door is open.

P(z | open): the probability of measurement z when the door is open
P(z | ¬open): the probability of measurement z when the door is not open
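The arithmetic of this single measurement update can be checked directly (variable names are my own):

```python
# Single Bayes update for the door example, using the slide's numbers.
p_z_open, p_z_not_open = 0.6, 0.3   # sensor model: P(z|open), P(z|not open)
p_open = p_not_open = 0.5           # prior
p_open_given_z = (p_z_open * p_open) / (
    p_z_open * p_open + p_z_not_open * p_not_open)
# p_open_given_z = 2/3, so z raises the probability that the door is open
```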

Combining Evidence

• Suppose our robot obtains another observation z2.

• How can we integrate this new information?

• More generally, how can we estimate

P(x| z1...zn )?

What do we do when more information comes?

Recursive Bayesian Updating

P(x | z_1, …, z_n) = P(z_n | x, z_1, …, z_{n-1}) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})

Markov assumption: z_n is independent of z_1, …, z_{n-1} if we know x. Then:

P(x | z_1, …, z_n) = P(z_n | x) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})
                   = η P(z_n | x) P(x | z_1, …, z_{n-1})
                   = η_{1…n} [ ∏_{i=1…n} P(z_i | x) ] P(x)

Example: Second Measurement added in our “robot and door” problem

• P(z_2 | open) = 0.5   P(z_2 | ¬open) = 0.6
• P(open | z_1) = 2/3

P(open | z_2, z_1) = P(z_2 | open) P(open | z_1) / (P(z_2 | open) P(open | z_1) + P(z_2 | ¬open) P(¬open | z_1))
                   = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = (1/3) / (8/15) = 5/8 = 0.625

• z2 lowers the probability that the door is open.

After incorporating z_2, our probability that the door is open is lower, because z_2 is slightly more likely when the door is not open.
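The recursive update can be sketched directly: the posterior after z_1 becomes the prior for z_2 (variable names are my own):

```python
# Recursive Bayesian update: fold in z2 using P(open|z1) as the new prior.
p_open, p_not_open = 2/3, 1/3        # P(open|z1), P(not open|z1)
p_z2_open, p_z2_not_open = 0.5, 0.6  # sensor model for z2
num = p_z2_open * p_open
p_open_z1z2 = num / (num + p_z2_not_open * p_not_open)
# p_open_z1z2 = 5/8 = 0.625 -- z2 lowers the probability that the door is open
```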

What do we do if we have a new measurement result?

A Typical Pitfall

• Two possible locations x1 and x2

• P(x1)=0.99

• P(z | x2) = 0.09   P(z | x1) = 0.07

[Figure: p(x1 | d) and p(x2 | d) plotted against the number of integrations of the same measurement z. Although the prior strongly favors x1, repeated integration drives p(x2 | d) toward 1, because z is slightly more likely under x2.]
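The pitfall can be simulated with a short sketch, under the (strong) assumption that the robot keeps integrating the same measurement z as if each reading were independent:

```python
# Pitfall: the prior strongly favors x1, but z is slightly more likely
# under x2, so repeated integration eventually makes x2 more probable.
bel = {"x1": 0.99, "x2": 0.01}
lik = {"x1": 0.07, "x2": 0.09}
for n in range(1, 51):
    unnorm = {x: lik[x] * bel[x] for x in bel}
    total = sum(unnorm.values())
    bel = {x: v / total for x, v in unnorm.items()}
    if bel["x2"] > bel["x1"]:
        print(f"x2 overtakes x1 after {n} integrations")
        break
```

Each step multiplies the odds by 0.09/0.07 ≈ 1.29, so even a 99:1 prior is eventually overturned; with these numbers the crossover happens after about 19 integrations.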

The behavior/recognition model for the robot should take into account robot actions.

Actions change the world. How do we use this knowledge?

• Often the world is dynamic, since:
  – actions carried out by the robot change the world,
  – actions carried out by other agents change the world,
  – or just the passing of time changes the world.

• How can we incorporate such actions?

Typical Actions of a Robot

• The robot turns its wheels to move
• The robot uses its manipulator to grasp an object
• Plants grow over time…

• Actions are never carried out with absolute certainty.

• In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions Probabilistically

• To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf

P(x|u,x’)

• This term specifies the probability that executing u changes the state from x′ to x.

pdf = probability density function

Simple example of modeling actions

Example: Closing the door

State Transitions

P(x | u, x′) for u = "close door":

If the door is open, the action "close door" succeeds in 90% of all cases:

P(closed | u, open) = 0.9    P(open | u, open) = 0.1
P(closed | u, closed) = 1    P(open | u, closed) = 0

State transitions for the door-closing example.

Integrating the Outcome of Actions

Continuous case:  P(x | u) = ∫ P(x | u, x′) P(x′) dx′

Discrete case:    P(x | u) = Σ_{x′} P(x | u, x′) P(x′)

Example: The Resulting Belief for the door problem

Probability that the door is closed after action u:

P(closed | u) = Σ_{x′} P(closed | u, x′) P(x′)
              = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
              = 9/10 · 5/8 + 1 · 3/8 = 15/16

Probability that the door is open after action u:

P(open | u) = Σ_{x′} P(open | u, x′) P(x′)
            = P(open | u, open) P(open) + P(open | u, closed) P(closed)
            = 1/10 · 5/8 + 0 · 3/8 = 1/16
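The discrete action update can be sketched as a sum over prior states; the transition model and the prior P(open) = 5/8 are the numbers used in this example (the dictionary layout is my own):

```python
# Discrete action update for u = "close door".
trans = {  # trans[x][x_prev] = P(x | u, x_prev)
    "open":   {"open": 0.1, "closed": 0.0},
    "closed": {"open": 0.9, "closed": 1.0},
}
prior = {"open": 5/8, "closed": 3/8}
posterior = {x: sum(trans[x][xp] * prior[xp] for xp in prior) for x in trans}
# posterior: open = 1/16 = 0.0625, closed = 15/16 = 0.9375
```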

Continue open/closed door example

P(x | u, x′) as before: P(closed | u, open) = 0.9, P(open | u, open) = 0.1, P(closed | u, closed) = 1, P(open | u, closed) = 0.

Concepts of Probabilistic Robotics

1. Probabilities are the base concept
2. Bayes rule is used in most applications
3. Bayes filters are used for estimation
4. Bayes networks
5. Markov chains
6. Bayesian decision theory
7. Bayes concepts in AI

Bayes filters:
1. Kalman filters
2. Particle filters
3. Other filters

Key idea of Probabilistic Robotics, repeated

Key idea: Explicit representation of uncertainty using the calculus of probability theory

– Perception = state estimation

– Action = utility optimization

1. Probability Calculus

Pr(U) = 1,  Pr(∅) = 0

If X ∩ Y = ∅, then Pr(X ∪ Y) = Pr(X) + Pr(Y)

Conditional probability:  Pr(X | Y) = Pr(X, Y) / Pr(Y)

Independence:  Pr(X | Y) = Pr(X)



2. Main ideas of using Bayes in robotics

1. Bayes rule allows us to compute probabilities that are hard to assess otherwise.

2. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.

3. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

3. Bayes Filters: Framework

• Given:
  – Stream of observations z and action data u:  d_t = {u_1, z_1, …, u_t, z_t}
  – Sensor model P(z | x)
  – Action model P(x | u, x′)
  – Prior probability of the system state P(x)

• Wanted:
  – Estimate of the state x of a dynamical system
  – The posterior of the state is also called the belief:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
         = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Bayes Filters

Derivation of the rule for belief

We derive the posterior of the state (z = observation, u = action, x = state):

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Bayes:        = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)

Markov:       = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)

Total prob.:  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov:       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, z_{t-1}) dx_{t-1}

              = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

1.  Algorithm Bayes_filter( Bel(x), d ):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel′(x) = P(z | x) Bel(x)                  // calculate new belief
6.        η = η + Bel′(x)
7.      For all x do
8.        Bel′(x) = η⁻¹ Bel′(x)                      // normalize the new belief
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel′(x) = ∫ P(x | u, x′) Bel(x′) dx′       // new belief after the action
12.   Return Bel′(x)

We derived an important formula:

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Now we use it in the Bayes filter.
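The algorithm above can be sketched as a runnable discrete Bayes filter for the two-state door problem. The sensor and action numbers are those used earlier in this lecture; the function and variable names are my own.

```python
# Discrete Bayes filter for the door example (states: "open", "closed").

def measurement_update(bel, p_z_given_x):
    """Perceptual step: Bel'(x) = eta * P(z|x) * Bel(x)."""
    unnorm = {x: p_z_given_x[x] * bel[x] for x in bel}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

def action_update(bel, p_x_given_u_xprev):
    """Action step: Bel'(x) = sum over x' of P(x|u,x') * Bel(x')."""
    return {x: sum(p_x_given_u_xprev[x][xp] * bel[xp] for xp in bel)
            for x in bel}

bel = {"open": 0.5, "closed": 0.5}                           # prior
bel = measurement_update(bel, {"open": 0.6, "closed": 0.3})  # z1: P(open) -> 2/3
bel = measurement_update(bel, {"open": 0.5, "closed": 0.6})  # z2: P(open) -> 5/8
close_door = {"open":   {"open": 0.1, "closed": 0.0},
              "closed": {"open": 0.9, "closed": 1.0}}
bel = action_update(bel, close_door)                         # u = "close door"
# final belief: open = 1/16, closed = 15/16, as in the slides
```

Measurement updates require renormalization (η); action updates do not, because summing over a proper transition model preserves total probability.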

Bayes Filters are fundaments of many methods!

• Kalman filters
• Particle filters
• Hidden Markov models (HMMs)
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

4. Bayesian Networks and Markov Models – main concepts

• Bayesian AI
• Bayesian networks
• Decision networks
• Reasoning about changes over time
  – Dynamic Bayesian networks
  – Markov models

5. Markov Assumption and HMMs

Underlying assumptions of HMMs:
• Static world
• Independent noise
• Perfect model, no approximation errors

p(x_t | x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)    (states depend only on the previous state and control)
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)               (outputs depend only on the current state)

6. Bayesian Decision Theory

1. Frank Ramsey (1926)

2. Decision making under uncertainty – what action to take when the state of the world is unknown

3. Bayesian answer – find the utility of each possible outcome (action–state pair), and take the action that maximizes expected utility

A story of my friend who wanted to choose whom to marry scientifically.

Bayesian Decision Theory – Example

Action          | Rain (p = 0.4) | Shine (1 − p = 0.6)
Take umbrella   | 30             | 10
Leave umbrella  | −100           | 50

Expected utilities:
E(Take umbrella)  = 30 · 0.4 + 10 · 0.6 = 18
E(Leave umbrella) = −100 · 0.4 + 50 · 0.6 = −10
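The expected-utility calculation can be sketched directly from the table (dictionary layout and names are my own):

```python
# Expected utility for the umbrella example.
p_rain = 0.4
utility = {
    "take umbrella":  {"rain": 30,   "shine": 10},
    "leave umbrella": {"rain": -100, "shine": 50},
}

def expected_utility(action):
    u = utility[action]
    return p_rain * u["rain"] + (1 - p_rain) * u["shine"]

best = max(utility, key=expected_utility)
# E(take umbrella) = 18, E(leave umbrella) = -10, so "take umbrella" is best
```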

7. Bayesian Conception of an AI

1. An autonomous agent that:
   1. has a utility structure (preferences),
   2. can learn about its world and the relationships (probabilities) between its actions and future states,
   3. maximizes its expected utility.
2. The techniques used to learn about the world are mainly statistical (data mining).

Conclusion on Bayesian AI

• Reasoning under uncertainty

• Probabilities

• Bayesian approach
  – Bayes' Theorem
  – conditionalization
  – Bayesian decision theory

Summary

• Bayes rule allows us to compute probabilities that are hard to assess otherwise.

• Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.

• Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

Sources

Gaurav S. Sukhatme

Computer Science

Robotics Research Laboratory

University of Southern California