Source: Reasoning with Probabilities, dcs.warwick.ac.uk pturrini/Slides/Probability2.pdf
Intro to AI (2nd Part)

Reasoning with Probabilities

Paolo Turrini

Department of Computing, Imperial College London

Introduction to Artificial Intelligence, 2nd Part

Paolo Turrini Intro to AI (2nd Part)


The main reference

Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Chapter 14


Today

Bayes’ rule

Conditional independence

Back to the Wumpus World

Bayesian Networks


Bayes’ Rule


Holiday

Upon your return from a holiday on an exotic island, your doctor has bad news and good news. The bad news is that you've been diagnosed with a serious disease and the test is 99% accurate. The good news is that the disease is very rare (1 in 10,000 get it).

How worried should you be?


Conditional probability and Bayes’ Rule

Definition of conditional probability:

P(a|b) = P(a ∧ b) / P(b), if P(b) ≠ 0

Product rule gives an alternative formulation:

P(a ∧ b) = P(a|b)P(b) = P(b|a)P(a)

but then...
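As an illustration (not part of the slides), the definition can be turned into a few lines of Python; the joint table and the events a, b are hypothetical:

```python
# Sketch: conditional probability from a small joint distribution.
# The joint table is illustrative, not from the slides.
joint = {
    (True, True): 0.12,    # P(a ∧ b)
    (True, False): 0.08,   # P(a ∧ ¬b)
    (False, True): 0.28,   # P(¬a ∧ b)
    (False, False): 0.52,  # P(¬a ∧ ¬b)
}

def p_a_given_b(joint):
    # Definition: P(a|b) = P(a ∧ b) / P(b), provided P(b) ≠ 0
    p_b = sum(p for (a, b), p in joint.items() if b)  # marginalise out a
    if p_b == 0:
        raise ValueError("P(b) = 0: conditional probability undefined")
    return joint[(True, True)] / p_b

print(round(p_a_given_b(joint), 2))  # 0.3
```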


Bayes’ Rule

P(a|b) = P(b|a)P(a) / P(b)


Bayes’ Rule

Useful for assessing diagnostic probability from causal probability:

P(Cause|Effect) = P(Effect|Cause)P(Cause) / P(Effect)

E.g., let c be cold, s be sore throat:

P(c|s) = P(s|c)P(c) / P(s) = (0.9 × 0.001) / 0.005 = 0.18
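The calculation can be checked with a short script; `bayes` is a hypothetical helper, and the three numbers are exactly those on the slide:

```python
def bayes(p_e_given_h, p_h, p_e):
    # Bayes' rule: P(h|e) = P(e|h) P(h) / P(e)
    return p_e_given_h * p_h / p_e

# Values from the slide: P(s|c) = 0.9, P(c) = 0.001, P(s) = 0.005
p_cold_given_sore = bayes(0.9, 0.001, 0.005)
print(round(p_cold_given_sore, 2))  # 0.18
```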


Bayes’ rule

P(c|s) = P(s|c)P(c) / P(s) = (0.9 × 0.001) / 0.005 = 0.18

We might not know the prior probability of the evidence P(s). In this case...

we compute the posterior probability for each value of the query variable (c, ¬c)

and then normalise:

P(C|s) = α ⟨P(s|c)P(c), P(s|¬c)P(¬c)⟩
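A sketch of the normalisation step in Python. P(s|c) and P(c) come from the slide; `p_s_given_not_c` is an assumed value, chosen so that the implied P(s) equals the 0.005 used above:

```python
# Normalisation sketch: P(C|s) = α ⟨P(s|c)P(c), P(s|¬c)P(¬c)⟩
p_s_given_c = 0.9                  # from the slide
p_c = 0.001                        # from the slide
p_s_given_not_c = 0.0041 / 0.999   # assumed, chosen so that P(s) = 0.005

unnormalised = [p_s_given_c * p_c, p_s_given_not_c * (1 - p_c)]
alpha = 1 / sum(unnormalised)      # α = 1/P(s), recovered by normalising
posterior = [alpha * u for u in unnormalised]
print([round(p, 2) for p in posterior])  # [0.18, 0.82]
```

Note that P(s) never has to be known up front: it falls out of the normalisation.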


Bayes’ Rule

P(X|Y) = α P(Y|X)P(X)


Holiday solved

Upon your return from a holiday on an exotic island, your doctor has bad news and good news. The bad news is that you've been diagnosed with a serious disease and the test is 99% accurate. The good news is that the disease is very rare (1 in 10,000 get it).

E.g., let d be having the disease and p be testing positive:

P(d|p) = P(p|d)P(d) / (P(p|d)P(d) + P(p|¬d)P(¬d)) = (0.99 × 0.0001) / (0.99 × 0.0001 + 0.01 × 0.9999) ≈ 0.0098

Notice: the posterior probability of disease is still quite small!
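The arithmetic, spelled out as a short script using only the figures stated in the problem:

```python
# Holiday example: P(d|p) via Bayes' rule, with P(p) from total probability
p_pos_given_d = 0.99      # sensitivity (the "99% accurate" test)
p_pos_given_not_d = 0.01  # false-positive rate, also from "99% accurate"
p_d = 0.0001              # prior: 1 in 10,000 have the disease

numerator = p_pos_given_d * p_d
evidence = numerator + p_pos_given_not_d * (1 - p_d)  # P(p)
posterior = numerator / evidence
print(round(posterior, 4))  # 0.0098
```

The rare prior swamps the accurate test: fewer than 1 in 100 positives actually have the disease.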


Back to joint distributions: combining evidence


Start with the joint distribution:

P(Cavity|toothache ∧ catch) = α ⟨0.108, 0.016⟩ = ⟨0.871, 0.129⟩
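The normalisation step as a quick sketch, with the two unnormalised entries read off the joint distribution:

```python
# Normalise ⟨0.108, 0.016⟩ to get P(Cavity|toothache ∧ catch)
unnormalised = [0.108, 0.016]   # joint entries for cavity and ¬cavity
alpha = 1 / sum(unnormalised)
posterior = [alpha * u for u in unnormalised]
print([round(p, 3) for p in posterior])  # [0.871, 0.129]
```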


Back to joint distributions: combining evidence


Start with the joint distribution:

It doesn’t scale up to a large number of variables

Absolute Independence is very rare

Can we use Bayes’ rule?


P(Cavity|toothache ∧ catch) = α P(toothache ∧ catch|Cavity)P(Cavity)

Still not good: with n evidence variables there are 2ⁿ possible combinations for which we would need to know the conditional probabilities


Back to joint distributions: combining evidence

We can’t use absolute independence:

Toothache and Catch are not independent: If the probe catches in the tooth then it is likely the tooth has a cavity, which means that toothache is likely too.

But they are independent given the presence or the absence of cavity! Toothache depends on the state of the nerves in the tooth; catch depends on the dentist's skills, to which toothache is irrelevant.


Conditional Independence


Conditional independence

1. P(catch|toothache, cavity) = P(catch|cavity); the same independence holds if I haven't got a cavity:

2. P(catch|toothache, ¬cavity) = P(catch|¬cavity)

Catch is conditionally independent of Toothache given Cavity:


Conditional independence

P(Catch|Toothache, Cavity) = P(Catch|Cavity)

Equivalent statements:

P(Toothache|Catch, Cavity) = P(Toothache|Cavity)

P(Toothache, Catch|Cavity) = P(Toothache|Cavity)P(Catch|Cavity)


Conditional independence contd.

Write out full joint distribution using chain rule:

P(Toothache, Catch, Cavity)
= P(Toothache|Catch, Cavity)P(Catch, Cavity)
= P(Toothache|Catch, Cavity)P(Catch|Cavity)P(Cavity)
= P(Toothache|Cavity)P(Catch|Cavity)P(Cavity)

I.e., 2 + 2 + 1 = 5 independent numbers (equations 1 and 2 remove 2)
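The factorisation can be checked numerically. The five numbers below are illustrative, not from the slides, though they are chosen to reproduce the joint entry 0.108 used earlier:

```python
# Sketch: the factorisation P(Toothache, Catch, Cavity) =
# P(Toothache|Cavity) P(Catch|Cavity) P(Cavity) defines the full joint.
p_cavity = 0.2                      # illustrative prior
p_tooth = {True: 0.6, False: 0.1}   # P(toothache | Cavity = v)
p_catch = {True: 0.9, False: 0.2}   # P(catch | Cavity = v)

def joint(t, c, cav):
    pt = p_tooth[cav] if t else 1 - p_tooth[cav]
    pc = p_catch[cav] if c else 1 - p_catch[cav]
    pcav = p_cavity if cav else 1 - p_cavity
    return pt * pc * pcav

# 5 independent numbers suffice: the 8 joint entries sum to 1
total = sum(joint(t, c, cav)
            for t in (True, False)
            for c in (True, False)
            for cav in (True, False))
print(round(total, 6))                    # 1.0
print(round(joint(True, True, True), 3))  # 0.108
```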


Conditional independence contd.

In most cases, the use of conditional independence reduces the size of the representation of the joint distribution from exponential in n to linear in n.

Conditional independence is our most basic and robustform of knowledge about uncertain environments.


Bayes’ Rule and conditional independence

P(Cavity|toothache ∧ catch)

= α P(toothache ∧ catch|Cavity)P(Cavity)

= α P(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause, Effect₁, ..., Effectₙ) = P(Cause) Πᵢ P(Effectᵢ|Cause)

Total number of parameters is linear in n

Paolo Turrini Intro to AI (2nd Part)

Page 61: Reasoning with Probabilities - dcs.warwick.ac.ukpturrini/Slides/Probability2.pdf · Arti cial Intelligence: a modern approach Chapter 14 ... doctor has bad news and good news. The

Intro to AI (2nd Part)

Bayes’ Rule and conditional independence

P(Cavity |toothache ∧ catch)

= αP(toothache ∧ catch|Cavity)P(Cavity)

= αP(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause,Effect1, . . . ,Effectn) = P(Cause)ΠiP(Effecti |Cause)

Total number of parameters is linear in n

Paolo Turrini Intro to AI (2nd Part)

Page 62: Reasoning with Probabilities - dcs.warwick.ac.ukpturrini/Slides/Probability2.pdf · Arti cial Intelligence: a modern approach Chapter 14 ... doctor has bad news and good news. The

Intro to AI (2nd Part)

Bayes’ Rule and conditional independence

P(Cavity |toothache ∧ catch)

= αP(toothache ∧ catch|Cavity)P(Cavity)

= αP(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause,Effect1, . . . ,Effectn) = P(Cause)ΠiP(Effecti |Cause)

Total number of parameters is linear in n

Paolo Turrini Intro to AI (2nd Part)

Page 63: Reasoning with Probabilities - dcs.warwick.ac.ukpturrini/Slides/Probability2.pdf · Arti cial Intelligence: a modern approach Chapter 14 ... doctor has bad news and good news. The

Intro to AI (2nd Part)

Bayes’ Rule and conditional independence

P(Cavity |toothache ∧ catch)

= αP(toothache ∧ catch|Cavity)P(Cavity)

= αP(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause,Effect1, . . . ,Effectn) = P(Cause)ΠiP(Effecti |Cause)

Total number of parameters is linear in n

Paolo Turrini Intro to AI (2nd Part)

Page 64: Reasoning with Probabilities - dcs.warwick.ac.ukpturrini/Slides/Probability2.pdf · Arti cial Intelligence: a modern approach Chapter 14 ... doctor has bad news and good news. The

Intro to AI (2nd Part)

Bayes’ Rule and conditional independence

P(Cavity |toothache ∧ catch)

= αP(toothache ∧ catch|Cavity)P(Cavity)

= αP(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause,Effect1, . . . ,Effectn) = P(Cause)ΠiP(Effecti |Cause)

Total number of parameters is linear in n

Paolo Turrini Intro to AI (2nd Part)

Page 65: Reasoning with Probabilities - dcs.warwick.ac.ukpturrini/Slides/Probability2.pdf · Arti cial Intelligence: a modern approach Chapter 14 ... doctor has bad news and good news. The

Intro to AI (2nd Part)

Bayes’ Rule and conditional independence

P(Cavity |toothache ∧ catch)

= αP(toothache ∧ catch|Cavity)P(Cavity)

= αP(toothache|Cavity)P(catch|Cavity)P(Cavity)

This is an example of a naive Bayes model:

P(Cause,Effect1, . . . ,Effectn) = P(Cause)ΠiP(Effecti |Cause)

Total number of parameters is linear in nPaolo Turrini Intro to AI (2nd Part)
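A minimal naive Bayes sketch for this query. The conditional probabilities are illustrative values consistent with the textbook's dentist example (P(cavity) = 0.2, P(toothache | cavity) = 0.6, P(catch | cavity) = 0.9, and so on); they are assumptions, not figures stated on the slide:

```python
# P(Cause) and P(Effect_i | Cause) — illustrative dentist-example values.
prior = {True: 0.2, False: 0.8}
p_effect = {
    'toothache': {True: 0.6, False: 0.1},   # P(toothache | Cavity)
    'catch':     {True: 0.9, False: 0.2},   # P(catch | Cavity)
}

def naive_bayes(observed):
    """P(Cause | observed effects) via P(Cause) * prod_i P(effect_i | Cause)."""
    unnorm = {c: prior[c] for c in (True, False)}
    for e in observed:
        for c in unnorm:
            unnorm[c] *= p_effect[e][c]
    z = sum(unnorm.values())        # 1/alpha, the normalisation constant
    return {c: p / z for c, p in unnorm.items()}

post = naive_bayes(['toothache', 'catch'])
print(round(post[True], 3))   # 0.871
```

Only one prior and n conditional tables are needed, which is why the parameter count grows linearly in the number of effects.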

The Wumpus World

[These slides stepped through a 4×4 Wumpus World grid as the agent explores; the figures are not recoverable from this extraction.]

Wumpus World

Pi,j = true iff [i, j] contains a pit
Bi,j = true iff [i, j] is breezy

Specifying the probability model

Include only B1,1, B1,2, B2,1 in the probability model!

The full joint distribution is P(P1,1, . . . , P4,4, B1,1, B1,2, B2,1)

Apply the product rule:
P(B1,1, B1,2, B2,1 | P1,1, . . . , P4,4) P(P1,1, . . . , P4,4)
(Do it this way to get P(Effect | Cause).)

First term: 1 if pits are adjacent to breezes, 0 otherwise

Second term: pits are placed randomly, with probability 0.2 per square:

P(P1,1, . . . , P4,4) = Πi,j P(Pi,j) = 0.2^n × 0.8^(16−n)

for n pits.
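The second term is easy to compute for any complete pit assignment. A small sketch (the particular pit squares are chosen arbitrarily for illustration):

```python
PIT = 0.2   # per-square pit probability from the slide

def prior(config):
    """P(P_{1,1}, ..., P_{4,4}) for a full assignment: 0.2^n * 0.8^(16-n)."""
    n = sum(config.values())
    return PIT ** n * (1 - PIT) ** (len(config) - n)

# Example: a 4x4 board with exactly 3 pits (squares picked arbitrarily).
board = {(i, j): False for i in range(1, 5) for j in range(1, 5)}
for sq in [(3, 1), (3, 3), (4, 4)]:
    board[sq] = True

print(prior(board))   # = 0.2**3 * 0.8**13
```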

Observations and query

We know the following facts:

b = ¬b1,1 ∧ b1,2 ∧ b2,1

explored = ¬p1,1 ∧ ¬p1,2 ∧ ¬p2,1

The query is P(P1,3 | explored, b)

Define Unexplored = the Pi,j variables other than P1,3 and Explored

Complexity

For inference by enumeration, we have

P(P1,3 | explored, b) = α Σunexplored P(P1,3, unexplored, explored, b)

There are 12 unknown squares, so the summation contains 2^12 = 4096 terms.

In general the summation grows exponentially with the number of squares!

And now?
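Even before any optimisation, the enumeration can be written down directly. The sketch below sums over all 2^13 assignments to the 12 unknown squares plus the query square, keeping only those consistent with the breeze observations; the layout and observations follow the slides, while the helper names are my own:

```python
from itertools import product

N = 4
PIT = 0.2
explored = {(1, 1), (1, 2), (2, 1)}                    # known pit-free squares
breeze = {(1, 1): False, (1, 2): True, (2, 1): True}   # the observation b
unknown = [(i, j) for i in range(1, N + 1) for j in range(1, N + 1)
           if (i, j) not in explored]

def neighbours(sq):
    i, j = sq
    return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= i + di <= N and 1 <= j + dj <= N]

def posterior_pit(query):
    """P(pit in `query` | explored, b) by enumerating all assignments."""
    weight = {True: 0.0, False: 0.0}
    for bits in product((True, False), repeat=len(unknown)):
        pits = dict(zip(unknown, bits))
        # Consistency: a square is breezy iff some neighbour contains a pit
        # (explored squares are pit-free, hence the .get default of False).
        if all(any(pits.get(n, False) for n in neighbours(sq)) == obs
               for sq, obs in breeze.items()):
            n_pits = sum(bits)
            weight[pits[query]] += PIT ** n_pits * (1 - PIT) ** (len(unknown) - n_pits)
    return weight[True] / (weight[True] + weight[False])

print(round(posterior_pit((1, 3)), 2))   # 0.31
print(round(posterior_pit((2, 2)), 2))   # 0.86
```

The outputs match the answers derived analytically on the later slides, but the loop visits thousands of terms; conditional independence lets us collapse almost all of them.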

Using conditional independence

Basic insight: observations are conditionally independent of other hidden squares given neighbouring hidden squares.

Define Unexplored = Fringe ∪ Other

P(b | P1,3, Explored, Unexplored) = P(b | P1,3, Explored, Fringe)

Manipulate the query into a form where we can use this!

Using conditional independence

Inference by enumeration:

P(P1,3 | explored, b) = α Σunexplored P(P1,3, unexplored, explored, b)

Product rule:

= α Σunexplored P(b | explored, P1,3, unexplored) P(P1,3, explored, unexplored)

Distinguishing the unknown (Unexplored = Fringe ∪ Other):

= α Σfringe Σother P(b | explored, P1,3, fringe, other) P(P1,3, explored, fringe, other)

Conditional independence (b does not depend on Other given the fringe):

= α Σfringe Σother P(b | explored, P1,3, fringe) P(P1,3, explored, fringe, other)

Pushing the sums inwards:

= α Σfringe P(b | explored, P1,3, fringe) Σother P(P1,3, explored, fringe, other)

Independence (pit locations are independent of one another):

= α Σfringe P(b | explored, P1,3, fringe) Σother P(P1,3) P(explored) P(fringe) P(other)

Reordering and pushing sums inwards:

= α P(explored) P(P1,3) Σfringe P(b | explored, P1,3, fringe) P(fringe) Σother P(other)

Simplifying (Σother P(other) = 1; absorb P(explored) into the normalisation constant):

= α′ P(P1,3) Σfringe P(b | explored, P1,3, fringe) P(fringe)
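The final expression needs only a sum over the fringe squares [2,2] and [3,1]. A minimal sketch of that fringe-only sum (the breeze conditions are hard-coded from the slides' scenario):

```python
from itertools import product

PIT = 0.2
fringe = [(2, 2), (3, 1)]   # hidden squares adjacent to the visited ones

def consistent(p13, pits):
    # b_{1,2} requires a pit in (1,3) or (2,2);
    # b_{2,1} requires a pit in (2,2) or (3,1).
    return (p13 or pits[(2, 2)]) and (pits[(2, 2)] or pits[(3, 1)])

def unnormalised(p13):
    """alpha' is dropped: P(P1,3) * sum over consistent fringe configs."""
    total = 0.0
    for bits in product((True, False), repeat=len(fringe)):
        pits = dict(zip(fringe, bits))
        if consistent(p13, pits):
            total += (PIT if bits[0] else 1 - PIT) * (PIT if bits[1] else 1 - PIT)
    return (PIT if p13 else 1 - PIT) * total

t, f = unnormalised(True), unnormalised(False)
print(round(t / (t + f), 2))   # 0.31
```

Only 2^2 = 4 fringe configurations are examined instead of thousands, yet the normalised answer is the same as full enumeration.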

Using conditional independence contd.

P(b | explored, P1,3, fringe)
= 1 when the frontier is consistent with the observations
= 0 otherwise

So we can sum over just those configurations of the frontier variables that are consistent with the known facts.

Using conditional independence contd.

P(P1,3 | explored, b) = α′ 〈0.2 (0.04 + 0.16 + 0.16), 0.8 (0.04 + 0.16)〉 ≈ 〈0.31, 0.69〉

P(P2,2 | explored, b) ≈ 〈0.86, 0.14〉