Quantifying Uncertainty & Bayesian Networks
CE417: Introduction to Artificial Intelligence
Sharif University of Technology
Spring 2016
"Artificial Intelligence: A Modern Approach", 3rd Edition, Chapter 13 & Sections 14.1-14.2
Soleymani
Outline
Uncertainty
Probability & basic notations
Inference in probability theory
Bayes' rule
Bayesian networks
Representing probabilistic knowledge as a Bayesian network
Automated taxi example
A_t: "leave for airport t minutes before flight"
"Will A_t get us to the airport on time?"
"A_90 will get us there on time if the car does not break down or run out of gas, and there's no accident on the bridge, and I don't get into an accident, and …"
Qualification problem
Probability: summarizing uncertainty
Problems with logic
Laziness: failure to enumerate exceptions, qualifications, etc.
Theoretical or practical ignorance: lack of a complete theory or lack of all relevant facts and information
Probability summarizes the uncertainty
Probabilities are assessed w.r.t. the current knowledge state (not w.r.t. the real world)
Probabilities of propositions can change with new evidence
e.g., P(A_90 gets us there on time) = 0.7
P(A_90 gets us there on time | time = 5 p.m.) = 0.6
Automated taxi example: rational decision
Choosing A_90 can be a rational decision depending on the performance measure
Performance measure: getting to the airport on time, avoiding a long wait at the airport, avoiding speeding tickets along the way, …
Maximizing expected utility
Decision theory = probability theory + utility theory
Utility theory: assigning a degree of usefulness to each state and preferring states with higher utility
Basic probability notations (sets)
Sample space (Ξ©): set of all possible worlds
Probability model
0 ≤ P(ω) ≤ 1  (∀ω ∈ Ω)    where ω is an atomic event (possible world)
P(Ω) = 1
P(A) = Σ_{ω ∈ A} P(ω)  (∀A ⊆ Ω)    where A is a set of possible worlds
Basic probability notations (propositions)
S: set of all propositional sentences
Probability model
0 ≤ P(a) ≤ 1  (∀a ∈ S)
If a is a tautology, P(a) = 1
P(a) = Σ_{ω ∈ Ω: ω ⊨ a} P(ω)  (∀a ∈ S)
Basic probability notations (propositions)
Each proposition corresponds to a set of possible worlds
A possible world is defined as an assignment of values to all
random variables under consideration.
Elementary proposition
e.g., Dice1 = 4
Complex propositions
e.g., Dice1 = 4 ∨ Dice2 = 6
Random variables
Random variables: variables in probability theory
Domain of random variables: Boolean, discrete, or continuous
Boolean
e.g., the domain of Cavity: {true, false}
Cavity = true is abbreviated as cavity
Discrete
e.g., the domain of Weather is {sunny, rainy, cloudy, snow}
Continuous
Probability distribution: the function describing probabilities of possible values of a random variable
P(Weather = sunny) = 0.6, P(Weather = rainy) = 0.1, …
Probabilistic inference
Joint probability distribution
specifies probability of every atomic event
Prior and posterior probabilities
belief in the absence or presence of evidence
Bayes' rule
used when we don't have P(a | b) but we have P(b | a)
Independence
Probabilistic inference
If we consider the full joint distribution as the KB
Every query can be answered from it
Probabilistic inference: computing the posterior distribution of query variables given evidence
An agent needs to make decisions based on the obtained evidence
Joint probability distribution
Joint probability distribution
Probability of all combinations of the values for a set of random vars.
Queries can be answered by summing over atomic events
P(Weather, Cavity): joint probability as a 4 × 2 table of values

                  Weather=sunny   Weather=rainy   Weather=cloudy   Weather=snow
Cavity = true         0.144           0.02            0.016           0.02
Cavity = false        0.576           0.08            0.064           0.08
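As a concrete check of the table above, here is a minimal Python sketch that answers marginal queries by summing atomic events; the dictionary layout and helper name are illustrative, not part of the slides.

joint = {
    # (weather, cavity): probability, values from the table above
    ("sunny", True): 0.144, ("rainy", True): 0.02,
    ("cloudy", True): 0.016, ("snow", True): 0.02,
    ("sunny", False): 0.576, ("rainy", False): 0.08,
    ("cloudy", False): 0.064, ("snow", False): 0.08,
}

def prob(pred):
    # Sum the probabilities of all atomic events satisfying the predicate.
    return sum(p for event, p in joint.items() if pred(event))

print(prob(lambda e: e[0] == "sunny"))  # P(Weather = sunny) = 0.72
print(prob(lambda e: e[1]))             # P(cavity) = 0.2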
Prior and posterior probabilities
Prior or unconditional probabilities of propositions: belief in
the absence of any other evidence
e.g., P(cavity) = 0.2
P(Weather = sunny) = 0.72
Posterior or conditional probabilities: belief in the presence of evidence
e.g., P(cavity | toothache)
P(a | b) = P(a ∧ b) / P(b)   if P(b) > 0
Sum rule and product rule
Product rule (obtained from the definition of conditional probability):
P(a, b) = P(a | b) P(b) = P(b | a) P(a)
Sum rule:
P(a) = Σ_b P(a, b)
Inference: example
Joint probability distribution:
Conditional probabilities:
P(¬cavity | toothache)
= P(¬cavity ∧ toothache) / P(toothache)
= Σ_{catch} P(¬cavity, toothache, catch) / (Σ_{catch} Σ_{cavity} P(cavity, toothache, catch))
= (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4
              toothache             ¬toothache
              catch     ¬catch      catch     ¬catch
cavity        0.108     0.012       0.072     0.008
¬cavity       0.016     0.064       0.144     0.576
P(X | e) = Σ_y P(X, e, y) / Σ_x Σ_y P(x, e, y)
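The same computation as a short Python sketch over the eight-entry table (the tuple-keyed representation is illustrative):

joint = {
    # (toothache, catch, cavity): probability, values from the table above
    (True, True, True): 0.108,   (True, False, True): 0.012,
    (False, True, True): 0.072,  (False, False, True): 0.008,
    (True, True, False): 0.016,  (True, False, False): 0.064,
    (False, True, False): 0.144, (False, False, False): 0.576,
}

# Numerator: sum out Catch; denominator: sum out Catch and Cavity.
num = sum(p for (t, _, c), p in joint.items() if t and not c)
den = sum(p for (t, _, _), p in joint.items() if t)
print(num / den)  # P(¬cavity | toothache) = 0.4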
Bayes' rule
Bayes' rule: P(a | b) = P(b | a) P(a) / P(b)
P(a, b) = P(a | b) P(b) = P(b | a) P(a)
In many problems, it may be difficult to compute P(a | b) directly, yet we might have information about P(b | a).
Computing diagnostic probability from causal probability:
P(Cause | Effect) = P(Effect | Cause) P(Cause) / P(Effect)
Bayes' rule: example
Meningitis (m) & stiff neck (s)
P(m) = 1/50000
P(s) = 0.01
P(s | m) = 0.7
P(m | s) = ?
P(m | s) = P(s | m) P(m) / P(s) = 0.7 × 0.00002 / 0.01 = 0.0014
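A quick numeric check of this computation in Python:

p_m = 1 / 50000        # P(m)
p_s = 0.01             # P(s)
p_s_given_m = 0.7      # P(s | m)
print(p_s_given_m * p_m / p_s)  # P(m | s) = 0.0014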
Independence
Propositions a and b are independent iff
P(a | b) = P(a)
P(b | a) = P(b)
P(a, b) = P(a) P(b)
n independent biased coins
The number of required independent probabilities is reduced from 2^n − 1 to n
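For instance, for n = 10 coins the full joint distribution has 2^10 − 1 = 1023 independent entries, while independence reduces this to 10 numbers.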
Independence
P(toothache, catch, cavity, Weather) = P(toothache, catch, cavity) P(Weather)
The number of required independent probabilities is reduced from 31 = (2^3 × 4 − 1) to 10 = (2^3 − 1) + (4 − 1)
Independence and conditional independence
Independent random variables (no link between A and B):
P(A, B) = P(A) P(B)
Conditionally independent random variables: A and B are only conditionally independent given C (their common parent: C → A, C → B):
P(A, B | C) = P(A | C) P(B | C)
Conditional independence
Naïve Bayes model:
P(Cause, Effect_1, …, Effect_n) = P(Cause) ∏_{i=1}^n P(Effect_i | Cause)
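A minimal sketch of this factorization on the cavity model, treating Toothache and Catch as the effects; the conditional probabilities below are illustrative values consistent with the dental joint table used earlier.

p_cavity = 0.2
p_effect_given_cavity = {
    # P(effect = true | Cavity): assumed values consistent with the earlier table
    "toothache": {True: 0.6, False: 0.1},
    "catch":     {True: 0.9, False: 0.2},
}

def nb_joint(cavity, toothache, catch):
    # P(Cavity, Toothache, Catch) = P(Cavity) * prod_i P(Effect_i | Cavity)
    p = p_cavity if cavity else 1 - p_cavity
    for name, value in (("toothache", toothache), ("catch", catch)):
        p_true = p_effect_given_cavity[name][cavity]
        p *= p_true if value else 1 - p_true
    return p

print(nb_joint(True, True, True))  # 0.2 * 0.6 * 0.9 = 0.108, matching the table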
Cavity example
Topology of network encodes conditional independencies
Weather is independent of the other variables
Toothache and Catch are conditionally independent given Cavity
Wumpus example
b = ¬b_{1,1} ∧ b_{1,2} ∧ b_{2,1}
known = ¬p_{1,1} ∧ ¬p_{1,2} ∧ ¬p_{2,1}
P(P_{1,3} | known, b) = ?
Wumpus example
Possible worlds with P_{1,3} = true vs. possible worlds with P_{1,3} = false:
P(P_{1,3} = true | known, b) ∝ 0.2 × (0.2 × 0.2 + 0.2 × 0.8 + 0.8 × 0.2)
P(P_{1,3} = false | known, b) ∝ 0.8 × (0.2 × 0.2 + 0.2 × 0.8)
⟹ P(P_{1,3} = true | known, b) ≈ 0.31
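The normalization behind the 0.31 figure, as a small Python check:

p_true = 0.2 * (0.2 * 0.2 + 0.2 * 0.8 + 0.8 * 0.2)  # unnormalized P(P_{1,3} = true | known, b)
p_false = 0.8 * (0.2 * 0.2 + 0.2 * 0.8)             # unnormalized P(P_{1,3} = false | known, b)
print(p_true / (p_true + p_false))                  # ≈ 0.31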
Bayesian networks
Importance of independence and conditional independence
relationships (to simplify representation)
Bayesian network: a graphical model to represent
dependencies among variables
compact specification of full joint distributions
easier for human to understand
A Bayesian network is a directed acyclic graph
Each node shows a random variable
Each link from X to Y shows a "direct influence" of X on Y (X is a parent of Y)
For each node, a conditional probability distribution P(X_i | Parents(X_i)) shows the effects of the parents on the node
Burglary example
"A burglar alarm responds occasionally to minor earthquakes. Neighbors John and Mary call you when they hear the alarm. John nearly always calls when he hears the alarm. Mary often misses the alarm."
Variables:
Burglary
Earthquake
Alarm
JohnCalls
MaryCalls
Burglary example
Conditional Probability Table (CPT):

A    P(J = true | A)    P(J = false | A)
t         0.90                0.10
f         0.05                0.95
Burglary example
John does not perceive burglaries directly
John does not perceive minor earthquakes
The full joint distribution can be defined as the product of the local conditional distributions (using the chain rule):
P(x_1, …, x_n) = ∏_{i=1}^n P(x_i | Parents(X_i))
The chain rule is derived by successive application of the product rule:
P(x_1, …, x_n) = P(x_1, …, x_{n−1}) P(x_n | x_1, …, x_{n−1})
= P(x_1, …, x_{n−2}) P(x_{n−1} | x_1, …, x_{n−2}) P(x_n | x_1, …, x_{n−1})
= …
= P(x_1) ∏_{i=2}^n P(x_i | x_1, …, x_{i−1})
Semantics of Bayesian networks
Burglary example (joint probability)
We can compute joint probabilities from CPTs:
P(j ∧ m ∧ a ∧ ¬b ∧ ¬e)
= P(j | a) P(m | a) P(a | ¬b ∧ ¬e) P(¬b) P(¬e)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.000628
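Checking the product numerically:

p = 0.9 * 0.7 * 0.001 * 0.999 * 0.998
#   P(j|a) P(m|a) P(a|¬b,¬e) P(¬b) P(¬e)
print(p)  # ≈ 0.000628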
Burglary example
"John calls to say my alarm is ringing, but Mary doesn't call. Is there a burglar?"
P(B | j ∧ ¬m)?
This conditional probability can be computed from the
joint probabilities as discussed earlier:
P(X | e) = Σ_y P(X, e, y) / Σ_x Σ_y P(x, e, y)
{X} ∪ E ∪ Y = V, where V is the set of random variables in the Bayesian network
Compactness
Locally structured
A CPT for a Boolean variable with k Boolean parents requires:
2^k rows for the combinations of parent values
k = 0: one row, showing the prior probabilities of each possible value of that variable
If each variable has no more than k parents
the Bayesian network requires O(n × 2^k) numbers (linear in n)
The full joint distribution requires O(2^n) numbers
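For instance, with n = 30 Boolean variables and at most k = 5 parents each, the network needs 30 × 2^5 = 960 numbers, whereas the full joint distribution needs 2^30 − 1, over a billion.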
Node ordering: burglary example
The structure of the network, and hence the number of required probabilities, can differ for different node orderings
e.g., three different orderings of the burglary-network variables require 1 + 1 + 4 + 2 + 2 = 10 (causal order), 1 + 2 + 4 + 2 + 4 = 13, and 1 + 2 + 4 + 8 + 16 = 31 independent numbers, respectively.
Constructing Bayesian networks
I. Nodes:
determine the set of variables and order them as X_1, …, X_n
(the network is more compact if causes precede effects)
II. Links:
for i = 1 to n:
1) select a minimal set of parents for X_i from X_1, …, X_{i−1} such that
P(X_i | Parents(X_i)) = P(X_i | X_1, …, X_{i−1})
2) for each parent, insert a link from the parent to X_i
3) create the CPT based on P(X_i | Parents(X_i))
Node ordering: Burglary example

Suppose we choose the ordering M, J, A, B, E:
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? No
P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes
Causal models
Some new links represent relationships that require difficult and unnatural
probability judgments
Deciding conditional independence is hard in non-causal directions
Why use Bayesian networks?
Compact representation of probability distributions: smaller
number of parameters
Instead of storing the full joint distribution, which requires a large number of parameters
Incorporation of domain knowledge and causal structures
Algorithms for systematic and efficient inference/learning
Exploiting the graph structure and probabilistic semantics
Take advantage of conditional and marginal independences among
random variables
Inference query
Nodes: V = {X_1, …, X_n}
Evidence: an assignment of values x_E to a set X_E of nodes in the network
Likelihood: P(x_E) = Σ_{X_H} P(X_H, x_E)   (V = X_H ∪ X_E)
A posteriori belief: P(X_H | x_E) = P(X_H, x_E) / Σ_{X_H} P(X_H, x_E)
For a single query variable X: P(X | x_E) = Σ_Y P(X, Y, x_E) / Σ_X Σ_Y P(X, Y, x_E)   (X_H = {X} ∪ Y)
Partial joint and conditional examples
Partial joint probability distribution (the joint distribution over a subset of the variables) can be computed from the full joint distribution through marginalization, e.g., P(J, ¬M, B):
P(J, ¬M, B) = Σ_A Σ_E P(J, ¬M, B, A, E)
Conditional probability distribution: can be computed from the full joint distribution through marginalization and the definition of conditional probability
Burglary example: P(B | J, ¬M)?
P(B | J, ¬M) = P(J, ¬M, B) / P(J, ¬M) = Σ_A Σ_E P(J, ¬M, B, A, E) / Σ_B Σ_A Σ_E P(J, ¬M, B, A, E)
Burglary example: full joint probability
P(B | J, ¬M) = P(J, ¬M, B) / P(J, ¬M)
= Σ_A Σ_E P(J, ¬M, B, A, E) / Σ_B Σ_A Σ_E P(J, ¬M, B, A, E)
= Σ_A Σ_E P(J | A) P(¬M | A) P(A | B, E) P(B) P(E) / Σ_B Σ_A Σ_E P(J | A) P(¬M | A) P(A | B, E) P(B) P(E)
Short-hands: J: JohnCalls = true, ¬B: Burglary = false, …
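A sketch of this inference by enumeration in Python. The CPT values are the standard ones for this textbook example; several of them (e.g., the P(A | B, E) entries) do not appear in the slide text and are assumed here.

from itertools import product

P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}  # P(A = true | B, E)
P_J = {True: 0.90, False: 0.05}                     # P(J = true | A)
P_M = {True: 0.70, False: 0.01}                     # P(M = true | A)

def joint(b, e, a, j, m):
    # Chain rule: product of the local conditional distributions.
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_J[a] if j else 1 - P_J[a]
    p *= P_M[a] if m else 1 - P_M[a]
    return p

# Sum out the hidden variables A and E, then normalize over B.
unnorm = {b: sum(joint(b, e, a, True, False)
                 for e, a in product([True, False], repeat=2))
          for b in (True, False)}
print(unnorm[True] / sum(unnorm.values()))  # P(B = true | J, ¬M) ≈ 0.0051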
Inference in Bayesian networks
Computing P(X_H | x_E) in an arbitrary graphical model is NP-hard
Exact inference: enumeration is intractable (NP-hard) in general
Special cases are tractable
Enumeration
P(X | x_E) ∝ P(X, x_E)
P(X, x_E) = Σ_Y P(X, x_E, Y): exponential in general (with respect to the number of nodes in Y)
We cannot find a general procedure that works efficiently for arbitrary networks
Sometimes the structure of the network allows us to infer more efficiently
avoiding the exponential cost
Can be improved by re-using calculations
similar to dynamic programming
Distribution of products on sums
Exploiting the factorization properties to allow sums and products to be interchanged
a × b + a × c needs three operations, while a × (b + c) requires two
Variable elimination: example
P(B, j) = Σ_A Σ_E Σ_M P(B) P(E) P(A | B, E) P(j | A) P(M | A)
= P(B) Σ_E P(E) Σ_A P(A | B, E) P(j | A) Σ_M P(M | A)
P(B | j) ∝ P(B, j)
Intermediate results are probability distributions
Variable elimination: example
P(B, j) = Σ_A Σ_E Σ_M P(B) P(E) P(A | B, E) P(j | A) P(M | A)
= P(B) Σ_E P(E) Σ_A P(A | B, E) P(j | A) Σ_M P(M | A)
Factors: f_1(B) = P(B), f_2(E) = P(E), f_3(A, B, E) = P(A | B, E), f_4(A) = P(j | A), f_5(A, M) = P(M | A), f_6(A) = Σ_M f_5(A, M)
f_7(B, E) = Σ_A f_3(A, B, E) × f_4(A) × f_6(A)
f_8(B) = Σ_E f_2(E) × f_7(B, E)
P(B | j) ∝ P(B, j) = f_1(B) × f_8(B)
Intermediate results are probability distributions
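A compact sketch of the factor operations used above, under an illustrative representation: a factor is a pair (variables, table), where the table maps a tuple of Boolean values, ordered as in the variable list, to a number. The P(A | B, E) entries are assumed textbook values not shown in the slide text.

from itertools import product

def multiply(f, g):
    # Pointwise product of two factors over the union of their variables.
    (fv, ft), (gv, gt) = f, g
    vs = list(dict.fromkeys(fv + gv))
    table = {}
    for vals in product([True, False], repeat=len(vs)):
        asg = dict(zip(vs, vals))
        table[vals] = (ft[tuple(asg[v] for v in fv)]
                       * gt[tuple(asg[v] for v in gv)])
    return vs, table

def sum_out(var, f):
    # Eliminate `var` from factor f by summation.
    fv, ft = f
    vs = [v for v in fv if v != var]
    table = {}
    for vals, p in ft.items():
        key = tuple(v for name, v in zip(fv, vals) if name != var)
        table[key] = table.get(key, 0.0) + p
    return vs, table

f3 = (["A", "B", "E"],
      {(a, b, e): (p if a else 1 - p)
       for (b, e), p in {(True, True): 0.95, (True, False): 0.94,
                         (False, True): 0.29, (False, False): 0.001}.items()
       for a in (True, False)})                     # f3(A, B, E) = P(A | B, E)
f4 = (["A"], {(True,): 0.90, (False,): 0.05})       # f4(A) = P(j | A)
f5 = (["A", "M"], {(a, m): (p if m else 1 - p)
                   for a, p in {True: 0.70, False: 0.01}.items()
                   for m in (True, False)})         # f5(A, M) = P(M | A)

f6 = sum_out("M", f5)                               # f6(A) = Σ_M f5(A, M)
f7 = sum_out("A", multiply(f3, multiply(f4, f6)))   # f7(B, E), as on the slide
print(f7)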
Variable elimination: Order of summations
An inefficient order:
P(B, j) = Σ_M Σ_E Σ_A P(B) P(E) P(A | B, E) P(j | A) P(M | A)
= P(B) Σ_M Σ_E P(E) Σ_A P(A | B, E) P(j | A) P(M | A)
The product inside the sums is a factor over (A, B, E, M); summing M outermost repeats the inner computation for every value of M, instead of eliminating M first.
Variable elimination:
Pruning irrelevant variables
Any variable that is not an ancestor of a query variable or
evidence variable is irrelevant to the query.
Prune all non-ancestors of query or evidence variables, e.g., for the query P(B | JohnCalls = true), MaryCalls is a non-ancestor and can be pruned.
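A small sketch of this pruning step; the parents map encodes the burglary network's structure, and the helper name is illustrative.

parents = {"Burglary": [], "Earthquake": [],
           "Alarm": ["Burglary", "Earthquake"],
           "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"]}

def relevant(nodes):
    # Keep the query/evidence nodes plus all of their ancestors.
    keep, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(parents[n])
    return keep

# For P(Burglary | JohnCalls = true), MaryCalls is pruned:
print(relevant({"Burglary", "JohnCalls"}))
# -> {'Burglary', 'JohnCalls', 'Alarm', 'Earthquake'} (MaryCalls excluded)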
Variable elimination algorithm
Given: a Bayesian network, evidence x_E, and a query P(Q | x_E)
Prune non-ancestors of {Q, X_E}
Choose an ordering on the variables, e.g., X_1, …, X_n
For i = 1 to n: if X_i ∉ {Q, X_E}
  collect the factors f_1, …, f_k that include X_i
  generate a new factor by eliminating X_i from these factors:
  g = Σ_{X_i} ∏_{j=1}^k f_j
  (after this summation, X_i is eliminated)
Normalize P(Q, x_E) to obtain P(Q | x_E)
Variable elimination algorithm
β’ Evaluating expressions in a proper order
β’ Storing intermediate results
β’ Summation only for those portions of the expression that
depend on that variable
Variable elimination
Eliminates non-observed, non-query variables one by one by summation, distributing the sum over the product
Complexity is determined by the size of the largest factor
Variable elimination can lead to significant cost savings, but its efficiency depends on the network structure
There are still cases in which this algorithm will lead to exponential time
Summary
Probability is the most common way to represent uncertain
knowledge
Whole knowledge: the joint probability distribution in probability theory (instead of the truth table for a KB in two-valued logic)
Independence and conditional independence can be used to
provide a compact representation of joint probabilities
Bayesian networks provide a compact representation of the joint distribution through the network topology and CPTs
using independencies and conditional independencies
They can also lead to more efficient inference using these independencies