Naïve Bayes CS 760@UW-Madison

Source: pages.cs.wisc.edu/~yliang/cs760_spring21/slides/lecture8-naive-bayes.pdf


Page 1

Naïve Bayes
CS 760 @ UW-Madison

Page 2

Goals for the lecture

• understand the concepts
  • generative/discriminative models
  • MLE (Maximum Likelihood Estimate) and MAP (Maximum A Posteriori)
• Naïve Bayes
  • Naïve Bayes assumption
  • generic Naïve Bayes
  • model 1: Bernoulli Naïve Bayes
• other Naïve Bayes models
  • model 2: Multinomial Naïve Bayes
  • model 3: Gaussian Naïve Bayes
  • model 4: Multiclass Naïve Bayes

Page 3

Review: supervised learning

problem setting
• set of possible instances (feature space): X
• unknown target function (concept): f : X → Y
• set of hypotheses (hypothesis class): H

given
• training set of instances of the unknown target function f:
  (x^(1), y^(1)), (x^(2), y^(2)), …, (x^(m), y^(m))

output
• hypothesis h ∈ H that best approximates the target function

Page 4

Parametric hypothesis class
• each hypothesis h_θ ∈ H is indexed by a parameter θ ∈ Θ
• learning: find the θ ∈ Θ such that h_θ best approximates the target
• different from “nonparametric” approaches like decision trees and nearest neighbor
• advantages: various hypothesis classes encode inductive bias/prior knowledge; easy to apply math/optimization

Page 5

Discriminative approaches
• hypothesis h ∈ H directly predicts the label given the features:
  y = h(x), or more generally, p(y|x) = h(x)
• then define a loss function L(h) and find the hypothesis with minimum loss
• example: linear regression,
  h_θ(x) = θᵀx,  L(h_θ) = (1/m) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))²

Page 6

Generative approaches
• hypothesis specifies a generative story for how the data was created:
  h(x, y) = p(x, y), or simply h(x) = p(x), with h ∈ H
• then pick a hypothesis by maximum likelihood estimation (MLE) or Maximum A Posteriori (MAP)
• example: roll a weighted die
  • the weights for each side (θ) define how the data are generated
  • use MLE on the training data to learn θ

Page 7

Comments on discriminative/generative

• usually for supervised learning with a parametric hypothesis class
• can also be used for unsupervised learning
  • k-means clustering (discriminative flavor) vs. Mixture of Gaussians (generative)
• can also be nonparametric
  • nonparametric Bayesian methods: a large subfield of ML
• when is discriminative/generative likely to be better? discussed in a later lecture

• typical discriminative: linear regression, logistic regression, SVM, many neural networks (not all!), …

• typical generative: Naïve Bayes, Bayesian Networks, …

Page 8

MLE and MAP

Page 9

MLE vs. MAP


Maximum Likelihood Estimate (MLE)

Page 10

Background: MLE

Example: MLE of Exponential Distribution


Page 11

Background: MLE

Example: MLE of Exponential Distribution


Page 12

Background: MLE

Example: MLE of Exponential Distribution

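The worked derivation on these three slides was embedded as images in the original PDF; a standard reconstruction of the MLE for the exponential distribution:

```latex
% Data: x_1, \dots, x_m drawn i.i.d. with density
% p(x \mid \lambda) = \lambda e^{-\lambda x}, \quad x \ge 0.
% Log-likelihood:
\ell(\lambda) = \sum_{i=1}^{m} \log p(x_i \mid \lambda)
             = m \log \lambda - \lambda \sum_{i=1}^{m} x_i
% Set the derivative to zero:
\frac{d\ell}{d\lambda} = \frac{m}{\lambda} - \sum_{i=1}^{m} x_i = 0
\quad\Longrightarrow\quad
\hat{\lambda}_{\mathrm{MLE}} = \frac{m}{\sum_{i=1}^{m} x_i} = \frac{1}{\bar{x}}
```

The estimate is the reciprocal of the sample mean, which matches the exponential's mean of 1/λ.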

Page 13

MLE vs. MAP


Prior

Maximum Likelihood Estimate (MLE)

Maximum a posteriori (MAP) estimate
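The two estimators contrasted on this slide were given as images; in standard notation they are:

```latex
\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \; p(\mathcal{D} \mid \theta)
\qquad
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(\theta \mid \mathcal{D})
                            = \arg\max_{\theta} \; p(\mathcal{D} \mid \theta)\, p(\theta)
```

MAP differs from MLE only by the extra prior factor p(θ); with a uniform prior the two coincide.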

Page 14

Naïve Bayes

Page 15

Spam News

The Economist The Onion


Page 16

Model 0: Not-so-naïve Model?

Generative Story:
1. Flip a weighted coin (Y)
2. If heads, sample a document ID (X) from the Spam distribution
3. If tails, sample a document ID (X) from the Not-Spam distribution


Page 17

Model 0: Not-so-naïve Model?

Generative Story:
1. Flip a weighted coin (Y)
2. If heads, roll the yellow many-sided die to sample a document vector (X) from the Spam distribution
3. If tails, roll the blue many-sided die to sample a document vector (X) from the Not-Spam distribution


This model is computationally naïve!

Page 18

Model 0: Not-so-naïve Model?


(figure: flip a weighted coin; if HEADS, roll the yellow die; if TAILS, roll the blue die)

y  x1 x2 x3 …  xK
0  1  0  1  …  1
1  0  1  0  …  1
1  1  1  1  …  1
0  0  0  1  …  1
0  1  0  1  …  0
1  1  0  1  …  0

Each side of the die is labeled with a document vector (e.g. [1,0,1,…,1])

Page 19

Naïve Bayes Assumption

Conditional independence of features:

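The equation for this assumption was an image in the source; in standard form, the features are conditionally independent given the class:

```latex
P(X_1, \dots, X_K \mid Y) = \prod_{k=1}^{K} P(X_k \mid Y)
```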

Page 20

Estimating a joint from conditional probabilities

C P(C)
0 0.33
1 0.67

A C P(A|C)
0 0 0.2
0 1 0.5
1 0 0.8
1 1 0.5

B C P(B|C)
0 0 0.1
0 1 0.9
1 0 0.9
1 1 0.1

A B C P(A,B,C)
0 0 0 …
0 0 1 …
0 1 0 …
0 1 1 …
1 0 0 …
1 0 1 …
1 1 0 …
1 1 1 …
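The blank joint-table entries follow mechanically from the conditional tables once A and B are assumed conditionally independent given C; a small Python sketch using the numbers above:

```python
# Conditional probability tables transcribed from the slide.
p_c = {0: 0.33, 1: 0.67}                                   # P(C)
p_a_given_c = {(0, 0): 0.2, (0, 1): 0.5,
               (1, 0): 0.8, (1, 1): 0.5}                   # P(A|C)
p_b_given_c = {(0, 0): 0.1, (0, 1): 0.9,
               (1, 0): 0.9, (1, 1): 0.1}                   # P(B|C)

def joint(a, b, c):
    """P(A=a, B=b, C=c), assuming A and B are conditionally
    independent given C (the Naive Bayes assumption)."""
    return p_a_given_c[(a, c)] * p_b_given_c[(b, c)] * p_c[c]

# First row of the joint table: P(A=0, B=0, C=0) = 0.2 * 0.1 * 0.33
print(round(joint(0, 0, 0), 4))  # 0.0066
```

Summing `joint` over all eight (a, b, c) combinations gives 1, a quick sanity check that the factored tables define a valid joint distribution.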

Page 21

Estimating a joint from conditional probabilities

C P(C)
0 0.33
1 0.67

A C P(A|C)
0 0 0.2
0 1 0.5
1 0 0.8
1 1 0.5

B C P(B|C)
0 0 0.1
0 1 0.9
1 0 0.9
1 1 0.1

D C P(D|C)
0 0 0.1
0 1 0.1
1 0 0.9
1 1 0.1

A B D C P(A,B,D,C)
0 0 0 0 …
0 0 1 0 …
0 1 0 0 …
0 1 1 0 …
1 0 0 0 …
1 0 1 0 …
1 1 0 0 …
1 1 1 0 …
0 0 0 1 …
0 0 1 1 …
… … … … …

Page 22

Estimating a joint from conditional probabilities

Assuming conditional independence, the conditional probability tables encode the same information as the joint table.

They are very convenient for estimating P(X1, …, Xn | Y) = P(X1 | Y) · … · P(Xn | Y).

They are almost as good for computing the posterior: for all y,

P(Y = y | X1 = x1, …, Xn = xn) = P(X1 = x1, …, Xn = xn | Y = y) P(Y = y) / P(X1 = x1, …, Xn = xn)

Page 23

Generic Naïve Bayes Model

Model: product of the prior and the event model

Support: depends on the choice of event model, P(Xk|Y)

Training: find the class-conditional MLE parameters. For P(Y), we find the MLE using the data. For each P(Xk|Y), we condition on the data with the corresponding class.

Classification: find the class that maximizes the posterior
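The model and classification equations on this slide were images; the standard form of the generic Naïve Bayes model is:

```latex
p(\mathbf{x}, y) = p(y) \prod_{k=1}^{K} p(x_k \mid y)
\qquad
\hat{y} = \arg\max_{y}\; p(y \mid \mathbf{x})
        = \arg\max_{y}\; p(y) \prod_{k=1}^{K} p(x_k \mid y)
```

The denominator p(x) is constant across classes and can be dropped from the argmax.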

Page 24

Generic Naïve Bayes Model


Classification:

Page 25

Various Naïve Bayes Models

Page 26

Model 1: Bernoulli Naïve Bayes


Support: Binary vectors of length K

Generative Story:

Model:
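The generative story and model on this slide were images in the source; the standard Bernoulli Naïve Bayes form is:

```latex
% Generative story:
Y \sim \mathrm{Bernoulli}(\phi), \qquad
X_k \mid Y = y \;\sim\; \mathrm{Bernoulli}(\theta_{k,y}), \quad k = 1, \dots, K
% Model:
p(\mathbf{x}, y) = p(y) \prod_{k=1}^{K} \theta_{k,y}^{\,x_k}\,(1 - \theta_{k,y})^{1 - x_k}
```

Here φ is the class prior and θ_{k,y} is the probability that feature k fires in class y.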

Page 27

Model 1: Bernoulli Naïve Bayes


(figure: flip a weighted coin; if HEADS, flip each yellow coin; if TAILS, flip each blue coin)

y  x1 x2 x3 …  xK
0  1  0  1  …  1
1  0  1  0  …  1
1  1  1  1  …  1
0  0  0  1  …  1
0  1  0  1  …  0
1  1  0  1  …  0

Each red coin corresponds to an xk.

We can generate data in this fashion, though in practice we never would, since our data is given. Instead, this provides an explanation of how the data was generated (albeit a terrible one).

Page 28

Model 1: Bernoulli Naïve Bayes


Support: Binary vectors of length K

Generative Story:

Model:

Classification: Find the class that maximizes the posterior

Same as Generic Naïve Bayes

Page 29

Generic Naïve Bayes Model


Classification:

Page 30

Model 1: Bernoulli Naïve Bayes


Training: Find the class-conditional MLE parameters

For P(Y), we find the MLE using all the data. For each P(Xk|Y) we condition on the data with the corresponding class.
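Since the MLE here reduces to counting, training and classification can be sketched in a few lines of plain Python. The tiny dataset and all names below are invented for illustration; real use would add Laplace smoothing:

```python
# Toy training data: each row is a binary document vector, labels are 0/1.
X = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 0, 0]]
y = [1, 1, 0, 0]

def train_bernoulli_nb(X, y):
    """MLE for Bernoulli Naive Bayes:
    phi = P(Y=1); theta[c][k] = P(X_k = 1 | Y = c)."""
    phi = sum(y) / len(y)
    theta = []
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        theta.append([sum(r[k] for r in rows) / len(rows)
                      for k in range(len(X[0]))])
    return phi, theta

def predict(x, phi, theta):
    """Pick the class maximizing prior * product of Bernoulli likelihoods."""
    best_c, best_p = 0, -1.0
    for c, prior in ((0, 1 - phi), (1, phi)):
        p = prior
        for k, xk in enumerate(x):
            p *= theta[c][k] if xk == 1 else 1 - theta[c][k]
        if p > best_p:
            best_c, best_p = c, p
    return best_c

phi, theta = train_bernoulli_nb(X, y)
print(predict([1, 0, 1], phi, theta))  # 1
```

With data this small, several MLE estimates hit exactly 0 or 1 and zero out whole products at test time, which is the usual motivation for the MAP/smoothed counts mentioned earlier.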

Page 31

Model 2: Multinomial Naïve Bayes


Support: integer vector of word IDs

Generative Story:

Model:

(Assume M_i = M for all i)
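The equations here were images; the standard multinomial event model, under the slide's assumption that every document has the same length M, is:

```latex
% Support: \mathbf{x} = (x_1, \dots, x_M), each word ID x_m \in \{1, \dots, K\}
% Generative story:
Y \sim \mathrm{Bernoulli}(\phi), \qquad
X_m \mid Y = y \;\sim\; \mathrm{Categorical}(\boldsymbol{\theta}_y), \quad m = 1, \dots, M
% Model:
p(\mathbf{x}, y) = p(y) \prod_{m=1}^{M} \theta_{y,\,x_m}
```

Each class y has one die θ_y over the K-word vocabulary, rolled once per word position.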

Page 32

Model 3: Gaussian Naïve Bayes


Model: Product of prior and the event model

Support:
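The support and model on this slide were images; the standard Gaussian event model is:

```latex
% Support: \mathbf{x} \in \mathbb{R}^K
X_k \mid Y = y \;\sim\; \mathcal{N}(\mu_{k,y}, \sigma_{k,y}^2)
\qquad
p(\mathbf{x}, y) = p(y) \prod_{k=1}^{K}
  \frac{1}{\sqrt{2\pi\sigma_{k,y}^2}}
  \exp\!\left(-\frac{(x_k - \mu_{k,y})^2}{2\sigma_{k,y}^2}\right)
```

Training fits a per-class mean and variance for each feature by MLE.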

Page 33

Model 4: Multiclass Naïve Bayes


Model:
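The model equation here was an image; in the standard multiclass form, the only change from the binary models above is a categorical prior over C classes:

```latex
Y \sim \mathrm{Categorical}(\phi_1, \dots, \phi_C), \qquad
p(\mathbf{x}, y) = p(y) \prod_{k=1}^{K} p(x_k \mid y)
```

Any of the previous event models (Bernoulli, multinomial, Gaussian) can be used for P(Xk|Y).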

Page 34

Summary: Generative Approach

• Step 1: specify the joint data distribution (generative story)
• Step 2: use MLE or MAP for training
• Step 3: use Bayes’ rule for inference on test instances

Page 35

THANK YOU

Some of the slides in these lectures have been adapted/borrowed from materials developed by Mark Craven, David Page, Jude Shavlik, Tom Mitchell, Nina Balcan, Matt Gormley, Elad Hazan, Tom Dietterich, and Pedro Domingos.