
Page 1: Naïve Bayes and Logistic Regression
Page 2:

Naïve Bayes and Logistic Regression

Naïve Bayes
• Generative model: $\hat{c} = \arg\max_c P(c)\,P(d \mid c)$
• Features assumed to be independent

Logistic Regression
• Discriminative model: $\hat{c} = \arg\max_c P(c \mid d)$
• Features don't have to be independent
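
To make the contrast concrete, here is a toy sketch of the two decision rules in Python; all probability values below are made up for illustration:

```python
# Generative (Naive Bayes): model P(c) and P(d|c), then pick
# the class maximizing P(c) * P(d|c).
prior = {"pos": 0.6, "neg": 0.4}          # P(c), assumed values
likelihood = {"pos": 0.02, "neg": 0.05}   # P(d|c) for one document d
nb_pred = max(prior, key=lambda c: prior[c] * likelihood[c])
print(nb_pred)  # "neg": 0.4 * 0.05 = 0.020 beats 0.6 * 0.02 = 0.012

# Discriminative (logistic regression): model P(c|d) directly.
posterior = {"pos": 0.35, "neg": 0.65}    # P(c|d), assumed values
lr_pred = max(posterior, key=posterior.get)
print(lr_pred)  # "neg"
```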

Page 3:

Logistic Regression Summary

• Input features: $f(x) \to [f_1, f_2, \ldots, f_n]$
• Output: estimate $P(y = c \mid x)$ for each class $c$
  – Need to model $P(y = c \mid x)$ with a family of functions
• Train phase: learn the parameters of the model to minimize a loss function
  – Need a loss function and an optimization algorithm
• Test phase: apply the learned parameters to predict the class of a new input

Page 4:

Binary Logistic Regression

• Input features: $f(x) \to [f_1, f_2, \ldots, f_n]$
• Output: $P(y = 1 \mid x)$ and $P(y = 0 \mid x)$
• Classification function (sigmoid): $\sigma(z) = \frac{1}{1 + e^{-z}}$, where $z = \mathbf{v} \cdot \mathbf{f}(\mathbf{x})$

Example — Features: $[1, \mathrm{count}(\text{"amazing"}), \mathrm{count}(\text{"horrible"}), \ldots]$, where the constant feature 1 plays the role of the bias term; Weights: $[-1.0, 0.8, -0.4, \ldots]$
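
A worked version of the slide's example in Python; the specific document, and hence the feature counts, are assumed for illustration:

```python
import math

def sigmoid(z):
    """Classification function: squashes z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical review containing "amazing" twice and "horrible" never;
# the leading 1 is the constant bias feature from the slide.
f = [1, 2, 0]
v = [-1.0, 0.8, -0.4]  # weights from the slide

z = sum(vi * fi for vi, fi in zip(v, f))  # v . f(x) = -1.0 + 1.6 + 0.0 = 0.6
print(sigmoid(z))  # P(y=1|x) ~ 0.646
```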

Page 5:

Learning the weights

• Goal: predict a label $\hat{y}$ as close as possible to the actual label $y$
• Distance metric / loss function: $L(\hat{y}, y)$
• Maximum likelihood estimate: choose parameters so that $\log P(y \mid x)$ is maximized over the training dataset

Maximize $\log \prod_{i=1}^{n} P(y^{(i)} \mid x^{(i)})$, where $(x^{(i)}, y^{(i)})$ are paired documents and labels.

Page 6:

Binary Cross Entropy Loss

• Let $\hat{y} = \sigma(\mathbf{v} \cdot \mathbf{f}(\mathbf{x}))$
• Classifier probability: $P(y \mid x) = \hat{y}^{\,y} \, (1 - \hat{y})^{1-y}$
• Log probability: $\log P(y \mid x) = y \log \hat{y} + (1 - y) \log(1 - \hat{y})$

$y = 1$: $P(y \mid x) = \hat{y}$    $y = 0$: $P(y \mid x) = 1 - \hat{y}$

Page 7:

Binary Cross Entropy Loss

• Let $\hat{y} = \sigma(\mathbf{v} \cdot \mathbf{f}(\mathbf{x}))$
• Classifier probability: $P(y \mid x) = \hat{y}^{\,y} \, (1 - \hat{y})^{1-y}$
• Log probability: $\log P(y \mid x) = y \log \hat{y} + (1 - y) \log(1 - \hat{y})$
• Loss:

$L(\hat{y}, y) = -\log \prod_{i=1}^{n} P(y^{(i)} \mid x^{(i)}) = -\sum_{i=1}^{n} \log P(y^{(i)} \mid x^{(i)}) = -\sum_{i=1}^{n} \left[ y^{(i)} \log \hat{y}^{(i)} + \left(1 - y^{(i)}\right) \log\left(1 - \hat{y}^{(i)}\right) \right]$

This is the cross-entropy between the true distribution $P(y \mid x)$ and the predicted distribution $P(\hat{y} \mid x)$.
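
A minimal sketch of this loss in code; the function name and the `eps` clamp are illustrative additions (the clamp keeps `log()` finite when a prediction hits 0 or 1):

```python
import math

def bce_loss(y_true, y_hat, eps=1e-12):
    """Binary cross-entropy over a dataset of (y, y_hat) pairs.

    y_true: gold labels in {0, 1}; y_hat: predicted P(y=1|x) values.
    """
    total = 0.0
    for y, p in zip(y_true, y_hat):
        p = min(max(p, eps), 1.0 - eps)   # keep log() finite
        total += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return -total

# Confident and correct -> loss near 0; confident and wrong -> large loss.
print(bce_loss([1, 0], [0.99, 0.01]))   # ~0.02
print(bce_loss([1, 0], [0.01, 0.99]))   # ~9.21
```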

Page 8:

Binary Cross Entropy Loss

• Cross Entropy Loss:

$L_{CE} = -\sum_{i=1}^{n} \left[ y^{(i)} \log \hat{y}^{(i)} + \left(1 - y^{(i)}\right) \log\left(1 - \hat{y}^{(i)}\right) \right]$

• Ranges from 0 (perfect predictions) to $+\infty$
• Lower loss = better classifier

Page 9:

Multinomial Logistic Regression

• Input features: $f(x) \to [f_1, f_2, \ldots, f_n]$
• Output: $P(y = c \mid x)$ for each class $c$
• Classification function (softmax):

$P(y = c \mid x) = \frac{\exp(\mathbf{v} \cdot \mathbf{f}(\mathbf{x}, c))}{\sum_{c'} \exp(\mathbf{v} \cdot \mathbf{f}(\mathbf{x}, c'))}$

The denominator normalizes the scores into a probability distribution. Note that the features are a function of both the input $x$ and the output class $c$.
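
A short sketch of the softmax in code; the max-subtraction is a standard stability trick not on the slide, and it does not change the result:

```python
import math

def softmax(scores):
    """Map raw scores z_c = v . f(x, c) to a probability distribution."""
    m = max(scores)                          # avoid overflow in exp()
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)                        # the normalization term
    return [e / total for e in exps]

# Scores for three hypothetical classes
print(softmax([2.0, 1.0, -1.0]))   # ~[0.705, 0.259, 0.035]
```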

Page 10:

Multinomial Logistic Regression

Page 11:

Multinomial Logistic Regression

• Generalize the binary loss to the multinomial CE loss:

$L_{CE}(\hat{y}, y) = -\sum_{c=1}^{K} \mathbb{1}\{y = c\} \log P(y = c \mid x) = -\sum_{c=1}^{K} \mathbb{1}\{y = c\} \log \frac{\exp(\mathbf{v}_c \cdot \mathbf{f}(\mathbf{x}, c))}{\sum_{y'=1}^{K} \exp(\mathbf{v}_{y'} \cdot \mathbf{f}(\mathbf{x}, y'))}$
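
A sketch of this loss for a single example; the log-sum-exp form is an implementation detail assumed here for numerical stability:

```python
import math

def multinomial_ce_loss(scores, gold_class):
    """Multinomial cross-entropy loss for one example.

    scores[c] = v_c . f(x, c) for each class c; gold_class is the index
    of the true class y. The one-hot indicator 1{y = c} selects a single
    term, so the loss reduces to -log softmax(scores)[gold_class].
    """
    m = max(scores)
    log_norm = m + math.log(sum(math.exp(s - m) for s in scores))
    return -(scores[gold_class] - log_norm)   # -log P(y = gold | x)

print(multinomial_ce_loss([2.0, 1.0, -1.0], 0))   # ~0.349 (confident, correct)
print(multinomial_ce_loss([2.0, 1.0, -1.0], 2))   # ~3.349 (wrong class)
```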


Page 13:

Optimization

• We have our loss function and our estimator $\hat{y} = \sigma(\mathbf{v} \cdot \mathbf{f}(\mathbf{x}))$
• How do we find the best set of parameters/weights $\mathbf{v}$?

$\hat{\mathbf{v}} = \hat{\theta} = \arg\min_\theta \frac{1}{n} \sum_{i=1}^{n} L_{CE}(y^{(i)}, x^{(i)}; \theta)$

• Use gradient descent!
• Find the direction of steepest slope
• Move in the opposite direction

Page 14:

Gradient descent (1-D)
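
The original slide is a figure; as a stand-in, here is a minimal 1-D gradient descent loop on an assumed toy objective $L(\theta) = (\theta - 3)^2$:

```python
def gradient_descent_1d(grad, theta, eta=0.1, steps=50):
    """Repeatedly step opposite the slope: theta <- theta - eta * dL/dtheta."""
    for _ in range(steps):
        theta -= eta * grad(theta)
    return theta

# Toy convex objective L(theta) = (theta - 3)^2, so dL/dtheta = 2 * (theta - 3)
print(gradient_descent_1d(lambda t: 2 * (t - 3), theta=0.0))   # ~3.0
```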

Page 15:

Gradient descent for LR

• The cross-entropy loss for logistic regression is convex (i.e., it has only one global minimum)
  – No local minima to get stuck in
• Deep neural networks are not so easy
  – Non-convex

Page 16:

Learning Rate

• Updates: $\theta^{t+1} = \theta^{t} - \eta \, \frac{d}{d\theta} L(f(x; \theta), y)$
• The learning rate $\eta$ controls the magnitude of movement: a higher/faster learning rate means a larger update

Page 17:

Gradient descent with vector weights

Updates: $\theta^{t+1} = \theta^{t} - \eta \, \nabla_\theta L(f(x; \theta), y)$, where the gradient $\nabla_\theta L$ is the vector of partial derivatives with respect to each weight

Page 18:

Computing the gradients

• From the last lecture: $\arg\max \sum_{i=1}^{n} \log P(y^{(i)} \mid x^{(i)}; \theta)$
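
The slide leaves the gradient itself to the reader. For binary logistic regression with the cross-entropy loss, the standard per-example result (a well-known derivation, not shown in the original) is:

```latex
\frac{\partial L_{CE}(\hat{y}, y)}{\partial v_j}
  = (\hat{y} - y)\, f_j(\mathbf{x})
  = \bigl(\sigma(\mathbf{v} \cdot \mathbf{f}(\mathbf{x})) - y\bigr)\, f_j(\mathbf{x})
```

Conveniently, the gradient is just the prediction error scaled by the feature value.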

Page 19:

Stochastic Gradient Descent

• Online optimization
• Compute the loss and update the parameters after each training example (or mini-batch), as in the sketch below
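
A compact sketch of SGD for binary logistic regression, using the per-example gradient $(\hat{y} - y)\,f_j(x)$ stated earlier; the toy dataset and hyperparameters are assumptions:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_train(data, n_features, eta=0.5, epochs=100, seed=0):
    """Stochastic gradient descent for binary logistic regression.

    data: list of (f, y) pairs, where f is a feature list (include a
    constant 1 for the bias) and y is in {0, 1}. The weights are
    updated after every single training example.
    """
    random.seed(seed)
    v = [0.0] * n_features
    for _ in range(epochs):
        random.shuffle(data)
        for f, y in data:
            y_hat = sigmoid(sum(vj * fj for vj, fj in zip(v, f)))
            # Per-example gradient of the BCE loss: (y_hat - y) * f_j
            for j in range(n_features):
                v[j] -= eta * (y_hat - y) * f[j]
    return v

# Toy sentiment-style data: [bias, count("amazing"), count("horrible")]
data = [([1, 2, 0], 1), ([1, 1, 0], 1), ([1, 0, 2], 0), ([1, 0, 1], 0)]
v = sgd_train(data, n_features=3)
print(sigmoid(sum(vj * fj for vj, fj in zip(v, [1, 3, 0]))))  # ~1.0 (positive)
```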

Page 20:

Regularization

• May overfit on the training data!

• Use regularization to prevent overfitting!

• Objective function:

$\hat{\theta} = \arg\max_\theta \sum_{i=1}^{n} \log P\left(y^{(i)} \mid x^{(i)}\right) - \alpha \, R(\theta)$

Page 21:

L2 Regularization
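
The body of this slide is a figure/equation that did not survive extraction; the standard L2 (ridge) regularizer it refers to is:

```latex
R(\theta) = \lVert \theta \rVert_2^2 = \sum_{j=1}^{n} \theta_j^2
```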

Page 22:

L1 Regularization
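
Likewise, this slide's body did not survive extraction; the standard L1 (lasso) regularizer is:

```latex
R(\theta) = \lVert \theta \rVert_1 = \sum_{j=1}^{n} \lvert \theta_j \rvert
```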

Page 23:

L2 vs L1 regularization

• L2 is easier to optimize; L1 is harder, since the derivative of $|\theta|$ is not continuous at 0
• L2 leads to many small weights; L1 prefers sparse weight vectors, with many weights set to 0