Learning MN Parameters with Alternative Objective Functions

Sargur Srihari [email protected]

Topics

•  Max Likelihood & Contrastive Objectives
•  Contrastive Objective Learning Methods
  – Pseudo-likelihood
    •  Gradient descent
    •  Ex: Use in Collaborative Filtering
  – Contrastive Optimization criteria
    •  Contrastive Divergence
    •  Margin-based Training

Recapitulate ML learning

•  Task: Estimate the parameters of a MN with a fixed structure, given a data set D
•  The simplest approach is to maximize the log-likelihood objective given samples ξ[1],…,ξ[M], shown below
•  Although concave, it has no analytical form for the maximum
•  Can use iterative gradient ascent in parameter space
  – Good news: the likelihood is concave, so gradient ascent is guaranteed to converge
  – Bad news: each step needs inference, so the estimation is intractable in general

$$\ell(\theta : \mathcal{D}) = \sum_i \theta_i \left( \sum_m f_i(\xi[m]) \right) - M \ln Z(\theta)$$

where

$$Z(\theta) = \sum_{\xi} \exp\left\{ \sum_i \theta_i f_i(\xi) \right\}$$

since the model is

$$P(X_1,\ldots,X_n;\, \theta) = \frac{1}{Z(\theta)} \exp\left\{ \sum_{i=1}^{k} \theta_i f_i(D_i) \right\}$$

The gradient of the log-partition function is the expected feature value,

$$\frac{\partial}{\partial \theta_i} \ln Z(\theta) = \frac{1}{Z(\theta)} \sum_{\xi} f_i(\xi) \exp\left\{ \sum_j \theta_j f_j(\xi) \right\} = E_{\theta}[f_i]$$

so that

$$\frac{\partial}{\partial \theta_i} \frac{1}{M} \ell(\theta : \mathcal{D}) = E_{\mathcal{D}}[f_i(\chi)] - E_{\theta}[f_i]$$
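To make the gradient E_D[f_i(χ)] − E_θ[f_i] concrete, here is a minimal sketch in Python. The pairwise indicator features, the toy data set, and the brute-force enumeration used to compute E_θ[f_i] are illustrative assumptions, not part of the slides; the enumeration of Z(θ) is exactly the step that is intractable for larger networks.

```python
import itertools
import numpy as np

# Toy binary log-linear MN with assumed pairwise indicator features f_i = x_a * x_b.
def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def loglik_gradient(theta, data, pairs, n_vars):
    """d/dtheta_i (1/M) l(theta:D) = E_D[f_i] - E_theta[f_i]."""
    emp = np.mean([features(x, pairs) for x in data], axis=0)        # E_D[f_i]
    # Model expectation E_theta[f_i] by summing over all 2^n assignments
    assignments = list(itertools.product([0, 1], repeat=n_vars))
    unnorm = np.array([np.exp(theta @ features(x, pairs)) for x in assignments])
    Z = unnorm.sum()
    model = sum(p * features(x, pairs) for p, x in zip(unnorm, assignments)) / Z
    return emp - model

pairs = [(0, 1), (1, 2)]
data = [(1, 1, 0), (1, 1, 1), (0, 0, 1)]
theta = np.zeros(len(pairs))
for _ in range(100):                      # plain gradient ascent on the likelihood
    theta += 0.5 * loglik_gradient(theta, data, pairs, n_vars=3)
print(theta)
```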

Replacing the ML objective

•  Previously seen approximate inference methods: belief propagation, MaxEnt, sampling + SGD
•  Now we look at replacing the objective function with one that is more tractable
•  The ML objective is the first equation below, with Z(θ) as shown
•  For simplicity, focus on the case of a single data instance ξ (last equation)

$$\ell(\theta : \mathcal{D}) = \sum_i \theta_i \left( \sum_m f_i(\xi[m]) \right) - M \ln Z(\theta), \qquad Z(\theta) = \sum_{\xi} \exp\left\{ \sum_i \theta_i f_i(\xi) \right\}$$

$$\ell(\theta : \xi) = \ln \tilde{P}(\xi \mid \theta) - \ln Z(\theta)$$

Log-likelihood of a single sample

•  In the case of a single instance ξ, the objective is the first equation below

•  Expanding the partition function Z(θ) gives the second equation below
•  Maximizing ℓ means increasing the distance (contrast) between the two terms
•  We now consider each of the two terms separately

$$\ell(\theta : \xi) = \ln \tilde{P}(\xi \mid \theta) - \ln Z(\theta)$$

$$\ell(\theta : \xi) = \ln \tilde{P}(\xi \mid \theta) - \ln\left( \sum_{\xi'} \tilde{P}(\xi' \mid \theta) \right) \qquad \text{(summing over all possible values of a dummy variable } \xi')$$

where

$$\ln \tilde{P}(\xi \mid \theta) = \sum_{i=1}^{k} \theta_i f_i(D_i), \qquad Z(\theta) = \sum_{\xi} \exp\left\{ \sum_i \theta_i f_i(\xi) \right\}$$

First Term of the Objective

•  The first term is shown below
•  The objective aims to increase this log measure
  – i.e., the logarithm of the unnormalized probability of the observed data instance ξ
  – The log measure is a linear function of the parameters in the log-linear representation
  – Thus that goal can be accomplished by increasing all parameters associated with positive empirical expectations in ξ and decreasing all parameters associated with negative empirical expectations
  – We can increase it unboundedly using this approach

$$\ln \tilde{P}(\xi \mid \theta) = \sum_{i=1}^{k} \theta_i f_i(D_i)$$

Second Term of the Objective

•  The second term is shown below
•  It is the logarithm of the sum of the unnormalized measures of all possible instances in Val(χ)
  – It is the aggregate of the measures of all instances

$$\ln\left( \sum_{\xi'} \tilde{P}(\xi' \mid \theta) \right)$$

The Contrastive Objective Approach

•  Can view the log-likelihood objective as aiming to increase the distance between the log measure of ξ and the aggregate of the measures of all instances
•  The key difficulty with this formulation: the second term sums over exponentially many instances in Val(χ) and requires inference in the network
•  It suggests an approach to approximation:
  – We can move our parameters in the right direction if we aim to increase the difference between
    1.  The log-measure of the data instances, and
    2.  A more tractable set of other instances, one not requiring summation over an exponential space

Two Approaches to increase the probability gap

1.  Pseudolikelihood and its generalizations
  – Easiest method for circumventing the intractability of network inference
  – Simplifies the likelihood by replacing the exponential number of summations with several summations, each more tractable
2.  Contrastive optimization
  – Drive the probability of the observed samples higher
  – Contrast the data with a randomly perturbed set of neighbors

Pseudolikelihood for Tractability

•  Consider the likelihood of a single instance ξ, written via the chain rule (first equation below)
  – Approximate it by replacing each product term by the conditional probability of x_j given all other variables (second equation)
  – This gives the pseudolikelihood (PL) objective, where x_{-j} stands for x_1,…,x_{j-1},x_{j+1},…,x_n
  – This objective measures the ability to predict each variable given full observation of all other variables

$$P(\xi) = \prod_{j=1}^{n} P(x_j \mid x_1,\ldots,x_{j-1})$$

(from the chain rule, e.g. $P(x_1,x_2) = P(x_1)\,P(x_2 \mid x_1)$ and $P(x_1,x_2,x_3) = P(x_1)\,P(x_2 \mid x_1)\,P(x_3 \mid x_1,x_2)$)

$$P(\xi) \approx \prod_{j=1}^{n} P(x_j \mid x_1,\ldots,x_{j-1},x_{j+1},\ldots,x_n)$$

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$
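A minimal sketch of the PL objective for the same assumed toy model as above: each term needs only a local normalization over the two values of X_j, so no global partition function is ever computed.

```python
import numpy as np

def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def log_ptilde(x, theta, pairs):
    """Unnormalized log measure ln P~(x) = sum_i theta_i f_i(x)."""
    return theta @ features(x, pairs)

def pseudolikelihood(theta, data, pairs, n_vars):
    """(1/M) sum_m sum_j ln P(x_j[m] | x_-j[m], theta)."""
    total = 0.0
    for x in data:
        for j in range(n_vars):
            # Only X_j varies; the local partition sums over its two values
            scores = [log_ptilde(x[:j] + (v,) + x[j+1:], theta, pairs)
                      for v in (0, 1)]
            total += scores[x[j]] - np.logaddexp(*scores)
    return total / len(data)

pairs = [(0, 1), (1, 2)]
data = [(1, 1, 0), (1, 1, 1), (0, 0, 1)]
print(pseudolikelihood(np.array([0.5, -0.2]), data, pairs, n_vars=3))
```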

Pseudolikelihood and the Multinomial Logistic CPD

•  The predictive model P(X_j | X_{-j}) takes a form that generalizes the multinomial logistic CPD P(Y | X_1,…,X_k) shown below
  – It is identical to it when the network consists of only pairwise features, i.e., factors over edges in the network
  – We can use conditional independence properties in the network to simplify the expression, removing from the right-hand side of P(X_j | X_{-j}) any variable that is not a neighbor of X_j

$$P(y^j \mid X_1,\ldots,X_k) = \frac{\exp\left( \ell_j(X_1,\ldots,X_k) \right)}{\sum_{j'=1}^{m} \exp\left( \ell_{j'}(X_1,\ldots,X_k) \right)}, \qquad \ell_j(X_1,\ldots,X_k) = w_{j,0} + \sum_{i=1}^{k} w_{j,i} X_i$$

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$
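For concreteness, a small sketch of a multinomial logistic CPD with linear scores ℓ_j(X) = w_{j,0} + Σ_i w_{j,i} X_i; the weight values and dimensions here are arbitrary illustrations, not from the slides.

```python
import numpy as np

def multinomial_logistic_cpd(x, W, b):
    """Return P(Y = j | x) for each class j; W[j, i] = w_{j,i}, b[j] = w_{j,0}."""
    scores = b + W @ x                      # l_j(x) for every class j
    scores -= scores.max()                  # numerical stability
    p = np.exp(scores)
    return p / p.sum()

W = np.array([[0.5, -1.0], [0.0, 2.0], [1.0, 0.3]])   # 3 classes, 2 parents
b = np.array([0.1, 0.0, -0.2])
print(multinomial_logistic_cpd(np.array([1.0, 0.0]), W, b))
```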

Simplification in Pseudolikelihood

•  The pseudolikelihood (PL) objective is the first equation below; each conditional takes the form in the second equation
•  Whereas the likelihood (third equation) requires the global partition function, pseudolikelihood replaces the exponential summation over instances with several summations, each of which is more tractable
  – In particular, the global partition function has disappeared; each term requires only a summation over X_j
•  But there is also a contrastive perspective, described next

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$

$$P(x_j \mid x_{-j}) = \frac{P(x_j, x_{-j})}{P(x_{-j})} = \frac{\tilde{P}(x_j, x_{-j})}{\sum_{x_j'} \tilde{P}(x_j', x_{-j})}$$

$$\ell(\theta : \mathcal{D}) = \sum_i \theta_i \left( \sum_m f_i(\xi[m]) \right) - M \ln Z(\theta), \qquad \ln Z(\theta) = \ln \sum_{\xi} \exp\left\{ \sum_i \theta_i f_i(\xi) \right\}$$

Contrastive perspective of PL

•  The pseudolikelihood objective of a single data instance ξ is shown below
•  Each term of the final sum is a contrastive term
  – We aim to increase the difference between the log-measure of the training instance ξ and an aggregate of the log-measures of instances that differ from ξ in the assignment to precisely one variable
•  In other words, we are increasing the contrast between our training instance ξ and the instances in a local neighborhood around it

$$\sum_j \ln P(x_j \mid x_{-j}) = \sum_j \left[ \ln \tilde{P}(x_j, x_{-j}) - \ln\left( \sum_{x_j'} \tilde{P}(x_j', x_{-j}) \right) \right] = \sum_j \left[ \ln \tilde{P}(\xi) - \ln\left( \sum_{x_j'} \tilde{P}(x_j', x_{-j}) \right) \right]$$

Pseudolikelihood is concave

•  Further simplification of the summands in the expression gives the first form below
  – Each term is a log-conditional-likelihood term for a MN over a single variable X_j conditioned on the rest
  – Thus it follows that each term is concave in the parameters θ
•  Since a sum of concave functions is concave, the pseudolikelihood objective is also concave

$$\ln P(x_j \mid x_{-j}) = \left( \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j, x_{-j}) \right) - \ln \sum_{x_j'} \exp\left\{ \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j', x_{-j}) \right\}$$

$$\sum_j \ln P(x_j \mid x_{-j}) = \sum_j \left[ \ln \tilde{P}(\xi) - \ln\left( \sum_{x_j'} \tilde{P}(x_j', x_{-j}) \right) \right]$$

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$

Gradient of pseudolikelihood

•  To compute the gradient, we use the expression for ln P(x_j | x_{-j}) (first equation below) to obtain its derivative (second equation)
•  If X_j is not in the scope of f_i, then f_i(x_j, x_{-j}) = f_i(x_j', x_{-j}) for any x_j', the two terms are identical, and the derivative is 0. Inserting this into ℓ_PL we get the third equation
  – It is much easier to compute this than the gradient of the likelihood (last equation)
  – Each expectation term requires a summation over a single random variable X_j, conditioned on all of its neighbors
    •  A computation that can be performed very efficiently

$$\ln P(x_j \mid x_{-j}) = \left( \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j, x_{-j}) \right) - \ln \sum_{x_j'} \exp\left\{ \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j', x_{-j}) \right\}$$

$$\frac{\partial}{\partial \theta_i} \ln P(x_j \mid x_{-j}) = f_i(x_j, x_{-j}) - E_{x_j' \sim P_\theta(X_j \mid x_{-j})}\left[ f_i(x_j', x_{-j}) \right]$$

$$\frac{\partial}{\partial \theta_i} \ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_{j:\, X_j \in \mathrm{Scope}[f_i]} \left( f_i(\xi[m]) - E_{x_j' \sim P_\theta(X_j \mid x_{-j}[m])}\left[ f_i(x_j', x_{-j}[m]) \right] \right)$$

$$\frac{\partial}{\partial \theta_i} \frac{1}{M} \ell(\theta : \mathcal{D}) = E_{\mathcal{D}}[f_i(\chi)] - E_{\theta}[f_i]$$
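A sketch of this gradient for the same assumed toy model. For simplicity it loops over all (j, feature) pairs rather than restricting to j with X_j ∈ Scope[f_i]; for the excluded pairs the difference is zero anyway, so the result is the same, just less efficient.

```python
import numpy as np

def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def pl_gradient(theta, data, pairs, n_vars):
    """(1/M) sum_m sum_j [ f_i(xi[m]) - E_{x_j'}[ f_i(x_j', x_-j[m]) ] ]."""
    grad = np.zeros_like(theta)
    for x in data:
        for j in range(n_vars):
            # Conditional P_theta(X_j | x_-j): only two assignments to score
            neighbors = [x[:j] + (v,) + x[j+1:] for v in (0, 1)]
            scores = np.array([theta @ features(z, pairs) for z in neighbors])
            p = np.exp(scores - np.logaddexp(*scores))
            expected = sum(pv * features(z, pairs) for pv, z in zip(p, neighbors))
            grad += features(x, pairs) - expected
    return grad / len(data)

pairs = [(0, 1), (1, 2)]
data = [(1, 1, 0), (1, 1, 1), (0, 0, 1)]
theta = np.zeros(len(pairs))
for _ in range(200):                    # gradient ascent on the PL objective
    theta += 0.5 * pl_gradient(theta, data, pairs, n_vars=3)
print(theta)
```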

Relationship between likelihood and pseudolikelihood

•  Theorem: Assume that our data are generated by a log-linear model P_{θ*}, i.e., of the form below
  – Then, as M goes to infinity, with probability approaching 1, θ* is a global optimum of the pseudolikelihood objective
•  The result is an important one, but there are limitations
  – The model being learnt must be expressive enough to represent the generating distribution
    •  But a model never perfectly represents the true distribution
  – The data distribution must be near the generating distribution
    •  Often there is not enough data to approach the large-sample limit

$$P(X_1,\ldots,X_n;\, \theta) = \frac{1}{Z(\theta)} \exp\left\{ \sum_{i=1}^{k} \theta_i f_i(D_i) \right\}$$

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$

Type of queries determines the method

•  How good the pseudolikelihood objective is depends on the type of queries
  – If our queries condition on most of the variables and ask about only a few, pseudolikelihood is a very close match to the type of predictions we would like to make
    •  So pseudolikelihood may well be better than likelihood
    •  Ex: in learning a MN for collaborative filtering, we take the user's preference for all items as observed except the query item
  – Conversely, if a typical query involves most or all of the variables, the likelihood objective is better
    •  Ex: learning a CRF model for image segmentation

Ex: Collaborative Filtering

•  We want to recommend to a user an item based on previous items he bought/liked
•  Because we don't have enough data for any single user to determine his/her preferences, we use collaborative filtering
  – i.e., use the observed preferences of other users to determine the preferences for any given user
  – One approach: learn the dependency structure between different purchases as observed in the population
    •  Item i is a variable X_i in a joint distribution and each user is an instance
    •  View purchase/non-purchase as the values of a variable

A Collaborative Filtering Task

•  Given a set of purchases S, we can compute the probability that the user would like a new item i
•  This is a probabilistic inference task where all purchases other than i are set to False; thus all variables other than X_i are observed
•  If our queries condition on most of the variables and ask about only a few, pseudolikelihood (PL) is a very close match to the type of predictions we would like to make

[Figure: Network learned from Nielsen TV rating data, capturing the viewing records of sample viewers. Variables denote whether a TV program was watched]
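A sketch of this kind of single-item query under the assumed toy pairwise model from the earlier sketches: only X_i varies, so a two-term local normalization suffices. Here the remaining items are conditioned on an arbitrary observed record (the slide's specific query would set them all to 0).

```python
import numpy as np

def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def query_item(i, observed, theta, pairs):
    """P(X_i = 1 | X_{-i} = observed_{-i}); only X_i varies, so Z is local."""
    x1 = observed[:i] + (1,) + observed[i+1:]    # item i purchased
    x0 = observed[:i] + (0,) + observed[i+1:]    # item i not purchased
    s1 = theta @ features(x1, pairs)
    s0 = theta @ features(x0, pairs)
    return np.exp(s1 - np.logaddexp(s0, s1))

pairs = [(0, 1), (1, 2)]
theta = np.array([0.8, -0.3])            # e.g. parameters learned by PL ascent
user = (1, 0, 1)                         # user's record; X_1 is the query item
print(query_item(1, user, theta, pairs))
```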

Limitation of pseudolikelihood (PL)

•  Ex: MN with 3 variables: X1—X2—Y
  – X1 and X2 are highly correlated; both are less correlated with Y
  – The best predictor for X1 is X2 and vice versa
  – Pseudolikelihood is likely to overestimate the X1—X2 parameters and almost entirely dismiss the X1—Y and X2—Y parameters
  – The result is an excellent predictor for X2 when X1 is observed, but useless when only Y, not X1, is observed
•  PL assumes the local neighborhood is observed; it cannot exploit weaker or longer-range dependencies
•  Solution: generalized pseudolikelihood (GPL)

We define a set of subsets of variables {X_s : s ∈ S} and define the objective:

$$\ell_{GPL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_s \ln P\left( x_s[m] \mid x_{-s}[m],\, \theta \right), \qquad \text{where } X_{-s} = \chi - X_s$$

Contrastive Optimization Criteria

•  Both likelihood and pseudolikelihood attempt to increase the gap between the log-probability of the observed set of M instances D and the logarithm of the aggregate probability of a set of other instances (recapped below)
•  Based on this intuition, a range of methods has been developed to increase this log-probability gap
  – By driving the probability of the observed data higher relative to other instances, we are tuning the parameters to predict the data better

Likelihood (single instance):

$$\ell(\theta : \xi) = \ln \tilde{P}(\xi \mid \theta) - \ln\left( \sum_{\xi'} \tilde{P}(\xi' \mid \theta) \right)$$

Pseudolikelihood:

$$\ell_{PL}(\theta : \mathcal{D}) = \frac{1}{M} \sum_m \sum_j \ln P\left( x_j[m] \mid x_{-j}[m],\, \theta \right)$$

$$\ln P(x_j \mid x_{-j}) = \left( \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j, x_{-j}) \right) - \ln \sum_{x_j'} \exp\left\{ \sum_{i:\, \mathrm{Scope}[f_i] \ni X_j} \theta_i f_i(x_j', x_{-j}) \right\}$$

Likelihood (full data set):

$$\ell(\theta : \mathcal{D}) = \sum_i \theta_i \left( \sum_m f_i(\xi[m]) \right) - M \ln Z(\theta)$$

Contrastive Objective Definition

•  Consider a single training instance ξ
  – We aim to maximize the log-probability gap below, where ξ′ is some other instance whose selection we discuss shortly
•  Importantly, the expression takes a simple form (second equation below)
  – For a fixed instantiation of ξ′, this expression is a linear function of θ and hence is unbounded
  – For a coherent optimization objective, the choice of ξ′ has to change throughout the optimization
    •  Even then, take care to prevent unbounded parameters

$$\ln \tilde{P}(\xi \mid \theta) - \ln \tilde{P}(\xi' \mid \theta)$$

$$\ln \tilde{P}(\xi \mid \theta) - \ln \tilde{P}(\xi' \mid \theta) = \theta^{T} \left[ \mathbf{f}(\xi) - \mathbf{f}(\xi') \right]$$
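Since the gap is linear in θ, its gradient with respect to θ is just f(ξ) − f(ξ′). A one-line sketch of a single contrastive step under the same assumed toy feature map (instances and step size are illustrative):

```python
import numpy as np

def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def contrastive_step(theta, xi, xi_neg, pairs, lr=0.1):
    """Move theta to widen ln P~(xi) - ln P~(xi_neg)."""
    return theta + lr * (features(xi, pairs) - features(xi_neg, pairs))

pairs = [(0, 1), (1, 2)]
theta = np.zeros(len(pairs))
theta = contrastive_step(theta, (1, 1, 1), (1, 0, 1), pairs)
print(theta)
```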

Two Contrastive Optimization Methods

•  One can construct many variants of this method
  – i.e., methods to increase the log-probability gap
•  Two methods for choosing ξ′ that have been useful in practice:
  1. Contrastive divergence
    •  The popularity of the method has grown
    •  Used in deep learning for training layers of RBMs
  2. Margin-based training

Contrastive Divergence (CD)

•  In this method we contrast our data instances D with a set of randomly perturbed neighbors D⁻
  – We aim to maximize the objective below, where P̃_D and P̃_{D⁻} are the empirical distributions relative to D and D⁻
•  The set of contrasted instances D⁻ will necessarily differ at different stages in the search
  – How to choose the instances to contrast is discussed next

$$\ell_{CD}(\theta : \mathcal{D} \,\|\, \mathcal{D}^-) = E_{\xi \sim \tilde{P}_{\mathcal{D}}}\left[ \ln \tilde{P}_{\theta}(\xi) \right] - E_{\xi \sim \tilde{P}_{\mathcal{D}^-}}\left[ \ln \tilde{P}_{\theta}(\xi) \right]$$

Choice of Data instances

•  Given the current θ, to what instances do we want to contrast our data instances D?
•  One intuition: move θ in a direction that increases the probability of the instances in D relative to "typical" instances in the current distribution
  – i.e., increase the probability gap between the instances ξ ∈ D and instances ξ sampled randomly from P_θ
•  Thus, we can generate a contrastive set D⁻ by sampling from P_θ and then maximizing the objective below

$$\ell_{CD}(\theta : \mathcal{D} \,\|\, \mathcal{D}^-) = E_{\xi \sim \tilde{P}_{\mathcal{D}}}\left[ \ln \tilde{P}_{\theta}(\xi) \right] - E_{\xi \sim \tilde{P}_{\mathcal{D}^-}}\left[ \ln \tilde{P}_{\theta}(\xi) \right]$$

How to sample from Pθ?

•  We can run a Markov chain defined by the MN P_θ using Gibbs sampling, initializing from the instances in D
  – Once the chain mixes, we can collect samples from the distribution
  – Unfortunately, sampling from the chain long enough to achieve mixing takes far too long for the inner loop of a learning algorithm
  – So we initialize from the instances in D and run the chain for only a few steps
•  Instances from these short sampling runs define D⁻

Updating parameters in CD

•  We want the model to give high probability to the instances in D relative to the perturbed instances in D⁻
  – Thus we want to move our parameters in a direction that increases the probability of the instances in D relative to the perturbed instances in D⁻
•  The gradient of the objective is easy to compute (below; a small sketch follows the equation)
  – In practice, the approximation we get by taking only a few steps in the Markov chain provides a good direction for the search

$$\frac{\partial}{\partial \theta_i} \ell_{CD}(\theta : \mathcal{D} \,\|\, \mathcal{D}^-) = E_{\tilde{P}_{\mathcal{D}}}\left[ f_i(\chi) \right] - E_{\tilde{P}_{\mathcal{D}^-}}\left[ f_i(\chi) \right]$$
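A self-contained sketch of contrastive divergence for the assumed toy model, combining the short Gibbs runs of the previous slide with this gradient. The feature map, learning rate, and number of sweeps k are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, pairs):
    return np.array([x[a] * x[b] for a, b in pairs], dtype=float)

def gibbs_negative_sample(x, theta, pairs, n_vars, k=1):
    """k Gibbs sweeps started at data instance x under P_theta (short run, as in CD)."""
    x = list(x)
    for _ in range(k):
        for j in range(n_vars):
            scores = []
            for v in (0, 1):
                x[j] = v
                scores.append(theta @ features(tuple(x), pairs))
            p1 = np.exp(scores[1] - np.logaddexp(*scores))   # P_theta(X_j = 1 | rest)
            x[j] = int(rng.random() < p1)
    return tuple(x)

def cd_update(theta, data, pairs, n_vars, lr=0.2, k=1):
    """One step along the CD gradient E_D[f_i] - E_D-[f_i]."""
    neg = [gibbs_negative_sample(x, theta, pairs, n_vars, k) for x in data]
    pos_exp = np.mean([features(x, pairs) for x in data], axis=0)
    neg_exp = np.mean([features(x, pairs) for x in neg], axis=0)
    return theta + lr * (pos_exp - neg_exp)

pairs = [(0, 1), (1, 2)]
data = [(1, 1, 0), (1, 1, 1), (0, 0, 1)]
theta = np.zeros(len(pairs))
for _ in range(100):
    theta = cd_update(theta, data, pairs, n_vars=3, lr=0.2, k=1)
print(theta)
```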

Margin-Based Training

•  A very different intuition applies in settings where our goal is to use the network for predicting a MAP assignment
•  Ex: in image segmentation we want the learned network to predict a single high-probability assignment to the pixels that will encode our final segmentation output
•  This setting occurs only when queries are conditional, so we describe the objective for a CRF

Margin-Based Training (continued)

•  The training set consists of pairs D = {(y[m], x[m])}, m = 1,…,M
•  Given the observation x[m], we would like the learned model to give the highest probability to y[m]
  – i.e., we would like P_θ(y[m] | x[m]) to be higher than any other probability P_θ(y | x[m]) for y ≠ y[m]
  – i.e., maximize the margin: the difference between the log-probability of the target assignment y[m] and the "next best" assignment

$$\mathcal{D} = \left\{ \left( y[m],\, x[m] \right) \right\}_{m=1}^{M}$$

$$\ln P_{\theta}\left( y[m] \mid x[m] \right) - \max_{y \neq y[m]} \ln P_{\theta}\left( y \mid x[m] \right)$$
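A sketch of the margin for one training pair under an assumed toy CRF feature map (not from the slides). Because the conditional partition function Z(x[m]) is the same in both terms, it cancels, so unnormalized scores suffice; enumerating y is only feasible because the label space here is tiny.

```python
import itertools
import numpy as np

def joint_features(y, x):
    """Toy features: label-observation products and a label-pair agreement."""
    return np.array([y[0] * x[0], y[1] * x[1], float(y[0] == y[1])])

def margin(theta, y_true, x, n_labels):
    """ln P_theta(y[m]|x[m]) - max_{y != y[m]} ln P_theta(y|x[m]).
    The conditional partition function cancels, so unnormalized scores suffice."""
    score = lambda y: theta @ joint_features(y, x)
    best_other = max(score(y)
                     for y in itertools.product([0, 1], repeat=n_labels)
                     if y != y_true)
    return score(y_true) - best_other

theta = np.array([1.0, 1.0, 0.5])
print(margin(theta, y_true=(1, 0), x=(0.9, -0.4), n_labels=2))
```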