Convergence rate of SGD - University of Washington


Convergence rate of SGD

- Theorem (see Nemirovski et al. '09 from readings):
  - Let f be a strongly convex stochastic function
  - Assume the gradient of f is Lipschitz continuous and bounded
  - Then, for step sizes η_t ∝ 1/t,
  - the expected loss decreases as O(1/t):
      E[f(w^(t))] − f(w*) = O(1/t)

©Emily Fox 2014 24
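The O(1/t) behavior is easy to check numerically. A toy sketch (my own example, not from the slides): minimize f(w) = E[(w − z)²/2], which is strongly convex with constant μ = 1, using the step sizes η_t = 1/t suggested by the theorem.

```python
import numpy as np

# Toy illustration of the O(1/t) rate (not the formal proof): minimize
# f(w) = E[(w - z)^2 / 2] over samples z, which is 1-strongly convex
# with minimizer w* = E[z]. Step size eta_t = 1/t matches the theorem's
# eta_t ∝ 1/t schedule for strong-convexity constant mu = 1.
rng = np.random.default_rng(0)
z = rng.normal(loc=3.0, scale=1.0, size=100_000)
w_star = z.mean()

w = 0.0
gaps = []
for t, z_t in enumerate(z, start=1):
    g = w - z_t            # stochastic gradient of (w - z_t)^2 / 2
    w -= (1.0 / t) * g     # eta_t = 1/t
    gaps.append(0.5 * (w - w_star) ** 2)   # suboptimality f(w) - f(w*)

print(np.mean(gaps[:100]), np.mean(gaps[50_000:]))
```

With these step sizes the iterate is exactly the running mean of the samples seen so far, so the suboptimality shrinks like the variance of a sample mean, i.e. O(1/t).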

Convergence rates for gradient descent/ascent versus SGD

- Number of iterations to get to accuracy ε
- Gradient descent:
  - If the function is strongly convex: O(ln(1/ε)) iterations
- Stochastic gradient descent:
  - If the function is strongly convex: O(1/ε) iterations
- Seems exponentially worse, but the comparison is much more subtle:
  - Total running time, e.g., for logistic regression with n examples and d features:
    - Gradient descent: O(nd) per iteration, so O(nd ln(1/ε)) total
    - SGD: O(d) per iteration, so O(d/ε) total
    - SGD can win when we have a lot of data
  - And, when analyzing true error, the situation is even more subtle... expected running time is about the same; see readings


(Here "accuracy ε" means ℓ(w*) − ℓ(w) ≤ ε.)
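The running-time trade-off can be sketched empirically. A rough synthetic comparison (data, step sizes, and constants are my own illustrative choices, not from the slides): both methods get the same budget of single-example gradient evaluations, which gradient descent spends n at a time.

```python
import numpy as np

# L2-regularized logistic regression on synthetic data.
rng = np.random.default_rng(0)
n, d, lam = 2000, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def loss(w):
    # Regularized logistic loss; log(1+e^z) written stably via log1p.
    z = X @ w
    return np.mean(np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z))) - y * z) + 0.5 * lam * w @ w

def grad_full(w):
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / n + lam * w

budget = 10 * n                       # total single-example gradient evaluations

w_gd = np.zeros(d)                    # gradient descent: budget // n full steps
for _ in range(budget // n):
    w_gd -= 0.5 * grad_full(w_gd)

w_sgd = np.zeros(d)                   # SGD: budget steps, eta_t ~ 1/(lam * t)
for t in range(1, budget + 1):
    i = rng.integers(n)
    p_i = 1 / (1 + np.exp(-(X[i] @ w_sgd)))
    g = (p_i - y[i]) * X[i] + lam * w_sgd
    w_sgd -= g / (lam * (t + 10))

print(loss(np.zeros(d)), loss(w_gd), loss(w_sgd))
```

Both methods make progress from w = 0 on the same evaluation budget; as n grows, SGD's per-step cost advantage is what lets it win on large datasets.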


Motivating AdaGrad (Duchi, Hazan, Singer 2011)

- Assuming w ∈ R^d, standard stochastic (sub)gradient descent updates are of the form:
    w^(t+1)_i ← w^(t)_i − η g_{t,i}
- Should all features share the same learning rate?
- Often have high-dimensional feature spaces
  - Many features are irrelevant
  - Rare features are often very informative
- AdaGrad provides a feature-specific adaptive learning rate by incorporating knowledge of the geometry of past observations

Why adapt to geometry?

(Example from Duchi et al. ISMP 2012 slides)

  y_t    φ_{t,1}   φ_{t,2}   φ_{t,3}
   1       1         0         0
  -1      0.5        0         1
   1     -0.5        1         0
  -1       0         0         0
   1      0.5        0         0
  -1       1         0         0
   1      -1         1         0
  -1     -0.5        0         1

- Feature 1: frequent, irrelevant
- Feature 2: infrequent, predictive
- Feature 3: infrequent, predictive

(Duchi et al. (UC Berkeley), Adaptive Subgradient Methods, ISMP 2012)
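A quick way to see why this table motivates per-feature learning rates: accumulate the squared (sub)gradient mass per feature, the quantity AdaGrad divides by. A small sketch using hinge-loss subgradients at w = 0 (the loss choice is my own illustrative assumption):

```python
import numpy as np

# Toy data from the slide: feature 1 is frequent but irrelevant,
# features 2 and 3 are rare but predictive of y.
y   = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
phi = np.array([[ 1.0, 0, 0],
                [ 0.5, 0, 1],
                [-0.5, 1, 0],
                [ 0.0, 0, 0],
                [ 0.5, 0, 0],
                [ 1.0, 0, 0],
                [-1.0, 1, 0],
                [-0.5, 0, 1]])

# Hinge-loss subgradients at w = 0 are -y_t * phi_t (every margin is 0 <= 1).
G = -y[:, None] * phi

# Accumulated per-feature gradient magnitude, the quantity AdaGrad divides by:
scale = np.sqrt((G ** 2).sum(axis=0))
print(scale)  # feature 1 accumulates the most, so it gets the smallest rate
```

The frequent, irrelevant feature accumulates the largest sum, so an adaptive method shrinks its step size while keeping rates for the rare, informative features large.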



Not All Features are Created Equal

- Examples (from Duchi et al. ISMP 2012 slides):
  - Text data: "The most unsung birthday in American business and technological history this year may be the 50th anniversary of the Xerox 914 photocopier." (The Atlantic, July/August 2010)
  - High-dimensional image features
  - Other motivation: selecting advertisements in online advertising, document ranking, problems with parameterizations of many magnitudes...

(Duchi et al. (UC Berkeley), Adaptive Subgradient Methods, ISMP 2012)

Projected Gradient

- Brief aside...
- Consider an arbitrary feature space with the constraint w ∈ W
- If w must lie in W, can use projected gradient for (sub)gradient descent: take the plain step w^(t) − η g_t, then project back onto W:
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η g_t)||_2^2
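For many constraint sets the arg-min projection has a closed form. A minimal sketch, assuming W is an L2 ball of radius r (my own choice of W for illustration):

```python
import numpy as np

def project_l2_ball(w, r):
    # Projection onto W = {w : ||w||_2 <= r}: rescale w when it is outside.
    norm = np.linalg.norm(w)
    return w if norm <= r else (r / norm) * w

def projected_sgd_step(w, g, eta, r):
    # Plain stochastic gradient step, then project back onto W.
    return project_l2_ball(w - eta * g, r)

w = np.array([3.0, 4.0])          # ||w||_2 = 5, outside the r = 1 ball
w_new = projected_sgd_step(w, g=np.zeros(2), eta=0.1, r=1.0)
print(w_new)                      # [0.6, 0.8], rescaled onto the unit sphere
```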


Regret Minimization

- How do we assess the performance of an online algorithm?
- Algorithm iteratively predicts w^(t)
- Incur loss f_t(w^(t))
- Regret: what is the total incurred loss of the algorithm relative to the best fixed choice of w that could have been made retrospectively?
    R(T) = Σ_{t=1}^T f_t(w^(t)) − inf_{w ∈ W} Σ_{t=1}^T f_t(w)
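The definition is mechanical to evaluate. A hypothetical worked example (losses and predictions are made up): f_t(w) = (w − z_t)² with scalar predictions, where the best fixed comparator for squared loss is the mean of the z_t.

```python
import numpy as np

z = np.array([1.0, 3.0, 2.0, 4.0])        # targets revealed online

# Suppose the algorithm predicted the running mean of previously seen z's:
w_pred = np.array([0.0, 1.0, 2.0, 2.0])   # w^(1) .. w^(4)

alg_loss = np.sum((w_pred - z) ** 2)      # sum_t f_t(w^(t))
w_best = z.mean()                         # minimizer of sum_t (w - z_t)^2
best_loss = np.sum((w_best - z) ** 2)     # inf_w sum_t f_t(w)

regret = alg_loss - best_loss
print(regret)                             # 9.0 - 5.0 = 4.0
```

Sublinear regret, R(T)/T → 0, means the algorithm's average loss approaches that of the best fixed decision in hindsight.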

Regret Bounds for Standard SGD

- Standard projected gradient stochastic updates:
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η g_t)||_2^2
- Standard regret bound:
    Σ_{t=1}^T [f_t(w^(t)) − f_t(w*)] ≤ (1/(2η)) ||w^(1) − w*||_2^2 + (η/2) Σ_{t=1}^T ||g_t||_2^2


Projected Gradient using Mahalanobis

- Standard projected gradient stochastic updates:
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η g_t)||_2^2
- What if, instead of the L2 metric for the projection, we considered the Mahalanobis norm ||·||_A for a positive semidefinite A?
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η A^{-1} g_t)||_A^2

Mahalanobis Regret Bounds

- What A to choose?
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η A^{-1} g_t)||_A^2
- Regret bound now:
    Σ_{t=1}^T [f_t(w^(t)) − f_t(w*)] ≤ (1/(2η)) ||w^(1) − w*||_A^2 + (η/2) Σ_{t=1}^T ||g_t||_{A^{-1}}^2
- What if we minimize the upper bound on regret w.r.t. A in hindsight?
    min_A Σ_{t=1}^T ⟨g_t, A^{-1} g_t⟩


Mahalanobis Regret Minimization

- Objective:
    min_A Σ_{t=1}^T ⟨g_t, A^{-1} g_t⟩    subject to A ⪰ 0, tr(A) ≤ C
- Solution:
    A = c (Σ_{t=1}^T g_t g_t^T)^{1/2}
  For the proof, see Appendix E, Lemma 15 of Duchi et al. 2011; it uses the "trace trick" and a Lagrangian.
- A defines the norm of the metric space we should be operating in
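The matrix square root in the solution can be computed from an eigendecomposition, since the summed outer products form a symmetric PSD matrix. A sketch with made-up gradients (computing A up to the scale c):

```python
import numpy as np

# Hypothetical gradient sequence g_1..g_T, one per row, for illustration.
rng = np.random.default_rng(1)
G = rng.normal(size=(50, 3))

S = G.T @ G                                # sum_t g_t g_t^T  (3 x 3, PSD)
eigvals, eigvecs = np.linalg.eigh(S)       # symmetric eigendecomposition
A = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0, None))) @ eigvecs.T

# Check: A is the principal square root, so A @ A recovers S.
print(np.allclose(A @ A, S))
```

The clip guards against tiny negative eigenvalues from floating-point round-off; mathematically S ⪰ 0 so all eigenvalues are nonnegative.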

AdaGrad Algorithm

- At time t, estimate the optimal (sub)gradient modification A by
    A_t = (Σ_{τ=1}^t g_τ g_τ^T)^{1/2}
- For d large, A_t is computationally intensive to compute. Instead, replace it by the diagonal matrix with entries (Σ_{τ=1}^t g²_{τ,i})^{1/2}, written diag(A_t) below.
- Then, the algorithm is a simple modification of the normal updates:
    w^(t+1) = arg min_{w ∈ W} ||w − (w^(t) − η diag(A_t)^{-1} g_t)||_{diag(A_t)}^2
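A minimal sketch of the diagonal variant, assuming W = R^d so the arg-min projection reduces to the plain step (the η, ε values and gradient sequence are my own illustrative choices):

```python
import numpy as np

def adagrad_step(w, g, sumsq, eta=0.5, eps=1e-8):
    # diag(A_t) needs only the running per-coordinate sum of squared
    # gradients, not the full d x d matrix square root.
    sumsq = sumsq + g ** 2
    w = w - eta * g / (np.sqrt(sumsq) + eps)   # eps guards the very first step
    return w, sumsq

w = np.zeros(3)
sumsq = np.zeros(3)
for g in [np.array([1.0, 0.0, 0.0]),   # made-up gradients: coordinate 0
          np.array([1.0, 0.0, 0.0]),   # is frequent, coordinate 1 is rare,
          np.array([0.0, 1.0, 0.0])]:  # coordinate 2 never fires
    w, sumsq = adagrad_step(w, g, sumsq)

print(w)  # rare coordinate 1 moves as far in one step as coordinate 0 did in its first
```

Each coordinate's effective rate decays only as *its own* gradient history accumulates, which is exactly the per-feature behavior motivated by the table earlier.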


AdaGrad in Euclidean Space

- For W = R^d, the projection is trivial, and for each feature dimension i,
    w^(t+1)_i ← w^(t)_i − η_{t,i} g_{t,i}
  where
    η_{t,i} = η / sqrt(Σ_{τ=1}^t g²_{τ,i})
- That is,
    w^(t+1)_i ← w^(t)_i − (η / sqrt(Σ_{τ=1}^t g²_{τ,i})) g_{t,i}
- Each feature dimension has its own learning rate!
  - Adapts with t
  - Takes geometry of the past observations into account
  - Primary role of η is determining the rate the first time a feature is encountered

AdaGrad Theoretical Guarantees

- AdaGrad regret bound:
    Σ_{t=1}^T [f_t(w^(t)) − f_t(w*)] ≤ 2 R_∞ Σ_{j=1}^d ||g_{1:T,j}||_2
  where R_∞ := max_t ||w^(t) − w*||_∞
- So, what does this mean in practice?
- Many cool examples. This really is used in practice!
- Let's just examine one...


AdaGrad Theoretical Example

- Expect AdaGrad to outperform when gradient vectors are sparse
- SVM hinge loss example:
    f_t(w) = [1 − y_t ⟨x_t, w⟩]_+    where x_t ∈ {−1, 0, 1}^d
- If x_{t,j} ≠ 0 with probability ∝ j^{−α}, α > 1, AdaGrad achieves
    E[f((1/T) Σ_{t=1}^T w^(t))] − f(w*) = O( (||w*||_∞ / √T) · max{log d, d^{1−α/2}} )
- Previously best known method:
    E[f((1/T) Σ_{t=1}^T w^(t))] − f(w*) = O( (||w*||_∞ / √T) · √d )

Neural Network Learning

[Figure: "Accuracy on Test Set" (Dean et al. 2012): average frame accuracy (%) vs. time (hours) for SGD, GPU, Downpour SGD, Downpour SGD w/ Adagrad, and Sandblaster L-BFGS]

Distributed, d = 1.7 · 10^9 parameters. SGD and AdaGrad use 80 machines (1000 cores); L-BFGS uses 800 (10000 cores).

(Duchi et al. (UC Berkeley), Adaptive Subgradient Methods, ISMP 2012)

Neural Network Learning

- Very non-convex problem, but use SGD methods anyway

From Duchi et al. ISMP 2012 slides: wildly non-convex problem,

    f(x; ξ) = log(1 + exp(⟨[p(⟨x_1, ξ_1⟩) ··· p(⟨x_k, ξ_k⟩)], ξ_0⟩))

where

    p(α) = 1 / (1 + exp(α))

[Diagram: inputs ξ_1, ..., ξ_k feeding hidden units p(⟨x_i, ξ_i⟩), combined at an output layer]

Idea: use stochastic gradient methods to solve it anyway

Images from Duchi et al. ISMP 2012 slides
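The objective above is a direct composition of simple pieces, which a short transcription makes concrete. A sketch with made-up shapes and random values (x_i are the parameters, ξ_i the inputs, following the slide's notation):

```python
import numpy as np

def p(a):
    # The slide's p(alpha) = 1 / (1 + exp(alpha)).
    return 1.0 / (1.0 + np.exp(a))

def f(x_list, xi_list, xi0):
    # Inner layer: each parameter vector x_i pairs with input xi_i through p;
    # the hidden outputs then combine with xi_0 inside a final log(1 + exp(.)).
    hidden = np.array([p(x @ xi) for x, xi in zip(x_list, xi_list)])
    return np.log1p(np.exp(hidden @ xi0))

k, m = 3, 4                                        # made-up layer sizes
rng = np.random.default_rng(0)
x_list = [rng.normal(size=m) for _ in range(k)]    # parameters x_1 .. x_k
xi_list = [rng.normal(size=m) for _ in range(k)]   # inputs xi_1 .. xi_k
xi0 = rng.normal(size=k)                           # output-layer input xi_0

val = f(x_list, xi_list, xi0)
print(val)  # a single nonnegative loss value
```

Nothing about this composition is convex in the x_i, yet stochastic (sub)gradient methods, including AdaGrad, are applied to it anyway, as the Dean et al. results above show.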


What you should know about Logistic Regression (LR) and Click Prediction

- Click prediction problem:
  - Estimate probability of clicking
  - Can be modeled as logistic regression
- Logistic regression model: linear model
- Gradient ascent to optimize conditional likelihood
- Overfitting + regularization
- Regularized optimization
  - Convergence rates and stopping criterion
- Stochastic gradient ascent for large/streaming data
  - Convergence rates of SGD
- AdaGrad motivation, derivation, and algorithm