
CS231n 2017 Lecture 10: Recurrent Neural Networks


Page 1: Cs231n 2017 lecture10 Recurrent Neural Networks


Lecture 10: Recurrent Neural Networks

Page 2: Cs231n 2017 lecture10 Recurrent Neural Networks


Administrative

A1 grades will go out soon

A2 is due today (11:59pm)

Midterm is in-class on Tuesday! We will send out details on where to go soon

Page 3: Cs231n 2017 lecture10 Recurrent Neural Networks


Extra Credit: Train Game

More details on Piazza by early next week

Page 4: Cs231n 2017 lecture10 Recurrent Neural Networks


Last Time: CNN Architectures

AlexNet

Figure copyright Kaiming He, 2016. Reproduced with permission.

Page 5: Cs231n 2017 lecture10 Recurrent Neural Networks


Last Time: CNN Architectures

Figure copyright Kaiming He, 2016. Reproduced with permission.

[Figure: layer-by-layer diagrams of VGG16, VGG19, and GoogLeNet]

Page 6: Cs231n 2017 lecture10 Recurrent Neural Networks


Last Time: CNN Architectures

Figure copyright Kaiming He, 2016. Reproduced with permission.

[Figure: full ResNet architecture; each residual block computes F(x) + x, i.e. two 3x3 conv layers on F(x) plus an identity shortcut, followed by relu]

Page 7: Cs231n 2017 lecture10 Recurrent Neural Networks


Figures copyright Larsson et al., 2017. Reproduced with permission.

[Figure: DenseNet (dense blocks with concatenated features) and FractalNet architectures]

Page 8: Cs231n 2017 lecture10 Recurrent Neural Networks


Last Time: CNN Architectures

Page 9: Cs231n 2017 lecture10 Recurrent Neural Networks


Last Time: CNN Architectures

AlexNet and VGG have tons of parameters in the fully connected layers

AlexNet: ~62M parameters

FC6: 256x6x6 -> 4096: 38M params
FC7: 4096 -> 4096: 17M params
FC8: 4096 -> 1000: 4M params
~59M params in FC layers!
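These counts are just products of the layer dimensions; a quick check (weights only, biases add a negligible amount):

```python
fc6 = 256 * 6 * 6 * 4096   # ~37.7M weights
fc7 = 4096 * 4096          # ~16.8M weights
fc8 = 4096 * 1000          # ~4.1M weights
print(fc6 + fc7 + fc8)     # ~58.6M total, i.e. the ~59M params in the FC layers
```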

Page 10: Cs231n 2017 lecture10 Recurrent Neural Networks


Today: Recurrent Neural Networks

Page 11: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla Neural Networks

“Vanilla” Neural Network

Page 12: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Networks: Process Sequences

e.g. Image Captioning: image -> sequence of words

Page 13: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Networks: Process Sequences

e.g. Sentiment Classification: sequence of words -> sentiment

Page 14: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Networks: Process Sequences

e.g. Machine Translation: seq of words -> seq of words

Page 15: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Networks: Process Sequences

e.g. Video classification on frame level

Page 16: Cs231n 2017 lecture10 Recurrent Neural Networks


Sequential Processing of Non-Sequence Data

Ba, Mnih, and Kavukcuoglu, "Multiple Object Recognition with Visual Attention", ICLR 2015.
Gregor et al, "DRAW: A Recurrent Neural Network For Image Generation", ICML 2015.
Figure copyright Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra, 2015. Reproduced with permission.

Classify images by taking a series of “glimpses”

Page 17: Cs231n 2017 lecture10 Recurrent Neural Networks


Sequential Processing of Non-Sequence Data

Gregor et al, "DRAW: A Recurrent Neural Network For Image Generation", ICML 2015.
Figure copyright Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra, 2015. Reproduced with permission.

Generate images one piece at a time!

Page 18: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Network

[Diagram: RNN block with input x]

Page 19: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Network

[Diagram: RNN block with input x and output y]

We usually want to predict a vector at some time steps.

Page 20: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Network

[Diagram: RNN block with input x and output y]

We can process a sequence of vectors x by applying a recurrence formula at every time step:

h_t = f_W(h_{t-1}, x_t)

where h_t is the new state, h_{t-1} is the old state, x_t is the input vector at some time step, and f_W is some function with parameters W.

Page 21: Cs231n 2017 lecture10 Recurrent Neural Networks


Recurrent Neural Network

[Diagram: RNN block with input x and output y]

We can process a sequence of vectors x by applying a recurrence formula at every time step:

h_t = f_W(h_{t-1}, x_t)

Notice: the same function and the same set of parameters are used at every time step.

Page 22: Cs231n 2017 lecture10 Recurrent Neural Networks


(Vanilla) Recurrent Neural Network

[Diagram: RNN block with input x and output y]

The state consists of a single "hidden" vector h:

h_t = tanh(W_hh h_{t-1} + W_xh x_t)
y_t = W_hy h_t
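A minimal numpy sketch of this update (the weight names follow the slides' notation; the dimensions are illustrative):

```python
import numpy as np

def rnn_step(x, h_prev, Wxh, Whh, Why):
    """One step of a vanilla RNN: update the hidden state, emit an output."""
    h = np.tanh(Wxh @ x + Whh @ h_prev)  # h_t = tanh(W_xh x_t + W_hh h_{t-1})
    y = Why @ h                          # y_t = W_hy h_t
    return h, y

# Illustrative sizes: 3-dim inputs, 5-dim hidden state, 4-dim outputs
rng = np.random.default_rng(0)
Wxh, Whh, Why = rng.normal(size=(5, 3)), rng.normal(size=(5, 5)), rng.normal(size=(4, 5))
h = np.zeros(5)
for x in rng.normal(size=(10, 3)):       # the same weights are reused at every step
    h, y = rnn_step(x, h, Wxh, Whh, Why)
```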

Page 23: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph

[Diagram: h0 → f_W → h1, with input x1 feeding f_W]

Page 24: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph

[Diagram: unrolled one more step: h0 → f_W → h1 → f_W → h2, with inputs x1, x2]

Page 25: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph

[Diagram: fully unrolled: h0 → f_W → h1 → f_W → h2 → f_W → h3 → ... → hT, with inputs x1, x2, x3, ...]

Page 26: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph

Re-use the same weight matrix at every time-step

[Diagram: a single W node feeds every f_W in the unrolled graph]

Page 27: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph: Many to Many

[Diagram: every hidden state h1 ... hT now produces an output y1 ... yT]

Page 28: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph: Many to Many

[Diagram: each output yt gets its own loss Lt (L1, L2, L3, ..., LT)]

Page 29: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph: Many to Many

[Diagram: the total loss L is the sum of the per-step losses L1 ... LT]

Page 30: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph: Many to One

[Diagram: a single output y is produced from the final hidden state hT]

Page 31: Cs231n 2017 lecture10 Recurrent Neural Networks


RNN: Computational Graph: One to Many

[Diagram: a single input x, then outputs y1, y2, y3, ... produced at every step]

Page 32: Cs231n 2017 lecture10 Recurrent Neural Networks


Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector

[Diagram: encoder RNN with weights W1 folds x1, x2, x3 into hT]

Page 33: Cs231n 2017 lecture10 Recurrent Neural Networks


Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector

One to many: Produce output sequence from single input vector

[Diagram: decoder RNN with weights W2 unrolls from the encoded vector hT to produce y1, y2, ...]
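A rough numpy sketch of the two halves (simplifying assumptions: vanilla RNN cells and a fixed number of decode steps; a real decoder would also feed each sampled output back in and stop at an <END> token):

```python
import numpy as np

def encode(xs, h, W1xh, W1hh):
    # Many to one: fold the entire input sequence into a single vector hT
    for x in xs:
        h = np.tanh(W1xh @ x + W1hh @ h)
    return h

def decode(hT, steps, W2hh, W2hy):
    # One to many: unroll from the encoded vector, emitting one output per step
    h, ys = hT, []
    for _ in range(steps):
        h = np.tanh(W2hh @ h)
        ys.append(W2hy @ h)
    return ys

rng = np.random.default_rng(0)
hT = encode(rng.normal(size=(6, 3)), np.zeros(5),
            rng.normal(size=(5, 3)), rng.normal(size=(5, 5)))
ys = decode(hT, steps=4, W2hh=rng.normal(size=(5, 5)), W2hy=rng.normal(size=(4, 5)))
```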

Page 34: Cs231n 2017 lecture10 Recurrent Neural Networks


Example: Character-level Language Model

Vocabulary: [h, e, l, o]

Example training sequence: "hello"


Page 37: Cs231n 2017 lecture10 Recurrent Neural Networks


Example: Character-level Language Model: Sampling

Vocabulary: [h, e, l, o]

At test-time sample characters one at a time, feed back to model

[Diagram: at each step a softmax over {h, e, l, o} gives a distribution (e.g. .03, .13, .00, .84); a character is sampled ("e", "l", "l", "o") and fed back as the next input]
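A sketch of this sampling loop in the spirit of min-char-rnn (the weight names and sizes here are illustrative, not the gist's exact code; with random weights the output is meaningless but the mechanics are the same):

```python
import numpy as np

vocab = ['h', 'e', 'l', 'o']

def sample(h, seed_ix, n, Wxh, Whh, Why):
    """Sample n characters, feeding each sampled character back in as the next input."""
    x = np.zeros(len(vocab)); x[seed_ix] = 1         # one-hot encoding of the seed character
    chars = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h)               # recurrence
        scores = Why @ h
        p = np.exp(scores) / np.sum(np.exp(scores))  # softmax over the vocabulary
        ix = np.random.choice(len(vocab), p=p)       # sample from the distribution, not argmax
        x = np.zeros(len(vocab)); x[ix] = 1          # feed the sampled character back in
        chars.append(vocab[ix])
    return ''.join(chars)

rng = np.random.default_rng(0)
print(sample(np.zeros(8), 0, 4,
             rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(4, 8))))
```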


Page 41: Cs231n 2017 lecture10 Recurrent Neural Networks


Backpropagation through time

Forward through entire sequence to compute loss, then backward through entire sequence to compute gradient

Page 42: Cs231n 2017 lecture10 Recurrent Neural Networks


Truncated Backpropagation through time

Run forward and backward through chunks of the sequence instead of whole sequence

Page 43: Cs231n 2017 lecture10 Recurrent Neural Networks


Truncated Backpropagation through time

Carry hidden states forward in time forever, but only backpropagate for some smaller number of steps
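The training loop pattern, sketched in numpy (train_chunk is a stand-in for a real forward/backward pass, the role lossFun plays in min-char-rnn; the point is that h is carried across chunks while backprop stays inside each chunk):

```python
import numpy as np

def train_chunk(inputs, targets, h0):
    """Stand-in for one forward/backward pass over a chunk."""
    h, loss = h0, 0.0
    for x, t in zip(inputs, targets):
        h = np.tanh(Wxh @ x + Whh @ h)      # forward through this chunk only
        loss += np.sum((Why @ h - t) ** 2)  # some per-step loss
    # gradients over just this chunk would be computed here
    return loss, h

rng = np.random.default_rng(0)
Wxh, Whh, Why = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(4, 8))
xs, ts = rng.normal(size=(100, 4)), rng.normal(size=(100, 4))

chunk_size = 25
h = np.zeros(8)                             # hidden state carried forward in time forever
for i in range(0, len(xs), chunk_size):
    loss, h = train_chunk(xs[i:i+chunk_size], ts[i:i+chunk_size], h)
```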

Page 44: Cs231n 2017 lecture10 Recurrent Neural Networks


Truncated Backpropagation through time

Page 45: Cs231n 2017 lecture10 Recurrent Neural Networks


min-char-rnn.py gist: 112 lines of Python

(https://gist.github.com/karpathy/d4dee566867f8291f086)

Page 46: Cs231n 2017 lecture10 Recurrent Neural Networks


[Diagram: RNN with input x and output y]

Page 47: Cs231n 2017 lecture10 Recurrent Neural Networks


[Figure: text sampled from the model "at first" and after each successive "train more" stage]

Page 48: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 49: Cs231n 2017 lecture10 Recurrent Neural Networks


The Stacks Project: open source algebraic geometry textbook

Latex source: http://stacks.math.columbia.edu/
The Stacks Project is licensed under the GNU Free Documentation License

Page 50: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 51: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 52: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 53: Cs231n 2017 lecture10 Recurrent Neural Networks


Generated C code

Page 54: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 55: Cs231n 2017 lecture10 Recurrent Neural Networks


Page 56: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016

Page 57: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

Page 58: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

quote detection cell

Page 59: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

line length tracking cell

Page 60: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

if statement cell

Page 61: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

quote/comment cell

Page 62: Cs231n 2017 lecture10 Recurrent Neural Networks


Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016.
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission.

code depth cell

Page 63: Cs231n 2017 lecture10 Recurrent Neural Networks


Explain Images with Multimodal Recurrent Neural Networks, Mao et al.
Deep Visual-Semantic Alignments for Generating Image Descriptions, Karpathy and Fei-Fei
Show and Tell: A Neural Image Caption Generator, Vinyals et al.
Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al.
Learning a Recurrent Visual Representation for Image Caption Generation, Chen and Zitnick

Image Captioning

Figure from Karpathy et al, "Deep Visual-Semantic Alignments for Generating Image Descriptions", CVPR 2015; figure copyright IEEE, 2015. Reproduced for educational purposes.

Page 64: Cs231n 2017 lecture10 Recurrent Neural Networks


Convolutional Neural Network

Recurrent Neural Network

Page 66: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

Page 67: Cs231n 2017 lecture10 Recurrent Neural Networks

test image


Page 68: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: the first input to the RNN is the <START> token, x0]

Page 69: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: CNN features v of the test image feed into the recurrence at every step]

before: h = tanh(Wxh * x + Whh * h)

now: h = tanh(Wxh * x + Whh * h + Wih * v)
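In code the change is a single extra term (a sketch; v would be a CNN feature vector for the image, e.g. from the layer before the classifier):

```python
import numpy as np

def step(x, h, v, Wxh, Whh, Wih):
    # before: h = tanh(Wxh @ x + Whh @ h)
    # now: also inject a projection of the image features v at every time step
    return np.tanh(Wxh @ x + Whh @ h + Wih @ v)

rng = np.random.default_rng(0)
h = step(x=rng.normal(size=4), h=np.zeros(5), v=rng.normal(size=7),
         Wxh=rng.normal(size=(5, 4)), Whh=rng.normal(size=(5, 5)), Wih=rng.normal(size=(5, 7)))
```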

Page 70: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: sample the first word from the distribution y0: "straw"]

Page 71: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: the sampled word "straw" is fed back in as the next input, producing h1 and y1]

Page 72: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: sample the next word from y1: "hat"]

Page 73: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: "hat" is fed back in, producing h2 and y2]

Page 74: Cs231n 2017 lecture10 Recurrent Neural Networks

test image

[Diagram: sampling continues until the <END> token is sampled => finish]

Page 75: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning: Example Results

A cat sitting on a suitcase on the floor
A cat is sitting on a tree branch
A dog is running in the grass with a frisbee
A white teddy bear sitting in the grass
Two people walking on the beach with surfboards
Two giraffes standing in a grassy field
A man riding a dirt bike on a dirt track
A tennis player in action on the court

Captions generated using neuraltalk2. All images are CC0 Public domain: cat suitcase, cat tree, dog, bear, surfers, tennis, giraffe, motorcycle.

Page 76: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning: Failure Cases

A woman is holding a cat in her hand
A woman standing on a beach holding a surfboard
A person holding a computer mouse on a desk
A bird is perched on a tree branch
A man in a baseball uniform throwing a ball

Captions generated using neuraltalk2. All images are CC0 Public domain: fur coat, handstand, spider web, baseball.

Page 77: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

Xu et al, "Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015.
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.

RNN focuses its attention at a different spatial location when generating each word

Page 78: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: CNN maps the image (H x W x 3) to a grid of features (L x D), which initializes the hidden state h0]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 79: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: from h0, compute a1, a distribution over the L feature locations]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 80: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: the attention distribution a1 over the L locations gives a weighted combination of features, z1 (a D-dimensional vector)]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 81: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: the weighted features z1 and the first word y1 feed into h1]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 82: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: h1 produces both a2, a new distribution over the L locations, and d1, a distribution over the vocabulary]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 83: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: the process repeats: weighted features z2 and word y2 feed into h2]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 84: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Diagram: each step t emits an attention distribution at over the L locations and a distribution dt over the vocabulary]

Xu et al, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015

Page 85: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

[Figure: attention maps over time for soft attention vs. hard attention]

Xu et al, "Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015.
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.
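A minimal sketch of the soft-attention step: scores over the L feature locations are softmaxed into a distribution, and the context vector is the attention-weighted sum of features (hard attention would instead select a single location). The bilinear scoring below is one common choice, not necessarily the paper's exact parameterization:

```python
import numpy as np

def soft_attention(features, h, Wa):
    """features: (L, D) grid of CNN features; h: RNN hidden state; Wa: scoring weights."""
    scores = features @ (Wa @ h)                 # one scalar score per location, shape (L,)
    a = np.exp(scores) / np.sum(np.exp(scores))  # distribution over the L locations
    z = a @ features                             # weighted combination of features, shape (D,)
    return z, a

rng = np.random.default_rng(0)
z, a = soft_attention(rng.normal(size=(49, 64)),  # e.g. a 7x7 grid of 64-dim features
                      rng.normal(size=32), rng.normal(size=(64, 32)))
```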

Page 86: Cs231n 2017 lecture10 Recurrent Neural Networks


Image Captioning with Attention

Xu et al, "Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015.
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.

Page 87: Cs231n 2017 lecture10 Recurrent Neural Networks


Visual Question Answering

Agrawal et al, "VQA: Visual Question Answering", ICCV 2015.
Zhu et al, "Visual 7W: Grounded Question Answering in Images", CVPR 2016.
Figure from Zhu et al, copyright IEEE 2016. Reproduced for educational purposes.

Page 88: Cs231n 2017 lecture10 Recurrent Neural Networks


Zhu et al, "Visual 7W: Grounded Question Answering in Images", CVPR 2016.
Figures from Zhu et al, copyright IEEE 2016. Reproduced for educational purposes.

Visual Question Answering: RNNs with Attention

Page 89: Cs231n 2017 lecture10 Recurrent Neural Networks


Multilayer RNNs

[Diagram: hidden states stacked along a depth axis as well as unrolled along the time axis; the LSTM update is applied in each layer]
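Stacking is just feeding each layer's hidden states upward as the next layer's input sequence; a vanilla-RNN sketch (the deck's multilayer figure uses LSTM cells, but the stacking pattern is the same):

```python
import numpy as np

def multilayer_rnn(xs, Ws_in, Ws_hh):
    """Run a stack of vanilla RNN layers over a sequence of inputs xs."""
    seq = xs
    for W_in, W_hh in zip(Ws_in, Ws_hh):  # depth
        h, outs = np.zeros(W_hh.shape[0]), []
        for x in seq:                     # time
            h = np.tanh(W_in @ x + W_hh @ h)
            outs.append(h)
        seq = outs                        # this layer's states are the next layer's inputs
    return seq

rng = np.random.default_rng(0)
hs = multilayer_rnn(rng.normal(size=(10, 4)),
                    [rng.normal(size=(8, 4)), rng.normal(size=(8, 8))],
                    [rng.normal(size=(8, 8)), rng.normal(size=(8, 8))])
```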

Page 90: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: vanilla RNN cell: ht-1 and xt are stacked, multiplied by W, and passed through tanh to give ht]

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.

Page 91: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: vanilla RNN cell]

Backpropagation from ht to ht-1 multiplies by W (actually Whh^T)

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.

Page 92: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: unrolled states h0 ... h4 with inputs x1 ... x4]

Computing gradient of h0 involves many factors of W (and repeated tanh)

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.

Page 93: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: unrolled states h0 ... h4 with inputs x1 ... x4]

Computing gradient of h0 involves many factors of W (and repeated tanh):
- Largest singular value > 1: Exploding gradients
- Largest singular value < 1: Vanishing gradients

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.

Page 94: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: unrolled states h0 ... h4 with inputs x1 ... x4]

Computing gradient of h0 involves many factors of W (and repeated tanh):
- Largest singular value > 1: Exploding gradients → Gradient clipping: scale the gradient if its norm is too big
- Largest singular value < 1: Vanishing gradients

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.
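Gradient clipping in a few lines (the max-norm threshold is a hyperparameter; 5.0 here is arbitrary):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Scale the gradient down if its norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = clip_gradient(np.full(10, 100.0))  # rescaled so that its norm is 5.0
```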

Page 95: Cs231n 2017 lecture10 Recurrent Neural Networks


Vanilla RNN Gradient Flow

[Diagram: unrolled states h0 ... h4 with inputs x1 ... x4]

Computing gradient of h0 involves many factors of W (and repeated tanh):
- Largest singular value > 1: Exploding gradients → Gradient clipping
- Largest singular value < 1: Vanishing gradients → Change RNN architecture

Bengio et al, "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al, "On the difficulty of training recurrent neural networks", ICML 2013.

Page 96: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM)

Hochreiter and Schmidhuber, “Long Short Term Memory”, Neural Computation 1997

[Equations: the vanilla RNN update shown next to the LSTM update]

Page 97: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM) [Hochreiter et al., 1997]

[Diagram: W (shape 4h x 2h) multiplies the stacked vector of h (from before) and x (from below) to produce four h-dimensional gate vectors: i, f, o through a sigmoid and g through a tanh]

f: Forget gate, whether to erase cell
i: Input gate, whether to write to cell
g: Gate gate (?), how much to write to cell
o: Output gate, how much to reveal cell
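Putting the four gates together, one LSTM step looks roughly like this (a sketch following the slide's notation; it assumes x has the same dimension h as the hidden state, so W is 4h x 2h):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step. W has shape (4h, 2h); x, h_prev, c_prev each have length h."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x])  # four stacked gate pre-activations, shape (4h,)
    i = sigmoid(z[0*n:1*n])              # input gate: whether to write to cell
    f = sigmoid(z[1*n:2*n])              # forget gate: whether to erase cell
    o = sigmoid(z[2*n:3*n])              # output gate: how much to reveal cell
    g = np.tanh(z[3*n:4*n])              # "gate gate": what to write to cell
    c = f * c_prev + i * g               # cell update: elementwise, no matrix multiply
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
h = c = np.zeros(8)
h, c = lstm_step(rng.normal(size=8), h, c, rng.normal(size=(32, 16)))
```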

Page 98: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM) [Hochreiter et al., 1997]

[Diagram: the LSTM cell: ct = f ☉ ct-1 + i ☉ g, and ht = o ☉ tanh(ct)]

Page 99: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: the LSTM cell, highlighting the path from ct back to ct-1]

Backpropagation from ct to ct-1 involves only elementwise multiplication by f, with no matrix multiply by W

Page 100: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: gradient flows backward through the cell states c0 ← c1 ← c2 ← c3]

Uninterrupted gradient flow!

Page 101: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: cell states c0 ← c1 ← c2 ← c3 shown next to the ResNet architecture]

Uninterrupted gradient flow!

Similar to ResNet!

Page 102: Cs231n 2017 lecture10 Recurrent Neural Networks


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: cell states c0 ← c1 ← c2 ← c3 shown next to the ResNet architecture]

Uninterrupted gradient flow!

Similar to ResNet!

In between: Highway Networks

Srivastava et al, “Highway Networks”, ICML DL Workshop 2015

Page 103: Cs231n 2017 lecture10 Recurrent Neural Networks


Other RNN Variants

[LSTM: A Search Space Odyssey, Greff et al., 2015]

[An Empirical Exploration of Recurrent Network Architectures, Jozefowicz et al., 2015]

GRU [Learning phrase representations using RNN encoder-decoder for statistical machine translation, Cho et al. 2014]

Page 104: Cs231n 2017 lecture10 Recurrent Neural Networks


Summary

- RNNs allow a lot of flexibility in architecture design
- Vanilla RNNs are simple but don't work very well
- Common to use LSTM or GRU: their additive interactions improve gradient flow
- Backward flow of gradients in RNN can explode or vanish. Exploding is controlled with gradient clipping. Vanishing is controlled with additive interactions (LSTM)
- Better/simpler architectures are a hot topic of current research
- Better understanding (both theoretical and empirical) is needed

Page 105: Cs231n 2017 lecture10 Recurrent Neural Networks


Next time: Midterm!

Then Detection and Segmentation