
Recurrent and Recursive Networks

Marco Kuhlmann

Neural Networks with Applications to Vision and Language

Introduction

Applications of sequence modelling

• Map unsegmented connected handwriting to strings.

• Map sequences of acoustic signals to sequences of phonemes.

• Translate sentences from one language into another one.

• Generate baby names, poems, source code, patent applications.

The bag-of-words model

The gorgeously elaborate continuation of “The Lord of the Rings” trilogy is so huge that a column of words cannot adequately describe co-writer/director Peter Jackson’s expanded vision of J.R.R. Tolkien’s Middle-earth. → pos

… is a sour little movie at its core; an exploration of the emptiness that underlay the relentless gaiety of the 1920’s, as if to stop would hasten the economic and global political turmoil that was to come. → neg

The bag-of-words model

a adequately cannot co-writer column continuation describe director elaborate expanded gorgeously huge is J.R.R. Jackson Lord Middle-earth of of of of Peter Rings so that The The the Tolkien trilogy vision words → pos

1920’s a an and as at come core economic emptiness exploration gaiety global hasten if is its little movie of of political relentless sour stop that that the the the the to to turmoil underlay was would → neg

Part-of-speech tagging

jag bad om en kort bit (‘I asked for a short piece’)

PN VB PP DT JJ NN

[Figure: a tag lattice over the sentence. Besides the correct tags above, each word has competing candidate tags (e.g. NN, PL, RG, AB, SN, PN), and the tagger must choose one path through the lattice.]

Hidden Markov Models (HMMs)

jag bad om en kort bit

PN VB PP DT JJ NN

𝑃(PN|BOS) 𝑃(jag|PN) 𝑃(VB|PN) 𝑃(bad|VB) 𝑃(PP|VB) 𝑃(om|PP) 𝑃(DT|PP) 𝑃(en|DT) 𝑃(JJ|DT) 𝑃(kort|JJ) 𝑃(NN|JJ) 𝑃(bit|NN) 𝑃(EOS|NN)

transition probabilities 𝑃(tag | previous tag), emission probabilities 𝑃(word | tag)

[Figure: a two-state HMM with states VB and PN plus the boundary states BOS and EOS. The arcs carry the transition probabilities 𝑃(VB|BOS), 𝑃(PN|BOS), 𝑃(VB|PN), 𝑃(PN|VB), 𝑃(VB|VB), 𝑃(PN|PN), 𝑃(EOS|VB), 𝑃(EOS|PN); each state emits words with probabilities 𝑃(𝑤|VB) and 𝑃(𝑤|PN).]

A weakness of Hidden Markov Models

• The only information that an HMM has access to at any given point in time is its current state.

• Suppose that the HMM has 𝑛 states. Then the identity of the current state can be written down using log 𝑛 bits.

• Thus the current state contains at most log 𝑛 bits of information about the sequence generated so far.
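As a quick numerical illustration (not from the slides): an HMM with $n = 64$ states can carry at most $\log_2 64 = 6$ bits of information about everything it has generated so far.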

Strengths of recurrent neural networks

• Distributed hidden state

In recurrent neural networks, several units can be active at once, which allows them to store a lot of information efficiently (contrast this with the single current state of an HMM).

• Non-linear dynamics

Different units can interact with each other in non-linear ways, which makes recurrent neural networks Turing-complete (contrast this with linear dynamical systems).

Attribution: Geoffrey Hinton

Recurrent neural networks (RNNs)

• Recurrent neural networks can be visualised as networks with feedback connections, which form directed cycles between units.

• These feedback connections are unfolded over time.

• A crucial property of recurrent neural networks is that they share the same set of parameters across different timesteps.
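In symbols, unfolding and parameter sharing can be summarised by a single recurrence (a generic form that is not written out on the slide; the concrete parameterisation follows below):

$$h^{(t)} = f\big(h^{(t-1)}, x^{(t)}; \theta\big), \qquad t = 1, 2, 3, \dots$$

where the same parameters $\theta$ are used at every timestep $t$.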

RNN, cyclic representation

[Figure: the cyclic representation. The input 𝒙 feeds the hidden state 𝒉, which is computed by 𝒇 and produces the output 𝒐; the feedback loop on 𝒉 carries a delay of one timestep.]

RNN, unrolled

[Figure: the same network unrolled over three timesteps. Inputs 𝒙(1), 𝒙(2), 𝒙(3) produce hidden states 𝒉(1), 𝒉(2), 𝒉(3) and outputs 𝒐(1), 𝒐(2), 𝒐(3); the same function 𝒇 is applied at every step.]

General observations

• The parameters of the model are shared across all timesteps.

• The hidden state can be influenced by the entire input seen so far. Contrast this with the Markov assumption of HMMs.

• The hidden state can be a ‘lossy summary’ of the input sequence. Hopefully, this state will encode useful information for the task at hand.

• The model has the same input size regardless of sequence length, because it is specified in terms of transitions from one state to the next.

Different types of RNN architectures

• encoder: reads an input sequence into a fixed-size representation

• generator: produces an output sequence

• transducer: maps an input sequence to an output sequence

Training recurrent neural networks

Computation graph for a standard architecture

[Figure: computation graph for three timesteps. Each input 𝒙(t) feeds the hidden state 𝒉(t) via weights 𝑼, the hidden states are connected through the shared weights 𝑾, each output 𝒐(t) is computed from 𝒉(t) via weights 𝑽, and each output is compared with the target 𝒚(t) through the loss 𝑳(t).]

Assumptions

• The hidden states are computed by some nonlinear activation function, such as tanh.

• The outputs at each timestep are unnormalised log-probabilities representing distributions over a finite set of labels; in the book, the softmax normalisation is assumed to happen implicitly when computing the loss.
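Under these assumptions, the forward pass of the standard architecture can be written as follows (a sketch using the weight names 𝑼, 𝑽, 𝑾 from the computation graph; the bias vectors $b$ and $c$ are added here for completeness):

$$a^{(t)} = b + W h^{(t-1)} + U x^{(t)}, \qquad h^{(t)} = \tanh\big(a^{(t)}\big)$$
$$o^{(t)} = c + V h^{(t)}, \qquad \hat{y}^{(t)} = \operatorname{softmax}\big(o^{(t)}\big)$$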

Backpropagation through time

• Unrolled recurrent neural networks are just feedforward networks with parameter sharing (linear constraints on the parameters), and can therefore be trained using backpropagation.

• This way of training recurrent neural networks is called backpropagation through time.

• Given that the unrolled computation graphs can be very deep, the vanishing gradient problem is exacerbated in RNNs.
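A minimal sketch of backpropagation through time, assuming PyTorch (the class and variable names are illustrative, not from the lecture): because the unrolled graph is an ordinary feedforward graph with shared weights, a single backward pass over the summed per-timestep losses computes all gradients.

```python
import torch
import torch.nn as nn

class VanillaRNN(nn.Module):
    """A minimal tanh RNN that predicts a label at every timestep."""
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.hidden_size = hidden_size
        self.U = nn.Linear(input_size, hidden_size, bias=False)  # input -> hidden
        self.W = nn.Linear(hidden_size, hidden_size)             # hidden -> hidden (shared across timesteps)
        self.V = nn.Linear(hidden_size, num_classes)             # hidden -> output scores

    def forward(self, xs):
        # xs: (seq_len, batch, input_size)
        h = xs.new_zeros(xs.size(1), self.hidden_size)           # initial hidden state
        scores = []
        for x in xs:                                             # unroll over time, reusing U, W, V
            h = torch.tanh(self.W(h) + self.U(x))
            scores.append(self.V(h))                             # unnormalised log-probabilities
        return torch.stack(scores)                               # (seq_len, batch, num_classes)

model = VanillaRNN(input_size=8, hidden_size=16, num_classes=5)
xs = torch.randn(20, 4, 8)                                       # a batch of 4 sequences of length 20
ys = torch.randint(0, 5, (20, 4))                                # a gold label at every timestep
loss = nn.functional.cross_entropy(model(xs).view(-1, 5), ys.view(-1))
loss.backward()                                                  # backpropagation through time
```

The softmax is folded into the cross-entropy loss, matching the assumption above that normalisation happens implicitly when computing the loss.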

[Figure, shown twice on the slides: backpropagation in a feedforward network. The error 𝐸 between the output and the target 𝑡 is propagated backwards through units with pre-activations 𝑧 and activations 𝑦 = 𝑓(𝑧), across the weights 𝑤𝑗𝑘 and 𝑤𝑖𝑗.]

Backpropagation through time

[Figure: the unrolled computation graph from before. The gradients of the losses 𝑳(t) flow backwards through 𝑽, 𝑾, and 𝑼 across all timesteps.]

Initial values of the hidden state

• We could manually specify the initial state in terms of some sensible starting value.

• We could learn the initial state by starting with a random guess and then updating that guess during backpropagation.
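A sketch of the two options, assuming PyTorch (the names are illustrative): a fixed initial state is just a constant tensor, while a learned initial state is a parameter that the optimiser updates along with the other weights.

```python
import torch
import torch.nn as nn

hidden_size = 16

# Option 1: a manually specified initial state (here simply all zeros).
h0_fixed = torch.zeros(hidden_size)

# Option 2: a learned initial state; start from a random guess and let
# backpropagation through time update it like any other parameter.
h0_learned = nn.Parameter(0.01 * torch.randn(hidden_size))
```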

Networks with output recurrence

[Figure: a network with output recurrence. The inputs 𝒙(t) feed the hidden states 𝒉(t) via 𝑼, the outputs 𝒐(t) are computed via 𝑽 and compared with the targets 𝒚(t) through the losses 𝑳(t); the recurrent connections 𝑾 go from the output at one timestep to the hidden state at the next, rather than from hidden state to hidden state.]

The limitations of recurrent neural networks

• In principle, recurrent networks are capable of learning long-distance dependencies.

• In practice, standard gradient-based learning algorithms do not perform very well. Bengio et al. (1994) – the ‘vanishing gradient’ problem

• Today, there are several methods available for training recurrent neural networks that avoid these problems: LSTMs, optimisation methods that can handle small gradients, careful weight initialisations, …

Vanishing and exploding gradients

[Figure: two plots over the interval [−6, 6]. Left: the activation functions sigmoid, tanh, and ReLU. Right: their gradients (the sigmoid gradient is at most 0.25, the tanh gradient at most 1).]
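Backpropagating through the recurrence multiplies the gradient by one Jacobian per timestep; with the tanh update from the forward equations above,

$$\frac{\partial h^{(t)}}{\partial h^{(k)}} = \prod_{i=k+1}^{t} \frac{\partial h^{(i)}}{\partial h^{(i-1)}} = \prod_{i=k+1}^{t} \operatorname{diag}\big(\tanh'(a^{(i)})\big)\, W$$

Since the tanh derivative is at most 1 (and the sigmoid derivative at most 0.25), long products of such factors tend to shrink towards zero (vanishing gradients) unless the weights 𝑾 amplify them, in which case the gradients can instead explode.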

Recursive neural networks

[Figure: a recursive network over four inputs 𝒙(1)–𝒙(4). The inputs are fed into the tree via weights 𝑽, the internal nodes combine their children using the shared weights 𝑼 and 𝑾, and the root produces the output 𝒐, which is compared with the target 𝒚 through the loss 𝑳.]
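A sketch of the composition step suggested by the figure (the exact parameterisation is not spelled out on the slide; the bias $b$ is added for completeness): each internal node combines the representations of its two children using the shared weights, for example

$$h_{\text{parent}} = \tanh\big(U\, h_{\text{left}} + W\, h_{\text{right}} + b\big)$$

with the leaf representations obtained from the inputs via 𝑽.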

Long Short-Term Memory (LSTM)

Long Short-Term Memory

• The Long Short-Term Memory (LSTM) architecture was specifically designed to combat the vanishing gradient problem.

• Metaphor: The dynamic state of the neural network can be considered as a short-term memory.

• The LSTM architecture tries to make this short-term memory last as long as possible by preventing vanishing gradients.

• Central idea: gating mechanism

Memory cell and gating mechanism

The crucial innovation in an LSTM is the design of its memory cell.

• Information is written into the cell whenever its ‘write’ gate is on.

• The information stays in the cell as long as its ‘keep’ gate is on.

• Information is read from the cell whenever its ‘read’ gate is on.

Information flow in an LSTM

[Figure: information flow over time. A value of 1.7 is written into the memory cell while the ‘write’ gate is on, retained across several timesteps while the ‘keep’ gate is on, and read out again while the ‘read’ gate is on.]

Attribution: Geoffrey Hinton

A look inside an LSTM cell

[Figure: schematic of an LSTM cell. Inputs 𝒙(𝑖) and the previous external state 𝒉(𝑖−1), the internal state 𝒔(𝑖−1) → 𝒔(𝑖), and the new external state 𝒉(𝑖), which is also emitted as the output 𝒚(𝑖); three sigmoid (𝜎) gates and a tanh candidate feed elementwise products (•) and a sum (+).]

Attribution: Chris Olah

The ‘keep’ gate (‘forget gate’)

[Figure: the ‘keep’ gate is a sigmoid (𝜎) layer over the input 𝒙(𝑖) and the previous external state 𝒉(𝑖−1).]

Attribution: Chris Olah
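In equations (a standard formulation; the weight names $W_f$, $U_f$, $b_f$ are introduced here for illustration, with $f$ for ‘forget’):

$$f^{(i)} = \sigma\big(W_f\, x^{(i)} + U_f\, h^{(i-1)} + b_f\big)$$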

The ‘write’ gate (‘input gate’)

[Figure: the ‘write’ gate is a sigmoid (𝜎) layer over the input 𝒙(𝑖) and the previous external state 𝒉(𝑖−1).]

Attribution: Chris Olah
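Analogously, with illustrative weight names (this gate is often denoted $i$ for ‘input’, but $i$ is the timestep index here, so $g$ is used instead):

$$g^{(i)} = \sigma\big(W_g\, x^{(i)} + U_g\, h^{(i-1)} + b_g\big)$$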

Update candidate

[Figure: the update candidate is a tanh layer over the input 𝒙(𝑖) and the previous external state 𝒉(𝑖−1).]

Attribution: Chris Olah
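In equations (again with illustrative weight names):

$$\tilde{s}^{(i)} = \tanh\big(W_s\, x^{(i)} + U_s\, h^{(i-1)} + b_s\big)$$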

Updating the internal state

[Figure: updating the internal state. The old state 𝒔(𝑖−1) is scaled elementwise (•) by the ‘keep’ gate, the update candidate is scaled by the ‘write’ gate, and the two are summed (+) to give the new state 𝒔(𝑖).]

Attribution: Chris Olah
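Putting the previous pieces together ($\odot$ denotes elementwise multiplication):

$$s^{(i)} = f^{(i)} \odot s^{(i-1)} + g^{(i)} \odot \tilde{s}^{(i)}$$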

The ‘read’ gate (‘output gate’)

[Figure: the ‘read’ gate is a sigmoid (𝜎) layer over the input 𝒙(𝑖) and the previous external state 𝒉(𝑖−1).]

Attribution: Chris Olah
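In equations (with illustrative weight names, $r$ for ‘read’):

$$r^{(i)} = \sigma\big(W_r\, x^{(i)} + U_r\, h^{(i-1)} + b_r\big)$$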

Updating the external state

[Figure: updating the external state. The new internal state is passed through tanh and scaled by the ‘read’ gate to give the new external state 𝒉(𝑖), which is also emitted as the output 𝒚(𝑖).]

Attribution: Chris Olah
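In equations (with the output taken to be the new external state, as in the figure):

$$h^{(i)} = r^{(i)} \odot \tanh\big(s^{(i)}\big), \qquad y^{(i)} = h^{(i)}$$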

Peephole connections

[Figure: an LSTM cell with peephole connections. As before, but the gate layers additionally receive the internal state as input (inputs 𝒙𝑖, internal states 𝒔𝑖−1 → 𝒔𝑖, output 𝒚𝑖; sigmoid gates, tanh layers, elementwise products × and a sum +).]

Attribution: Chris Olah
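A sketch of the modification for the ‘keep’ gate (the peephole weights $P_f$ are introduced here for illustration; the other gates are extended analogously):

$$f^{(i)} = \sigma\big(W_f\, x^{(i)} + U_f\, h^{(i-1)} + P_f\, s^{(i-1)} + b_f\big)$$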

Gated Recurrent Unit (GRU)

[Figure: a Gated Recurrent Unit. The previous state 𝒉(𝑖−1) and the input 𝒙(𝑖) feed two sigmoid (𝜎) gates and a tanh candidate; the new state 𝒉(𝑖) is an elementwise interpolation (via • and 1−•) between the old state and the candidate.]

Attribution: Chris Olah
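In equations (a standard formulation with illustrative weight names; $z$ is the update gate and $r$ here denotes the GRU’s reset gate; conventions differ as to which term receives $z$ and which $1 - z$):

$$z^{(i)} = \sigma\big(W_z\, x^{(i)} + U_z\, h^{(i-1)} + b_z\big), \qquad r^{(i)} = \sigma\big(W_r\, x^{(i)} + U_r\, h^{(i-1)} + b_r\big)$$
$$\tilde{h}^{(i)} = \tanh\big(W_h\, x^{(i)} + U_h\,\big(r^{(i)} \odot h^{(i-1)}\big) + b_h\big)$$
$$h^{(i)} = \big(1 - z^{(i)}\big) \odot h^{(i-1)} + z^{(i)} \odot \tilde{h}^{(i)}$$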

Bidirectional RNNs

• In speech recognition, the correct interpretation of a given sound may depend on both the previous sounds and the next sounds.

• Bidirectional RNNs combine one RNN that moves forward through time with another RNN that moves backward.

• The output can be a representation that depends on both the past and the future, without having to specify a fixed-sized window.
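A compact way to write this (combining the two states by concatenation is one common choice rather than the only one):

$$h^{(t)} = f_{\mathrm{F}}\big(h^{(t-1)}, x^{(t)}\big), \qquad g^{(t)} = f_{\mathrm{B}}\big(g^{(t+1)}, x^{(t)}\big), \qquad y^{(t)} = \operatorname{out}\big(\big[\,h^{(t)};\, g^{(t)}\,\big]\big)$$

where $f_{\mathrm{F}}$ is the forward RNN, $f_{\mathrm{B}}$ the backward RNN, and $h^{(t)}$, $g^{(t)}$ their respective states, as in the figure below.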

A bidirectional RNN

[Figure: a bidirectional RNN. A forward RNN (F) computes states 𝒉(t) from left to right, a backward RNN (B) computes states 𝒈(t) from right to left, and each output 𝒚(t) is computed from both 𝒉(t) and 𝒈(t) at that position.]

Attribution: Chris Olah
