
Introduction to Natural Computation

Lecture 9

Multilayer Perceptrons and Backpropagation

Peter Lewis

1 / 25

Overview of the Lecture

Why multilayer perceptrons?

Some applications of multilayer perceptrons.

Learning with multilayer perceptrons:

Minimising error using gradient descent.

How to update weights.

The backpropagation learning algorithm.

2 / 25

Recap: Perceptrons

[Figure: a single perceptron. A bias input fixed at −1 with weight w0, and inputs x1 … xn with weights w1 … wn, feed a weighted sum that is passed through a step activation.]

y = 1 if ∑_{i=0}^{n} wi·xi > 0, otherwise y = 0 (with x0 = −1).

A key limitation of the single perceptron

Only works for linearly separable data.

3 / 25

Multilayer Perceptrons (MLPs)

Input layer

Hidden layer

Output layer

MLPs are feed-forward neural networks.

Organised in layers:
One input layer of distribution points.
One or more hidden layers of artificial neurons (nodes).
One output layer of artificial neurons (nodes).

Each node in a layer is connected to all nodes in the next layer.
Each connection has a weight (remember that the weight can be zero).

MLPs are universal approximators!
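The layered structure can be sketched as a forward pass in Python (an illustrative sketch; the function names and the made-up weight values are my own, with each node's first weight acting on a bias input fixed at −1):

```python
import math

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

def layer(inputs, weights):
    """One fully connected layer. weights[i][0] is node i's bias weight
    (applied to a bias input of -1); weights[i][1:] act on the inputs."""
    xs = [-1.0] + list(inputs)
    return [sigmoid(sum(w * x for w, x in zip(ws, xs))) for ws in weights]

def mlp_forward(x, hidden_w, output_w):
    """The input layer only distributes x; hidden and output layers compute."""
    return layer(layer(x, hidden_w), output_w)

# Made-up weights: 2 inputs -> 2 hidden nodes -> 1 output node.
y = mlp_forward([0.5, 1.0],
                hidden_w=[[0.1, 0.4, -0.2], [0.3, 0.8, 0.5]],
                output_w=[[0.2, 1.0, -1.0]])
print(y)
```

Note that every node sees every node in the previous layer, matching the "connected to all nodes in the next layer" structure above.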

4 / 25

MLPs: Universal approximators

Figure: Partitioning of the input space by linear threshold nodes in an MLP with two hidden layers and one output node in the output layer, with examples of separable decision regions. Source: http://www.neural-forecasting.com/mlp_neural_nets.htm.

5 / 25

Activation Functions

[Figure: a neuron with inputs x1 … xn and weights w1 … wn, whose weighted sum h is passed through an activation function to give the output y.]

Step function

y jumps from 0 to 1 as h crosses the threshold.

Outputs 0 or 1.

Sigmoid function

y = 1 / (1 + e^(−a·h))

Outputs a real value between 0 and 1.

With sigmoid activation functions we would get smooth transitions instead of hard decision boundaries (e.g. in the previous slide's example).
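The contrast can be seen by evaluating both activations near the threshold (a small sketch; function names are my own, and a = 1 is assumed for the sigmoid):

```python
import math

def step(h):
    return 1 if h > 0 else 0

def sigmoid(h, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * h))

# The step function jumps abruptly at h = 0, while the sigmoid
# passes smoothly through 0.5 -- hence smooth decision boundaries.
for h in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"h = {h:+.1f}   step = {step(h)}   sigmoid = {sigmoid(h):.3f}")
```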

6 / 25

Sigmoids. . .

7 / 25

Applications of MLPs

MLPs have been applied to a very wide range of problems. . .

Classification problems.

Regression problems.

Examples

Patients’ lengths of stay.

Stock price forecasting.

Image recognition.

Car driving control.

. . .

8 / 25

Learning with MLPs

Input layer

Hidden layer

Output layer

As with single perceptrons, finding the right weights is very hard!

Solution technique: learning!

As with perceptrons, learning means adjusting the weights based on training examples.

9 / 25

Supervised Learning

General idea

1 Send the MLP an input pattern, x, from the training set.

2 Get the output from the MLP, y.

3 Compare y with the “right answer”, or target t, to get the error quantity.

4 Use the error quantity to modify the weights, so next time y will be closer to t.

5 Repeat with another x from the training set.

Of course, when updating weights after seeing x, the network doesn’t just change the way it deals with x, but other inputs too. . .

Inputs it has not seen yet!

The ability to deal accurately with unseen inputs is called generalisation.

10 / 25

Learning and Error Minimisation

Recall: the perceptron learning rule

Try to minimise the difference between the actual and desired outputs:

w′i = wi + α(t − y)·xi

We define an error function to represent such a difference over a set of inputs.

Typical error function: mean squared error (MSE)

For example, the mean squared error can be defined as:

E(~w) = (1 / 2N) ∑_{p=1}^{N} (tp − op)²

where

N is the number of patterns,

tp is the target output for the pattern p,

op is the output obtained for the pattern p.

The factor of 2 in the denominator makes little difference, but is inserted to make life easier later on!

This tells us how “good” the neural network is.
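The error function above can be computed directly (a one-function sketch; the function name is my own):

```python
def mse(targets, outputs):
    """Mean squared error E(~w) = 1/(2N) * sum over patterns of (t_p - o_p)^2."""
    n = len(targets)
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / (2 * n)

# Three patterns: errors 0.1, 0.2, 0.2, so E = 0.09 / 6 = 0.015.
print(mse([1, 0, 1], [0.9, 0.2, 0.8]))
```

The smaller this value, the better the network fits the training patterns.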

Learning aims to minimise the error E(~w) by adjusting the weights ~w.

11 / 25

Gradient Descent

One technique that can be used for minimising functions isgradient descent.

Can we use this on our error function E?

We would like a learning rule that tells us how to update weights, like this:

w′ij = wij + ∆wij

But what should ∆wij be?

12 / 25

Summary So Far...

We learnt what a multilayer perceptron is and why we might need them.

We know the sort of learning rule we would like, to update weights in order tominimise the error.

However, we don’t yet know what this learning rule should be.

We need to learn a bit about gradient and derivatives to figure it out!

After that, we can come back to the learning rule.

13 / 25

Gradient and Derivatives: The idea

The derivative is a measure of the rate of change of a function, as its input changes.

Consider a function y = f(x).

The derivative dy/dx indicates how much y changes in response to changes in x.

If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope or gradient of the line at each point, i.e., it describes the steepness or incline.

14 / 25

Gradient and Derivatives: The idea

dy/dx > 0 implies that y increases as x increases.

If we want to find the minimum y, we should reduce x.

dy/dx < 0 implies that y decreases as x increases.

If we want to find the minimum y, we should increase x.

dy/dx = 0 implies that we are at a minimum, a maximum, or a plateau.

To get closer to the minimum:

x_new = x_old − η dy/dx

where η indicates how much we would like to reduce or increase x, and dy/dx tells us the correct direction to go.
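The one-dimensional update rule can be tried on a toy function (an illustrative sketch; the function name, the target function, and the η value are my own choices):

```python
def grad_descent(dydx, x, eta=0.1, steps=100):
    """Repeatedly apply x_new = x_old - eta * dy/dx."""
    for _ in range(steps):
        x = x - eta * dydx(x)
    return x

# Minimise y = (x - 3)^2, whose derivative is 2(x - 3); the minimum is at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x=0.0)
print(round(x_min, 4))  # close to 3.0
```

Note how the sign of the derivative steers x: left of the minimum dy/dx < 0, so x increases; right of it dy/dx > 0, so x decreases.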

15 / 25

Gradient and Derivatives: The idea

OK... so we know how to use derivatives to adjust one input value.

But we have several weights to adjust!

We need to use partial derivatives.

A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant.

Example

If y = f(x1, x2), then we can have ∂y/∂x1 and ∂y/∂x2.

In our learning rule case, if we can work out the partial derivatives, we can use this rule to update the weights:

Learning rule

w′ij = wij + ∆wij, where ∆wij = −η ∂E/∂wij.

16 / 25

Summary So Far...

We learnt what a multilayer perceptron is and why we might need them.

We know a learning rule for updating weights in order to minimise the error:

w′ij = wij + ∆wij

where ∆wij = −η ∂E/∂wij.

∆wij tells us in which direction and how much we should change each weight to roll down the slope (descend the gradient) of the error function E.

So, how do we calculate ∂E/∂wij?

17 / 25

Using Gradient Descent to Minimise the Error

Recall the mean squared error function E, which we want to minimise:

E(~w) = (1 / 2N) ∑_{p=1}^{N} (tp − op)²

If we use a sigmoid activation function f, then the output of neuron i for pattern p is

opi = f(ui) = 1 / (1 + e^(−a·ui))

where a is a pre-defined constant.

And ui is the result of the input function in neuron i:

ui = ∑j wij·xij

For the pth pattern and the ith neuron, we use gradient descent on the error function:

∆wij = −η ∂Ep/∂wij = η (tpi − opi) f′(ui) xij

where f′(ui) = df/dui is the derivative of f with respect to ui.

If f is the sigmoid function, f′(ui) = a·f(ui)·(1 − f(ui)).

18 / 25
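The sigmoid-derivative identity f′(u) = a·f(u)·(1 − f(u)) can be checked numerically (a quick sanity-check sketch; the function names, the value a = 2, and the central-difference check are my own):

```python
import math

def f(u, a=2.0):
    """Sigmoid with steepness constant a."""
    return 1.0 / (1.0 + math.exp(-a * u))

def f_prime_analytic(u, a=2.0):
    """The closed form: f'(u) = a * f(u) * (1 - f(u))."""
    return a * f(u, a) * (1.0 - f(u, a))

def f_prime_numeric(u, a=2.0, eps=1e-6):
    """Central finite difference as an independent estimate of df/du."""
    return (f(u + eps, a) - f(u - eps, a)) / (2 * eps)

for u in [-1.0, 0.0, 0.7]:
    assert abs(f_prime_analytic(u) - f_prime_numeric(u)) < 1e-6
```

This cheapness of f′ (one multiply on values already computed in the forward pass) is what makes the sigmoid's derivative "easy".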

Using Gradient Descent to Minimise the Error

So, we can update weights after processing each pattern, using this rule:

∆wij = η (tpi − opi) f′(ui) xij = η δpi xij

This is known as the generalised delta rule.

Note that we need to use the derivative of the activation function f. So, f must be differentiable!

The threshold activation function is not continuous, thus not differentiable!

The sigmoid activation function has a derivative which is “easy” to calculate.

19 / 25

Updating Output vs Hidden Neurons

We can update output neurons using the generalised delta rule:

∆wij = η δpi xij

δpi = f′(ui)(tpi − opi)

This δpi is only good for the output neurons, since it relies on target outputs.

But we don’t have target output for the hidden nodes! What can we use instead?

δpi = f′(ui) ∑k wki·δk

This rule propagates error back from output nodes to hidden nodes.

In effect, it blames hidden nodes according to how much influence they had.
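The blame assignment for a hidden node can be sketched directly (an illustrative sketch; the function names and the example numbers are my own, with a = 1 assumed):

```python
import math

def sigmoid(u, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * u))

def f_prime(u, a=1.0):
    # For the sigmoid, f'(u) = a * f(u) * (1 - f(u)).
    return a * sigmoid(u, a) * (1.0 - sigmoid(u, a))

def hidden_delta(u_i, w_ki, output_deltas, a=1.0):
    """delta_i = f'(u_i) * sum_k w_ki * delta_k: hidden node i is blamed in
    proportion to how strongly it feeds each output node k (weight w_ki)
    and how wrong that output node was (delta_k)."""
    return f_prime(u_i, a) * sum(w * d for w, d in zip(w_ki, output_deltas))

# A hidden node feeding two output nodes with weights 0.8 and -0.3,
# whose deltas are 0.1 and 0.2; here f'(0) = 0.25.
d = hidden_delta(u_i=0.0, w_ki=[0.8, -0.3], output_deltas=[0.1, 0.2])
print(d)
```

Notice that no target value appears: the hidden delta is built entirely from the output deltas propagated back through the weights.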

So, now we have rules for updating both output and hidden neurons!

20 / 25

Backpropagation

Input layer

Hidden layer

Output layer

Backpropagation of the error

Forward propagation of the input signals

Backpropagation updates an MLP’s weights based on gradient descent of the error function.

21 / 25

Online/Stochastic Backpropagation

1: Initialize all weights to small random values.
2: repeat
3:   for each training example do
4:     Forward propagate the input features of the example to determine the MLP’s outputs.
5:     Back propagate the error to generate ∆wij for all weights wij.
6:     Update the weights using ∆wij.
7:   end for
8: until stopping criteria reached.
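The online algorithm above can be sketched end to end in plain Python (a minimal, illustrative implementation, not the definitive one: the class name, layer sizes, learning rate, and the tiny XOR training set are my own choices; a = 1 is assumed, so f′(u) = f(u)(1 − f(u)) = y(1 − y)):

```python
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

class MLP:
    """One-hidden-layer MLP trained with online backpropagation.
    Each node's weight list starts with its bias weight, applied
    to a bias input fixed at -1.0."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rnd = random.Random(seed)
        self.hidden = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                       for _ in range(n_hidden)]
        self.output = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                       for _ in range(n_out)]

    @staticmethod
    def _activate(weights, inputs):
        xs = [-1.0] + list(inputs)            # prepend the bias input
        return [sigmoid(sum(w * x for w, x in zip(ws, xs))) for ws in weights]

    def forward(self, x):
        h = self._activate(self.hidden, x)
        return h, self._activate(self.output, h)

    def train_step(self, x, t, eta=0.5):
        h, y = self.forward(x)
        # Output deltas: delta_i = f'(u_i) * (t_i - o_i) = y(1 - y)(t - y).
        d_out = [yi * (1 - yi) * (ti - yi) for yi, ti in zip(y, t)]
        # Hidden deltas: delta_j = f'(u_j) * sum_k w_kj * delta_k
        # (index j + 1 skips each output node's bias weight).
        d_hid = [hj * (1 - hj) * sum(self.output[k][j + 1] * d_out[k]
                                     for k in range(len(d_out)))
                 for j, hj in enumerate(h)]
        # Weight updates: delta_w_ij = eta * delta_i * x_ij.
        for ws, d in zip(self.output, d_out):
            for j, x_j in enumerate([-1.0] + h):
                ws[j] += eta * d * x_j
        for ws, d in zip(self.hidden, d_hid):
            for j, x_j in enumerate([-1.0] + list(x)):
                ws[j] += eta * d * x_j

# XOR is not linearly separable, so a single perceptron fails on it,
# but an MLP can learn it.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = MLP(2, 3, 1, seed=1)
for _ in range(5000):
    for x, t in data:
        net.train_step(x, t)
for x, t in data:
    print(x, "->", round(net.forward(x)[1][0], 2), "target", t[0])
```

Since this is online (stochastic) backpropagation, the weights change after every single example; the batch variant on the next slide would instead accumulate all four ∆wij before applying them.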

22 / 25

Batch Backpropagation

1: Initialize all weights to small random values.
2: repeat
3:   for each training example do
4:     Forward propagate the input features of the example to determine the MLP’s outputs.
5:     Back propagate the error to generate ∆wij for all weights wij.
6:   end for
7:   Update the weights based on the accumulated values ∆wij.
8: until stopping criteria reached.

23 / 25

Summary

We learnt what a multilayer perceptron is and why we might need them.

We have some intuition about using gradient descent on an error function.

We know a learning rule for updating weights in order to minimise the error:

∆wij = −η ∂E/∂wij

If we use the squared error, we get the generalised delta rule

∆wij = η δpi xij .

We know how to calculate δpi for both output and hidden layers.

We can use this rule to learn an MLP’s weights using the backpropagation algorithm.

24 / 25

Further Reading

Gurney K. Introduction to Neural Networks. 3rd ed. Taylor and Francis; 2003.

25 / 25