
Page 1

[Figure: biological neuron vs. artificial neuron]

Page 2

A two-layer neural network

Input layer (activations represent the feature vector for one training example)

Hidden layer (“internal representation”)

Output layer (activation represents the classification)

Weighted connections

Page 3

Example: ALVINN (Pomerleau, 1993)

• ALVINN learns to drive an autonomous vehicle at normal speeds on public highways (!)

• Input: 30 x 32 grid of pixel intensities from camera

Page 4

Each output unit corresponds to a particular steering direction. The most highly activated one gives the direction to steer.

Page 5

What kinds of problems are suitable for neural networks?

• Have sufficient training data

• Long training times are acceptable

• Not necessary for humans to understand learned target function or hypothesis

Page 6

Advantages of neural networks

• Designed to be parallelized

• Robust to noisy training data

• Fast to evaluate new examples

• “Bad at logic, good at frisbee” (Andy Clark)

Page 7

Perceptrons

• Invented by Rosenblatt in the 1950s

• Single-layer feedforward network:

[Figure: perceptron with inputs $x_1, \ldots, x_n$, a constant input $+1$ with bias weight $w_0$, weights $w_1, \ldots, w_n$, and output $o$]

$s = \sum_{i=0}^{n} w_i x_i \quad (x_0 = +1), \qquad g(s) = \begin{cases} 1 & \text{if } s > 0 \\ -1 & \text{otherwise} \end{cases}$

This can be used for classification: +1 = positive, −1 = negative.
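As a concrete illustration, here is a minimal Python sketch of this unit. The slides contain no code, and the example weights (a logical-OR unit) are made up for illustration.

```python
import numpy as np

def perceptron_output(w, x):
    """w has length n+1, with w[0] the bias weight on the constant +1 input."""
    s = w[0] + np.dot(w[1:], x)      # s = sum_{i=0}^{n} w_i x_i, with x_0 = +1
    return 1 if s > 0 else -1        # g(s): +1 = positive, -1 = negative

# Illustrative weights implementing logical OR on {-1, +1} inputs.
w = np.array([0.5, 1.0, 1.0])
print(perceptron_output(w, np.array([-1, -1])))   # -1
print(perceptron_output(w, np.array([-1, +1])))   # +1
```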

Page 8

• Consider example with two inputs, x1, x2.

[Figure: positive (+) and negative (−) examples in the $(x_1, x_2)$ plane, separated by a line]

Can view the trained network as defining a “separation line”. What is its equation?

We have:

$w_0 + w_1 x_1 + w_2 x_2 = 0 \;\Rightarrow\; x_2 = -\frac{w_1}{w_2}\, x_1 - \frac{w_0}{w_2}$
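In code, the line can be read off directly from the weights. The weight values below are hypothetical, just to show the arithmetic.

```python
# Hypothetical trained weights; the separation line follows from
# w0 + w1*x1 + w2*x2 = 0.
w0, w1, w2 = -1.0, 2.0, 4.0
slope = -w1 / w2                     # -(w1 / w2)
intercept = -w0 / w2                 # -(w0 / w2)
print(f"x2 = {slope} * x1 + {intercept}")   # x2 = -0.5 * x1 + 0.25
```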

Page 9

• This is a “linear discriminant function” or “linear decision surface”.

• Weights determine slope and bias determines offset.

• Can generalize to n-dimensions:

– Decision surface is an n-1 dimensional “hyperplane”.

$w_0 + w_1 x_1 + \cdots + w_n x_n = 0$

Page 10

Derivation of a learning rule for Perceptrons

• Let $S = \{(\mathbf{x}_i, y_i): i = 1, 2, \ldots, m\}$ be a training set. (Note: $\mathbf{x}_i$ is a vector of inputs, and $y_i \in \{+1, -1\}$ for binary classification.)

• Let hw be the perceptron classifier represented by the weight vector w.

• Definition:

$E(\mathbf{x}) = \text{SquaredError}(\mathbf{x}) = \frac{1}{2}\bigl(y - h_{\mathbf{w}}(\mathbf{x})\bigr)^2$

Page 11

Gradient descent in weight space

From T. M. Mitchell, Machine Learning

Page 12

• We want a learning method that will minimize E over S.

• For each weight wi , change weight in direction of steepest descent:

$\frac{\partial E}{\partial w_j} = (y - h_{\mathbf{w}}(\mathbf{x}))\,\frac{\partial}{\partial w_j}\bigl(y - h_{\mathbf{w}}(\mathbf{x})\bigr) = -(y - h_{\mathbf{w}}(\mathbf{x}))\,\frac{\partial}{\partial w_j}\, g(w_0 x_0 + w_1 x_1 + \cdots + w_n x_n) = -\bigl(y - g(in(\mathbf{x}))\bigr)\, g'(in(\mathbf{x}))\, x_j$

This gives the weight update

$w_j \leftarrow w_j + \eta\,\bigl(y - g(in(\mathbf{x}))\bigr)\, g'(in(\mathbf{x}))\, x_j$
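The derivative can be sanity-checked numerically. The sketch below assumes a differentiable g (a sigmoid, anticipating the later slides), since the step function has no usable gradient; the weights and inputs are made up.

```python
import numpy as np

def g(s):                                   # sigmoid, so g'(s) = g(s)(1 - g(s))
    return 1.0 / (1.0 + np.exp(-s))

def E(w, x, y):                             # squared error for one example
    return 0.5 * (y - g(np.dot(w, x))) ** 2

w = np.array([0.1, -0.3, 0.8])              # w[0] is the bias weight
x = np.array([1.0, 0.5, -1.2])              # x[0] = +1 is the constant bias input
y = 1.0

s = np.dot(w, x)
analytic = -(y - g(s)) * g(s) * (1 - g(s)) * x   # -(y - g(in)) g'(in) x_j

eps = 1e-6                                   # central finite differences
numeric = np.array([
    (E(w + eps * np.eye(3)[j], x, y) - E(w - eps * np.eye(3)[j], x, y)) / (2 * eps)
    for j in range(3)
])
print(np.allclose(analytic, numeric))        # True
```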

Page 13

Perceptron learning rule:

1. Start with random weights, w = (w1, w2, ... , wn).

2. Select training example $(\mathbf{x}, y) \in S$.

3. Run the perceptron with input $\mathbf{x}$ and weights $\mathbf{w}$ to obtain $g \in \{+1, -1\}$.

4. Let $\eta$ be the training rate (a user-set parameter). Now,

$w_i \leftarrow w_i + \Delta w_i\,, \quad \text{where} \quad \Delta w_i = \eta\,\bigl(y - g(in(\mathbf{x}))\bigr)\, x_i$

(We leave out the $g'(in)$ since it is the same for all weights.)

5. Go to 2.
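A minimal Python sketch of steps 1 through 5. The dataset (logical AND, which is linearly separable) and η are illustrative, not from the slides.

```python
import numpy as np

def train_perceptron(S, eta=0.1, epochs=100):
    n = len(S[0][0])
    w = np.random.uniform(-0.5, 0.5, n + 1)               # 1. random weights; w[0] is bias
    for _ in range(epochs):                               # 5. "go to 2"
        for x, y in S:                                    # 2. select training example
            g = 1 if w[0] + np.dot(w[1:], x) > 0 else -1  # 3. run the perceptron
            w[0] += eta * (y - g)                         # 4. update, with bias input +1:
            w[1:] += eta * (y - g) * x                    #    w_i += eta (y - g) x_i
    return w

# Logical AND on {-1, +1} inputs is linearly separable, so this converges.
S = [(np.array([-1, -1]), -1), (np.array([-1, 1]), -1),
     (np.array([1, -1]), -1), (np.array([1, 1]), 1)]
w = train_perceptron(S)
```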

Page 14

• The perceptron learning rule has been proven to converge to correct weights in a finite number of steps, provided the training examples are “linearly separable”.

• Exercise: Give an example of training examples that are not linearly separable.

Page 15

Exclusive-Or

• In their book Perceptrons, Minsky and Papert proved that a single-layer perceptron cannot represent the two-bit exclusive-or function.

x0   x1   x0 XOR x1
−1   −1      −1
−1    1       1
 1   −1       1
 1    1      −1

Page 16

[Figure: single perceptron with inputs $x_1, x_2$, constant input $+1$, and weights $w_0, w_1, w_2$]

Summed input to output node given by:

$s = w_0 + w_1 x_1 + w_2 x_2$

Decision surface is always a line.

XOR is not “linearly separable”.
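A brute-force illustration, not a proof: scanning a coarse grid of weight settings finds none that classifies all four XOR points correctly.

```python
import itertools
import numpy as np

xor = [((-1, -1), -1), ((-1, 1), 1), ((1, -1), 1), ((1, 1), -1)]
grid = np.linspace(-2, 2, 21)                 # coarse grid over (w0, w1, w2)
separable = any(
    all((1 if w0 + w1 * x1 + w2 * x2 > 0 else -1) == y for (x1, x2), y in xor)
    for w0, w1, w2 in itertools.product(grid, repeat=3)
)
print(separable)    # False: no line on this grid separates XOR
```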

Page 17

• 1969: Minsky and Papert proved that perceptrons cannot represent non-linearly separable target functions.

• However, they proved that for binary inputs, any transformation can be carried out by adding a fully connected hidden layer.

• In this case, a hidden node creates a three-dimensional space defined by the two inputs plus the hidden node’s activation. The four XOR points can then be separated by a plane.

Page 18

• Good news: Adding a hidden layer allows more target functions to be represented.

• Bad news: No algorithm for learning in multi-layered networks, and no convergence theorem!

• Quote from Perceptrons (Expanded edition, p. 231):

“[The perceptron] has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgment that the extension is sterile.”

Page 19

• Two major problems they saw were:

1. How can the learning algorithm apportion credit (or blame) for an incorrect classification among the individual weights, when the classification may depend on a (sometimes) large number of weights?

2. How can such a network learn useful higher-order features?

• Good news: Successful credit-apportionment learning algorithms were developed soon afterwards (e.g., back-propagation). They are still successful, in spite of the lack of a convergence theorem.

Page 20

Multi-layer Perceptrons (MLPs)

• Single-layer perceptrons can only represent linear decision surfaces.

• Multi-layer perceptrons can represent non-linear decision surfaces:

Page 21

Differentiable Threshold Units

• In order to represent non-linear functions, we need a non-linear activation function at each unit.

• In order to do gradient descent on the weights, we need a differentiable activation function.

Page 22

Sigmoid activation function:

$o = \sigma(\mathbf{w} \cdot \mathbf{x})\,, \quad \text{where} \quad \sigma(y) = \frac{1}{1 + e^{-y}}$

To do gradient descent in multi-layer networks, we need to generalize the perceptron learning rule.

Page 23

Why use sigmoid activation function?

• Note that sigmoid activation function is non-linear, differentiable, and approximates a sgn function.

• The derivative of the sigmoid activation function is easily expressed in terms of its output:

$\frac{d\,\sigma(y)}{dy} = \sigma(y)\,\bigl(1 - \sigma(y)\bigr)$

This is useful in deriving the back-propagation algorithm.
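Both the function and its output-based derivative are one-liners, as in this sketch:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def sigmoid_deriv(y):
    s = sigmoid(y)
    return s * (1.0 - s)          # derivative expressed in terms of the output

print(sigmoid(0.0), sigmoid_deriv(0.0))   # 0.5 0.25
```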

Page 24

Training multi-layer perceptrons (the back-propagation algorithm)

Assume two-layer networks:

I. For each training example:

1. Present input to the input layer.

2. Forward propagate the activations times the weights to each node in the hidden layer.

Page 25

3. Forward propagate the activations times weights from the hidden layer to the output layer.

4. At each output unit, determine the error E.

5. Run the back-propagation algorithm to update all weights in the network.

II. Repeat (I) for a given number of “epochs” or until accuracy on training or test data is acceptable.
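Here is a minimal sketch of steps 1 through 5 for one training example on a two-layer sigmoid network, using the squared error and the derivative identity from the previous slide. Bias terms are omitted for brevity, and all sizes and data are illustrative.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(x, t, W1, W2, eta=0.3):
    h = sigmoid(W1 @ x)                        # 1-2. input -> hidden
    o = sigmoid(W2 @ h)                        # 3. hidden -> output
    delta_o = (t - o) * o * (1 - o)            # 4. output error times sigma'(in)
    delta_h = (W2.T @ delta_o) * h * (1 - h)   # 5. back-propagate the blame...
    W2 += eta * np.outer(delta_o, h)           #    ...and update all weights
    W1 += eta * np.outer(delta_h, x)
    return 0.5 * np.sum((t - o) ** 2)          # E for this example

# Toy data: 8 examples, 5 inputs, 3 hidden units, 2 outputs.
rng = np.random.default_rng(0)
X, T = rng.random((8, 5)), rng.random((8, 2))
W1 = rng.uniform(-0.5, 0.5, (3, 5))
W2 = rng.uniform(-0.5, 0.5, (2, 3))
for epoch in range(100):                       # II. repeat for a number of epochs
    for x, t in zip(X, T):
        backprop_step(x, t, W1, W2)
```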

Page 26

Example: Face recognition

(From T. M. Mitchell, Machine Learning, Chapter 4)

• Task: classify camera images of various people in various poses.

• Data: Photos, varying:

– Facial expression: happy, sad, angry, neutral

– Direction person is facing: left, right, straight ahead, up

– Wearing sunglasses?: yes, no

Within these, variation in background, clothes, position of face for a given person.

Page 27

an2i_right_sad_sunglasses_4

an2i_left_angry_open_4

glickman_left_angry_open_4

Page 28

Page 29

• Preprocessing of photo:

– Create 30x32 coarse resolution version of 120x128 image

– This makes the size of the neural network more manageable (a preprocessing sketch follows this list)

• Input to neural network:

– Photo is encoded as 30x32 = 960 pixel intensity values, scaled to be in [0,1]

– One input unit per pixel

• Output units:

– Encode classification of input photo
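One plausible implementation of this preprocessing step. The 4x4 block averaging and the min-max scaling are assumptions; the slides specify only the 120x128 to 30x32 reduction and the [0,1] range.

```python
import numpy as np

def preprocess(img):
    """img: (120, 128) array of raw pixel intensities."""
    coarse = img.reshape(30, 4, 32, 4).mean(axis=(1, 3))   # average 4x4 blocks
    lo, hi = coarse.min(), coarse.max()
    scaled = (coarse - lo) / (hi - lo)                     # scale into [0, 1]
    return scaled.ravel()                                  # 960 values, one per input unit

x = preprocess(np.random.randint(0, 256, (120, 128)).astype(float))
print(x.shape)   # (960,)
```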

Page 30

• Possible target functions for neural network:

– Direction person is facing

– Identity of person

– Gender of person

– Facial expression

– etc.

• As an example, consider target of “direction person is facing”.

Page 31

Network architecture

[Figure: fully connected network; output units labeled left, straight, right, up]

Input layer: 960 units

Hidden layer: 3 units

Output layer: 4 units (left, straight, right, up)

Network is fully connected. Classification result is the most highly activated output unit.

Page 32

Target function

• Target function is:

– Output unit should have activation 0.9 if it corresponds to correct classification

– Otherwise output unit should have activation 0.1

• Use these values instead of 1 and 0, since sigmoid units can’t produce activations of exactly 1 and 0. (A sketch of this encoding follows.)
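A sketch of this target encoding, with the class order taken from the architecture slide:

```python
import numpy as np

DIRECTIONS = ["left", "straight", "right", "up"]

def target_vector(direction):
    t = np.full(4, 0.1)                       # 0.1 for every incorrect class
    t[DIRECTIONS.index(direction)] = 0.9      # 0.9 for the correct class
    return t

print(target_vector("right"))   # [0.1 0.1 0.9 0.1]
```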

Page 33

Other parameters

• Learning rate η = 0.3

• Momentum α = 0.3

(η and α were found through trial and error; a sketch of the update they control follows this list)

• If these are set too high, training fails to converge on network with acceptable error over training set.

• If these are set too low, training takes much longer.
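For reference, a sketch of the weight update these two parameters control, assuming the standard momentum form in which each step adds α times the previous step to the gradient step:

```python
import numpy as np

def momentum_update(w, grad, prev_dw, eta=0.3, alpha=0.3):
    dw = -eta * grad + alpha * prev_dw    # gradient step plus momentum term
    return w + dw, dw

w, prev_dw = np.zeros(3), np.zeros(3)
w, prev_dw = momentum_update(w, np.array([1.0, -2.0, 0.5]), prev_dw)
print(w)   # [-0.3 0.6 -0.15]
```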

Page 34

Training

• For maximum of M epochs:

– For each training example:

• Input photo to network

• Propagate activations to output units

• Determine error in output units

• Adjust weights using the back-propagation algorithm

– Test accuracy of network on validation set. If accuracy is acceptable, stop the algorithm. (A sketch of this loop appears below.)
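A sketch of this outer loop. The update and accuracy callables are hypothetical stand-ins for the per-example back-propagation step and the validation check.

```python
def train(update, accuracy, train_set, val_set, max_epochs, target_acc=0.9):
    """update(x, t) runs forward + back-propagation on one example;
    accuracy(examples) returns the fraction classified correctly."""
    for epoch in range(max_epochs):
        for x, t in train_set:
            update(x, t)
        if accuracy(val_set) >= target_acc:   # acceptable accuracy: stop early
            return epoch
    return max_epochs
```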

• Software and data: http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/faces.html

Page 35

Understanding weight values

• After training:

– Weights from input to hidden layer: high positive in certain facial regions

– “Right” output unit has a strong positive weight from the second hidden unit and a strong negative weight from the third hidden unit.

• Second hidden unit has positive weights on right side of face (aligns with bright skin of person turned to right) and negative weights on top of head (aligns with dark hair of person turned to right)

• Third hidden unit has negative weights on right side of face, so will output value close to zero for person turned to right.

Page 36

Page 37

• One advantage of neural networks is that “high-level feature extraction” can be done automatically!

(Similar idea in face-recognition task. Note that you didn’t have to extract high-level features ahead of time.)

Page 38

Other topics we won’t cover here

• Other methods for training the weights

• Recurrent networks

• Dynamically modifying network structure