
Page 1: Perceptron Learning Rule

CSC 302 1.5 Neural Networks
Dr TGI Fernando
2013/04/04

Page 2: Objectives

• Determining the weight matrix and bias for perceptron networks with many inputs.
• Explaining what a learning rule is.
• Developing the perceptron learning rule.
• Discussing the advantages and limitations of the single-layer perceptron.

Page 3: Development

• A neuron model was introduced by Warren McCulloch & Walter Pitts [1943].
• Main feature: the weighted sum of the input signals is compared to a threshold to determine the output (sketched in code below):
    0 if weighted_sum < 0
    1 if weighted_sum >= 0
• Able, in principle, to compute any arithmetic or logical function.
• No training method was available.
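As a minimal illustration of the output rule above (our own sketch, not part of the original slides; the function and variable names are ours):

    # Minimal sketch of a McCulloch-Pitts-style threshold unit: the
    # weighted sum of the inputs (with a bias standing in for the
    # negative threshold) is passed through a hard limit.
    def hardlim(n):
        """Return 1 if n >= 0, otherwise 0."""
        return 1 if n >= 0 else 0

    def neuron_output(weights, inputs, bias):
        weighted_sum = sum(w * p for w, p in zip(weights, inputs)) + bias
        return hardlim(weighted_sum)

    # Example: an OR-like unit with weights [1, 1] and bias -0.5.
    print(neuron_output([1, 1], [0, 1], -0.5))   # prints 1
    print(neuron_output([1, 1], [0, 0], -0.5))   # prints 0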

Page 4: Development (contd.)

• The perceptron was developed by Frank Rosenblatt [late 1950s].
• Its neurons were similar to those of McCulloch & Pitts.
• Key feature – it introduced a learning rule.
• Rosenblatt proved that the learning rule always converges to correct weights, provided weights that solve the problem exist.
• The rule is simple and automatic.
• No restriction on the initial weights – they may be random.

Page 5: Learning Rules

• A learning rule is a procedure for modifying the weights and biases of a network so that it performs a specific task.
• Supervised learning – the network is provided with a set of examples of proper network behaviour (inputs/targets).
• Reinforcement learning – the network is only provided with a grade, or score, which indicates network performance.
• Unsupervised learning – only the network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.

Page 6: Perceptron Architecture

Page 7: Perceptron Architecture (contd.)

• Output of the ith neuron:

    ai = hardlim(ni) = hardlim(iwTp + bi)

  where iw denotes the ith row of the weight matrix W.

Page 8: Perceptron Architecture (contd.)

• Therefore, if the inner product of the ith row of the weight matrix with the input vector is greater than or equal to -bi, the output will be 1; otherwise the output will be 0 (see the code sketch below).
• Each neuron in the network divides the input space into two regions.
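To make the output rule concrete, here is a small forward-pass sketch, a = hardlim(Wp + b), in our own code (the values are chosen only for illustration):

    # Sketch of a multi-neuron perceptron forward pass:
    # a_i = hardlim(iw . p + b_i), where iw is the ith row of W.
    def hardlim(n):
        return 1 if n >= 0 else 0

    def perceptron_output(W, b, p):
        """W: list of weight rows, b: list of biases, p: input vector."""
        return [hardlim(sum(w_ij * p_j for w_ij, p_j in zip(row, p)) + b_i)
                for row, b_i in zip(W, b)]

    # Two neurons on two inputs: each neuron splits the input space in
    # two, so together they assign one of up to four output patterns.
    W = [[1.0, 0.0],
         [0.0, 1.0]]
    b = [-0.5, -0.5]
    print(perceptron_output(W, b, [1.0, 0.0]))   # [1, 0]
    print(perceptron_output(W, b, [1.0, 1.0]))   # [1, 1]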

Page 9: Single-Neuron Perceptron

• Decision boundary:

    n = 1wTp + b = w1,1p1 + w1,2p2 + b = 0

Page 10: Decision Boundary

• Decision boundary: 1wTp + b = 0, or 1wTp = -b.
• All points on the decision boundary have the same inner product with the weight vector.
• The decision boundary is orthogonal to the weight vector (a numerical check follows below):
    1wTp1 = 1wTp2 = -b for any two points p1, p2 on the decision boundary
    => 1wT(p1 - p2) = 0
    => the weight vector is orthogonal to (p1 - p2).
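A quick numerical check of the orthogonality argument (our own example values, not taken from the slides):

    # For two points on the boundary 1w.p + b = 0, the weight vector is
    # orthogonal to their difference.
    def dot(u, v):
        return sum(a * c for a, c in zip(u, v))

    w1 = [2.0, 2.0]   # illustrative weight vector
    b = -3.0          # illustrative bias

    # Two points on the boundary 2*p1 + 2*p2 - 3 = 0:
    pa = [1.5, 0.0]
    pb = [0.0, 1.5]

    print(dot(w1, pa) + b)                           # 0.0 -> on the boundary
    print(dot(w1, pb) + b)                           # 0.0 -> on the boundary
    print(dot(w1, [pa[0] - pb[0], pa[1] - pb[1]]))   # 0.0 -> orthogonal to 1w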

Page 11: Direction of the Weight Vector

• Any vector in the shaded region will have an inner product with the weight vector greater than -b, and vectors in the un-shaded region will have an inner product less than -b.
• Therefore the weight vector 1w will always point toward the region where the neuron output is 1.

Page 12: Graphical Method

• Design of a perceptron to implement the AND gate.
• Input space – each input vector is labeled according to its target:
    dark circle – target output is 1
    light circle – target output is 0

Page 13: Graphical Method (contd.)

• First select a decision boundary that separates the dark circles from the light circles.
• Next choose a weight vector that is orthogonal to the decision boundary.
• The weight vector can be any length, so there are an infinite number of possibilities.
• One choice is

Page 14: Graphical Method (contd.)

• Finally, we need to find the bias, b.
• Pick a point on the decision boundary (say [1.5 0]T) and solve 1wTp + b = 0 for b.
• Testing: check that the network produces the target output for each input vector (a quick check is sketched below).
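The chosen weight vector and bias do not appear in this extract, so purely as an illustrative sketch assume the common choice 1wT = [2 2]; the boundary point [1.5 0]T then gives b = -(1wTp) = -3. Testing that design on the AND-gate pairs:

    # Sketch of the AND-gate perceptron, assuming the weight vector
    # 1w = [2, 2] (an assumption, not shown in this extract); the bias
    # then follows from the boundary point [1.5, 0]: b = -(1w . p) = -3.
    def hardlim(n):
        return 1 if n >= 0 else 0

    w = [2.0, 2.0]
    b = -3.0

    # AND-gate input/target pairs.
    and_pairs = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

    for p, t in and_pairs:
        a = hardlim(w[0] * p[0] + w[1] * p[1] + b)
        print(p, "->", a, "(target:", t, ")")   # each output matches its target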

Page 15: Multiple-Neuron Perceptron

• Each neuron has its own decision boundary:

    iwTp + bi = 0

• A single neuron can classify input vectors into two categories.
• A multi-neuron perceptron with S neurons can classify input vectors into up to 2^S categories (for example, S = 2 neurons give up to 4 categories).

Page 16: Perceptron Learning Rule

• Supervised training: the network is provided with a set of examples of proper network behaviour,

    {p1, t1}, {p2, t2}, ..., {pQ, tQ}

  where pq is an input to the network and tq is the corresponding target output.
• As each input is supplied to the network, the network output is compared to the target.
• The learning rule then adjusts the weights and biases of the network in order to move the network output closer to the target (a training-loop sketch follows below).
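The rule itself is developed on the following slides; as a sketch written from the standard unified form (e = t - a, Wnew = Wold + e*pT, bnew = bold + e), not copied from the slides themselves, a single-neuron training loop looks like this:

    # Sketch of perceptron training with the unified rule
    # e = t - a, w_new = w_old + e*p, b_new = b_old + e (single neuron).
    def hardlim(n):
        return 1 if n >= 0 else 0

    def train_perceptron(pairs, w, b, max_passes=20):
        """pairs: list of (input vector, target) examples."""
        for _ in range(max_passes):
            errors = 0
            for p, t in pairs:
                a = hardlim(sum(wi * pi for wi, pi in zip(w, p)) + b)
                e = t - a                        # error drives the update
                if e != 0:
                    w = [wi + e * pi for wi, pi in zip(w, p)]
                    b = b + e
                    errors += 1
            if errors == 0:                      # all examples correct: stop
                break
        return w, b

    # Example: learn the AND gate from arbitrary initial values.
    pairs = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train_perceptron(pairs, w=[0.5, -0.3], b=0.0)
    print(w, b)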

Page 17: Test Problem

• Input/target pairs:

    {p1 = [1 2]T, t1 = 1}, {p2 = [-1 2]T, t2 = 0}, {p3 = [0 -1]T, t3 = 0}

• The bias is removed for simplicity, so the decision boundary must pass through the origin.

[Figure: the input vectors in the input space, with the possible decision boundaries and weight vectors]

Page 18: Starting Point

• Random initial weight: 1wT = [1.0 -0.8]
• Present p1 to the network:

    a = hardlim(1wTp1) = hardlim([1.0 -0.8][1 2]T)
      = hardlim(-0.6) = 0

• Incorrect classification: the target is t1 = 1.

Page 19: Tentative Learning Rule

• We need to alter the weight vector so that it points more toward p1, so that in the future it has a better chance of classifying p1 correctly.

Page 20: Tentative Learning Rule (contd.)

• One approach would be to set 1w equal to p1.
• This rule cannot always find a solution: if we apply the rule 1w = p every time one of these vectors is misclassified, the network weights will simply oscillate back and forth.

Page 21: Tentative Learning Rule (contd.)

• Another possibility would be to add p1 to 1w.
• This rule can be stated as (traced in code below):

    If t = 1 and a = 0, then 1w_new = 1w_old + p
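As a sketch in our own code: the companion cases of the tentative rule (subtract p when t = 0 and a = 1, leave the weights unchanged when the classification is correct) are assumed here from the standard development of this rule. Tracing it over the test problem:

    # Trace the tentative rule on the test problem, starting from the
    # initial weight 1w = [1.0, -0.8] of the Starting Point slide.
    # Assumed cases: add p if (t=1, a=0); subtract p if (t=0, a=1);
    # no change when the example is classified correctly.
    def hardlim(n):
        return 1 if n >= 0 else 0

    pairs = [([1, 2], 1), ([-1, 2], 0), ([0, -1], 0)]
    w = [1.0, -0.8]   # no bias in this test problem

    for p, t in pairs:
        a = hardlim(sum(wi * pi for wi, pi in zip(w, p)))
        if t == 1 and a == 0:
            w = [wi + pi for wi, pi in zip(w, p)]
        elif t == 0 and a == 1:
            w = [wi - pi for wi, pi in zip(w, p)]
        print("p =", p, "t =", t, "a =", a, "-> 1w =", w)
    # Trace: 1w = [2.0, 1.2] after p1, [3.0, -0.8] after p2,
    # and [3.0, 0.2] after p3.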

Page 22: Second Input Vector

Page 23: Third Input Vector

Page 24: Unified Learning Rule

Page 25: Unified Learning Rule (contd.)

Page 26: Multiple-Neuron Perceptron

Page 27: Apple/Banana Example

Page 28: Second Iteration

Page 29: Check

Page 30: Perceptron Rule Capability

Page 31: Perceptron Limitations