NEURAL NETWORKS – Radial Basis Function Networks II
Prepared by PROF. DR. YUSUF OYSAL




Page 1:

NEURAL NETWORKS – Radial Basis Function Networks II

Prepared by PROF. DR. YUSUF OYSAL

Page 2:

Weight Optimization

NEURAL NETWORKS – Radial Basis Function Networks

• Weights are computed by means of the least squares estimation method.

– For a training example $(x_i, t_i)$, consider the output of the network

$$y(x_i) = w_1\,\varphi_1(\|x_i - \mu_1\|) + \dots + w_m\,\varphi_m(\|x_i - \mu_m\|)$$

– We would like $y(x_i) = t_i$ for each example, that is

$$w_1\,\varphi_1(\|x_i - \mu_1\|) + \dots + w_m\,\varphi_m(\|x_i - \mu_m\|) = t_i$$

– This can be re-written in matrix form for all examples at the same time as

$$\Phi w = t, \qquad \Phi = \begin{pmatrix} \varphi_1(\|x_1 - \mu_1\|) & \cdots & \varphi_m(\|x_1 - \mu_m\|) \\ \vdots & & \vdots \\ \varphi_1(\|x_N - \mu_1\|) & \cdots & \varphi_m(\|x_N - \mu_m\|) \end{pmatrix}$$

whose solution is

$$w^* = (\Phi^T \Phi)^{-1} \Phi^T t \qquad \text{(Least Squares Estimation)}$$
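The least-squares weight computation can be sketched in NumPy. This is a minimal illustration assuming Gaussian basis functions with a common spread; the function names are my own, not from the slides:

```python
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    """Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 * sigma**2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def least_squares_weights(X, t, centers, sigma):
    """Solve Phi w = t in the least-squares sense, w* = pinv(Phi) t."""
    Phi = rbf_design_matrix(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w
```

With one center per training point, Phi is square and invertible for distinct points, so the fit interpolates the training data exactly; with fewer centers than points, `lstsq` returns the minimum-squared-error weights.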

Page 3:

Supervised RBF Network Training


• Hybrid Learning Process:
– Clustering for finding the centers.
– Spreads chosen by normalization.
– Least Squares Estimation for finding the weights.

or

– Apply the gradient descent method for finding centers, spreads and weights, by minimizing the (instantaneous) squared error:

$$E = \tfrac{1}{2}\,\big(y(x) - t\big)^2$$

• Update for:

• centers: $\mu_j \leftarrow \mu_j - \eta_\mu \dfrac{\partial E}{\partial \mu_j}$

• spreads: $\sigma_j \leftarrow \sigma_j - \eta_\sigma \dfrac{\partial E}{\partial \sigma_j}$

• weights: $w_{ij} \leftarrow w_{ij} - \eta_w \dfrac{\partial E}{\partial w_{ij}}$
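These gradient updates can be sketched for a single-output Gaussian RBF network; the gradients below follow from the chain rule applied to E = ½(y(x) − t)². The single output and the shared learning rate `eta` are simplifying assumptions (the slides' w_ij allows multiple outputs):

```python
import numpy as np

def gd_step(x, t, w, centers, sigmas, eta=0.05):
    """One gradient-descent step on E = 0.5 * (y(x) - t)**2.

    x: (d,) input, t: scalar target, w: (m,) output weights,
    centers: (m, d) RBF centers mu_j, sigmas: (m,) spreads sigma_j.
    """
    diff = x - centers                        # x - mu_j, shape (m, d)
    d2 = (diff ** 2).sum(axis=1)              # ||x - mu_j||^2
    phi = np.exp(-d2 / (2 * sigmas ** 2))     # hidden-unit activations
    err = phi @ w - t                         # y(x) - t
    # Chain-rule gradients of E for each parameter group
    grad_w = err * phi
    grad_mu = (err * w * phi / sigmas ** 2)[:, None] * diff
    grad_sigma = err * w * phi * d2 / sigmas ** 3
    return (w - eta * grad_w,
            centers - eta * grad_mu,
            sigmas - eta * grad_sigma)
```

Each call performs one simultaneous update of all three parameter groups from a single example, as in the hybrid scheme's gradient-descent alternative.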

Page 4:

Example: The XOR Problem


Page 5:

Example: The XOR Problem

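A sketch of the classic RBF solution to XOR: two Gaussian hidden units centred at (0,0) and (1,1) map the four input patterns into a space where they become linearly separable, and the output weights then come from least squares. The basis function φ(x) = exp(−‖x − μ‖²) and the added bias term are assumptions, since the original slide figures are not reproduced here:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0., 1., 1., 0.])                 # XOR targets
centers = np.array([[0., 0.], [1., 1.]])       # two Gaussian centers

d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2)                              # phi_j(x) = exp(-||x - mu_j||^2)
A = np.hstack([Phi, np.ones((4, 1))])          # append a bias column
w, *_ = np.linalg.lstsq(A, t, rcond=None)      # least-squares output weights
y = A @ w                                      # network outputs at the 4 points
```

In the transformed space the patterns (0,1) and (1,0) coincide, so the mapping becomes linearly separable and the least-squares fit reproduces the XOR targets exactly.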

Page 6:

Comparison of RBF Networks with MLPs


Similarities

1. They are both non-linear feed-forward networks.
2. They are both universal approximators.
3. They are used in similar application areas.

[Network diagram: input units → non-linear RBF units (trained unsupervised) → linear output units (trained supervised)]

Page 7:

Comparison of RBF Networks with MLPs


Differences

1. An RBF network (in its natural form) has a single hidden layer, whereas MLPs can have any number of hidden layers.
2. RBF networks are usually fully connected, whereas it is common for MLPs to be only partially connected.
3. In MLPs the computation nodes (processing units) in different layers share a common neuronal model, though not necessarily the same activation function. In RBF networks the hidden nodes (basis functions) operate very differently, and have a very different purpose, from the output nodes.
4. In RBF networks, the argument of each hidden unit's activation function is the distance between the input and the "weights" (RBF centres), whereas in MLPs it is the inner product of the input and the weights.
5. MLPs are usually trained with a single global supervised algorithm, whereas RBF networks are usually trained one layer at a time, with the first layer unsupervised.
6. MLPs construct global approximations to non-linear input-output mappings with distributed hidden representations, whereas RBF networks tend to use localised non-linearities (Gaussians) at the hidden layer to construct local approximations.