NEURAL NETWORKS
Introduction
Slide 5
Historical Background
1943: McCulloch and Pitts proposed the first computational model of the neuron.
1949: Hebb proposed the first learning rule.
1958: Rosenblatt's work on perceptrons.
1969: Minsky and Papert exposed the limitations of perceptron theory.
1970s: Decade of dormancy for neural networks.
1980-90s: Return of neural networks (self-organization, back-propagation algorithms, etc.).
1990-today: Advanced applications based on fast learning algorithms.
Slide 6
Nervous Systems
UNITS: nerve cells called neurons; there are many different types, and they are extremely complex. The human brain contains ~10^11 neurons.
INTERACTIONS: signals are conveyed by action potentials; interactions at the synapse can be chemical (release or reception of neurotransmitters) or electrical. Each neuron is connected to ~10^4 others.
Some scientists have compared the brain to a complex, nonlinear, parallel computer. The largest modern neural networks achieve a complexity comparable to the nervous system of a fly.
Slide 7
Neurons
Slide 8
A Model of Artificial Neuron
[Figure: neuron i with inputs x1, ..., xm and synaptic weights wi1, ..., wim; the fixed input x_m = 1, whose weight w_im plays the role of the bias; a summing function a(.) and an activation function f(.) produce the output y_i.]
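The neuron diagram corresponds to the usual weighted-sum-plus-activation computation. A minimal sketch in Python, assuming a logistic sigmoid for the activation f(.) (the slides leave f generic; function names and values here are illustrative):

```python
import math

def neuron_output(x, w, bias):
    """Single artificial neuron: y = f(sum_j w_j * x_j + bias).

    f is taken here to be the logistic sigmoid, one common choice;
    the slides leave the activation function f(.) unspecified.
    """
    v = sum(wj * xj for wj, xj in zip(w, x)) + bias  # weighted sum plus bias
    return 1.0 / (1.0 + math.exp(-v))                # activation f(v)

y = neuron_output([0.5, -1.0], [2.0, 0.5], bias=0.1)
```

Here the weighted sum is 2.0*0.5 + 0.5*(-1.0) + 0.1 = 0.6, which the sigmoid squashes into (0, 1).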
Slide 9
A Model of Artificial Neuron
Graph representation:
nodes: neurons
arrows: signal flow directions
A neural network that does not contain cycles (feedback loops) is called a feed-forward network (or perceptron).
[Figure: feed-forward network with inputs x1, ..., xm and outputs y1, ..., yn.]
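The acyclic signal flow can be sketched as a layered forward pass in Python; the layer sizes, weights, and ReLU activation below are illustrative assumptions, not taken from the slides:

```python
def layer(x, W, b, f):
    """One feed-forward layer: y_i = f(sum_j W[i][j] * x[j] + b[i])."""
    return [f(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

relu = lambda v: max(0.0, v)

# Signals flow strictly from inputs x toward outputs y -- no cycles.
x = [1.0, 2.0]                                              # inputs x1, x2
h = layer(x, [[0.5, -0.3], [0.2, 0.8]], [0.0, 0.1], relu)   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0], relu)                    # output layer
```

Each list comprehension only reads the previous layer's outputs, which is exactly the "no feedback loops" property that makes the network feed-forward.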
Knowledge and Memory
[Figure: network with inputs x1, ..., xm and outputs y1, ..., yn.]
The output behavior of a network is determined by its weights.
Weights are the memory of a neural network.
Knowledge is distributed across the network.
A large number of nodes increases the storage capacity, ensures that the knowledge is robust, and provides fault tolerance.
New information is stored by changing the weights.
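Storing new information by changing weights can be illustrated with Hebb's rule, the 1949 learning rule from the historical timeline; the learning rate and data below are made-up for the sketch:

```python
def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: strengthen w_j when input x_j and output y are active together.

    w_j <- w_j + lr * y * x_j  (the network 'remembers' by changing its weights)
    """
    return [wj + lr * y * xj for wj, xj in zip(w, x)]

w = [0.0, 0.0, 0.0]
w = hebbian_update(w, x=[1.0, 0.0, 1.0], y=1.0)  # -> [0.1, 0.0, 0.1]
```

After the update, the weight vector itself encodes which inputs co-occurred with the active output, with no separate storage location for the new fact.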
Slide 12
Neuron Models
[Figure: neuron model showing input signals x1, ..., xm, synaptic weights w1, ..., wm, a summing function producing the local field v, an activation function, and the output y. A fixed input x0 = +1 with weight w0 implements the bias.]
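A brief sketch of this model, with the bias absorbed as weight w0 on the fixed input x0 = +1; the threshold activation and the numbers are illustrative assumptions:

```python
def local_field(x, w):
    """Local field v = sum_{j=0}^{m} w_j * x_j, with x[0] fixed at +1
    so that w[0] plays the role of the bias."""
    return sum(wj * xj for wj, xj in zip(w, [1.0] + list(x)))

def sign(v):
    """A simple threshold activation: y = +1 if v >= 0, else -1."""
    return 1.0 if v >= 0 else -1.0

v = local_field([2.0, -1.0], [0.5, 1.0, 3.0])  # 0.5 + 2.0 - 3.0 = -0.5
y = sign(v)
```

Folding the bias into w0 lets the summing function treat all m+1 connections uniformly, which is why the figure draws x0 = +1 as just another input.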
Slide 13
Neuron Models
Slide 14
Network architectures
Three different classes of network architectures:
single-layer feed-forward
multi-layer feed-forward
recurrent
In the feed-forward classes, neurons are organized in acyclic layers.
The architecture of a neural network is linked with the learning algorithm used to train it.
Slide 15
Single Layer Feed-forward
[Figure: input layer of source nodes connected directly to an output layer of neurons.]
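In a single-layer feed-forward network, each output neuron applies the neuron model directly to the source nodes, so the whole network is one weighted mapping; a minimal sketch (weights and step activation are illustrative):

```python
def single_layer(x, W, f):
    """Map source nodes x straight to output neurons: y_i = f(sum_j W[i][j] * x[j])."""
    return [f(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

step = lambda v: 1 if v >= 0 else 0

# Two source nodes feeding three output neurons.
y = single_layer([1.0, -2.0], [[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]], step)
```

The input layer performs no computation; only the output layer contains neurons, which is what makes this a "single layer" of processing.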