ARTIFICIAL NEURAL NETWORKS IN HYDROLOGY BY THE ASCE TASK COMMITTEE A Paper Review



DEVELOPMENT OF ANN

Information processing occurs at many simple elements called nodes, also referred to as units, cells, or neurons.

Signals are passed between nodes through connection links.

Each node typically applies a non-linear transformation called an activation function to its net input to determine its output signal.
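As a minimal Python sketch (NumPy assumed), here is one commonly used activation function, the logistic sigmoid; the specific choice of function is illustrative only, since the review merely requires the transformation to be nonlinear.

```python
import numpy as np

def sigmoid(net):
    """Logistic activation: squashes a node's net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

# The same nonlinearity applied to three example net inputs.
print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[0.119, 0.5, 0.881]
```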


CLASSIFICATION OF NEURAL NETWORKS

1) Based on the number of layers
a) Single layer (Hopfield nets)
b) Bilayer (Carpenter/Grossberg adaptive resonance networks)
c) Multilayered (most back-propagation networks)


2) Based on the direction of information flow and processing

a) Feed-Forward Network – The nodes are arranged in layers, starting from the input layer and ending at the output layer; there may also be several hidden layers in between.

b) Recurrent ANN – The information flows through the nodes in both directions, i.e., from input to output and vice versa.


MATHEMATICAL ASPECTS

The inputs form an input vector X = (x1, x2, …, xn). The sequence of weights leading to node j forms a weight vector Wj = (W1j, W2j, …, Wnj), where Wij represents the connection weight from the ith node in the preceding layer to this node.

Output yj is computed as follows:

yj = f(Σi xi Wij + bj)

where f is the node's activation function and bj its bias.
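As an illustration of this formula, here is a small Python sketch; the sigmoid activation, the bias term, and the numerical values are assumptions made for the example, not taken from the paper.

```python
import numpy as np

def node_output(x, w_j, b_j=0.0):
    """Output of node j: activation f applied to the weighted sum of inputs.

    x   : input vector X = (x1, ..., xn)
    w_j : weight vector Wj = (W1j, ..., Wnj)
    b_j : bias of node j (assumed here; some formulations omit it)
    """
    net_j = np.dot(x, w_j) + b_j          # net input to node j
    return 1.0 / (1.0 + np.exp(-net_j))   # sigmoid activation f(net_j)

x = np.array([0.2, 0.5, 0.1])
w = np.array([0.4, -0.3, 0.8])
print(node_output(x, w, b_j=0.1))         # a single node's output signal
```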


NETWORK TRAINING

In order for an ANN to generate an output vector that is as close as possible to the target vector, a training process, also called learning, is used to find optimal weight matrices W and bias vectors V that minimize a predetermined error function, which usually has the form

E = ΣP Σp (yi - ti)^2


where
ti = component of the desired output T
yi = corresponding ANN output
P = number of training patterns
p = number of output nodes
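A small numerical sketch of this error function in Python (NumPy assumed); the pattern and target values are made up for illustration.

```python
import numpy as np

def sse_error(outputs, targets):
    """E = sum over P training patterns and p output nodes of (yi - ti)^2."""
    outputs = np.asarray(outputs)   # shape (P, p)
    targets = np.asarray(targets)
    return np.sum((outputs - targets) ** 2)

# Two training patterns, two output nodes each (illustrative values).
y = [[0.8, 0.1], [0.4, 0.9]]
t = [[1.0, 0.0], [0.5, 1.0]]
print(sse_error(y, t))  # (0.2^2 + 0.1^2) + (0.1^2 + 0.1^2) = 0.07
```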

Types of training
a) Supervised – requires an external teacher. It requires a large number of inputs and the corresponding outputs.
b) Unsupervised – no teacher is required. Only the input data set is provided, and the ANN automatically adapts its connection weights to cluster the input patterns into classes with similar properties.


TRAINING ALGORITHMS

1) Back-Propagation
It basically minimizes the network error function. Each input pattern is passed through the network from the input layer to the output layer. The network output is compared with the desired target output, and the error is computed. This error is then propagated backward through the network to each node, and the connection weights are adjusted accordingly.


Hence it involves two steps: a forward pass, in which the effect of the input is passed through the network to reach the output layer, and, once the error has been computed, a backward pass, in which the error generated at the output layer is propagated back to the input layer and the weights are modified along the way.
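Below is a minimal sketch of these two steps for a single training pattern, assuming one hidden layer, sigmoid activations, and the sum-of-squares error defined earlier; the layer sizes, learning rate, and data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out, lr = 3, 4, 1, 0.5      # assumed sizes and learning rate
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))  # hidden -> output weights

x = np.array([0.2, 0.7, 0.1])              # one input pattern
t = np.array([0.5])                        # desired target output

for _ in range(100):
    # Forward pass: the input's effect is passed through to the output layer.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Backward pass: the output-layer error is propagated back and
    # the connection weights are adjusted by gradient descent.
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

print(float(y[0]))  # network output moves toward the target 0.5
```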


Back-propagation is a first-order method based on steepest gradient descent, with the direction vector set equal to the negative of the gradient vector. Consequently, the solution often follows a zigzag path while trying to reach the minimum error position, which can slow down the training process. It is also possible for the training process to become trapped in a local minimum despite the use of a learning rate.


CONJUGATE GRADIENT ALGORITHMS

Minimization does not proceed along the direction of the error gradient, but along a direction conjugate to the previous search directions. This prevents later steps from undoing the minimization achieved during the current step.
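The following is a sketch of one conjugate-gradient variant (Fletcher–Reeves) applied to an illustrative two-dimensional quadratic error surface; in network training the quadratic would be replaced by the ANN error function and the exact line search by a numerical one.

```python
import numpy as np

# Illustrative quadratic "error surface" E(w) = 0.5 w^T A w - b^T w,
# whose gradient is g = A w - b (values chosen only for the example).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

w = np.zeros(2)
g = A @ w - b            # gradient at the starting point
d = -g                   # first direction: steepest descent

for _ in range(2):       # n conjugate steps minimize an n-D quadratic
    alpha = (g @ g) / (d @ A @ d)      # exact line search along d
    w = w + alpha * d
    g_new = A @ w - b
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # next direction, conjugate to the old
    g = g_new

print(w)   # solution of A w = b, i.e. the minimum of E
```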


RADIAL BASIS FUNCTION

The hidden layer consists of a number of nodes, each holding a parameter vector called a "center," which can be considered its weight vector.

The standard Euclidean distance is used to measure how far an input vector is from the center.

For each node, the Euclidean distance between its center and the network input vector is computed and transformed by a nonlinear function that determines the output of that node in the hidden layer.


The major task in RBF network design is to determine the centers c. The simplest and easiest way is to choose the centers randomly from the training set. Alternatively, the input training set can be clustered into groups and the center of each group chosen as a node center.

After the centers are determined, the connection weights wi between the hidden layer and the output layer can be determined simply through ordinary back-propagation training.

The primary difference between the RBF network and back propagation is in the nature of the nonlinearities associated with hidden nodes. The nonlinearity in back-propagation is implemented by a fixed function such as a sigmoid. The RBF method, on the other hand, bases its nonlinearities on the data in the training set.
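Here is a compact sketch of an RBF network along these lines, assuming Gaussian nonlinearities, randomly chosen centers, and a small synthetic data set; the hidden-to-output weights are fitted by linear least squares, a common shortcut standing in for the back-propagation fit mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_hidden(X, centers, sigma):
    """Hidden-layer outputs: Euclidean distance of each input to each
    center, transformed by a Gaussian nonlinearity."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

# Synthetic 1-D regression data (illustrative only).
X = np.linspace(0.0, 1.0, 50)[:, None]
t = np.sin(2.0 * np.pi * X[:, 0])

# Simplest center choice: pick centers at random from the training set.
centers = X[rng.choice(len(X), size=8, replace=False)]

H = rbf_hidden(X, centers, sigma=0.2)
w, *_ = np.linalg.lstsq(H, t, rcond=None)   # hidden-to-output weights

y = H @ w
print(float(np.mean((y - t) ** 2)))          # small training error
```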


CASCADE CORRELATION ALGORITHM

It starts with a minimal network without any hidden nodes and grows during training by adding new hidden units one by one.

Once a new hidden node has been added to the network, its input-side weights are frozen.

The hidden nodes are trained to maximize the correlation between the output of the new node and the network's output error.
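A sketch of the correlation score that a candidate hidden node is trained to maximize (the criterion usually attributed to Fahlman and Lebiere's cascade-correlation algorithm); the candidate outputs and residual errors below are random placeholders.

```python
import numpy as np

def candidate_score(v, residual_errors):
    """Cascade-correlation criterion for one candidate hidden node:
    summed magnitude of covariance between the candidate's output v
    (one value per training pattern) and the network's residual error
    at each output node."""
    v = v - v.mean()
    e = residual_errors - residual_errors.mean(axis=0)  # shape (P, p)
    return np.sum(np.abs(v @ e))

rng = np.random.default_rng(1)
v = rng.random(5)                  # candidate outputs for 5 patterns
errors = rng.normal(size=(5, 2))   # residual errors at 2 output nodes
print(candidate_score(v, errors))
```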