
Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction




Hydrological Sciences–Journal–des Sciences Hydrologiques, 54(2) April 2009


NIRANJAN PRAMANIK & RABINDRA KUMAR PANDA
Agricultural and Food Engineering Department, Indian Institute of Technology, Kharagpur, West Bengal 721302, India
[email protected]

Abstract Appropriate outflow from a barrage should be maintained to avoid flooding on the downstream side during the rainy season. Due to the nonlinear and fuzzy behaviour of hydrological processes, and in cases of scarcity of relevant data, it is difficult to simulate the desired outflow using physically-based models. Artificial intelligence techniques, namely artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), were used in the reported study to estimate the flow at the downstream stretch of a river using flow data for upstream locations. Comparison of the performance of ANN and ANFIS was made by estimating daily outflow from a barrage located in the downstream region of Mahanadi River basin, India, using daily release data from the Hirakud Reservoir, located some distance upstream of the barrage. To obtain the best input–output mapping, five different models with various input combinations were evaluated using both techniques. The significance of the contribution of two upstream tributaries to barrage outflow estimation was also evaluated. Three feed-forward back-propagation training algorithms were used to train the models. Standard performance indices, such as correlation coefficient, index of agreement, root mean square error, modelling efficiency and percentage deviation in peak flow, were used to compare the performance of the models, as well as the training techniques. The results revealed that the neural network with conjugate gradient algorithm performs better than Levenberg-Marquardt and gradient descent algorithms. The model which considers as input the reservoir release up to three antecedent time steps produced the best results. It was found that barrage outflow could be better estimated by the ANFIS than by the ANN technique.

Key words neural networks; fuzzy inference system; hydrological processes; training algorithms

INTRODUCTION

Accurate prediction of flow at different sections of a river is very important for efficient flood-plain management. River flow prediction by traditional flood routing techniques is not easy, particularly in the presence of flow-control structures in a river that has multiple branches. Uneven bed slope and channel roughness make the flow routing process more complex.

Prediction of flow at the downstream end of a river by means of the discharge and water level records of upstream gauging sites is a method commonly used by hydrologists. In the past decade, soft computing methods such as neural networks, fuzzy logic, and combinations of the two have gained popularity in modelling hydrological processes. The outflow from a barrage located at the downstream end of a river can be predicted with a considerable degree of accuracy using such computing techniques.

Artificial neural networks (ANN) have been used extensively in hydrological modelling, due to their ability to model nonlinear systems efficiently (Karunanithi et al., 1994; Thirumalaiah & Deo, 1998; Dawson & Wilby, 1998; Zealand et al., 1999; Chang et al., 2002; Sivakumar et al., 2002). Many researchers have demonstrated their use in rainfall–runoff modelling and streamflow simulation (Dolling & Varas, 2002; Shamseldin et al., 2002; Shrestha, 2003; Hu et al., 2005; Wang et al., 2006). A comprehensive review of the application of ANNs to hydrology is presented in the ASCE Task Committee report (2000a,b). The performance of ANNs in river flow forecasting has been evaluated by various researchers and compared with other data-driven techniques, and the ANN approach has proved its potential in forecasting river flow with promising results (Lekkas et al., 2001; Sivakumar et al., 2002).

Another data-driven method is the fuzzy inference system (FIS), which has been used extensively in hydrological studies. Fuzzy-based models allow a logical data-driven modelling approach, which uses IF–THEN rules and logical operators to establish qualitative relationships among the variables in the model. The FIS has been used successfully in real-time flood forecasting and rainfall–runoff modelling (Fujita et al., 1992; Zhu & Fujita, 1994; Zhu et al., 1994; See & Openshaw, 2000; Stuber et al., 2000; Hundecha et al., 2001; Xiong et al., 2001).

Previous studies used FIS and ANN individually for hydrological investigation. However, when combined, the individual strengths of each approach can be exploited in a synergistic manner to construct powerful intelligent systems. In recent years, the integration of neural networks and fuzzy logic has given birth to neuro-fuzzy systems, which have the potential to capture the benefits of both ANN and fuzzy logic in a single framework. Use of this technique for rainfall–runoff and river flow time series predictions has been reported by many researchers (e.g. Valenca & Ludermir, 2000; Chang & Chen, 2001, 2006; Nayak et al., 2004, 2005).

The present study focuses on comparison of the ANN and ANFIS techniques used to predict outflow from a barrage using only reservoir release data. Subsequently, lateral inflows from two upstream tributaries, with suitable lag times, are considered along with the reservoir release to examine the effect on model prediction accuracy. Thus, five models are formulated and tested: Models 1–4, considering no tributary inflows, and Model 5, considering tributary inflows. Three ANN training algorithms, namely the Levenberg-Marquardt (LM), the gradient descent with momentum and adaptive learning rate, and the conjugate gradient algorithms, are used to train the models. Further, the ANFIS technique is used for outflow prediction in addition to the ANN, and the ability of the two techniques to model barrage outflow is compared.

NEURAL NETWORK MODEL

Neural networks are composed of simple elements operating in parallel.
The network function is determined largely by the connections between the elements. A neural network can be trained to perform a particular function by adjusting the values of the connection weights between the elements. Two types of neural network, the multilayer perceptron (MLP) and the radial basis function (RBF) network, are generally used in studying hydrological problems. Performance comparisons between the MLP and RBF are presented by Kumar et al. (2005).

Multilayer perceptrons (MLP)

A layered network with just the input and output layers is referred to as a perceptron. The layers that do not have direct access to the external world are called hidden layers.



Fig. 1 A multilayer perceptron with one hidden layer.

Multilayer perceptrons are the layered arrangement of nonlinear processing elements (PE) shown in Fig. 1. Each connection between PEs is weighted by a scalar weight (w), which is adapted during model training, and a bias (b). The processing elements in the MLP generate the final output from the net inputs using nonlinear activation functions. Determination of the optimum number of hidden layers and PEs in the neural network architecture, to produce good results, is a trial-and-error procedure which depends on the type of problem and the availability of data.

Neural network training

During training of an ANN, the connection weights are adapted so as to minimize the squared difference between the desired output and the PE response. The optimal weights are the product of the inverse of the input autocorrelation matrix (R⁻¹) and the cross-correlation vector (P) between the input and the desired response. The analytical solution of this problem is equivalent to a search for the minimum of the quadratic performance surface, J(wi), using gradient descent to adjust the weights at each epoch (Haykin, 1999):

wi(k + 1) = wi(k) − η ∇Ji(k),   ∇Ji = ∂J/∂wi        (1)

where η is the learning rate coefficient and ∇Ji(k) is the gradient vector of the performance surface at iteration k for the ith input node. Equation (2) is used to calculate the performance surface (J):

J = Σp (dp − yp)²   and   min J → wopt = R⁻¹P        (2)

where wopt is the optimal weight vector, dp is the target output and yp is the computed output of the pth output neuron. The most common neural network training algorithm is the back-propagation algorithm (Fausett, 1994; Patterson, 1996; Haykin, 1999). Modern second-order back-propagation algorithms, such as Levenberg-Marquardt (Bishop, 1995; Shepherd, 1997), and first-order algorithms, such as conjugate gradient descent, are substantially faster. The advantage of the back-propagation algorithm is that it is easy to understand and can be used successfully in many applications. Other training algorithms, such as gradient descent with a variable learning rate and a momentum coefficient as additional parameters, are also used in many engineering applications.
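As a rough illustration of the weight update in equations (1) and (2), the following Python/NumPy sketch performs batch gradient descent on a single linear output neuron. It is only a schematic example; the learning rate, synthetic data and variable names are illustrative and are not taken from the study.

import numpy as np

def gradient_descent_train(inputs, targets, eta=0.005, epochs=5000):
    """Batch gradient descent minimizing J = sum_p (d_p - y_p)^2 for one linear neuron."""
    n_samples, n_features = inputs.shape
    weights = np.zeros(n_features)
    bias = 0.0
    for _ in range(epochs):
        outputs = inputs @ weights + bias          # y_p
        errors = targets - outputs                 # d_p - y_p
        grad_w = -2.0 * inputs.T @ errors          # gradient of J with respect to the weights
        grad_b = -2.0 * errors.sum()               # gradient of J with respect to the bias
        weights -= eta * grad_w                    # w(k+1) = w(k) - eta * grad J, equation (1)
        bias -= eta * grad_b
    return weights, bias

# Synthetic example: recover the mapping d = 2*x1 + 3*x2 from random data
rng = np.random.default_rng(0)
X = rng.random((50, 2))
d = X @ np.array([2.0, 3.0])
w, b = gradient_descent_train(X, d)
print(w, b)   # w should end up close to [2, 3] and b close to 0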


In back-propagation, the gradient vector of the error surface is calculated. This vector moves along the line of steepest descent from its current point to decrease the error.

NEURO-FUZZY APPROACH

The adaptive neuro-fuzzy inference system (ANFIS) is a soft computing method in which a given input–output data set is expressed as a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The mapping is accomplished by a number of fuzzy IF–THEN rules, each of which describes the local behaviour of the mapping. The fuzzy membership parameters are optimized either using a back-propagation algorithm or using a combination of back-propagation and the least-squares method. The efficiency of the FIS depends on the estimated parameters. Further description of the ANFIS is given in Brown & Harris (1995). Few studies are available on rainfall–runoff modelling and river flow forecasting using ANFIS, together with a comparison of its performance with the ANN technique (Chau et al., 2005; Aqil et al., 2007).

The ANFIS structure and parameter adjustment

The structure of the ANFIS is similar to that of a neural network (see Fig. 2(b)). It maps inputs through input membership functions (MF) and associated parameters; similarly, output mapping is done through output membership functions and associated parameters. For example, consider that x and y are the two inputs and z is the output. The first-order IF–THEN fuzzy rules can be expressed as follows: (Rule 1) IF x is A1 AND y is B1, THEN f1 = p1x + q1y + r1; and (Rule 2) IF x is A2 AND y is B2, THEN f2 = p2x + q2y + r2, where A1, A2 and B1, B2 are the MFs for inputs x and y, respectively. The symbols p1, q1, r1 and p2, q2, r2 are the associated parameters of the output functions. The mechanism of fuzzy reasoning for the Sugeno-type fuzzy model, deriving the output f from a given input vector [x, y], is presented in Fig. 2(a). The five-layered ANFIS architecture is illustrated in Fig. 2(b) and is described subsequently.

Layer 1 Each node is assigned a fuzzy membership value using membership functions to form a fuzzy set.

O1,i = μAi(x),   for i = 1, 2
O1,i = μBi−2(y),   for i = 3, 4        (3)

where x, y are the crisp input to node i, and Ai, Bi are the membership grades of the membership functions μA and μB, respectively. A generalized bell-shaped membership function was used in the present study. Using a generalized bell-shaped MF, the output O1,i can be computed as:

O1,i = μAi(x) = 1 / [1 + |(x − ci)/ai|^(2bi)]        (4)

where {ai, bi, ci} is the parameter set that changes the shapes of the MF with maximum equal to 1 and minimum equal to 0.
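As a minimal illustration of equation (4), the generalized bell membership function can be evaluated as follows; the parameter values below are arbitrary examples, not values fitted in the study.

import numpy as np

def gbell_mf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c)/a|^(2b)), equation (4)."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(gbell_mf(x, a=0.3, b=2.0, c=0.5))   # equals 1 at x = c and decays towards 0 away from c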

Layer 2 In this layer, every node (denoted "Π" in Fig. 2(b)) multiplies the incoming signals; these nodes represent the rules, and the output O2,k represents the firing strength of a rule, computed as:

O2,k = wk = μAi(x) μBi(y),   i = 1, 2        (5)

Layer 3 This layer consists of the averaging nodes, labelled "N", which compute the normalized firing strength:


O3,k = w̄i = wi / (w1 + w2),   i = 1, 2        (6)

Layer 4 The node function of this layer is to compute the contribution of the ith rule towards the total output, which can be defined as:

O4,i = w̄i fi = w̄i (pi x + qi y + ri),   i = 1, 2        (7)

where w̄i is the output of Layer 3 and {pi, qi, ri} is the parameter set.

Layer 5 This layer has a single output node, which computes the overall output of the ANFIS as:

O5,l = Σi w̄i fi = (Σi wi fi) / (Σi wi)        (8)

The computation (or adjustment) of these parameters is facilitated by a gradient vector, which provides a measure of how well the fuzzy inference system is modelling the input–output data for a given set of parameters. Once the gradient vector is obtained, any optimization routine can be applied to adjust the parameters and to reduce some error measure, usually defined by the sum of the squared differences between the actual and desired outputs. The ANFIS uses a hybrid learning algorithm, combining the gradient descent and least-squares methods, to update the membership function parameters.
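To make the layer-by-layer computations of equations (3)-(8) concrete, the sketch below evaluates the forward pass of a two-input, two-rule first-order Sugeno system. All membership and consequent parameters are made-up illustrations, and no parameter learning (the hybrid back-propagation/least-squares step described above) is shown.

import numpy as np

def gbell(x, a, b, c):
    # Layer 1: generalized bell membership function, equation (4)
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, mf_params, consequents):
    """Forward pass of a two-rule first-order Sugeno ANFIS, equations (3)-(8)."""
    (a1, b1, c1), (a2, b2, c2) = mf_params["A"]   # MFs A1, A2 for input x
    (a3, b3, c3), (a4, b4, c4) = mf_params["B"]   # MFs B1, B2 for input y

    # Layer 1: fuzzification, equations (3) and (4)
    muA = [gbell(x, a1, b1, c1), gbell(x, a2, b2, c2)]
    muB = [gbell(y, a3, b3, c3), gbell(y, a4, b4, c4)]

    # Layer 2: rule firing strengths w_k = muA_i(x) * muB_i(y), equation (5)
    w = [muA[0] * muB[0], muA[1] * muB[1]]

    # Layer 3: normalized firing strengths, equation (6)
    w_bar = [wk / (w[0] + w[1]) for wk in w]

    # Layer 4: rule consequents f_i = p_i*x + q_i*y + r_i, equation (7)
    f = [p * x + q * y + r for (p, q, r) in consequents]

    # Layer 5: overall output, equation (8)
    return sum(wb * fi for wb, fi in zip(w_bar, f))

# Arbitrary illustrative parameters (not fitted to any data)
mf_params = {"A": [(0.3, 2.0, 0.2), (0.3, 2.0, 0.8)],
             "B": [(0.3, 2.0, 0.2), (0.3, 2.0, 0.8)]}
consequents = [(1.0, 0.5, 0.1), (0.2, 1.5, 0.0)]
print(anfis_forward(0.4, 0.6, mf_params, consequents))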


Fig. 2 (a) A fuzzy inference system, and (b) the corresponding architecture (Jang et al., 1997).


STUDY AREA AND DATA USED

The study was carried out in the lower Mahanadi River basin, located in eastern India. The lower Mahanadi basin starts from Hirakud Dam and ends at the Bay of Bengal, covering a coastal stretch of about 200 km. A barrage is located at Naraj, 390 km downstream of the Hirakud Reservoir, to regulate the high discharge from the reservoir and reduce the impact of floods in the deltaic region during the monsoon (rainy) season. The area of the basin up to Naraj is approximately 49 096 km2, which accounts for 35% of the total Mahanadi basin. The tributaries Ong and Tel join the Mahanadi River at 80 and 93 km downstream of Hirakud Dam, respectively, as shown in Fig. 3. However, the flow magnitudes from these two tributaries are small compared to the flow released from the reservoir, which contributes roughly 80% of the total flow in the main Mahanadi during the monsoon period. The average sinuosities of the Mahanadi River and the tributaries Ong and Tel are estimated to be 1.38, 1.10 and 1.21, respectively, indicating low-curvature river courses. The surface elevation of the basin ranges between 30 and 200 m above mean sea level. An index map of the lower Mahanadi basin, showing the main streams and the locations of the reservoir and discharge sites, is presented in Fig. 3.

Fig. 3 Index map of the lower Mahanadi basin showing main streams, location of Hirakud Reservoir and discharge sites.

The types of input data and the length of the data sets to be used in the ANN models were determined by a trial-and-error process. Initially, only one parameter, the discharge from Hirakud Reservoir, was considered as input to the models trained and tested with the ANN and ANFIS techniques to predict the barrage outflow. Subsequently, the lateral inflows from the Ong and Tel tributaries, observed at their outlets (Salebata and Kantamal, respectively), were included as additional inputs along with the reservoir release to examine their effect on modelling efficiency. Cross-correlation analysis was performed using flow values for up to six time lags at the three locations (as shown in Fig. 4), to determine the suitable time lag and the number of variables in the input matrix, as suggested by Sudheer et al. (2002). As may be seen from Fig. 4, higher correlation coefficients were obtained between the reservoir release and the barrage outflow.
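One plausible way to carry out this lag selection is sketched below: the linear correlation between the barrage outflow at time t and each upstream series at time t − lag is computed for lags of 0 to 6 days. The series here are synthetic placeholders standing in for the observed records.

import numpy as np

def lagged_correlation(upstream, downstream, max_lag=6):
    """Correlation between downstream flow at t and upstream flow at t - lag."""
    corrs = {}
    for lag in range(max_lag + 1):
        x = upstream if lag == 0 else upstream[:-lag]
        y = downstream if lag == 0 else downstream[lag:]
        corrs[lag] = np.corrcoef(x, y)[0, 1]
    return corrs

# Placeholder daily series; in the study these would be the Hirakud release,
# the Salebata and Kantamal discharges, and the Naraj barrage outflow.
rng = np.random.default_rng(1)
release = rng.gamma(2.0, 500.0, size=300)
outflow = np.roll(release, 2) * 1.1 + rng.normal(0.0, 50.0, size=300)
print(lagged_correlation(release, outflow))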



Fig. 4 Results of cross-correlation between the inflows to the lower Mahanadi River from different sources (Hirakud, Salebata and Kantamal) and the barrage outflow.

Very low correlation values, in contrast, were obtained between the flow values at Salebata and Kantamal and the barrage outflow. The low correlation coefficients could be attributed to the fact that the low discharges from the tributaries, when added to the main stream, have little effect on the magnitude and pattern of outflow from the Naraj Barrage located downstream. For this reason, the inflow from the tributaries was initially not considered in the input matrix of the ANN and ANFIS models. Principal component analysis (PCA) was also carried out, considering the antecedent flow values at all three locations, to examine whether the contribution of the tributaries could be neglected. The result of the PCA showed that more than 90% of the total variance in the data sets is explained by four variables consisting of the time series of Hirakud release values for up to four antecedent time steps. Therefore, the input data matrix was prepared using the time series of Hirakud release for four antecedent time steps only. Although the PCA results showed that the major contribution to the barrage flow was from the reservoir release, it was decided to examine the effect on model predictability of taking into account the inflow from the two tributaries along with the reservoir release. Due to the unavailability of hourly release data, daily reservoir release, tributary discharge and barrage outflow data for the monsoon periods of the years 1997–2001 and 2002–2003 were used for ANN and ANFIS model training and testing, respectively.

MODEL DEVELOPMENT AND TESTING

Data of daily flow released from Hirakud Reservoir at four antecedent time steps (Qh(t-1), Qh(t-2), Qh(t-3), Qh(t-4)) were considered as input to the ANN and ANFIS models. Accordingly, four different ANN and ANFIS models (Table 1) were proposed and their performance compared to determine the best model. Model 1 represents the outflow from Naraj Barrage at time t (Qn(t)) as a function of the reservoir release at a one time step lag, t-1 (Table 1). Likewise, Qn(t) = f[Qh(t-1), Qh(t-2)] represents the flow at Naraj at time t as a function of the reservoir release at t-1 and t-2. The third and fourth models consider the integrated effect of the release values up to three and four antecedent time steps, respectively. All models were trained and tested using a three-layered perceptron with a number of functional nodes and connection weights. A single hidden layer was used for all models, and the number of hidden neurons (HN) in the hidden layer of the ANN architecture was varied until the best performance was obtained. Pre-processing of the input and output data sets was performed by normalizing the data to the range -1 to +1 before feeding them into the ANN models for training and testing. The ANN and ANFIS simulations, and the analysis of the results, were performed using the command window of MATLAB 7.
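One way the antecedent-release input matrices of Table 1 and the -1 to +1 scaling could be assembled is sketched below; the function names and the synthetic series are placeholders rather than the code actually used in the study.

import numpy as np

def build_lagged_inputs(release, outflow, n_lags):
    """Input rows [Qh(t-1), ..., Qh(t-n_lags)] with target Qn(t), as in Table 1."""
    X, y = [], []
    for t in range(n_lags, len(release)):
        X.append([release[t - k] for k in range(1, n_lags + 1)])
        y.append(outflow[t])
    return np.array(X), np.array(y)

def scale_minus1_plus1(a):
    """Linear rescaling of each column to the range [-1, +1]."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return 2.0 * (a - lo) / (hi - lo) - 1.0

# Example for Model 3 (three antecedent reservoir-release values)
rng = np.random.default_rng(0)
release = rng.gamma(2.0, 500.0, size=200)                    # placeholder Hirakud release
outflow = np.roll(release, 1) + rng.normal(0.0, 50.0, 200)   # placeholder barrage outflow
X, y = build_lagged_inputs(release, outflow, n_lags=3)
X_scaled = scale_minus1_plus1(X)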


Table 1 Proposed four models for ANN and ANFIS training and testing.

Model      Output    Input
Model 1    Qn(t)     f[Qh(t-1)]
Model 2    Qn(t)     f[Qh(t-1), Qh(t-2)]
Model 3    Qn(t)     f[Qh(t-1), Qh(t-2), Qh(t-3)]
Model 4    Qn(t)     f[Qh(t-1), Qh(t-2), Qh(t-3), Qh(t-4)]

Description of training algorithm and model testing

Three back-propagation training algorithms, the Levenberg-Marquardt (LM), the gradient descent algorithm with variable learning rate and momentum factor (GDX), and the conjugate gradient algorithm (CGF), were used to train the ANN models. The performance of the three algorithms was then compared mainly in terms of root mean squared error (RMSE) and modelling efficiency (E). The index of agreement (IOA), correlation coefficient (R) and percentage deviation of peak flow were also used to evaluate the performance of the models.
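The goodness-of-fit measures listed above can be computed as in the sketch below. Common textbook definitions are assumed here (the Nash-Sutcliffe form for the modelling efficiency E and Willmott's form for the index of agreement); the paper itself does not state the formulas.

import numpy as np

def performance_indices(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                          # correlation coefficient R
    rmse = np.sqrt(np.mean((obs - sim) ** 2))                # root mean squared error
    e = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)   # efficiency E
    ioa = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)        # index of agreement
    peak_dev = 100.0 * (sim.max() - obs.max()) / obs.max()   # % deviation in peak flow
    return {"R": r, "RMSE": rmse, "IOA": ioa, "E": e, "peak_dev_%": peak_dev}

print(performance_indices([100, 250, 400, 320], [110, 240, 360, 300]))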

The Levenberg-Marquardt (LM) algorithm uses a second-order training mode without computing the Hessian matrix (Demuth & Beale, 1998). When the performance function has the form of a sum of squares (as is typical in training feed-forward networks), the Hessian matrix can be approximated as H = JᵀJ and the gradient can be computed as g = Jᵀe, where J is the Jacobian matrix containing the first derivatives of the network errors with respect to the weights and biases, and e is a vector of network errors. The LM algorithm uses this approximation to the Hessian matrix in the Newton-like weight update:

wk+1 = wk − [H + μI]⁻¹ Jᵀe

where w indicates the weights of the neural network and μ is a non-negative scalar that controls the learning process. When the parameter μ is large, the above expression approximates gradient descent with a small step size, while for a small μ the algorithm approximates the Newton method. By adaptively adjusting μ, the LM algorithm can manoeuvre between its two extremes, gradient descent and Newton's method. Thus, the LM method is a standard method for minimization of the MSE criterion, due to its rapid convergence properties and robustness (Demuth & Beale, 1998). Table 2 presents the training and testing results obtained from the four models trained by the three algorithms. Each model was trained using a different number of hidden neurons (HN) in the hidden layer, and the performance indices were computed to determine the optimum number of hidden neurons.

Table 2 Performance indices of ANN models trained using the LM, GDX and CGF algorithms.

                 Training (1997–2001):             Testing (2002 & 2003):
Model      HN    R      RMSE   IOA    E            R      RMSE   IOA    E
(a) LM algorithm:
Model 1    3     0.805  361.5  0.883  0.648        0.804  281.8  0.878  0.636
Model 2    3     0.832  338.1  0.902  0.692        0.810  261.7  0.884  0.650
Model 3    3     0.856  314.9  0.918  0.733        0.827  255.2  0.895  0.675
Model 4    4     0.865  305.0  0.923  0.742        0.823  258.0  0.893  0.670
(b) GDX algorithm:
Model 1    3     0.788  375.5  0.871  0.620        0.820  273.9  0.873  0.648
Model 2    4     0.821  348.3  0.894  0.673        0.810  285.5  0.874  0.630
Model 3    4     0.841  329.8  0.908  0.700        0.821  261.1  0.897  0.667
Model 4    6     0.852  319.0  0.915  0.725        0.817  259.0  0.893  0.660
(c) CGF algorithm:
Model 1    3     0.807  365.7  0.878  0.639        0.804  287.6  0.863  0.627
Model 2    3     0.824  345.5  0.894  0.678        0.827  256.7  0.886  0.671
Model 3    4     0.844  326.9  0.910  0.712        0.833  246.1  0.904  0.688
Model 4    5     0.835  335.0  0.900  0.686        0.815  263.0  0.880  0.657

HN: number of hidden neurons; R: correlation coefficient; RMSE: root mean squared error (m3/s); IOA: index of agreement; E: modelling efficiency.
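Returning to the LM update wk+1 = wk − [H + μI]⁻¹Jᵀe given above, the following sketch shows a single Levenberg-Marquardt step for a model that is linear in its weights, where the Jacobian is available in closed form. It only illustrates the update rule and is not the MATLAB implementation used in the study.

import numpy as np

def lm_step(w, X, d, mu):
    """One Levenberg-Marquardt update for a linear model y = X @ w."""
    e = d - X @ w                  # network errors
    J = -X                         # Jacobian of the errors with respect to the weights
    H = J.T @ J                    # Gauss-Newton approximation of the Hessian
    g = J.T @ e                    # gradient
    # w(k+1) = w(k) - [H + mu*I]^(-1) J^T e
    return w - np.linalg.solve(H + mu * np.eye(len(w)), g)

rng = np.random.default_rng(0)
X = rng.random((30, 3))
d = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(10):
    w = lm_step(w, X, d, mu=0.01)  # large mu ~ gradient descent, small mu ~ Newton step
print(w)                           # approaches [1, -2, 0.5]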



Fig. 5 Comparison of observed and ANN-predicted flow (trained using the LM, GDX and CGF algorithms for (a), (b) and (c), respectively) obtained from Model 3 during testing.

The performance indices of the models obtained during the training and testing processes are presented to allow comparison between them. From Table 2(a), it is seen that all models give satisfactory results, with almost comparable values of the goodness-of-fit criteria. However, Model 3 showed the best performance, as evident from its highest E (0.675) and lowest RMSE (255.2 m3/s). The ANN architecture with three hidden neurons was found to be capable of generalizing the input–output data sets. Model 4 produced better results during model training but failed to yield better results in testing; this may be due to over-fitting of the training data sets and poor generalization of the input–output data. A comparison between the observed flow and the ANN-computed temporal variation of flow obtained during testing of Model 3 is presented in Fig. 5(a). It is observed from the figure that the peak flows are underestimated, by 11.80%, which may be due to the presence of a small number of higher flow values in the training data sets.

The gradient descent with momentum and adaptive learning rate algorithm (GDX) can train any network as long as its weight, net input, and transfer functions have derivative functions. It calculates the derivatives of the performance (p) with respect to the weight and bias variables (X) (Hagan et al., 1996).


Each variable is adjusted according to gradient descent with momentum: dX = mc dXprev + lr mc (∂p/∂X), where dXprev is the previous change in the weight or bias, mc is the momentum coefficient, lr is the learning rate, dX is the weight correction and the partial derivative (∂p/∂X) is the change in performance with respect to a change in the weight. For each epoch, if the performance decreases towards the goal, the learning rate is increased by some factor; if the performance increases by more than a set factor, the learning rate is reduced. A detailed description of this training method is given in Hagan et al. (1996). All four models were trained using the above algorithm, and the performance indices are presented in Table 2(b). In the training phase, there was an improvement in all goodness-of-fit criteria as the flow values corresponding to each additional antecedent time step were added to the input matrix. However, during testing, Model 3 produced the best results. It was observed that the number of hidden neurons required to obtain the better result increased, up to a certain limit, as the number of inputs increased. The model efficiency, E, and RMSE were 0.667 and 261.1 m3/s, respectively. In this case, the peak flow was underestimated by 13.20%. Overall, the performance of the GDX algorithm (Table 2(b)) was found to be inferior to that of the LM algorithm. Figure 5(b) presents a comparison between the observed outflow and the ANN model output computed by Model 3 using the GDX algorithm.
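A rough sketch of a GDX-style update (momentum plus an adaptive learning rate), following the form stated above, is given below. The increase/decrease factors and the toy problem are arbitrary illustrations, not the settings used in the study.

import numpy as np

def gdx_train(grad_fn, loss_fn, w, lr=0.01, mc=0.9, lr_inc=1.05, lr_dec=0.7,
              max_perf_inc=1.04, epochs=100):
    """Gradient descent with momentum and adaptive learning rate (GDX-style sketch)."""
    dw_prev = np.zeros_like(w)
    prev_loss = loss_fn(w)
    for _ in range(epochs):
        dw = mc * dw_prev + lr * mc * grad_fn(w)   # dX = mc*dXprev + lr*mc*(dp/dX)
        w_trial = w - dw
        loss = loss_fn(w_trial)
        if loss < prev_loss:
            lr *= lr_inc                           # performance improved: raise the learning rate
            w, prev_loss, dw_prev = w_trial, loss, dw
        elif loss > max_perf_inc * prev_loss:
            lr *= lr_dec                           # much worse: lower the rate and discard the step
            dw_prev = np.zeros_like(dw_prev)
        else:
            w, prev_loss, dw_prev = w_trial, loss, dw
    return w

# Toy quadratic problem: minimize ||X w - d||^2
rng = np.random.default_rng(0)
X = rng.random((40, 2))
d = X @ np.array([2.0, -1.0])
loss = lambda w: np.sum((X @ w - d) ** 2)
grad = lambda w: 2.0 * X.T @ (X @ w - d)
print(gdx_train(grad, loss, np.zeros(2)))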

The conjugate gradient back-propagation algorithm (CGF) can also train any network as long as its weight, net input, and transfer functions have derivative functions. Back-propagation is used to calculate the derivatives of the performance p with respect to the weight and bias variables X. Each variable is adjusted according to the expression X = X + a dX, where dX is the search direction. The parameter a is chosen to minimize the error function along the search direction, and the minimum point is located using a line search. The first search direction is the negative of the gradient of the performance. In succeeding epochs the search direction is computed from the new gradient and the previous search direction according to the relationship dX = −GX + Z dXold, where GX is the gradient. The parameter Z can be computed in several different ways. For the Fletcher-Reeves variation of the conjugate gradient method, Z is computed as (GXnew)²/(GXprev)², where GXprev is the previous gradient and GXnew is the current gradient. A detailed description of the conjugate gradient algorithm is given in Fletcher (1987).
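The Fletcher-Reeves recurrence described above can be sketched as follows; a crude grid line search stands in for the line search function, and the quadratic test problem is only illustrative.

import numpy as np

def fletcher_reeves(grad_fn, loss_fn, w, epochs=50):
    """Conjugate gradient with Fletcher-Reeves updates (CGF-style sketch)."""
    g = grad_fn(w)
    d = -g                                    # first search direction: negative gradient
    for _ in range(epochs):
        alphas = np.logspace(-4, 0, 20)       # crude line search along d
        a = min(alphas, key=lambda step: loss_fn(w + step * d))
        w = w + a * d                         # X = X + a*dX
        g_new = grad_fn(w)
        z = (g_new @ g_new) / (g @ g)         # Z = |GXnew|^2 / |GXprev|^2
        d = -g_new + z * d                    # dX = -GX + Z*dX_old
        g = g_new
    return w

rng = np.random.default_rng(0)
X = rng.random((40, 3))
t = X @ np.array([1.0, 2.0, -0.5])
loss = lambda w: np.sum((X @ w - t) ** 2)
grad = lambda w: 2.0 * X.T @ (X @ w - t)
print(fletcher_reeves(grad, loss, np.zeros(3)))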

The results obtained from all the models trained using this algorithm are presented in Table 2(c). The E and RMSE values were found to be better than those obtained from the LM and GDX algorithms. The network with the input matrix containing three antecedent flow values (Model 3) produced the minimum RMSE and the highest E, IOA and R. The results of all training algorithms showed an overestimation of the minimum values and an underestimation of the peak and higher flow values. The overestimation of minimum values may be due to the fact that the minimum value of the output training vector was higher than the corresponding value in the validation data set. The Model 3 values of E and RMSE were 0.688 and 246.1 m3/s, respectively. The peak value was underestimated by 9.85%, a relatively better result compared to LM and GDX. Further, the time required for training the network using this algorithm was less than that for the GDX algorithm but a little more than that for the LM algorithm. The best performance of the CGF algorithm, compared to the other two, can be explained by its search technique, which proceeds along conjugate directions with a faster rate of convergence. Figure 5(c) presents a comparison between the observed outflow and the ANN output produced by Model 3 trained by the CGF algorithm.

Model testing using the ANFIS technique

Models with the same input combinations as used in the ANN modelling were trained and tested to predict the outflow of the barrage using the ANFIS approach. All four models were trained and tested on the data of the same periods. The input–output data sets were scaled between 0 and 1 and then fed to the ANFIS GUI of MATLAB 7.0. The number of fuzzy membership functions for each input was set to either 2 or 3, according to the type of model.


Table 3 Performance indices of ANFIS models.

           Training (1997–2001):                  Testing (2002 & 2003):
Model      R      RMSE (m3/s)  IOA    E           R      RMSE (m3/s)  IOA    E
Model 1    0.850  321.4        0.912  0.720       0.837  233.2        0.903  0.701
Model 2    0.855  316.0        0.917  0.730       0.840  230.1        0.907  0.700
Model 3    0.863  307.0        0.922  0.745       0.843  228.1        0.909  0.710
Model 4    0.845  326.0        0.915  0.713       0.832  238.1        0.902  0.692

The type of membership function used for all the models was the generalized bell (gbell) type, which is a direct generalization of the Cauchy distribution used in probability theory, with three parameters, as shown in equation (4). Due to its smoothness and concise expression, it is popularly used in many applications to specify fuzzy sets (Jang et al., 1997). The number of fuzzy rules and the optimum number of parameters required to define the FIS for the best result were decided based upon the number and type of inputs used, as well as on the number of fuzzy membership functions employed in the model. The parameters of the membership functions were adjusted using the back-propagation algorithm. The output function of the ANFIS models was of linear type. Table 3 presents the performance indices obtained from all four models trained using the ANFIS technique. As can be seen from the table, Model 3 performed better than the other three models, with E and RMSE values of 0.71 and 228.1 m3/s, respectively. The ANFIS produced quite good results, as shown by the goodness-of-fit criteria for all models, compared to the ANN. The overestimation of minimum values was comparatively lower in this case; however, the peak flow was underestimated by 8.65%. The superiority of the ANFIS technique over the ANN method may be due to the fuzzy partitioning of the input space and the creation of a rule base to generate the output. Figure 6 presents a comparison between the observed and the ANFIS-computed flow obtained from Model 3 during testing.


Fig. 6 Comparison of observed and ANFIS computed flow obtained from Model 3 during the testing process.

Consideration of tributary inflow on model performance

From the results it was found that Model 3, whose input is based on the reservoir release up to three antecedent time steps, is the best representative model for prediction of the barrage outflow, irrespective of the technique used. The effect on model predictability of considering the tributary contribution is presented in this section. The inflow from the tributaries Ong and Tel was considered as additional input to the best performing model (Model 3). Accordingly, a new model was proposed, based on the delayed daily flow records at the Salebata and Kantamal sites along with the reservoir release.


As seen from Fig. 4, the cross-correlation of the inflow at Salebata with the barrage outflow is quite low compared to that of the Hirakud discharge, and the cross-correlation value for the inflow at Kantamal is lower still. Moreover, the variation of the cross-correlation coefficient with the change in time lag was found to be negligible for these two tributaries, unlike that of the reservoir, indicating a low influence of the inflow from the tributaries on the barrage discharge. Taking the tributary contribution into account as additional input, Model 5 was formulated as: Qn(t) = f[Qh(t-1), Qh(t-2), Qh(t-3), Qs(t-2), Qs(t-3), Qk(t-2), Qk(t-3)], where Qs(t-2), Qs(t-3), Qk(t-2) and Qk(t-3) are the flow values at two- and three-day lags at Salebata and Kantamal, respectively. The inflow values from the tributaries corresponding to a one-day lag (Qs(t-1) and Qk(t-1)) were not considered as input to the model, because their correlation with the barrage outflow (Qn(t)) is low in comparison to that of the tributary inflows at two-day and three-day lags, as seen in Fig. 4. Moreover, consideration of these two inputs would increase the dimension of the input matrix, making the ANN and ANFIS architectures more complex. The model was then trained and tested using the three ANN training algorithms and the ANFIS technique, as used for the previous models, utilizing the daily flow data of the rainy seasons of the seven-year period (1997–2003). The performance of Model 5 was then compared with that of Model 3.
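For completeness, the seven-column input matrix of Model 5 could be assembled from the three daily series as sketched below; the array names and the synthetic data are placeholders only.

import numpy as np

def build_model5_inputs(qh, qs, qk, qn):
    """Rows [Qh(t-1), Qh(t-2), Qh(t-3), Qs(t-2), Qs(t-3), Qk(t-2), Qk(t-3)] with target Qn(t)."""
    X, y = [], []
    for t in range(3, len(qn)):
        X.append([qh[t-1], qh[t-2], qh[t-3], qs[t-2], qs[t-3], qk[t-2], qk[t-3]])
        y.append(qn[t])
    return np.array(X), np.array(y)

# Placeholder series standing in for the Hirakud release (qh), Salebata (qs),
# Kantamal (qk) and Naraj barrage outflow (qn) records
rng = np.random.default_rng(2)
qh, qs, qk = (rng.gamma(2.0, 500.0, 300) for _ in range(3))
qn = np.roll(qh, 2) + 0.1 * np.roll(qs, 2) + 0.1 * np.roll(qk, 2)
X, y = build_model5_inputs(qh, qs, qk, qn)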


Fig. 7 Comparison of RMSE values between Model 3 (without tributary inflow) and Model 5 (with tributary inflow) for the different training methods.

A multilayer perceptron having one hidden layer was used for model training and testing. The number of neurons in the hidden layer was varied in each iteration within a range of 3–15 to obtain the most efficient ANN architecture; no improvement in the results was noticed beyond six hidden neurons. Two membership functions for each input were used for training and testing the model using the ANFIS. The changes in the values of RMSE due to the addition of the tributary inflows (Model 5), as compared to Model 3 considering the reservoir release only, are shown in Fig. 7 for the different training methods. From Fig. 7 it is observed that there is an improvement in the results after incorporating the tributary inflow values into Model 3. A marginal improvement in the values of RMSE was noticed for the ANN training algorithms, particularly in the GDX and CGF cases; slightly better results were obtained in the case of the ANFIS. The LM algorithm was found to produce the best performance among the ANN algorithms. Similar results were also obtained for the other goodness-of-fit criteria, such as IOA, R and E. For Model 5, the LM algorithm yielded the lowest RMSE (230.5 m3/s) and a deviation of peak discharge of 8.9%, as against 251.8 m3/s and 11.3%, respectively, for the GDX algorithm. The ANFIS-computed RMSE and percentage deviation of peak flow were 197.3 m3/s and 6.87%, respectively. This implies that consideration of the tributary contribution improves the predictability of the barrage outflow.


CONCLUSIONS

The barrage outflow in the Mahanadi River, India, could be predicted with considerable accuracy using the reservoir release values up to three consecutive daily time lags (Model 3) as inputs, with all three ANN training algorithms and the ANFIS technique. The performances of the three ANN training algorithms (LM, GDX and CGF) were found to be comparable; however, CGF yielded the best result, as revealed by the RMSE and the deviation of peak flow values. Four hidden neurons in one hidden layer were found to be the most appropriate ANN architecture, yielding the best results. The ANFIS performed slightly better than all the ANN algorithms, as revealed by the RMSE and percentage deviation of peak flow values. Although principal component analysis showed that the reservoir release makes the major contribution to the barrage outflow, some improvement in the barrage outflow prediction was obtained when the flow contributions of the two tributaries (Ong and Tel) were considered in addition to the reservoir release (Model 5). The results of the study indicate that ANFIS is a better technique than the ANN for capturing the input–output relationship and could be used successfully for hydrological applications.

REFERENCES

Aqil, M., Kita, I., Yano, A. & Nishiyama, S. (2007) A comparative study of artificial neural networks and neuro-fuzzy in continuous modeling of the daily and hourly behaviour of runoff. J. Hydrol. 337, 22–34.
ASCE Task Committee on Application of Artificial Neural Networks in Hydrology (2000a) Artificial neural networks in hydrology, I: preliminary concepts. J. Hydrol. Engng ASCE 5(2), 115–123.
ASCE Task Committee on Application of Artificial Neural Networks in Hydrology (2000b) Artificial neural networks in hydrology, II: hydrologic applications. J. Hydrol. Engng ASCE 5(2), 124–137.
Bishop, C. M. (1995) Neural Networks for Pattern Recognition. Clarendon Press, Oxford, UK.
Brown, M. & Harris, C. (1995) Neurofuzzy Adaptive Modeling and Control. Prentice-Hall International, Hertfordshire, UK.
Chang, F. J. & Chang, Y. T. (2006) Adaptive neuro-fuzzy inference system for prediction of water level in reservoir. Adv. Water Resour. 29, 1–10.
Chang, F. J. & Chen, Y. C. (2001) A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction. J. Hydrol. 245, 153–164.
Chang, F. J., Chang, L. C. & Huang, H. L. (2002) Real-time recurrent neural network for stream-flow forecasting. Hydrol. Processes 16, 2577–2588.
Chau, K. W., Wu, C. L. & Li, Y. S. (2005) Comparison of several flood forecasting models in Yangtze River. J. Hydrol. Engng ASCE 10(6), 485–491.
Dawson, C. W. & Wilby, R. L. (1998) An artificial neural network approach to rainfall–runoff modeling. Hydrol. Sci. J. 43(1), 47–67.
Demuth, H. B. & Beale, M. (1998) Neural Network Toolbox for Use with MATLAB, User's Guide. The MathWorks, Inc., Massachusetts, USA.
Dolling, O. R. & Varas, E. A. (2002) Artificial neural networks for streamflow prediction. J. Hydraul. Res. 40, 547–554.
Fausett, L. (1994) Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice-Hall, Upper Saddle River, New Jersey, USA.
Fletcher, R. (1987) Practical Methods of Optimization. John Wiley and Sons, New York, USA.
Fujita, M., Zhu, M. L., Nakoa, T. & Ishi, C. (1992) An application of fuzzy set theory to runoff prediction. In: Sixth IAHR Int. Symp. on Stochastic Hydraulics (ed. by I. D. Cluckie & D. Han) (National Taiwan University, Taipei, May 1992), 727–734. Taipei, Taiwan.
Hagan, M. T., Demuth, H. B. & Beale, M. (1996) Neural Network Design. Cengage Learning, New Delhi, India.
Haykin, S. (1999) Neural Networks: A Comprehensive Foundation. Macmillan, New York, USA.
Hu, T. S., Lam, K. C. & Thomas Ng, S. (2005) A modified neural network for improving river flow prediction. Hydrol. Sci. J. 50(2), 299–318.
Hundecha, Y., Bardossy, A. & Theisen, H.-W. (2001) Development of a fuzzy logic based rainfall–runoff model. Hydrol. Sci. J. 46(3), 363–377.
Jang, J.-S. R., Sun, C. T. & Mizutani, E. (1997) Neuro-fuzzy and Soft Computing. Prentice-Hall, Upper Saddle River, New Jersey, USA.
Karunanithi, N., Grenney, W. J. & Whitley, D. (1994) Neural network for river flow prediction. J. Comput. Civil Engng 8, 201–220.
Kumar, A. R. S., Sudheer, K. P., Jain, S. K. & Agarwal, P. K. (2005) Rainfall–runoff modeling using artificial neural networks: comparison of network types. Hydrol. Processes 19, 1277–1291.
Lekkas, D. F., Imrie, C. E. & Lees, M. J. (2001) Improved non-linear transfer function and neural network methods of flow routing for real-time forecasting. J. Hydroinformatics 3, 153–164.
Nayak, P. C., Sudheer, K. P., Rangan, D. M. & Ramasastri, K. S. (2004) A neuro-fuzzy computing technique for modeling hydrological time series. J. Hydrol. 291, 52–66.
Nayak, P. C., Sudheer, K. P., Rangan, D. M. & Ramasastri, K. S. (2005) Short-term flood forecasting with a neurofuzzy model. Water Resour. Res. 41, 2517–2530.


Patterson, D. (1996) Artificial Neural Networks. Prentice-Hall, Singapore.
See, L. & Openshaw, S. (2000) Applying soft computing approaches to river level forecasting. Hydrol. Sci. J. 44(5), 763–779.
Shamseldin, A. Y., Nasr, A. E. & O'Connor, K. M. (2002) Comparison of different forms of the multi-layer feed-forward neural network method used for river flow forecasting. Hydrol. Earth Syst. Sci. 6, 671–684.
Shepherd, A. J. (1997) Second-Order Methods for Neural Networks. Springer-Verlag New York Inc., Secaucus, New Jersey, USA.
Shrestha, R. R. (2003) Flood routing using artificial neural networks. In: Proc. IAHR XXX Congress, Thessaloniki, Greece.
Sivakumar, B., Jayawardena, A. W. & Fernando, T. M. K. G. (2002) River flow forecasting: use of phase-space reconstruction and artificial neural networks approaches. J. Hydrol. 265, 225–245.
Stuber, M., Gemmar, P. & Greving, M. (2000) Machine supported development of fuzzy-flood forecast systems. In: European Conf. on Adv. in Flood Res. (Potsdam, Germany) (ed. by A. Bronstert, C. Bismuth & L. Menzel), 504–515.
Sudheer, K. P., Gosain, A. K. & Ramasastri, K. S. (2002) A data driven algorithm for constructing ANN based rainfall–runoff models. Hydrol. Processes 16, 1325–1330.
Thirumalaiah, K. & Deo, M. C. (1998) River stage forecasting using artificial neural network. J. Hydrol. Engng ASCE 3(1), 26–32.
Valenca, M. & Ludermir, T. (2000) Monthly stream flow forecasting using a neural fuzzy network model. In: Proc. Sixth Brazilian Symposium on Neural Networks, 117–120. Brazilian Computer Society, Brazil.
Wang, W., van Gelder, P. H. A. J. M., Vrijling, J. K. & Ma, J. (2006) Forecasting daily streamflow using hybrid ANN models. J. Hydrol. 321, 383–399.
Xiong, L. H., Shamseldin, A. Y. & O'Connor, K. M. (2001) A nonlinear combination of the forecasts of rainfall–runoff models by the first order Takagi-Sugeno fuzzy system. J. Hydrol. 245, 196–217.
Zealand, C. M., Burn, D. H. & Simonovic, S. P. (1999) Short term streamflow forecasting using artificial neural networks. J. Hydrol. 214, 32–48.
Zhu, M. L. & Fujita, M. (1994) Comparison between fuzzy reasoning and neural network method to forecast runoff discharge. J. Hydrosci. & Hydraul. Engng 12(2), 131–141.
Zhu, M. L., Fujita, M., Hashimoto, N. & Kudo, M. (1994) Long lead time forecast of runoff using fuzzy reasoning method. J. Japan Soc. Hydrol. & Water Resour. 7(2), 83–89.

Received 17 September 2007; accepted 22 August 2008
