
CSLMEN: A New Cuckoo Search Levenberg Marquardt Elman Network for Data Classification

Nazri Mohd. Nawi^1, Abdullah Khan^1, M. Z. Rehman^1, Tutut Herawan^2, Mustafa Mat Deris^1

^1 Software and Multimedia Centre (SMC), Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), 86400, Parit Raja, Batu Pahat, Johor, Malaysia

^2 Department of Information System, University of Malaya, 50603, Lembah Pantai, Kuala Lumpur, Malaysia

[email protected] [email protected] [email protected] [email protected]

[email protected]

Abstract. Recurrent Neural Networks (RNN) have local feedback loops inside the network which allow them to store earlier accessible patterns. Such networks can be trained with gradient descent back propagation and optimization techniques such as second-order methods. The Levenberg-Marquardt algorithm has been used for network training, but it is not guaranteed to find the global minimum of the error function. Nature-inspired meta-heuristic algorithms provide derivative-free solutions for optimizing complex problems. This paper proposes a new meta-heuristic search algorithm, called Cuckoo Search (CS), to train the Levenberg Marquardt Elman Network (LMEN) in order to achieve a fast convergence rate and to avoid the local minima problem. The results of the proposed Cuckoo Search Levenberg Marquardt Elman Network (CSLMEN) are compared with the Artificial Bee Colony back propagation algorithm and other hybrid variants. Specifically, the 7-bit parity and Iris classification datasets are used. The simulation results show that the computational efficiency of the proposed CSLMEN training process is highly enhanced when coupled with the Cuckoo Search method.

Keywords: recurrent back propagation, cuckoo search, local minima, meta-heuristic optimization, Elman network, Levenberg Marquardt algorithm.

1 Introduction

An Artificial Neural Network (ANN) is a unified group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases, an ANN is an adaptive structure that changes its configuration based on external or internal information that flows through the network. ANNs can be used to model complex relationships between inputs and outputs or to find patterns in data [1]. Among the possible network architectures, the most commonly used are the feed forward and the recurrent neural networks (RNN). In a feed forward neural network the signals propagate only in one direction, starting from the input layer, through the hidden layers, to the output layer. RNN, in contrast, have local feedback loops inside the network which allow them to store earlier accessible patterns. This ability makes RNN more powerful than conventional feed forward neural networks in modeling dynamic systems, because the network outputs are functions of both the present inputs and the network's inner states [2-3].

Many types of RNN have been proposed, and they are either partially recurrent or fully recurrent. RNN can carry out non-linear dynamic mappings and have been used in many applications including associative memories, spatiotemporal pattern classification, optimization, forecasting, and generalization of pattern sequences [4-8]. Fully recurrent networks use unrestricted, fully interconnected architectures and learning algorithms that can deal with time-varying inputs and outputs in non-trivial ways. However, certain properties of RNN make many of these algorithms less efficient, and it often takes an enormous amount of time to train a network of even a reasonable size. In addition, the complex error surface of an RNN makes many training algorithms more prone to being trapped in local minima. The main disadvantage of fully recurrent networks is thus that they require substantially more connections, and more memory in simulation, than a standard back propagation network, resulting in a substantial increase in computational time. Therefore, this study uses a partially recurrent network, whose connections are mainly feed forward but include a carefully selected set of feedback connections. The recurrence allows the network to remember cues from the recent past without complicating the learning excessively [9]. One example of such a recurrent network is the Elman network, which is set up as a regular feed forward network [10].

The Elman network can be trained with gradient descent back propagation and optimization techniques, like normal feed forward neural networks. However, back propagation suffers from several problems in numerous applications. The algorithm is not guaranteed to find the global minimum of the error function, since gradient descent may get stuck in local minima, where it may stay indefinitely [11-13, 24]. In recent years, a number of research studies have attempted to overcome these problems and to improve the convergence of back propagation. Optimization methods such as second-order conjugate gradient, quasi-Newton, and Levenberg-Marquardt have also been used for network training [14, 15]. To overcome these drawbacks, many evolutionary computing techniques have been used. Evolutionary computation is often used to train the weights and parameters of networks. One such evolutionary algorithm is Cuckoo Search (CS), developed by Yang and Deb [16] for global optimization [17-19]. The CS algorithm has been applied independently to solve several engineering design optimization problems, such as the design of springs and welded beam structures [20], and forecasting [21].

In this paper, we propose a new hybrid meta-heuristic search algorithm, called Cuckoo Search Levenberg Marquardt Elman Network (CSLMEN). The convergence performance of the proposed CSLMEN algorithm is analyzed through simulations on the 7-bit parity and Iris datasets. The results are compared with the artificial bee colony back-propagation (ABCBP) algorithm and similar hybrid variants. The main goal is to decrease the computational cost and to accelerate the learning process in RNN using the Cuckoo Search and Levenberg-Marquardt algorithms.

The rest of the paper is organized as follows: Section 2 reviews the Levenberg Marquardt algorithm and the Cuckoo Search algorithm via Lévy flight, Section 3 proposes the CSLMEN algorithm, Section 4 discusses the simulation results, and Section 5 concludes the paper.

2 Training Algorithms

2.1 Levenberg Marquardt (LM) Algorithm

To speed up convergence, the Levenberg-Marquardt (LM) algorithm is used. The LM algorithm is an approximation to Newton's method designed to accelerate training. The benefits of applying the LM algorithm over variable learning rate and conjugate gradient methods were reported in [22]. The LM algorithm is derived from Newton's method.

For the Gauss-Newton method:

∇w = −[J^T(t)J(t)]^{−1} J^T(t) e(t)    (1)

For the Levenberg-Marquardt algorithm, as a variation of the Gauss-Newton method:

w(k + 1) = w(k) − [J^T(t)J(t) + μI]^{−1} J^T(t) e(t)    (2)

where μ > 0 is a constant and I is the identity matrix. As μ approaches zero, the algorithm approaches Gauss-Newton, which should provide faster convergence.
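To make Equation (2) concrete, the following is a minimal NumPy sketch of a single LM weight update. The function name is ours, and we assume the Jacobian J and residual vector e are supplied by the network; the paper itself gives no code.

import numpy as np

def lm_step(w, J, e, mu):
    # One Levenberg-Marquardt update, Equation (2):
    # w(k+1) = w(k) - [J^T J + mu*I]^{-1} J^T e
    A = J.T @ J + mu * np.eye(w.size)   # damped Gauss-Newton matrix
    g = J.T @ e                         # gradient, cf. Equation (10)
    return w - np.linalg.solve(A, g)    # solve rather than invert explicitly

As mu tends to zero, this step reduces to the Gauss-Newton update of Equation (1).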

2.2 Cuckoo Search (CS) Algorithm

The Cuckoo Search (CS) algorithm is a meta-heuristic technique proposed by Xin-She Yang [16, 17]. The algorithm was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other birds. Some host birds cannot tell the cuckoo's eggs from their own. However, if a host bird discovers that an egg is not its own, it will either throw the alien egg away or simply abandon its nest and build a new one elsewhere. Some cuckoo species have evolved in such a way that female parasitic cuckoos are highly specialized in mimicking the color and pattern of the eggs of the chosen host species. This reduces the probability of their eggs being abandoned and thus increases their survival rate. The CS algorithm follows three idealized rules (a minimal code sketch of the resulting search step follows the list):

1 Each cuckoo lays one egg at a time, and dumps its egg in a randomly chosen nest;

2 The best nests with high-quality eggs will carry over to the next generations;

3 The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability p_a ∈ [0, 1].
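As an illustration of rules 1-3, the sketch below generates a new candidate nest with a Lévy-distributed step. Mantegna's algorithm and the parameter values (beta = 1.5, alpha = 0.01) are common choices in the CS literature rather than values given in this paper, and the function names are ours.

import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5):
    # Mantegna's algorithm for Levy-stable steps (an assumed implementation)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def get_cuckoo(nest, best, alpha=0.01):
    # Rule 1: lay an egg (candidate solution) via a Levy flight around the nest;
    # scaling the step by the distance to the best nest is a common variant
    return nest + alpha * levy_flight(nest.size) * (nest - best)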

3 The Proposed CSLMEN Algorithm

The proposed Cuckoo Search Levenberg Marquardt Elman Network (CSLMEN) is implemented in two stages. In the first stage, the Cuckoo Search algorithm via Lévy flight is initialized with nests, and CS is integrated into the Levenberg Marquardt Elman network. In the second stage, the best nests are used as weights for training the Levenberg Marquardt Elman network. In this proposed model, CS is responsible for modifying the parameters needed for convergence, so that the overall procedure automatically adjusts its own parameters and converges to the global minimum. In the proposed CSLMEN algorithm, CS is initiated with a set of randomly generated nests as weights for the network. Each randomly generated weight is then passed to the Levenberg Marquardt back propagation Elman network for further processing. The sum of squared errors is taken as the performance index for the proposed CSLMEN algorithm. The weight values of a matrix are calculated as follows:

W_t = Σ_{c=1}^{m} a · (rand − 1/2)    (3)

B_t = Σ_{c=1}^{m} a · (rand − 1/2)    (4)

where W_t is the t-th weight value in a weight matrix, rand in Equation (3) is a random number in [0, 1], a is a constant parameter whose value is less than one for the proposed method, and B_t is the corresponding bias value. The list of weight matrices is then:

Ws = [Wt_1, Wt_2, Wt_3, ..., Wt_{n−1}]    (5)
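A minimal sketch of this initialization, assuming each nest is a flat vector holding all weights and biases, and using an illustrative value for the constant a (the paper only requires a < 1):

import numpy as np

def init_nests(n_nests, n_params, a=0.5):
    # Equations (3)-(4): each entry is a * (rand - 1/2), rand uniform on [0, 1].
    # Equation (5): the returned list plays the role of Ws = [Wt_1, ..., Wt_{n-1}].
    return [a * (np.random.rand(n_params) - 0.5) for _ in range(n_nests)]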

From the back propagation process, the sum of squared errors can easily be calculated for every weight matrix in Ws. The error is calculated as:

e_r = (T_r − X_r)    (6)

The performance index for the network is calculated as:

V(x) = (1/2) Σ_{r=1}^{R} (T_r − X_r)^T (T_r − X_r)    (7)

In the proposed CSLMEN, the average sum of squared errors is taken as the performance index and calculated as follows:

V_μ(x) = ( Σ_{j=1}^{N} V_F(x) ) / P_i    (8)

where V_μ(x) is the average performance, V_F(x) is the performance index, and P_i is the size of the cuckoo population in the i-th iteration.
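In code, Equations (6)-(8) reduce to a few lines; the helper names below are ours, not from the paper:

import numpy as np

def sse(T, X):
    # Equations (6)-(7): e_r = T_r - X_r and V(x) = 1/2 * sum(e_r^T e_r)
    e = T - X
    return 0.5 * np.sum(e * e)

def average_performance(v_list, population_size):
    # Equation (8): V_mu(x) = sum_j V_F(x) / P_i
    return sum(v_list) / population_size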

The weights and biases are calculated according to the back propagation method. The sensitivity of each layer is calculated from the layer that follows it; the calculation of the sensitivities starts at the last layer of the network and moves backward. To speed up convergence, Levenberg Marquardt is selected as the learning algorithm. The Levenberg-Marquardt (LM) algorithm is an approximation to Newton's method that yields faster training.

Assume the error function is:

E(t) = (1/2) Σ_{r=1}^{N} e_r^2(t)    (9)

where e_r(t) is the error, N is the number of vector elements, and E(t) is the sum of squares function. The gradient is then calculated as:

∇E(t) = J^T(t) e(t)    (10)

∇²E(t) = J^T(t) J(t)    (11)

where ∇E(t) is the gradient, ∇²E(t) is the Hessian matrix of E(t), and J(t) is the Jacobian matrix, given in Equation (12):

J(t) = [ ∂e_1(t)/∂t_1   ∂e_1(t)/∂t_2   …   ∂e_1(t)/∂t_n
         ∂e_2(t)/∂t_1   ∂e_2(t)/∂t_2   …   ∂e_2(t)/∂t_n
         ⋮               ⋮                   ⋮
         ∂e_n(t)/∂t_1   ∂e_n(t)/∂t_2   …   ∂e_n(t)/∂t_n ]    (12)
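In an Elman network the rows of J(t) are obtained by back propagating each individual error. As a library-free stand-in, a finite-difference approximation of Equation (12) can be sketched as follows; residuals is an assumed function returning the error vector e(t) for a given weight vector:

import numpy as np

def numerical_jacobian(residuals, w, eps=1e-6):
    # J[r, c] approximates d e_r(t) / d w_c, cf. Equation (12)
    e0 = residuals(w)
    J = np.zeros((e0.size, w.size))
    for c in range(w.size):
        w_p = w.copy()
        w_p[c] += eps               # perturb one weight at a time
        J[:, c] = (residuals(w_p) - e0) / eps
    return J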

For the Gauss-Newton method, the following equation is used:

∇w = −[J^T(t)J(t)]^{−1} J^T(t) e(t)    (13)

For the Levenberg-Marquardt algorithm, as a variation of the Gauss-Newton method:

w(k + 1) = w(k) − [J^T(t)J(t) + μI]^{−1} J^T(t) e(t)    (14)

where μ > 0 is a constant and I is the identity matrix. Note that when the parameter μ is large, the above expression approximates gradient descent (with a learning rate of 1/μ), while for a small μ the algorithm approximates the Gauss-Newton method, which should provide faster convergence.

The Levenberg-Marquardt algorithm works in the following manner (a minimal code sketch of the loop is given after the list):

1. Present all inputs to the network, compute the corresponding network outputs and errors using Equations (6) and (7), and compute the sum of squared errors over all inputs.

2. Compute the Jacobian matrix using Equation (12).

3. Solve Equation (13) to obtain ∇w.

4. Recompute the sum of squared errors using Equation (14). If this new sum of squares is smaller than that computed in step 1, then reduce μ by λ, update the weights using w(k + 1) = w(k) − ∇w, and go back to step 1. If the sum of squares is not reduced, then increase μ by λ and go back to step 3.

5. The algorithm is assumed to have converged when the norm of the gradient in Equation (10) is less than some predetermined value, or when the sum of squares has been reduced to some error goal.
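A minimal sketch of steps 1-5, assuming residuals(w) and jacobian(w) helpers as above; the damping factor lam and the cap on mu are standard safeguards rather than values reported in the paper:

import numpy as np

def train_lm(residuals, jacobian, w, mu=0.01, lam=10.0, goal=1e-5, max_epochs=1000):
    for _ in range(max_epochs):
        e = residuals(w)                       # step 1: errors over all inputs
        sse = 0.5 * np.dot(e, e)
        J = jacobian(w)                        # step 2: Jacobian, Equation (12)
        while True:
            A = J.T @ J + mu * np.eye(w.size)  # step 3: solve for the update
            dw = np.linalg.solve(A, J.T @ e)
            e_new = residuals(w - dw)          # step 4: recompute the SSE
            if 0.5 * np.dot(e_new, e_new) < sse:
                mu /= lam                      # error reduced: relax damping
                w = w - dw
                break
            mu *= lam                          # not reduced: damp more, retry
            if mu > 1e10:                      # safeguard against runaway mu
                return w
        if np.linalg.norm(J.T @ e) < goal:     # step 5: gradient-norm test
            break
    return w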

At the end of each epoch, the list of average sums of squared errors of the i-th iteration, SSE_i, can be calculated as:

SSE_i = {V_μ^1(x), V_μ^2(x), V_μ^3(x), … , V_μ^n(x)}    (15)

x_j = Min{V_μ^1(x), V_μ^2(x), V_μ^3(x), … , V_μ^n(x)}    (16)

The rest of the average sums of squares are considered as the other cuckoo nests. A new solution x_i^{t+1} for cuckoo i is generated using a Lévy flight according to the following equation:

x_i^{t+1} = x_i^t + α ⊗ levy(λ)    (17)

The movement of the other cuckoos x_i toward x_j can be drawn from Equation (18):

X = { x_i + rand · (x_j − x_i),  if rand_i > p_a
      x_i,                       otherwise }    (18)

The Cuckoo Search can move from x_i toward x_j through a Lévy flight:

∇X_i = { x_i + α ⊗ levy(λ) · (x_j − x_i),  if rand_i > p_a
         x_i,                              otherwise }    (19)

where ∇X_i is a small movement of x_i toward x_j. The weights and biases of each layer can then be adjusted as:

W_n^{t+1} = W_n^t − ∇X_i    (20)

B_n^{t+1} = B_n^t − ∇X_i    (21)

4 Results and Discussions

The main focus of this paper is to compare the performance of different algorithms in reducing error and improving accuracy in network convergence. The simulation results, tools and technologies, network topologies, testing methodology, and the classification problems used for the experimentation are discussed in this section.

4.1 Preliminary Study

The Iris and 7-bit parity benchmark classification problems were used to test the performance of the proposed CSLMEN against other algorithms. The simulation experiments were performed on a 1.66 GHz AMD processor with 2 GB of RAM, using Matlab 2009b. For each algorithm, 20 trials were run during the simulation process. In this paper, the following algorithms are compared, analyzed, and simulated on the Iris and 7-bit parity benchmark classification problems:

1. Artificial Bee Colony based Back Propagation (ABCBP) algorithm,

2. Artificial Bee Colony based Levenberg Marquardt (ABCLM) algorithm,

3. Back Propagation Neural Network (BPNN) algorithm, and

4. the proposed Cuckoo Search Levenberg Marquardt Elman Network (CSLMEN) algorithm.

The performance measure for each algorithm is based on the Mean Squared Error (MSE), standard deviation (SD), and accuracy. A three-layer feed forward neural network architecture (i.e. an input layer, one hidden layer, and an output layer) is used for each problem. The number of hidden nodes is kept fixed at 5. In the network structure, bias nodes are also used and the log-sigmoid activation function is applied. The target error and the maximum number of epochs are set to 0.00001 and 1000, respectively, and the learning rate is set to 0.4. A total of 20 trials are run for each dataset, and the network results are stored in a result file for each trial. The CPU time, average accuracy, and MSE are recorded for each independent trial on the Iris and 7-bit parity benchmark classification problems.

4.2 Iris Benchmark Classification Problems

The Iris classification dataset is the most famous dataset to be found in the pattern recognition literature. It consists of 150 instances, 4 inputs, and 3 outputs. The classification of the Iris dataset involves sorting the data of petal width, petal length, sepal length, and sepal width into three classes of species: Iris Setosa, Iris Versicolor, and Iris Virginica.

Table 1. CPU time, epochs, MSE, accuracy, and standard deviation (SD) for the 4-5-3 ANN architecture

Algorithm      ABCBP     ABCLM     BPNN      CSLMEN
CPU Time       156.43    171.52    168.47    7.07
Epochs         1000      1000      1000      64
MSE            0.155     0.058     0.311     3.2E-06
SD             0.022     0.005     0.022     2.2E-06
Accuracy (%)   86.87     79.55     87.19     99.9

Table 1 shows that the proposed CSLMEN method performs well on the Iris dataset. CSLMEN converges to the global minimum in 7.07 CPU seconds with an average accuracy of 99.9 percent, and achieves an MSE of 3.2E-06 and an SD of 2.2E-06, while the other algorithms fall behind in terms of MSE, CPU time, and accuracy. The proposed CSLMEN converges within 64 epochs for the 4-5-3 network structure, whereas the other algorithms take more CPU time and epochs to converge.

4.3 Seven Bit-Parity Problem

The parity problem is one of the most popular initial testing tasks and a very demanding classification problem for neural networks to solve. In the parity problem, if a given input vector contains an odd number of ones, the corresponding target value is 1; otherwise the target value is 0. The N-bit parity training set consists of 2^N training pairs, with each training pair comprising an N-length input vector and a single binary target value. The 2^N input vectors represent all possible combinations of the N binary numbers. The neural network architecture used here is 7-5-1.

Table 2. CPU time, epochs, MSE, accuracy, and standard deviation (SD) for the 7-5-1 ANN architecture

Algorithm      ABCBP     ABCLM     BPNN      CSLMEN
CPU Time       183.39    134.88    142.07    0.60
Epochs         1000      1000      1000      6
MSE            0.12      0.08      0.26      2.2E-06
SD             0.008     0.012     0.014     2.8E-06
Accuracy (%)   82.12     69.13     85.12     99.98

From Table 2, it can be seen that there is a clear decrease in the CPU time, the number of epochs, and the mean squared error using the CSLMEN algorithm, while the accuracy for the 7-bit parity classification problem with five hidden neurons is also increased considerably. The proposed CSLMEN converged to an MSE of 2.2E-06 within 6 epochs. The ABCBP algorithm has an MSE of 0.12 with an accuracy of 82.12 percent, and the ABCLM has an MSE of 0.08 with an accuracy of 69.13 percent. The simple BPNN still needs more epochs and CPU time to converge to the global minimum.

5 Conclusions

The Elman Recurrent Neural Network (ERNN) is one of the most widely used and popular recurrent neural networks. The Elman network can be trained with gradient descent back propagation and optimization techniques. In this paper, a new meta-heuristic search algorithm, called Cuckoo Search (CS), was proposed to train the Levenberg Marquardt Elman Network (LMEN) to achieve a fast convergence rate and to minimize the training error. The performance of the proposed CSLMEN algorithm was compared with the ABCLM, ABCBP, and BPNN algorithms, and the proposed CSLMEN model was verified by means of simulations on the 7-bit parity and Iris classification datasets. The simulation results show that the proposed CSLMEN performs far better than the previous methods in terms of MSE, convergence rate, and accuracy.

Acknowledgments. The authors would like to thank the Office of Research, Innovation, Commercialization and Consultancy (ORICC), Universiti Tun Hussein Onn Malaysia (UTHM), and the Ministry of Higher Education (MOHE) Malaysia for financially supporting this research under the Fundamental Research Grant Scheme (FRGS), vote no. 1236.

References

1. Firdaus, A. A. M., Mariyam, S. H. S., Razana, A.: Enhancement of Particle Swarm Optimization in

Elman Recurrent Network with bounded Vmax Function, In: Third Asia International Conference on

Modelling & Simulation (2009)

2. Peng, X. G., Venayagamoorthy, K., Corzine, K. A.: Combined Training of Recurrent Neural Networks

with Particle Swarm Optimization and Back propagation Algorithms for Impedance Identification,

In: IEEE Swarm Intelligence Symposium, pp. 9--15 (2007)

3. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd Edition, pp. 84—89, ISBN 0-13-

273350 (1999)

4. Ubeyli, E. D.: Recurrent neural networks employing lyapunov exponents for analysis of Doppler

ultrasound signals, J. Expert Systems with Applications, vol. 34(4), pp. 2538--2544 (2008)

5. Ubeyli, E.D.: Recurrent neural networks with composite features for detection of

electrocardiographic changes in partial epileptic patients, J. Computers in Biology and Medicine, vol.

38 (3), pp. 401--410 (2008)

6. Saad, E.W., Prokhorov, D.V., Wunsch II, D.C.: Comparative study of stock trend prediction using

time delay, recurrent and probabilistic neural networks, J. IEEE Transactions on Neural Networks,

vol. 9(6), pp. 1456--1470 (1998)

7. Gupta, L., McAvoy, M.: Investigating the prediction capabilities of the simple recurrent neural

network on real temporal sequences, J. Pattern Recognition, vol. 33(12), pp. 2075--2081 (2000)

8. Gupta, L., McAvoy, M., Phegley, J.: Classification of temporal sequences via prediction using the

simple recurrent neural network, J. Pattern Recognition, vol. 33(10), pp. 1759--1770 (2000)

9. Nihal, G. F., Elif, U. D., Inan, G.: Recurrent neural networks employing lyapunov exponents for

EEG signals classification, J. Expert Systems with Applications, vol. 29 pp. 506--514 (2005)

10. Elman, J. L.: Finding structure in time, J. Cognitive Science, vol. 14(2), pp. 179--211 (1990)

11. Rehman, M. Z., Nawi, N. M.: Improving the Accuracy of Gradient Descent Back Propagation

Algorithm(GDAM) on Classification Problems, Int. J. of New Computer Architectures and their

Applications (IJNCAA), vol. 1(4), pp.838--847 (2012)

12. WAM, A., ESM, S., ESA, A.: Modified Back Propagation Algorithm for Learning Artificial Neural

Networks, In: The 18th National Radio Science Conference, pp. 345--352 (2001)

13. Wen, J., Zhao J. L., Luo, S. W., Han, Z.: The Improvements of BP Neural Network Learning

Algorithm In: 5th Int. Conf. on Signal Processing WCCC-ICSP, pp.1647--1649 (2000)

14. Tanoto, Y., Ongsakul, W., Charles, O., Marpaung, P.: Levenberg-Marquardt Recurrent Networks for

Long-Term Electricity Peak Load Forecasting, J. TELKOMNIKA, vol.9(2), pp. 257--266 (2011)

15. Peng, C. C., Magoulas, G. D.: Nonmonotone Levenberg–Marquardt training of recurrent neural architectures for processing symbolic sequences, J. of Neural Comput & Application, vol. 20, pp. 897--908 (2011)

16. Yang X.S., Deb, S.: Cuckoo search via Lévy flights, In: Proceedings of World Congress on Nature &

Biologically Inspired Computing, India, pp. 210--214 (2009)

17. Yang, X. S., Deb, S.: Engineering Optimisation by Cuckoo Search, J. International Journal of

Mathematical Modelling and Numerical Optimisation, vol. 1(4), pp. 330--343 (2010)

18. Tuba, M., Subotic,M., and Stanarevic, N.: Modified cuckoo search algorithm for unconstrained

optimization problems, In: Proceedings of the European Computing Conference (ECC ’11), pp. 263--

268 (2011)

19. Tuba, M., Subotic,M., Stanarevic, N.: Performance of a Modified Cuckoo Search Algorithm for

Unconstrained Optimization Problems, J. Faculty of Computer Science, vol. 11(2), pp. 62--74 (2012)

20. Yang, X. S., Deb, S.: Engineering optimisation by cuckoo search, J. Int. J. Mathematical Modelling

and Numerical Optimisation, vol. 1 (4), pp. 330--343 (2010)

21. Chaowanawate, K., Heednacram, A.: Implementation of Cuckoo Search in RBF Neural Network for

Flood Forecasting, In: 4th International Conference on Computational Intelligence, Communication

Systems and Networks, pp. 22--26 (2012)

22. Hagan, M. T., Menhaj, M. B.: Training Feedforward Networks with the Marquardt Algorithm, J. IEEE Transactions on Neural Networks, vol. 5(6), pp. 989--993 (1994)

23. Nourani, E., Rahmani, A. M., Navin, A. H.: Forecasting Stock Prices using a hybrid Artificial Bee

Colony based Neural Network, In: ICIMTR2012, Malacca, Malaysia, pp. 21--22 (2012)

24. Rehman, M. Z., Nawi, N. M.: Studying the Effect of adaptive momentum in improving the accuracy of gradient descent back propagation algorithm on classification problems, J. Int. Journal of Modern Physics: Conference Series, vol. 9, pp. 432--439 (2012)