
Review Article
Concepts, Methods, and Performances of Particle Swarm Optimization, Backpropagation, and Neural Networks

Leke Zajmi, Falah Y. H. Ahmed, and Adam Amril Jaharadak

Department of Information Sciences and Computing, Faculty of Information Sciences and Engineering (FISE), Management and Science University, 40100 Shah Alam, Malaysia

Correspondence should be addressed to Falah Y. H. Ahmed; [email protected]

Received 16 April 2018; Revised 7 August 2018; Accepted 15 August 2018; Published 3 September 2018

Academic Editor: Christian W. Dawson

Copyright © 2018 Leke Zajmi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the advancement of Machine Learning since its beginning, special attention has been given to the Artificial Neural Network. Inspired by the natural selection of animal groups and the human neural system, the Artificial Neural Network, also known as Neural Networks, has become a new computational power used for solving real-world problems. Neural Networks as a concept involve various methods for achieving their success; thus, this review paper describes an overview of such methods, namely Particle Swarm Optimization, Backpropagation, and the Neural Network itself. A brief explanation of the concepts, history, performances, advantages, and disadvantages is given, followed by the latest research on these methods. A description of solutions and applications in various industrial sectors, such as Medicine and Information Technology, is provided. The last part briefly discusses the current and future directions and challenges of Neural Networks toward achieving the highest success rate in solving real-world problems.

1. Introduction

The Artificial Neural Network (ANN), or simply Neural Network (NN), is an area which has received and continues to receive attention from the world's greatest researchers. In scientific terms it is known as a structure of a large number of interconnected units of neurons. As Zhang et al. [1] mentioned in their research, each of those interconnected neurons has the ability to receive, process, and send output signals. In a more common understanding, Sonali et al. [2] stated that Neural Networks are a digital copy of the human biological nervous system and follow the same path of learning neurons. A Neural Network processes information in a similar way to how the human brain does.

A Neural Network consists of three sets: the first is the pattern of connections between neurons (an adder that sums the input data), the second is the method of determining the weights on the connections, and the last is an activation function which limits the output amplitude of the neuron. Neural Networks (NNs) are used daily in various applications. In 2017, Rasit A. [3] found that Neural Networks are highly useful for pattern recognition, optimization, simulation, and prediction.

A trained Artificial Neural Network (ANN) could be considered an “expert” in the category of information it has been given to analyze, and this is an advantage Neural Networks have in taking different approaches to problem solving.

In the following Section 2, a review is provided of the methods and algorithms of the Artificial Neural Network, such as Backpropagation and Swarm Intelligence, which includes Particle Swarm Optimization. A review of the feedforward and backward phases of the Backpropagation algorithm, together with their sets of equations, gives a brief understanding of how this method and algorithm work, together with their weaknesses and strengths. Furthermore, the discussion continues with the Multilayer Perceptron (MLP), and the supervised and unsupervised learning techniques are also briefly explained.

2. Artificial Neural Networks (ANN)

One of the most researched techniques of Neural Networks is Backpropagation. Backpropagation is a technique in which the network of nodes is arranged in layers. Researcher Jaswante

Hindawi, Applied Computational Intelligence and Soft Computing, Volume 2018, Article ID 9547212, 7 pages, https://doi.org/10.1155/2018/9547212


Figure 1: Artificial Neural Network model (Input → Weights → Sigmoid Function → Output).

et al. [6] describe the arrangement as follows: the first layer of the network is the input layer, the last layer is the output layer, and all the remaining intermediate layers are called hidden layers. Backpropagation considers a number of elements that affect its convergence. The input, processing (hidden), and output nodes are among those elements, together with the momentum rate and minimum error [7].

Learning in Backpropagation follows a set of steps. These steps are simplified as follows:

(a) The input vector is presented to the input layer.
(b) The set of desired outputs is presented to the output layer.
(c) After every forward pass, the actual output is compared with the desired output.
(d) The result of this comparison determines the weight changes, according to the learning rules.

Despite the hype around the widely researched Backpropagation, the algorithm is also well known for its disadvantages, and its accuracy leaves room for much better results. A group of researchers led by Cho et al. [4] states that Backpropagation takes a long time to train. This disadvantage comes mostly from the backward passes the neurons perform until an ideal solution is found. Thus, some researchers have started using Swarm Intelligence (SI) algorithms, which enhance learning in Neural Networks using a different approach.

Researchers Christian Blum and Daniel Merkle [8] describe Swarm Intelligence as a technique inspired by the collective behavior of groups of animals and insects, such as insect colonies, flocks of birds, and schools of fish. The inspiration comes from the way individual neurons (or swarm particles) follow the collective group toward a better solution.

Swarm Intelligence includes one of the most accurate techniques known, Particle Swarm Optimization. The main objective of this method in Neural Networks is to obtain the best particle position from a group of particles which are moving, or trying to move, toward the best solution.

2.1. Artificial Neural Network (ANN) and Backpropagation Algorithm (BP). An Artificial Neural Network consists of a network made of neurons, nodes, or cells arranged and interconnected within that network. Neurons in Artificial Neural Networks have the capability of learning from examples, and they are able to respond intelligently to new stimuli [9, 10].

A typical Neural Network (NN) topology is shown in Figure 1. Each node applies an activation function, the sigmoid function. The signal sent from each input node travels through the weighted connections, and the output is produced according to the internal activation function.

Figure 2 shows the Multilayer Perceptron (MLP), which illustrates the interconnection flow among nodes in an Artificial Neural Network (ANN).

The equations of the processes between the input (i) and hidden (j) layers are as follows:

$$O_j = f(net_j) = \frac{1}{1 + e^{-net_j}} \quad (1)$$

$$net_j = \sum_i w_{ij} O_i + \theta_j \quad (2)$$

with

$O_j$ being the output of node j
$O_i$ being the output of node i
$w_{ij}$ being the connection weight between nodes i and j
$\theta_j$ being the bias of node j

The further transitions between the hidden layer (j) and output layer (k) are as follows:

$$O_k = f(net_k) = \frac{1}{1 + e^{-net_k}} \quad (3)$$

$$net_k = \sum_j w_{jk} O_j + \theta_k \quad (4)$$

with

$O_k$ being the output of node k
$O_j$ being the output of node j
$w_{jk}$ being the connection weight between nodes j and k
$\theta_k$ being the bias of node k
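The forward pass of (1)-(4) can be sketched in a few lines of Python. This is a minimal illustration assuming a made-up 3-2-1 network; the input, weight, and bias values are illustrative examples, not values from the paper:

```python
import math

def sigmoid(net):
    # f(net) = 1 / (1 + e^(-net)), the activation in (1) and (3)
    return 1.0 / (1.0 + math.exp(-net))

def layer_output(inputs, weights, biases):
    # net_j = sum_i w_ij * O_i + theta_j, then O_j = f(net_j)
    outputs = []
    for j, theta in enumerate(biases):
        net = sum(w[j] * o for w, o in zip(weights, inputs)) + theta
        outputs.append(sigmoid(net))
    return outputs

# Illustrative 3-input, 2-hidden, 1-output network with made-up weights
x = [0.5, 0.1, 0.9]
w_ij = [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]]  # w_ij[i][j]: input i -> hidden j
theta_j = [0.1, -0.2]
w_jk = [[0.6], [-0.3]]                         # w_jk[j][k]: hidden j -> output k
theta_k = [0.05]

hidden = layer_output(x, w_ij, theta_j)       # equations (1)-(2)
output = layer_output(hidden, w_jk, theta_k)  # equations (3)-(4)
```

Because of the sigmoid, every node output lies strictly between 0 and 1, matching the limited output amplitude described above.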

The error of the above process is calculated using (5). This error calculation compares the desired output with the output which was produced. The error is propagated backward through the layers of the network, from output to hidden to input, with the weights


Figure 2: Combination of ANN with Multilayer Perceptron (MLP) model [4]. The figure shows three input nodes (Input Layer i), a hidden layer (Hidden Layer j), and two output nodes (Output Layer k).

being modified to reduce the error during this propagation.

$$error = \frac{1}{2} \left(Output_{desired} - Output_{actual}\right)^2 \quad (5)$$

Based on the calculated error above, the Backpropagation algorithm is applied in reverse, from the output node (k) to the hidden node (j), as shown in

$$w_{kj}(t + 1) = w_{kj}(t) + \Delta w_{kj}(t + 1) \quad (6)$$

$$\Delta w_{kj}(t + 1) = n \delta_k O_j + \alpha \Delta w_{kj}(t) \quad (7)$$

with

$$\delta_k = O_k (1 - O_k)(t_k - O_k) \quad (8)$$

with

$w_{kj}(t)$ being the weight from node k to node j at time t
$\Delta w_{kj}$ being the weight adjustment
$n$ being the learning rate
$\alpha$ being the momentum rate
$\delta_k$ being the error at node k
$O_j$ being the actual network output at node j
$O_k$ being the actual network output at node k
$t_k$ being the target output value at node k

The backward calculations from the hidden to the input layers (j and i, respectively) are shown in the following equations:

$$w_{ji}(t + 1) = w_{ji}(t) + \Delta w_{ji}(t + 1) \quad (9)$$

$$\Delta w_{ji}(t + 1) = n \delta_j O_i + \alpha \Delta w_{ji}(t) \quad (10)$$

with

$$\delta_j = O_j (1 - O_j) \sum_k \delta_k w_{kj}, \qquad O_k = f(net_k) = \frac{1}{1 + e^{-net_k}}, \qquad net_k = \sum_j w_{jk} O_j + \theta_k \quad (11)$$

where

$w_{ji}(t)$ is the weight from node j to node i at time t
$\Delta w_{ji}$ is the weight adjustment
$n$ is the learning rate
$\alpha$ is the momentum rate
$\delta_j$ is the error at node j
$\delta_k$ is the error at node k
$O_i$ is the network output at node i
$O_j$ is the network output at node j
$O_k$ is the network output at node k
$w_{kj}$ is the weight connected between nodes j and k
$\theta_k$ is the bias of node k

This process is repeated until convergence is achieved.
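Equations (5)-(11) can be combined into a single illustrative training step. The sketch below assumes a tiny 2-2-1 network with made-up weights, a learning rate of 0.5, a momentum rate of 0.9, and zero previous weight adjustments; none of these numbers come from the paper:

```python
import math

def sigmoid(net):
    # f(net) = 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

# Illustrative 2-2-1 network; every number here is a made-up example
x = [0.05, 0.10]                     # O_i: outputs of the input layer
w_ij = [[0.15, 0.25], [0.20, 0.30]]  # w_ij[i][j]: weight from input i to hidden j
theta_j = [0.35, 0.35]               # hidden-layer biases
w_jk = [[0.40], [0.50]]              # w_jk[j][k]: weight from hidden j to output k
theta_k = [0.60]                     # output-layer bias
t = [0.01]                           # target output t_k
n, alpha = 0.5, 0.9                  # learning rate and momentum rate

# Forward pass, equations (1)-(4)
O_j = [sigmoid(sum(w_ij[i][j] * x[i] for i in range(2)) + theta_j[j])
       for j in range(2)]
O_k = [sigmoid(sum(w_jk[j][k] * O_j[j] for j in range(2)) + theta_k[k])
       for k in range(1)]

# Error at the output, equation (5)
error = 0.5 * (t[0] - O_k[0]) ** 2

# Output-layer error term, equation (8)
delta_k = [O_k[k] * (1 - O_k[k]) * (t[k] - O_k[k]) for k in range(1)]

# Hidden-layer error term, the first line of (11)
delta_j = [O_j[j] * (1 - O_j[j]) * sum(delta_k[k] * w_jk[j][k] for k in range(1))
           for j in range(2)]

# Weight updates, equations (6)-(7) and (9)-(10), with zero previous adjustments
for j in range(2):
    for k in range(1):
        w_jk[j][k] += n * delta_k[k] * O_j[j] + alpha * 0.0
for i in range(2):
    for j in range(2):
        w_ij[i][j] += n * delta_j[j] * x[i] + alpha * 0.0
```

With these example values the output overshoots the small target, so the error term at the output node is negative and the hidden-to-output weights shrink, illustrating how the propagated error drives weight reduction.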

2.2. Swarm Intelligence. In 2009, researchers Konstantinos and Michael [11] called Swarm Intelligence (SI) a branch of Artificial Intelligence (AI) which studies the collective behavior of complex, self-organized, and decentralized systems with a social structure. In a simplified understanding, this technique got its inspiration from nature, similar to the way ant colonies and bird flocks operate, translated into computationally intelligent systems. These systems are


Figure 3: Basic PSO procedure [5]. For each particle, the loop is as follows: randomly initialize the population positions and velocities; evaluate the fitness of the particle; if the particle's fitness exceeds its personal best fitness, update the personal best (pbest); if it exceeds the global best fitness, update the global best (gbest); update the particle's velocity and position; then move to the next particle.
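The procedure in Figure 3 can be sketched as a simple loop. The example below is a minimal, assumed setup that maximizes the fitness −(x − 3)², so the swarm should gather near x = 3; the swarm size, iteration count, and coefficient values are illustrative choices, not values from the paper:

```python
import random

def fitness(x):
    # Illustrative objective: the best position is x = 3
    return -(x - 3.0) ** 2

random.seed(1)                      # reproducible run
n_particles, iterations = 20, 100
c1, c2, dt = 1.0, 0.5, 1.0

# Randomly initialize population positions and velocities
pos = [random.uniform(-10.0, 10.0) for _ in range(n_particles)]
vel = [random.uniform(-1.0, 1.0) for _ in range(n_particles)]
pbest = list(pos)                   # personal best position per particle
gbest = max(pos, key=fitness)       # global best position

for _ in range(iterations):
    for i in range(n_particles):
        # Update personal and global bests when the fitness improves
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i]
        if fitness(pos[i]) > fitness(gbest):
            gbest = pos[i]
        # Velocity update, then position update
        vel[i] = (vel[i]
                  + c1 * random.random() * (gbest - pos[i])
                  + c2 * random.random() * (pbest[i] - pos[i]))
        pos[i] += vel[i] * dt
```

After the run, gbest holds the best position visited by any particle so far, which should lie near the optimum x = 3.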

composed of agents interacting with their environment, whereby such interaction leads to a global solution behavior, similar to bird flocks where each bird contributes to reaching the destination, or global solution. In Swarm Intelligence, these interacting agents mean that all the neurons or particles of the group work as a team on finding the best place to be. Swarm Intelligence (SI) spreads through specialized optimization techniques, the two major ones being Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), which we review in the next section. Ant Colony Optimization (ACO) comes in handy for solving computational problems by finding good paths through graphs; its inspiration comes from ants finding their way from the colony to their food. The second major technique, and one of the three methods reviewed in this article, is Particle Swarm Optimization (PSO). Owing to its more accurate performance, Particle Swarm Optimization is known to have replaced the Genetic Algorithm in many applications.

Particle Swarm Optimization is a technique where particles move as a group to find better results. Gerhard et al. [5] mentioned that when these particles move as a group, a vector called the velocity vector is used to update the positions of those particles. Figure 3 shows the basic flow of the Particle Swarm Optimization procedure.

Particle Swarm Optimization achieves its success rate through different kinds of modifications. In 2011, a group of researchers [12] concluded that modifications to the Particle Swarm Optimization algorithm fall into three categories: extension of the search space, adjustment of the parameters, and hybridization with another technique.

2.3. Particle Swarm Optimization (PSO). As mentioned previously, this method is one of the core and most interesting parts of Neural Networks.


Particle Swarm Optimization started from the analysis of real-life samples and social models [10, 13]. As Particle Swarm Optimization belongs to the family of Swarm Intelligence, swarms of neurons work together to find the best solution. Its concept is adapted from natural behaviors, such as bird flocking and fish schooling, and this makes Particle Swarm Optimization a population-based algorithm.

The physical position of the particle is not important; therefore, each swarm particle (neuron) is initialized with a random position and velocity, and the potential solutions are flown through the hyperspace. Ferguson [14] mentions that, similar to our brain neurons which learn from our past experiences, so do the particles in Particle Swarm Optimization.

All particles working toward the global best solution keep a record of the best position each has achieved so far [15]. From these values, the best personal value of a particle is called pbest (personal best), and the best value obtained by the overall particle group is called gbest (global best). At each iteration, every particle accelerates toward its own personal best position (pbest) as well as the overall global best position (gbest). In 2000, Van den Bergh et al. [16] stated that these two recorded velocities (toward pbest and gbest) are weighted randomly to produce a new velocity for the particle, which affects its next positions.

Particle Swarm Optimization (PSO) includes a set of two equations: the equation of movement (12) and the velocity update equation (13). The movement of particles using their specific velocity vector is shown in (12), while the velocity update equation, which adjusts the velocity vector under the two competing forces (gbest and pbest), is shown in (13).

$$x_n = x_n + v_n \cdot \Delta t \quad (12)$$

$$v_n = v_n + c_1 \cdot rand() \cdot (g_{best,n} - x_n) + c_2 \cdot rand() \cdot (p_{best,n} - x_n) \quad (13)$$

Equation (12) is applied to all the elements of the position vector x and velocity vector v. The $\Delta t$ parameter defines the discrete time interval over which the swarm moves, and it is usually set to 1.0. This movement results in the new position of the swarm. In (13), each dimensional element of a best vector has the current position subtracted from it; the result is multiplied by a random number between 0 and 1 and by an acceleration constant ($c_1$ or $c_2$), and the sum is added to the velocity. This process is performed for the whole population. The random numbers provide an amount of randomness that helps the swarm along its path through the solution space. The acceleration constants $c_1$ and $c_2$ control which of the two forces, the global or the personal best, is given more influence over the path [4].

In Table 1 we demonstrate the movement of Particle A toward the global best solution in a two-dimensional (2D) space. The example sets $c_1 = 1.0$ and $c_2 = 0.5$; since $c_1$ has a higher value than $c_2$, more emphasis is given to finding the global best solution.

Table 1: The value and position of Particle A.

Particle A      Gbest   Pbest
Value = 5         5       15
Value = 10       13       13

We assume that the velocity values of Particle A were calculated in the previous iteration as $P_v = (0, 1)$. First of all, the velocity vector has to be updated for the current iteration using (13).

The first position of Particle A, with the value of 5, is

$$v_x = v_x + c_1 \cdot rand() \cdot (g_{best,x} - Position_x) + c_2 \cdot rand() \cdot (p_{best,x} - Position_x)$$
$$v_x = 0 + 1.0 \cdot 0.35 \cdot (5 - 5) + 0.5 \cdot 0.1 \cdot (15 - 5)$$
$$v_x = 0 + 0 + 0.05 \cdot (10)$$
$$v_x = 0.5 \quad (14)$$

The second position of Particle A, with the value of 10, is

$$v_y = v_y + c_1 \cdot rand() \cdot (g_{best,y} - Position_y) + c_2 \cdot rand() \cdot (p_{best,y} - Position_y)$$
$$v_y = 1 + 1.0 \cdot 0.2 \cdot (13 - 10) + 0.5 \cdot 0.45 \cdot (13 - 10)$$
$$v_y = 1 + 0.2 \cdot (3) + 0.225 \cdot (3)$$
$$v_y = 1 + 0.6 + 0.675$$
$$v_y = 2.275 \quad (15)$$

As we can see, the velocity of Particle A is now $P_v = (0.5, 2.275)$; the new velocities are then applied to the particle positions using (12) as follows:

$$Position_x = Position_x + v_x \cdot \Delta t$$
$$Position_x = 5 + (0.5) \cdot 1.0$$
$$Position_x = 5.5$$

$$Position_y = Position_y + v_y \cdot \Delta t$$
$$Position_y = 10 + (2.275) \cdot 1.0$$
$$Position_y = 12.275 \quad (16)$$

From the above calculations, we obtain the new updated values for Particle A, which can be seen in Table 2.

In [17] it was stated that, for a neural network implementation, the position of each particle represents a set of weights at each iteration; thus, the dimensionality of those particles equals the number of weights associated with the network.
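This dimensionality can be computed directly from the layer sizes. The sketch below assumes an illustrative 3-2-1 network and counts one bias per non-input node alongside the connection weights; the layer sizes are an assumed example, not values from [17]:

```python
def particle_dimension(layer_sizes):
    # Each particle encodes all connection weights (a*b per layer pair)
    # plus one bias per non-input node (b per layer pair).
    dims = 0
    for a, b in zip(layer_sizes, layer_sizes[1:]):
        dims += a * b + b
    return dims

# A 3-input, 2-hidden, 1-output network: 3*2 + 2 weights and biases into
# the hidden layer, and 2*1 + 1 into the output layer.
dim = particle_dimension([3, 2, 1])
```

Each PSO particle for this network would therefore be a flat vector of 11 numbers, one per trainable weight or bias.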


Table 2: The updated value of Particle A.

Particle A           Gbest   Pbest
New Value = 5.5        5       15
New Value = 12.275    13       13
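The worked example can be checked with a few lines of Python, substituting the sampled rand() values (0.35, 0.1, 0.2, and 0.45) used in the derivation:

```python
c1, c2, dt = 1.0, 0.5, 1.0

# x-dimension: position 5, gbest 5, pbest 15, previous velocity 0
vx = 0 + c1 * 0.35 * (5 - 5) + c2 * 0.1 * (15 - 5)     # velocity update (13)
px = 5 + vx * dt                                        # movement (12)

# y-dimension: position 10, gbest 13, pbest 13, previous velocity 1
vy = 1 + c1 * 0.2 * (13 - 10) + c2 * 0.45 * (13 - 10)
py = 10 + vy * dt
```

Evaluating these expressions gives the velocity $P_v = (0.5, 2.275)$ and the new position $(5.5, 12.275)$.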

To minimize the learning error of the results, and to produce better-quality ones, we need the Mean Squared Error.

As such, Mean Squared Error (MSE) is the learning error that each particle, moving in the weight space, seeks to minimize. A position change corresponds to a weight update intended to reduce the error of the current epoch. An epoch is the state in which the particles update their positions by calculating their new velocities, which are used to move toward new positions. These new positions are a set of new weights that are used to obtain the new error. In Particle Swarm Optimization (PSO) these weights are adopted even without a new improvement. After the process is repeated for all the particles, the global best position is chosen as the one with the lowest error. Once a satisfactory error is achieved, this process ends; after training ends, those weights are used to calculate the classification error on the training patterns, and the same set of weights is also used for network testing using the testing patterns.

2.4. Classification Problems. Classification ranks as one of the most important applications of Neural Networks. As it is, solving classification problems requires a lot of work in assigning all the available static patterns to classes. These patterns include various parameters: in Information Security it could be intrusion detection, for banking it could be bankruptcy prediction, and for the medical field it could be diagnosis. Patterns are represented by vectors, which influence the decision of assigning a pattern to a class. An example of the usage of vectors is in the medical field, where the vector components come from the checkup data, and the Neural Network determines where the pattern is going to be assigned, based on the information available for it.

The input data for Neural Networks must be in a normalized format, which can be done in several ways. By normalization, it is meant that all data values should be in the range between zero and one. Although classification has shown significant progress in various areas of Neural Networks, in 2017 [18] it was mentioned that there are still a number of issues not completely solved: the first classification issue is the amount of data that Neural Networks deal with, and the second is the prediction that often causes errors due to learning error problems.
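Min-max scaling is one common way to perform the zero-to-one normalization mentioned above. This is a minimal sketch; the sample values are illustrative:

```python
def min_max_normalize(values):
    # Rescale every number into the range [0, 1]
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # avoid division by zero for constant data
    return [(v - lo) / (hi - lo) for v in values]

raw = [12.0, 45.0, 33.0, 7.0, 45.0]
normalized = min_max_normalize(raw)
# the smallest raw value maps to 0.0 and the largest to 1.0
```

Other schemes (such as z-score standardization) are also used in practice; min-max scaling is simply the one that directly produces the zero-to-one range described here.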

The usage of Neural Network classification has shown success and has been applied to various real-world classification tasks, from the business industry to Information Technology and science. Specific examples of the usage of Neural Networks are many; as mentioned in [19], the usefulness of Neural Networks is seen in improving the broader family of Machine Learning overall.

3. Conclusion

Learning in Neural Networks has made it possible for scientists and researchers to create various applications for multiple industries and create ease in everyday life. The methods reviewed in this paper, namely, Artificial Neural Networks, Backpropagation, and Particle Swarm Optimization, have a significant role in Neural Networks in understanding real-world problems and tasks, such as image processing, speech or character recognition, and intrusion detection [20, 21].

Contributions of these methods are significant; however, each category still lacks definitive success, as these fields are still progressing and improving on closing the gap which exists between theory and practice. The novelty of the classification concept in Artificial Neural Networks, in particular Particle Swarm Optimization and Backpropagation, means that this field of research is openly, actively, and heavily researched every year. This paper has provided a brief review of the concepts, methods, and performances of Particle Swarm Optimization, Backpropagation, and Neural Networks.

In conclusion, the study of Artificial Neural Networks and their methods is promising and ongoing, especially the improvement studies that are needed on the learning error side and in classification accuracy. The improvement of these two categories is a big part of Neural Network solutions for newly emerging technologies such as Deep Learning in Medicine, Cloud Computing, and Information Security [22–25].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This review paper and research have been supported by the Faculty of Information Sciences and Engineering at Management and Science University, Malaysia.

References

[1] Z. Chiyuan, B. Samy, H. Moritz, R. Benjamin, and V. Oriol, “Understanding deep learning requires rethinking generalization,” International Conference on Learning Representations, 2017.

[2] B. M. Sonali and W. Priyanka, “Research Paper on Basic of Artificial Neural Network,” International Journal on Recent and Innovation Trends in Computing and Communication, vol. 2, no. 1, 2014.

[3] A. Rasit, Renewable and Sustainable Energy Reviews, pp. 534–562, Celal Bayar University, Turkey, 2015.

[4] T. Cho, R. Conners, and P. Araman, “Fast backpropagation learning using steep activation functions and automatic weight reinitialization,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 1587–1592, Charlottesville, VA, USA.


[5] V. Gerhard and S. Jaroslaw, “Particle Swarm Optimization,” AIAA Journal, vol. 41, no. 8, 2003.

[6] A. Jaswante, K. Asif, and G. Bhupesh, IJCSNS, vol. 14, no. 11, 2014.

[7] A. Hamed and N. Haza, Particle Swarm Optimization for Neural Network Learning Enhancement, Universiti Teknologi Malaysia, 2006.

[8] C. Blum and D. Merkle, Swarm Intelligence: Introduction andApplications, Natural Computing Series, Springer, 2008.

[9] A. P. Markopoulos, S. Georgiopoulos, and D. E. Manolakos, “On the use of back propagation and radial basis function neural networks in surface roughness prediction,” Journal of Industrial Engineering International, vol. 12, no. 3, pp. 389–400, 2016.

[10] F. Y. H. Ahmed, S. M. Shamsuddin, and S. Z. M. Hashim,“Improved SpikeProp for using particle swarm optimization,”Mathematical Problems in Engineering, vol. 2013, Article ID257085, 13 pages, 2013.

[11] P. Konstantinos and V. Michael, Particle Swarm Optimization and Intelligence: Advances and Applications, 2009.

[12] D. P. Rini, S. M. Shamsuddin, and S. S. Yuhaniz, “Particle swarmoptimization: technique, system and challenges,” InternationalJournal of Computer Applications, vol. 14, no. 1, pp. 19–26, 2011.

[13] Y. Shi, “Particle Swarm Optimization,” IEEE Neural NetworkSociety, p. 13, 2004.

[14] D. Ferguson, Particle Swarm, University of Victoria, Canada,2004.

[15] J. Kennedy and R. C. Eberhart, Swarm Intelligence, MorganKaufmann, 2001.

[16] F. A. Van den Bergh and A. Engelbrecht, “Cooperative Learningin Neural Networks using Particle Swarm Optimizers,” SouthAfrican Computer Journal, vol. 26, pp. 84–90, 2000.

[17] B. Al-Kazemi and C. K. Mohan, “Training feedforward neural networks using multi-phase particle swarm optimization,” in Proceedings of the 9th International Conference on Neural Information Processing, ICONIP 2002, New York, 2002.

[18] G. Zhang, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30, no. 4, 2017.

[19] M. K. Khan, W. G. Al-Khatib, and M. Moinuddin, “Automatic classification of speech and music using neural networks,” in Proceedings of the 2nd ACM International Workshop, p. 94, Washington, DC, USA, November 2004.

[20] I. R. Ali, H. Kolivand, and M. H. Alkawaz, “Lip syncing method for realistic expressive 3D face model,” Multimedia Tools and Applications, vol. 77, no. 5, pp. 5323–5366, 2018.

[21] Y. H. A. Falah, “A Survey and Analysis of Image Encryption Methods,” International Journal of Applied Engineering Research, Research India Publications, 2017.

[22] H. Igor, J. Bohuslava, J. Martin, and N. Martin, “Application of neural networks in computer security,” Procedia Engineering, vol. 69, pp. 1209–1215, 2013.

[23] R. A. Abbas, “Improving time series forecast errors by using Recurrent Neural Networks,” Australian Journal of Forensic Sciences, 2018.

[24] M. H. Alkawaz, Automated Kinship Verification and Identification through Human Facial Images, vol. 76 of Multimedia Tools and Applications, Kluwer Academic Publishers, 2017.

[25] M. H. Alkawaz, Detection of Copy-Move Image Forgery Based on Discrete Cosine Transform, Neural Computing and Applications, Springer-Verlag, 2018.
