An Extended Evolving Spiking Neural Network Model for Spatio-Temporal Pattern Classification

Haza Nuzly Abdull Hamed, Nikola Kasabov, Fellow, IEEE, Siti Mariyam Shamsuddin, Harya Widiputra and Kshitij Dhoble

Abstract—This paper proposes a new model of an Evolving Spiking Neural Network (ESNN) for spatio-temporal data (STD) classification problems. The proposed ESNN model incorporates an additional layer that captures both the spatial and the temporal components of the STD and transforms them into high-dimensional spiking patterns. These patterns are learned and classified in the evolving classification layer of the ESNN. A fast time-to-first-spike learning algorithm is used, which makes the new model suitable for learning from STD streams in an adaptive and incremental manner. The proposed method is evaluated on a benchmark sign language video dataset, which is spatio-temporal in nature. The results show that the proposed method is able to capture important spatio-temporal information from the STD stream, resulting in significantly higher classification accuracy than a traditional time-delay MLP neural network model. Future directions for the development of ESNN models for STD are discussed.

I. INTRODUCTION

Most data that we deal with when developing machine learning, data mining and decision support systems is spatio-temporal in nature, for example: cloud formation for rain forecasting; traffic movement for route identification based on the Global Positioning System (GPS); motion and human gesture recognition; brain signals and their recognition; and many more. Generally, spatio-temporal data (STD) relates to objects whose position, shape and size change over time [1]. STD is produced by a time-evolving spatial object represented by a set of triplets $(o\_id, s_i, t_i)$, where $o\_id$ is the object identification number and $s_i$ is the location of object $o\_id$ at time $t_i$ [2]. When solving spatio-temporal problems (STP), spatial information is needed to represent the position of the object in space, together with temporal information indicating when the event occurred.

Manuscript received February 10, 2011.

Haza Nuzly Abdull Hamed is a PhD student at the Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand. He is also a member of the academic staff at Universiti Teknologi Malaysia. (Phone: +64 9 921 9547; fax: +64 9 921 9543; e-mail: [email protected], [email protected]).

Nikola Kasabov is the director and founder of the Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand (e-mail: [email protected]).

Siti Mariyam Shamsuddin is the head of the Soft Computing Research Group at Universiti Teknologi Malaysia (e-mail: [email protected]).

Harya Widiputra is with the Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand (e-mail: [email protected]).

Kshitij Dhoble is with the Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand (e-mail: [email protected]).

A typical example of an STP is electroencephalography (EEG) signal processing.

There are a number of algorithms and approaches for dealing with STD and STP, such as Time Delay Neural Networks (TDNN) [3] and recurrent Elman networks [4]. However, more biologically inspired methods have been introduced and have received considerable attention with regard to solving STP.

In biological neurons, whether a neuron spikes or not at any given time may depend not only on the input signals, but also on parameters such as gene and protein expression [5], the physical properties of connections [6], the probabilities of spikes being received at the synapses, and the neurotransmitters emitted or ion channels opened. Many of these properties have been mathematically modeled and used to study biological neurons [7], but they have not been fully utilized to create more efficient artificial neural networks (ANN) for solving complex STP.

Spiking Neural Network (SNN) models are made up of artificial spiking neurons that use trains of spikes to represent and process pulse-coded information. In biological neural networks, neurons are connected at synapses, and electrical signals (spikes) pass information from one neuron to another. SNNs are biologically plausible and offer means for representing time, frequency, phase and other features of the information being processed. The main reason to study SNN for STP is the ability of SNN to internally represent and process both spatial and temporal information adequately. Important aspects of SNN for STP are information encoding [8], [9] and SNN structural implementation [10], [11], [12].

Evolving SNN (ESNN) [13], [14], [15], [16] constitute a class of SNN that evolve their structure through incremental, fast, one-pass learning utilizing time-to-first-spike or similar algorithms [17], [18], [19]. During the learning phase, for every new input pattern the ESNN creates a new output neuron for the corresponding class and calculates its connection weights. It then compares these connection weights with those of the other neurons of the same output class. If the difference is lower than a threshold, the new neuron is merged with the closest neuron belonging to the same output class; otherwise the new neuron is retained. ESNNs have been applied to various classification tasks, such as person authentication based on audiovisual information [14], face recognition [15] and taste recognition [20], and have achieved better results than previously published methods.

In this paper we extend the structure of the ESNN to deal with STP. A new layer is added to capture the whole STD pattern that needs to be further classified.

This layer uses the standard ESNN population rank-order encoding method to transform the input STD pattern into a spiking spatio-temporal input pattern. The output of this layer is then passed to the evolving classification layer for the classification task.

II. THE PROPOSED EXTENDED ESNN (EESNN) MODEL FOR STD CLASSIFICATION

STP normally involve a sequence of events within a given time duration. We propose here an extended ESNN model (eESNN), shown in Figure 1 (only one output class neuron is shown for simplicity). The first layer acts as a 'reservoir' that captures the whole STD pattern to be classified. The second layer is a standard ESNN used for the classification task.

Every spatial variable value at every discrete time unit is encoded in the first layer using the population rank-order encoding scheme. The whole STD input pattern is thus mapped into a multidimensional spiking neuron structure, where the spiking times of the neurons reflect the values of the input variables at every time unit at which the STD is measured. The obtained multidimensional input pattern of spikes is then fed to the second layer.

Fig. 1. A simplified structure of the eESNN.

For the encoding of the input patterns into patterns of spikes, we use Population Rank Order Encoding [17], [18], [19], where a single input value is encoded as spikes emitted by $M$ neurons. Each neuron generates a spike at a certain time. The firing time of a neuron is calculated from the intersection of a Gaussian function, which is the receptive field of the neuron, with the continuous-valued input. The Gaussian centre is calculated using (1) and the width using (2), for the variable interval $[I_{\min}, I_{\max}]$ and with the parameter $\beta$ controlling the width.

$\mu_i = I_{\min} + \dfrac{2i-3}{2}\cdot\dfrac{I_{\max}-I_{\min}}{M-2}$   (1)

$\sigma = \dfrac{1}{\beta}\cdot\dfrac{I_{\max}-I_{\min}}{M-2}, \quad \text{where } 1 \le \beta \le 2$   (2)

An illustration of this encoding process is shown in Figure 2.

Fig. 2. Population Rank Order Encoding method. If the input value is 0.8 and five pre-synaptic neurons are used, the intersections with the five Gaussian receptive fields provide the firing times for this input value [15].
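As an illustration only (not the authors' implementation), the following Python sketch computes the Gaussian receptive-field centres and width according to (1) and (2) and derives firing times from the response of each Gaussian to the input value. The linear mapping from receptive-field response to firing time, and the function names, are assumptions made for this sketch.

```python
import numpy as np

def gaussian_receptive_fields(M, I_min, I_max, beta=1.5):
    """Centres (eq. 1) and common width (eq. 2) for M receptive fields."""
    i = np.arange(1, M + 1)
    mu = I_min + (2 * i - 3) / 2.0 * (I_max - I_min) / (M - 2)
    sigma = (1.0 / beta) * (I_max - I_min) / (M - 2)
    return mu, sigma

def encode_value(x, M=5, I_min=-0.5, I_max=1.5, beta=1.5, t_max=1.0):
    """Population rank order encoding of a single input value.

    The excitation of each neuron is its Gaussian response at x (height 1.0);
    a strong response is mapped to an early spike (assumed linear mapping).
    """
    mu, sigma = gaussian_receptive_fields(M, I_min, I_max, beta)
    response = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    firing_time = t_max * (1.0 - response)   # strong response -> early spike
    return firing_time

if __name__ == "__main__":
    # Five firing times for the input value 0.8, as in Fig. 2.
    print(encode_value(0.8, M=5))
```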

The second layer of the eESNN is a classification layer that consists of as many groups of evolving neurons as there are output classes. The eESNN principles allow new classes to be introduced during continuous learning and classification. When a new input spiking pattern is passed from the first to the second layer during learning, a new spiking neuron is generated in the class group to which the pattern belongs (supervised learning). When a neuron is created, its connection weights are assigned based on the arrival times of the spikes from the previous layer. As we use the time-to-first-spike algorithm, the weights depend on a parameter called the Modulation Factor ($Mod$), calculated using (3):

$w_j = Mod^{\,order(j)}$   (3)

When a new input data pattern (a test pattern) is entered for classification (not for learning), the Post-Synaptic Potential (PSP) of each classification neuron is calculated using (4). The neuron spikes if its PSP exceeds a threshold value, which is calculated using (5) and depends on the value of a parameter called the Proportion Factor ($C$):

$PSP_{\max(i)} = \sum_j w_{j,i} \cdot Mod^{\,order(j)}$   (4)

$\chi_i = PSP_{\max(i)} \cdot C$   (5)

The new test pattern belongs to the output class of the first output neuron that fires. During the learning phase, if a newly created classification neuron resembles a neuron already present in the same class group, it is merged with the most similar one. The merging process sets the connection weights and the threshold to their average values; otherwise the new neuron is added to the group as a newly trained neuron. The major advantage of this learning algorithm is the ability of the trained eESNN to learn new input patterns incrementally, without retraining the whole eESNN on previously learned patterns. Equations (6) and (7) show the computation of the averaged weight and threshold values, where $N$ is the number of input patterns already merged into the existing class neuron that is most similar to the newly created one:


$w = \dfrac{w_{new} + w \cdot N}{N + 1}$   (6)

$\chi = \dfrac{\chi_{new} + \chi \cdot N}{N + 1}$   (7)
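To make the one-pass learning step concrete, here is a minimal, illustrative Python sketch of equations (3)–(7) (not the authors' code). The class and function names (OutputNeuron, train_pattern) and the Euclidean-distance similarity test are assumptions; the paper specifies only a Similarity factor Sim, not the exact similarity measure.

```python
import numpy as np

class OutputNeuron:
    def __init__(self, weights, threshold):
        self.w = weights          # connection weights from the encoding layer
        self.theta = threshold    # firing threshold (chi)
        self.n_merged = 1         # N: patterns merged into this neuron

def train_pattern(firing_times, label, repository, mod=0.99, c=0.65, sim=0.05):
    """One-pass learning for a single encoded input pattern.

    firing_times : 1-D array of spike times from the encoding layer.
    repository   : dict mapping class label -> list of OutputNeuron.
    """
    order = np.argsort(np.argsort(firing_times))   # rank 0 = earliest spike
    w_new = mod ** order                           # eq. (3)
    psp_max = np.sum(w_new * mod ** order)         # eq. (4)
    theta_new = c * psp_max                        # eq. (5)

    group = repository.setdefault(label, [])
    if group:
        # Assumed similarity measure: normalized distance between weight vectors.
        dists = [np.linalg.norm(n.w - w_new) / len(w_new) for n in group]
        j = int(np.argmin(dists))
        if dists[j] < sim:                         # merge with the closest neuron
            n = group[j]
            n.w = (w_new + n.w * n.n_merged) / (n.n_merged + 1)              # eq. (6)
            n.theta = (theta_new + n.theta * n.n_merged) / (n.n_merged + 1)  # eq. (7)
            n.n_merged += 1
            return
    group.append(OutputNeuron(w_new, theta_new))   # otherwise keep the new neuron
```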

In addition to the time-to-first-spike algorithm, we also use the Spike Time Dependent Plasticity (STDP) learning rule. STDP is a form of Hebbian learning in which spike timing is used to adjust the connection weights of a neuron: if a pre-synaptic spike arrives at the synapse before the post-synaptic action potential, the synapse is potentiated, i.e. the connection weight is increased; if the timing is reversed, the synapse is depressed, i.e. the connection weight is decreased [21].
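The paper does not give the STDP parameters; as a hedged illustration only, a generic exponential-window STDP update of the kind described above could look as follows (all constants are assumed, not taken from the paper).

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Generic exponential-window STDP update (illustrative parameters).

    t_pre, t_post : spike times of the pre- and post-synaptic neurons.
    Pre-before-post potentiates the synapse; post-before-pre depresses it [21].
    """
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)    # potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)   # depression
    return float(np.clip(w + dw, w_min, w_max))

# Example: a pre-before-post pairing slightly strengthens the synapse.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```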

In test (classification) mode, an output class neuron accumulates spikes with every new input pattern presented to the eESNN. When its PSP exceeds the threshold value, the neuron fires and then becomes inactive (PSP = 0), as do all other neurons, so that the eESNN is ready for a new classification (8):

$PSP_i = \begin{cases} 0 & \text{if neuron } i \text{ has fired} \\ \sum_j w_{j,i} \cdot Mod_i^{\,order(j)} & \text{otherwise} \end{cases}$   (8)

where $w_{j,i}$ is the weight of the connection from pre-synaptic neuron $j$ to neuron $i$; $Mod_i$ is the Modulation Factor parameter of neuron $i$, with values in the interval [0, 1]; and $order(j)$ is the rank of the spike emitted by neuron $j$ [15]. The value of $order(j)$ is 0 if neuron $j$ spikes first among all pre-synaptic neurons, and it increases according to the firing time (the time of spike arrival at neuron $i$). Figure 3 shows the mode of operation of a spiking neuron, which emits an output spike when the total spiking input (the PSP) is larger than the spiking threshold.

Fig. 3. A spiking neuron emits an output spike when the total spiking input (PSP) is larger than the spiking threshold (here equal to 1).
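For completeness, the following sketch illustrates the test-mode behaviour of equation (8): spikes are propagated in order of their firing times, each output neuron accumulates its PSP, and the first neuron whose PSP exceeds its threshold determines the class. It reuses the OutputNeuron repository from the learning sketch above; the function name and iteration details are assumptions, not the authors' implementation.

```python
import numpy as np

def classify_pattern(firing_times, repository, mod=0.99):
    """Classify an encoded test pattern using the trained neuron repository.

    repository : dict mapping class label -> list of OutputNeuron
                 (see the learning sketch above).
    Returns the label of the first output neuron to fire, or None.
    """
    spike_order = np.argsort(firing_times)            # earliest spike first
    psp = {label: np.zeros(len(group)) for label, group in repository.items()}

    for rank, j in enumerate(spike_order):            # rank = order(j) in eq. (8)
        for label, group in repository.items():
            for k, neuron in enumerate(group):
                psp[label][k] += neuron.w[j] * mod ** rank
                if psp[label][k] > neuron.theta:      # first neuron to fire wins
                    return label
    return None                                       # no neuron reached its threshold
```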

III. EXPERIMENTS

A. Datasets

To illustrate the proposed eESNN on an STP, the LIBRAS movement benchmark dataset [22] is used. LIBRAS is the acronym for LIngua BRAsileira de Sinais, the official Brazilian sign language. There are 15 hand movements (signs) in the dataset to be learned and classified. The movements were obtained from recorded videos of four different people performing the movements in two sessions. In total, 360 videos were recorded, each capturing one movement that lasts for about 7 seconds. From each video, 45 frames were selected according to a uniform distribution. In each frame, the centroid pixels of the hand are used to determine the movement. All samples are organized into 10 sub-datasets: Datasets 1 to 7 contain all samples, while Datasets 8, 9 and 10 contain selected samples. Details of the dataset can be found in [23].

In our experiment we used Dataset 10, which contains videos of three different people. The objective of using this dataset is to train and test an eESNN model for user-independent movement classification and recognition, where the hand movements of one or several persons are used to train a system that can then identify the same movements performed by other people. This dataset consists of 270 videos, with 18 samples in each movement class.

B. Setup

The three parameters, namely the Modulation factor ($Mod$), the Proportion factor ($C$) and the Similarity factor ($Sim$), were set to 0.99, 0.65 and 0.05 respectively, following the considerations derived in [24], [25]. The Gaussian height was set to 1.0 and the parameter $\beta$, which controls the Gaussian width, was set to 1.5. The dataset consists of 270 samples in 15 classes. Two spatial variables, x and y, represent the coordinates of the hand centroid in each frame, and all 45 frames form one input STD pattern related to a class (a sign or movement). The input data was normalized to the interval [-0.5, 1.5]. Each input STD pattern of 90 spatio-temporal variables is encoded, using Population Rank Order Coding, into the spike times of 90x20 spatio-temporal spiking neurons, since every variable is encoded by 20 spiking neurons with Gaussian receptive fields. The eESNN was trained and tested in a 9-fold cross-validation mode on the 270 STD patterns of the 15 movements performed by 3 persons.
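As a hedged sketch of this setup (not the authors' code), the snippet below collects the stated parameter values and flattens the 45 centroid frames of one video into a 90-dimensional pattern rescaled to [-0.5, 1.5]. The per-pattern min-max normalization is an assumption; the paper only states the target input range.

```python
import numpy as np

# Parameter settings reported in Section III.B.
PARAMS = dict(mod=0.99, c=0.65, sim=0.05, beta=1.5,
              receptive_fields=20, i_min=-0.5, i_max=1.5)

def frames_to_pattern(frames):
    """Flatten 45 (x, y) centroid frames into one 90-dimensional STD pattern
    rescaled to the encoding interval [-0.5, 1.5].

    frames : array of shape (45, 2) with the hand centroid per frame.
    """
    x = np.asarray(frames, dtype=float).reshape(-1)     # 90 values
    lo, hi = x.min(), x.max()
    x = (x - lo) / (hi - lo)                            # assumed min-max scaling to [0, 1]
    return PARAMS["i_min"] + x * (PARAMS["i_max"] - PARAMS["i_min"])

# Each of the 90 variables is then encoded by 20 Gaussian receptive fields,
# giving 90 x 20 = 1800 encoding neurons per pattern (see the encoding sketch).
```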

The test results were compared with a traditional Time Delay MLP, trained and tested in the same way. The optimal number of hidden nodes in the MLP was found to be 45, with a learning rate of 0.3 and 500 training iterations.
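As a rough, hedged approximation of such a baseline (not the exact time-delay MLP used in the paper), a standard MLP on the flattened 90-value patterns could be configured with scikit-learn using the reported settings; the SGD solver and all other defaults are assumptions.

```python
from sklearn.neural_network import MLPClassifier

# Approximate MLP baseline: one hidden layer of 45 nodes, learning rate 0.3,
# 500 iterations, fed with the flattened 90-value patterns (45 frames x 2 coords).
mlp = MLPClassifier(hidden_layer_sizes=(45,),
                    solver="sgd",
                    learning_rate_init=0.3,
                    max_iter=500)

# mlp.fit(X_train, y_train)    # X_train: (n_samples, 90), y_train: class labels
# mlp.score(X_test, y_test)
```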

C. Results

The average training accuracy of the eESNN in this experiment is 99.35 +/- 0.30%, while the average test accuracy is 88.15 +/- 6.26%. The test accuracy of the MLP under the same training and testing conditions is 82.96 +/- 5.39%. The results also show that the proposed eESNN can be applied not only in user-dependent mode, but in user-independent mode as well. This is due to the extended first layer of the eESNN, which captures, in the first phase of the system's operation, the complex STD patterns that are then better classified in the second layer.

Additionally, the advantages of the proposed eESNN when compared to the MLP and other traditional NNs are: (1) fast, one-pass, incremental training, rather than multiple batch iterations (e.g. hundreds or thousands); (2) evolvability, i.e. the eESNN model can be incrementally trained on new data and new classes without the need to be retrained on the old data [15].

IV. CONCLUSION AND FUTURE WORK

This paper has proposed an extended structure of the ESNN (eESNN), along with an STD encoding method that produces the spike representation of the input patterns required before the classification task. Its capability is demonstrated on a benchmark sign language STD dataset, and the results show that the proposed eESNN performs better than a traditional Time Delay MLP.

Future exploration will include: (1) optimization of the eESNN network parameters and the internal neuronal connections using already developed algorithms, such as the quantum-inspired genetic algorithm [24] and particle swarm optimisation [25], [26]; (2) using a probabilistic spiking neural network model [26], [27] rather than a simple integrate-and-fire neuron model; (3) applying the proposed model to other datasets and comparing the results with other evolving connectionist methods [16]; (4) using reservoir computing principles as illustrated in [28].

REFERENCES

[1] Y. Theodoridis and M. A. Nascimento, "Generating spatio-temporal datasets on the WWW," ACM SIGMOD Record, vol. 29(3), pp. 39-43, Sept. 2000.

[2] Y. Theodoridis, T. Sellis, A. Papadopoulos and Y. Manolopoulos, “Specifications for efficient indexing in spatio-temporal databases,” In Proc. 10th Int’l Conference on Scientific and Statistical Database Management, 1998, pp. 123-132.

[3] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang, "Phoneme recognition using time-delay neural networks," IEEE Trans. Acoust. Speech Signal Process., vol. 37(3), pp. 328-339, Mar. 1989.

[4] J.L. Elman, “Finding structure in time,” Cognitive Sci., vol. 14, pp. 179-211, 1990.

[5] H. Kojima and S. Katsumata, “Analysis of synaptic transmission and its plasticity by glutamate receptor channel kinetics models and 2-photon laser photolysis,” In Proc. International Conference on Neural Information Processing, Auckland, 2008, pp. 88-94.

[6] J. R. Huguenard, “Reliability of axonal propagation: the spike doesn't stop here,” Proceedings of the National Academy of Sciences of the United States of America, Vol. 97, No. 17, pp. 9349-9350, Aug. 2000.

[7] W. Gerstner and W. Kistler, Spiking Neuron Models: An Introduction. New York, NY, USA: Cambridge University Press, 2002.

[8] D. Z. Jin, “Spiking neural network for recognizing spatio-temporal sequences of spikes,” Phys. Rev. E. Stat. Nonlin. Soft Matter Phys., 69(2 Pt. 1), Feb. 2004.

[9] W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck and D. Warland, “Reading a neural code,” Science, vol. 252, pp. 1854-1857, 1991.

[10] B. W. Stiles and J. Ghosh, “Habituation based neural networks for spatio-temporal classification,” Neurocomputing, Vol. 15(3-4), pp. 273-307, June 1997.

[11] C. Mannes, “A neural network model of spatio-temporal pattern recognition, recall, and timing,” In Proc. of International Joint Conference on Neural Networks, Baltimore, 1992, pp. 109 – 114.

[12] B. Jeon and D. A. Landgrebe, “Classification with spatio-temporal interpixel class dependency contexts,” IEEE Trans Geosci Remote Sens, vol. 30(4), pp. 663 – 672, Jul. 1992.

[13] S. G. Wysoski, L. Benuskova and N. Kasabov, “On-line learning with structural adaptation in a network of spiking neurons for visual pattern recognition,” In Proc. International Conference on Artificial Neural Networks, Athens, 2006, pp. 61-70.

[14] S. G. Wysoski, L. Benuskova and N. Kasabov, “Text-independent speaker authentication with spiking neural networks,” In Proc. International Conference on Artificial Neural Networks, Porto, Portugal, 2007, pp. 758-767.

[15] S. G. Wysoski, L. Benuskova and N. Kasabov, “Fast and Adaptive Network of Spiking Neurons for Multi-view Visual Pattern Recognition,” Neurocomputing, 71(13-15), pp. 2563-75, Aug. 2008.

[16] N. Kasabov, Evolving Connectionist Systems: The Knowledge Engineering Approach. Secaucus, NJ, USA: Springer-Verlag New York, 2007.

[17] S.M. Bohte, J.N. Kok and H.L. Poutre, “Error-backpropagation in temporally encoded networks of spiking neurons,” Neurocomputing, vol. 48 (1-4), pp. 17-37, Oct. 2002.

[18] R. Séguier and D. Mercier, “Audio-Visual Speech Recognition One Pass Learning with Spiking Neurons,” In Proc. International Conference on Artificial Neural Networks, Bucharest, Romania, 2002, pp. 1207-1212.

[19] S. J. Thorpe, “How Can the Human Visual System Process a Natural Scene in Under 150ms? Experiments and Neural Network Models,” In Proc. European Symposium on Artificial Neural Networks, Verleysen, M. (ed.), Bruges, Belgium, 1997.

[20] S. Soltic, S. G. Wysoski and N. Kasabov, “Evolving spiking neural networks for taste recognition,” In Proc. IEEE World Congress on Computational Intelligence, Hong Kong, 2008, pp. 2091 - 2097.

[21] H. Markram, J. Lubke, M. Frotscher and B. Sakmann, “Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs,” Science, Vol. 275, No. 5297, pp. 213-215, Jan. 1997.

[22] University of California, Irvine, Machine Learning Repository. Available: http://archive.ics.uci.edu/ml/

[23] D. B. Dias, R. C. B. Madeo, T. Rocha, H. H. Biscaro and S. M. Peres, “Hand movement recognition for brazilian sign language: a study using distance-based neural networks,” In Proc. of International Joint Conference on Neural Networks, Atlanta, 2009, pp. 697-704.

[24] S. Schliebs, M. Defoin-Platel, N. Kasabov, “Analyzing the dynamics of the simultaneous feature and parameter optimization of an evolving spiking neural network,” In Proc. of International Joint Conference on Neural Networks, Barcelona, 2010, pp 933-940.

[25] H. N. A. Hamed, N. Kasabov, Z. Michlovský and S. M. Shamsuddin, “String Pattern Recognition Using Evolving Spiking Neural Networks and Quantum Inspired Particle Swarm Optimization,” In Proc. International Conference on Neural Information Processing, Bangkok, 2009, pp. 611-619.

[26] H. N. A. Hamed, N. Kasabov and S. M. Shamsuddin, "Probabilistic evolving spiking neural network optimization using dynamic quantum inspired particle swarm optimization," In Proc. International Conference on Neural Information Processing, Sydney, 2010.

[27] N. Kasabov, “To spike or not to spike: a probabilistic spiking neural model,” Neural Netw., vol. 23(1), pp. 16-19, Jan. 2010.

[28] S. Schliebs, N. Nuntalid and N. Kasabov, “Towards spatio-temporal pattern recognition using evolving spiking neural networks,” In Proc. International Conference on Neural Information Processing, Sydney, 2010, pp. 163-170.
