


IOS Press

On-chip Face Recognition System Design with Memristive Hierarchical Temporal Memory

Timur Ibrayev, Ulan Myrzakhan, Olga Krestinskaya, Aidana Irmanova, Alex Pappachen James *

School of Engineering, Nazarbayev University

Abstract. Hierarchical Temporal Memory is a new machine learning algorithm intended to mimic the working principle of the neocortex, the part of the human brain responsible for learning, classification, and making predictions. Although many works illustrate its effectiveness as a software algorithm, hardware design for HTM remains an open research problem. Hence, this work proposes an architecture for the HTM Spatial Pooler and Temporal Memory with a learning mechanism that creates a single image for each class based on the important and unimportant features of all images in the training set. In turn, the reduction in the number of templates within the database reduces the memory requirements and increases the processing speed. Moreover, face recognition analysis indicates that, for a large number of training images, the proposed design provides higher accuracy (83.5%) compared to the Spatial-Pooler-only design presented in previous works.

Keywords: HTM, Temporal Memory, Spatial Pooler, memristor, face recognition

1. Introduction

The Hierarchical Temporal Memory (HTM) is a cognitive learning algorithm developed by Numenta Inc. [1]. HTM was designed based on various principles of neuroscience and, therefore, is said to emulate the working principle of the neocortex, a part of the human brain responsible for learning, classification, and making predictions [2].

*Email: [email protected]

After the successful software realization of this learning algorithm, several works, such as [3,4,5,6], have been conducted to realise its hardware implementation, including [7], one of the latest works, which proposes an analog circuit design of the HTM Spatial Pooler based on a combination of memristive crossbar circuits and CMOS technology. One of the main advantages of that work is that the input data are processed in the analog domain, which can offer higher processing speed, primarily due to the absence of the analog-to-digital and digital-to-analog converters used in digital systems. Thus, inspired by the design described in [7] and by the idea of creating a new analog add-on system that moves processing from the digital to the analog domain at the sensory level, this work proposes a system design of Hierarchical Temporal Memory for face recognition applications.

In particular, this work proposes a system-level design for HTM that combines a memristive-crossbar-based Spatial Pooler [7] with a conceptual analog Temporal Memory whose learning mechanism is inspired by the Hebbian rule. Further, we present a face recognition algorithm for the proposed system and provide its performance results and analysis.

2. Background

2.1. Hierarchical Temporal Memory

The main core of the proposed system is HTM, which consists of two parts: the Spatial Pooler and the Temporal Memory [2]. The Spatial Pooler (SP) is responsible for generating sparse distributed representations of the input data and can be used on its own for feature extraction and pattern recognition applications, whereas the Temporal Memory (TM) is responsible for learning input patterns and making predictions based on the temporal changes in the given input stream [8].

1570-1263/$17.00 © IOS Press and the authors. All rights reserved

arXiv:1709.08184v1 [cs.ET] 24 Sep 2017

HTM was initially developed as a software algorithm [8], and several research works, such as [9,10,11], were presented to illustrate and verify the capabilities of the algorithmic implementation of HTM for performing classification, learning patterns, detecting abnormalities, and making predictions.

Since HTM is a new machine learning algorithm, few attempts have been made to implement it at the hardware level. For instance, [3] presented an HTM hardware design for a digital application-specific integrated circuit (ASIC) architecture, [4] described an FPGA implementation of digital HTM, [5] proposed computing blocks for HTM using memristive crossbar arrays and spin-neurons, which process data in both the digital and analog domains, and the latest works [6,7] proposed circuits for the HTM Spatial Pooler based on a memristive crossbar architecture.

2.2. Hebbian theory

Introduced by Donald Hebb in 1949, Hebbian theory (also known as the Hebbian rule or Hebbian postulate) serves as one of the many learning mechanisms used in the design of artificial neural networks. In particular, it describes the basic idea behind synaptic plasticity and states that synaptic efficacy increases when a presynaptic cell takes part in repeated or persistent stimulation and firing of a neighboring postsynaptic cell [12]. Following the Hebbian rule, the activation of postsynaptic units in neural networks depends on the weighted activations of the presynaptic units, as given by (1):

y_i = \sum_j w_{ij} x_j \qquad (1)

where y_i represents the output of neuron i, x_j stands for the jth input, and w_ij is the weight of the connection from neuron x_j to y_i [12].
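As a small worked instance of Eq. (1), the postsynaptic output y_i is simply the weighted sum of the presynaptic inputs x_j. The sketch below is illustrative; the function name neuron_output and the particular weights are not from the paper.

```python
# Eq. (1): y_i = sum_j w_ij * x_j for a single postsynaptic neuron i.
# Illustrative pure-Python sketch; names and values are assumptions.

def neuron_output(weights, inputs):
    """Weighted sum of presynaptic inputs: one neuron's output y_i."""
    return sum(w * x for w, x in zip(weights, inputs))

# Two active inputs out of three; only their weights contribute.
print(neuron_output([0.5, 0.25, 1.0], [1.0, 1.0, 0.0]))   # -> 0.75
```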

In other words, the Hebbian theory states that the synaptic weight between two neurons increases when both neurons simultaneously activate or deactivate, and decreases when they activate or deactivate separately. The change in synaptic weight ∆w_ij of the connection is given by (2), which is known as the Hebbian learning rule [12]:

\Delta w_{ij} = \eta x_j y_i \qquad (2)

where η is the learning rate. In artificial neural networks, this learning mechanism is used to alter the weights between neurons.
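The update of Eq. (2) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation; the function name hebbian_step and the learning-rate value are assumptions.

```python
# One Hebbian update step, Eq. (2): delta_w[i][j] = eta * x[j] * y[i].
# Illustrative sketch with assumed names (hebbian_step, eta).

def hebbian_step(w, x, y, eta=0.1):
    """Return weights after one Hebbian step: w_ij += eta * x_j * y_i."""
    return [
        [w[i][j] + eta * x[j] * y[i] for j in range(len(x))]
        for i in range(len(y))
    ]

w = [[0.0, 0.0], [0.0, 0.0]]
x = [1.0, 0.0]   # presynaptic activations
y = [1.0, 1.0]   # postsynaptic activations
w = hebbian_step(w, x, y)
# Only connections whose pre- and postsynaptic units are both active change:
# w == [[0.1, 0.0], [0.1, 0.0]]
```

Note that connections from the inactive presynaptic unit (x_j = 0) are left unchanged, which is exactly the co-activation behavior the rule describes.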

2.3. Memristor models

Currently, several memristor models are available that not only incorporate the characteristics of real, existing devices but also provide the possibility of switching from one set of device parameters to another, so that the suitability of these devices can be assessed. For example, [13] proposed a model with nanosecond switching time, which is crucial in designing real-time systems. Recent models proposed in [14] also allow the simulation of large-scale memristor networks, since the parallelism and scalability of the system play an important role in processing huge amounts of data.

3. Proposed HTM System

3.1. System Design

A high-level block diagram of the proposed system is illustrated in Fig. 1. The input data controller reads the input image, places it in data storage, and sends it to the HTM Spatial Pooler by partially retrieving it from the data storage. The data controller is essential, as partial sending is required to ensure that the selected size of the HTM SP is capable of processing an entire image. In turn, the HTM SP is responsible for feature extraction from the input image and thus provides a binary output. If the input image is a training image, the output data controller directs its extracted features to the HTM TM, which creates a single image (template) for each class containing the common features of all the training images belonging to that particular class. During the testing phase, the resulting images stored within the TM are used by the pattern matcher to calculate the similarity score between the input testing image and each of the trained classes.


Fig. 1. High-level block diagram of the proposed system illustrating the operating principle of the HTM Spatial Pooler

3.2. System Algorithm

In this work, we also propose Algorithm 1, which can be used to analyze the effectiveness of the proposed system. The algorithm shows the interconnections between the main processing stages of the entire system: pre-processing, HTM SP, HTM TM, and pattern matching. The pre-processing stage, shown in lines 2-3 of Algorithm 1, is necessary to convert the input image into a system-compatible format. In this stage, we convert the input image to grayscale and enhance its quality using standard deviation filtering. This is achieved either by external means or by the input data controller.

The HTM SP stage models the feature extraction process performed by the Spatial Pooler block, illustrated in lines 5-21 of Algorithm 1. This is done by initially generating a random weights matrix w, so that each weight in w has an analog value between 0 and 1. The weights matrix has dimensions of N × N, which also defines the dimensions of each column within the Spatial Pooler. Lines 6-10 define the connectivity of each synapse: if its weight w is higher than the threshold γ, the synapse is connected and represented by 1, while if w < γ, it is disconnected and represented by 0. Synapse connectivity is used to determine the overlap value column.overlap() for each column m, computed as the sum of the products of the synaptic weight matrix w and the N × N bits of the image within region m. This overlap value represents the importance of the bits connected to each particular column.
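The synapse-binarization and overlap steps (lines 5-13 of Algorithm 1) can be sketched as follows. This is a hedged pure-Python illustration; the names connect and overlap, the seed, and the 2 × 2 sizes are assumptions, not from the paper.

```python
# Sketch of SP lines 5-13: binarize a random synapse matrix against gamma,
# then compute a column's overlap with its N x N image block.
import random

def connect(w, gamma):
    # A synapse is connected (1) if its weight exceeds gamma, else 0.
    return [[1 if v > gamma else 0 for v in row] for row in w]

def overlap(conn, block):
    # Overlap = sum of image bits seen through connected synapses.
    return sum(c * b
               for crow, brow in zip(conn, block)
               for c, b in zip(crow, brow))

N, gamma = 2, 0.5
random.seed(0)                     # analog weights in [0, 1), as in line 5
w = [[random.random() for _ in range(N)] for _ in range(N)]
conn = connect(w, gamma)
block = [[1, 0], [1, 1]]           # one N x N region of the input image
print(overlap(conn, block))        # this column's overlap value
```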

Lines 15-20 define the inhibition rule implemented in the proposed system. According to this rule, inhibition is performed in a block-by-block manner, with each block having dimensions of M × M columns, and is based on the overlap values achieved by the columns c lying within that inhibition block, inhibition.block(). The individual overlap values of the columns c are compared with a threshold value θ, determined as the maximum overlap detected within that particular inhibition block. The column or columns with an overlap value greater than or equal to θ are considered important and represented by a logical high (1) value; otherwise, columns are considered unimportant and represented by a logical low (0) value. As a result, the binary feature-extracted output image SP.image after HTM SP processing is formed by concatenating all the inhibition blocks.
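The block-wise inhibition of lines 14-21 reduces to a winner-take-all over each block's overlap values. A minimal sketch, with the assumed name inhibit and a 1-D block of four column overlaps:

```python
# Sketch of lines 15-20: within one inhibition block, theta is the maximum
# overlap, and only columns reaching it output 1 (ties with the max survive).

def inhibit(overlaps):
    """Winner-take-all over one inhibition block of column overlaps."""
    theta = max(overlaps)
    return [1 if v >= theta else 0 for v in overlaps]

print(inhibit([3, 7, 7, 1]))   # -> [0, 1, 1, 0]
```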

The HTM TM stage defines a learning mechanism that is activated during the training stage, when the binary feature-extracted image SP.image from the HTM SP is moved by the output data controller block to the HTM TM block. Lines 26-32 specify that the proposed TM creates a class.map for each of the k image classes, which reflects the temporal variations of spatial features. This is done by making the TM update class.map every time a new feature-extracted image is fetched to the TM block. Depending on whether a bit within SP.image has the value 1 or 0, the corresponding memory cell within class.map is increased or decreased by ∆, respectively. At the end of the training phase, according to lines 33-37, the class.map of each class is binarized.
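The class-map training and binarization of lines 23-37 can be sketched for one class as below. The function name train_class_map, the parameter defaults (delta = 0.01 as in Section 5, sigma = 0), and the tiny 3-pixel images are illustrative assumptions.

```python
# Sketch of TM training (lines 26-37) for one class: each class-map cell is
# incremented by delta for a 1 bit and decremented by delta for a 0 bit
# across all training images, then binarized against sigma.

def train_class_map(sp_images, delta=0.01, sigma=0.0):
    cmap = [0.0] * len(sp_images[0])
    for img in sp_images:                 # one SP binary output per image
        for i, bit in enumerate(img):
            cmap[i] += delta if bit == 1 else -delta
    return [1 if v >= sigma else 0 for v in cmap]   # lines 33-37

# Three binary feature images of one class: only bit 0 is consistently set.
print(train_class_map([[1, 0, 1], [1, 1, 0], [1, 0, 0]]))   # -> [1, 0, 0]
```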

The recognition and image classification stage defines the pattern matching process that is active during the testing phase, when the binary feature-extracted image SP.image is moved by the output data controller block to the Pattern Matcher block. According to lines 39-44, the similarity score between the extracted features SP.image of the testing image and each of the class maps stored within the HTM TM is defined as the sum of the XOR logic high (1) outputs. Since the XOR operation produces a logic high (1) output wherever the two compared bits differ, the class of the tested image is determined as the class.map that produces the smallest score() value.
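The matching step of lines 39-44 amounts to a bitwise XOR and a minimum search. A hedged sketch with assumed names (score, class_maps) and toy 4-bit maps:

```python
# Sketch of lines 39-44: XOR the test image with each class map;
# the class with the fewest mismatched (1) bits wins.

def score(class_map, sp_image):
    return sum(a ^ b for a, b in zip(class_map, sp_image))

class_maps = {"class_A": [1, 0, 1, 1], "class_B": [0, 1, 0, 0]}
test_img = [1, 0, 0, 1]
best = min(class_maps, key=lambda name: score(class_maps[name], test_img))
print(best)   # prints "class_A": one mismatched bit vs. three for class_B
```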


4. Circuits for the Proposed System

4.1. Spatial Pooler

In the proposed system, HTM SP processing can be implemented with the memristive-crossbar-based SP of [7]. Memristor devices, due to their ability to memorize and to mimic neurons, find various applications [15,16,17]. Figure 2 illustrates a single memristive crossbar processing unit. Memristive-CMOS circuits allow a precise realization of the feature extraction process described by algorithm lines 3-10, with additional advantages in terms of parallel synaptic processing and compact storage of synaptic weights. Figure 3 illustrates a modified version of the WTA circuit designed in [18], which, when combined with the SP circuits presented in [7], allows implementation of the inhibition processing described by algorithm lines 11-17.

Fig. 2. Single memristive crossbar processing unit as presented in [6]

Fig. 3. Winner-Take-All Circuit taken from [6]

4.2. Temporal Memory

Instead of saving all the feature-extracted images, as was done in the previous SP design [7], the proposed work incorporates a conceptual analog TM into the entire system. It is intended to reduce memory requirements and processing time by creating a single training image, called a class map in this work. Such a class map incorporates the features of all training images belonging to a single class and allows pattern matching to be performed by comparing the testing image with only a single image for each of the memorized (trained) classes.

This is realized by making the TM learn by focusing on both important and unimportant features and by reflecting how features change with time.

Focus is achieved by placing the Temporal Memory circuitry after the Spatial Pooler, so that the inputs are not the original training images but the feature-extracted images provided at the output of the Spatial Pooler. Since each of these outputs is binary in nature, this placement allows the Temporal Memory to differentiate important and unimportant features.

Reflection is achieved by changing the weights of the Temporal Memory cells according to the importance of the corresponding input bit. This is realized by implementing the Hebbian learning mechanism given by (2), which, in general, is used to determine the weight change between presynaptic and postsynaptic units. In the proposed design of the Temporal Memory, the binary pixels of the feature-extracted images are used as presynaptic units, whereas the postsynaptic units are represented by a matrix of ones of the same size as the image. Realizing the postsynaptic units as a matrix of ones ensures that every pixel of the feature-extracted image is treated as equally important. Moreover, such an arrangement allows alteration of the weights of the Temporal Memory cells with respect to their importance.

In particular, if the input bit of the feature-extracted image is 1, meaning that it represents an important feature, then the weight of the corresponding Temporal Memory cell increases by a positive weight update (+∆) value. Conversely, if the input bit of the feature-extracted image is 0, meaning that it represents an unimportant feature, then the weight of the corresponding Temporal Memory cell decreases by a negative weight update (−∆) value. This algorithm of differentiating important and unimportant features using the Hebbian learning mechanism is illustrated in Figure 4.


Fig. 4. Example of determining the required weight updates (positive or negative) using the Hebbian learning mechanism at each particular pixel within the Temporal Memory

As a result, instead of having multiple binary images with extracted features, the TM creates a single analog image that incorporates the important and unimportant features of, and has the same dimensions as, each of the input images belonging to a single class. Figure 5 illustrates the formation of the class map for the first class by fetching feature-extracted binary images belonging to the first class to the TM. All of the TM cells, initially having the same weight, eventually become distinguishable at the end of the training sequence.

Fig. 5. The main principle of single class map formation using the Temporal Memory and feature-extracted images obtained from the Spatial Pooler

However, such a learning mechanism requires the TM to be multi-valued. This ensures that the weights are not restricted to the values 1 and 0, as the feature-extracted images are, but can be changed according to the weight update value ±∆.

Hence, Fig. 6 illustrates the design of the TM required to memorize a single class map, utilizing multi-valued memory cells. The total number of required memory cells corresponds to the product of the number of memorized classes and the number of bits of a single class map. For example, for 13 class maps, each having dimensions of 120 bits × 160 bits, the required number of memory cells is 13 × 19200. The multi-valued memory cells, in turn, can be realized using the n-bit memristor-based memory described in [19].

Fig. 6. The design of the Temporal Memory consisting of 120 × 160 = 19200 memory cells, used for storing a single class map trained by fetching input images with dimensions of 120 bits × 160 bits.

4.3. Pattern Matcher

After the class maps have been formed within the TM during the training phase, testing is performed by comparing each input testing image with all the class maps learned by the system. In the proposed system, this is achieved by fetching a feature-extracted input testing image and all the class maps into the memristive pattern matcher (Fig. 7(a)), which is realized with the memristive XOR gates illustrated in Fig. 7(b) and described in [7].


Fig. 7. (a) A 2-bit XOR pattern matcher, (b) XOR configuration of memristive memory threshold logic

Figure 8 illustrates the principle used to determine the similarity score between any feature-extracted image and two arbitrary class maps. The class maps are thresholded at the mean value of 0.5, so that XOR logic can be applied. For two input images, the memristive XOR gates produce an output image with logical 0 in the regions where both images represent important or unimportant features (i.e., both have 1 or both have 0 in that region) and logical 1 in the regions where the two images represent different features. Hence, the class of an input testing image is determined by the class map that yields the least number of white bits (or the greatest number of black bits) at the corresponding XOR output.

The described pattern matching process emphasizes the advantage of the proposed design in terms of processing speed: the time required for the system to determine the similarity score is reduced, since the number of templates compared with the input testing image decreases to a single image (the class map) per class.

Fig. 8. The face recognition process performed by calculating the similarity score between the feature-extracted testing image and two class maps using memristive XOR gates

5. Results and Discussion

The performance metrics were based on the face recognition accuracy on the AR database [20]. The database consists of 100 classes, each having 26 images taken in two sessions. These images were divided into two separate sets. The first set consisted of the 13 images of each class taken during the first session and was used to train the Temporal Memory, whereas the second set consisted of the 13 images of each class taken during the second session and was used for testing.

Based on the above-mentioned setup, the first analysis aimed to determine the optimal delta (±∆) required to update the weights of the Temporal Memory for different numbers of training images. Figure 9 illustrates the recognition accuracy achieved for different combinations of ±∆ and training set size. As can be seen, for numbers of training images between 1 and 13, the maximum recognition accuracy is achieved when the ±∆ value is lower than or equal to ±0.1. Moreover, as the size of the training set increases, the maximum achieved accuracy increases for a ±∆ value of ±0.01 and decreases for large values of ∆.

This result indicates that maximum recognition accuracy with a large number of training images is achieved when the weight update value is small. Another point to take into account is that the value of a memristor is directly proportional to the duration of the applied constant voltage. Together, these two observations imply that increasing the number of training images decreases the duration of the applied voltage required to update the weights, which means consecutive input images can be processed at higher speed.

Fig. 9. Optimal delta estimation based on recognition accuracy results

After the optimal weight update value was determined to be ±0.01, an analysis was performed to compare the effectiveness of the architecture based on the Spatial Pooler only [7] and the proposed architecture combining the Spatial Pooler and Temporal Memory on the face recognition task. To establish common settings, for the Spatial-Pooler-only architecture the training images belonging to a single class were initially averaged, and the averaged images were then processed by the Spatial Pooler to provide feature-extracted training templates. For the proposed architecture, the training images were processed by the Spatial Pooler, and the extracted feature outputs were used to create class maps within the Temporal Memory. Table 1 shows the recognition accuracy for the condition in which both architectures had one template or class map for each of the 100 classes (100 templates or class maps in total), with which all 13 testing images of each class (1300 testing images in total) were compared. Comparing these results with those reported in [7], it can be seen that, as the number of training images increases, the architecture incorporating the Temporal Memory provides higher recognition accuracy with lower memory requirements and faster processing at the pattern matching stage.

Table 1
Recognition accuracy of classifying the test images in each category of the AR database for the two architectures, using a single template or class map per class.

Architecture                       | Emotions | Light conditions | Occlusions (glasses) | Occlusions (scarf) | Total
-----------------------------------|----------|------------------|----------------------|--------------------|-------
Spatial Pooler [7]                 | 77.50%   | 91.00%           | 84.33%               | 53.33%             | 76.54%
Spatial Pooler and Temporal Memory | 84.25%   | 96.33%           | 85.67%               | 67.67%             | 83.48%

6. Conclusion

In this paper, we proposed a system realization of the HTM Spatial Pooler and Temporal Memory using memristor-CMOS circuits. The main difference from the existing memristive HTM system is that the proposed system incorporates a Temporal Memory with learning capability. The learning process of the system involves collecting the important features from the training data of a given class and creating its class map: a single image based on the extracted features. Hence, the main advantage of the system is the lower memory occupation of HTM, which provides higher processing speed. The results of the performance analysis indicate that, for a large set of training images, the proposed recognition system provides higher accuracy compared to the results presented in the previous work.

References

[1] D. George and J. Hawkins, "A hierarchical Bayesian model of invariant pattern recognition in the visual cortex," in Neural Networks, 2005. IJCNN '05. Proceedings. 2005 IEEE International Joint Conference on, vol. 3, July 2005, pp. 1812-1817.
[2] J. Hawkins and D. George, "Hierarchical temporal memory: Concepts, theory and terminology," Technical report, Numenta, 2006.
[3] W. J. Melis, S. Chizuwa, and M. Kameyama, "Evaluation of the hierarchical temporal memory as soft computing platform and its VLSI architecture," in 39th International Symposium on Multiple-Valued Logic. IEEE, 2009, pp. 233-238.
[4] A. M. Zyarah, "Design and analysis of a reconfigurable hierarchical temporal memory architecture," Master's thesis, 2015.
[5] D. Fan, M. Sharad, A. Sengupta, and K. Roy, "Hierarchical temporal memory based on spin-neurons and resistive memory for energy-efficient brain-inspired computing," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 9, pp. 1907-1919, Sept 2016.
[6] T. Ibrayev, A. P. James, C. Merkel, and D. Kudithipudi, "A design of HTM spatial pooler for face recognition using memristor-CMOS hybrid circuits," in 2016 International Symposium on Circuits and Systems (ISCAS). IEEE, 2016.
[7] A. P. James, I. Fedorova, T. Ibrayev, and D. Kudithipudi, "HTM spatial pooler with memristor crossbar circuits for sparse biometric recognition," IEEE Transactions on Biomedical Circuits and Systems, vol. PP, no. 99, pp. 1-12, 2017.
[8] J. Hawkins, S. Ahmad, and D. Dubinsky, "Hierarchical temporal memory including HTM cortical learning algorithms," Technical report, Numenta, Inc., Palo Alto, http://www.numenta.com/htmoverview/education/HTM_CorticalLearningAlgorithms.pdf, 2010.
[9] N. Farahmand, M. H. Dezfoulian, H. GhiasiRad, A. Mokhtari, and A. Nouri, "Online temporal pattern learning," in 2009 International Joint Conference on Neural Networks, June 2009, pp. 797-802.
[10] I. Ramli and C. Ortega-Sanchez, "Pattern recognition using hierarchical concatenation," in Computer, Control, Informatics and its Applications (IC3INA), 2015 International Conference on, Oct 2015, pp. 109-113.
[11] A. B. Csapo, P. Baranyi, and D. Tikk, "Object categorization using VFA-generated nodemaps and hierarchical temporal memories," in Computational Cybernetics, 2007. ICCC 2007. IEEE International Conference on, Oct 2007, pp. 257-262.
[12] C. Fyfe, Hebbian Learning and Negative Feedback Networks. Springer Science & Business Media, 2007.
[13] C. Yakopcic, T. M. Taha, G. Subramanyam, and R. E. Pino, "Memristor SPICE model and crossbar simulation based on devices with nanosecond switching time," in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1-7.
[14] D. Biolek, Z. Kolka, V. Biolkova, and Z. Biolek, "Memristor models for SPICE simulation of extremely large memristive networks," in 2016 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2016, pp. 389-392.
[15] A. K. Maan, D. A. Jayadevi, and A. P. James, "A survey of memristive threshold logic circuits," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 8, pp. 1734-1746, Aug 2017.
[16] A. K. Maan, A. P. James, and S. Dimitrijev, "Memristor pattern recogniser: isolated speech word recognition," Electronics Letters, vol. 51, no. 17, pp. 1370-1372, 2015.
[17] O. Krestinskaya, T. Ibrayev, and A. James, "Hierarchical temporal memory features with memristor logic circuits for pattern recognition," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. PP, 2017.
[18] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, "Winner-take-all networks of O(n) complexity," California Institute of Technology, Pasadena, Dept. of Computer Science, Tech. Rep., 1988.
[19] H. Mostafa and Y. Ismail, "Process variation aware design of multi-valued spintronic memristor-based memory arrays," IEEE Transactions on Semiconductor Manufacturing, vol. 29, no. 2, pp. 145-152, 2016.
[20] A. Martinez and R. Benavente, "The AR face database," Rapport technique, vol. 24, 1998.


Algorithm 1: The proposed HTM face recognition system

1:  ▷ Pre-processing of input images
2:  Convert RGB image into grayscale
3:  Filter grayscale image with standard deviation filter
4:  ▷ HTM SP
5:  Create random matrix w of N × N size
6:  for all n in w do
7:      if w(n) > γ then
8:          w(n) = 1
9:      else
10:         w(n) = 0
11: Divide image into blocks of N × N pixels
12: for m image blocks do
13:     column.overlap(m) = sum(w × image.block(m))
14: Divide image into inhibition blocks of M × M columns
15: for c columns within inhibition.block(M × M) do
16:     θ = max(column.overlap(c.columns))
17:     if column.overlap(c) ≥ θ then
18:         inhibition.block(c) = 1
19:     else
20:         inhibition.block(c) = 0
21: SP.image = inhibition.block
22: ▷ HTM TM
23: if training stage then
24:     Define number of train classes k
25:     Define number of train images z per class
26:     for all n in k train classes do
27:         for all j in z train images do
28:             for all i pixels in SP.image do
29:                 if SP.image(j, i) = 1 then
30:                     class.map(n, i) = class.map(n, i) + ∆
31:                 else if SP.image(j, i) = 0 then
32:                     class.map(n, i) = class.map(n, i) − ∆
33:     for all i pixels in class.map(n) do
34:         if class.map(n, i) ≥ σ then
35:             class.map(n, i) = 1
36:         else if class.map(n, i) < σ then
37:             class.map(n, i) = 0
38: ▷ Recognition stage and image classification
39: else if testing stage then
40:     for all n in k image classes do
41:         for i image pixels do
42:             xor.res(n) = XOR(class.map(n, i), SP.image(i))
43:         score(n) = sum(xor.res(n))
44: determined.class = class(min(score))
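To tie the stages of Algorithm 1 together, the whole pipeline can be sketched end-to-end in pure Python for a tiny 1-D "image". This is a hedged toy model under stated assumptions: the paper operates on 2-D N × N blocks and M × M inhibition blocks, whereas here the image is a flat bit list split into length-2 blocks, and all names (sp, train, classify) are illustrative.

```python
# Compact end-to-end sketch of Algorithm 1: SP feature extraction
# (synapse threshold + overlap + inhibition), TM class-map training,
# and XOR-based classification, on a toy 1-D 4-bit "image".

def sp(image, w, gamma=0.5, block=2):
    conn = [1 if v > gamma else 0 for v in w]                   # lines 6-10
    overlaps = [sum(c * b for c, b in zip(conn, image[i:i + block]))
                for i in range(0, len(image), block)]           # lines 11-13
    theta = max(overlaps)                                       # lines 15-16
    return [1 if o >= theta else 0 for o in overlaps]           # lines 17-21

def train(train_sets, w, delta=0.01, sigma=0.0):
    maps = {}
    for label, images in train_sets.items():                    # lines 26-32
        cmap = [0.0] * len(sp(images[0], w))
        for img in images:
            for i, bit in enumerate(sp(img, w)):
                cmap[i] += delta if bit else -delta
        maps[label] = [1 if v >= sigma else 0 for v in cmap]    # lines 33-37
    return maps

def classify(image, w, maps):
    feat = sp(image, w)
    scores = {k: sum(a ^ b for a, b in zip(m, feat))
              for k, m in maps.items()}                         # lines 40-43
    return min(scores, key=scores.get)                          # line 44

w = [0.9, 0.3]                      # one fixed 1x2 synapse "matrix"
train_sets = {"A": [[1, 1, 0, 0], [1, 0, 0, 0]],
              "B": [[0, 0, 1, 1], [0, 0, 1, 0]]}
maps = train(train_sets, w)
print(classify([1, 0, 0, 1], w, maps))   # prints "A"
```

Note how the test image [1, 0, 0, 1] matches the class map of "A" in its first block, so its XOR score against "A" is lower than against "B".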