
2007 4th International Conference on Electrical and Electronics Engineering (ICEEE 2007)


A method for designing CNN templates
José Antonio Medina Hernández 1,2, Felipe Gómez Castañeda 1, José Antonio Moreno Cadenas 1

1 Department of Electrical Engineering, CINVESTAV, Av. Instituto Politécnico Nacional 2508, 07300, Mexico D.F., Mexico, Phone (0155) 50613800 Ext. 6262
2 Aguascalientes Autonomous University, Mathematics and Physics Department, Aguascalientes, Mexico

E-mail: joseamh6906@yahoo.com.mx

Abstract- Cellular neural networks (CNN) are very useful for image processing tasks [1], [2]. One problem with CNN networks is the lack of a programming method to realize a processing task. The cloning template entirely specifies the programming of a CNN net. There are many cloning templates for several tasks [3]-[4], obtained by mathematical analysis or heuristically [4]-[9]. However, for some specific tasks it is very difficult to find the correct templates. In this paper a procedure for finding cloning templates for image processing tasks is described, using a gradient method. A set of CNN templates obtained with the proposed procedure is shown.
Keywords: CNN network, training set, learning rules, stochastic gradient descent method.

I. INTRODUCTION

A Cellular Neural Network (CNN) is a set of locally interconnected processing units arranged in a rectangular two-dimensional grid [1]. This structure is very useful for image processing [2]. Two important aspects of these networks are their real-time processing capacity and their suitability for VLSI implementation. A CNN has space-invariant local interconnections. For radius r = 1 neighborhoods, two 3x3 arrays A and B, named the self-feedback and control templates, and a bias parameter z are needed [1]. A set of 19 parameters forms the cloning template. Designing a cloning template is often a hard task. In this paper we propose a method for obtaining templates for a given task. In Section II the learning rules are described. Section III contains the training algorithm. Section IV shows experimental results. Conclusions are presented in Section V.

II. LEARNING RULES.

Let I and Id be two images, each having MxN pixels. I is a gray-level input image and Id is the desired binary output image. The aim is to find the A and B matrices and the bias parameter z such that the CNN transforms I into Id. Learning proceeds iteratively through time. Denote by $x_{ij}(t_n)$ and $y_{ij}(t_n)$ the activation and output of cell ij at time $t_n$, and by $y^{d}_{ij}$ the desired output value, where $1 \le i \le M$, $1 \le j \le N$, and $t_n$ is the learning time for transforming the n-th pixel of I into the n-th pixel of Id, with $1 \le n \le MN$. The symbol $u_{ij}$ is the gray level of the ij-th pixel of the input image. The ij-cell output $y_{ij}(t_n)$ can be approximated by

$$y_{ij}(t_n) = \frac{2}{1 + e^{-x_{ij}(t_n)}} - 1,$$

where $x_{ij}(t_n)$ obeys the activation equation

$$\frac{dx_{ij}(t_n)}{dt} = -x_{ij}(t_n) + \sum_{kl \in V_r(ij)} A_{kl}(t_n)\, y_{kl}(t_n) + \sum_{kl \in V_r(ij)} B_{kl}(t_n)\, u_{kl} + z_{ij}(t_n). \qquad (1)$$
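To fix notation, the following is a minimal Python sketch of how one cell's output and activation rate in (1) could be evaluated; the function names, the 3x3 array layout and the forward-Euler comment are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cell_output(x):
    """Output approximation used in the paper: y = 2 / (1 + exp(-x)) - 1."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def activation_rate(x_ij, Y_nbhd, U_nbhd, A, B, z):
    """Right-hand side of equation (1) for a single cell ij.

    x_ij   : scalar activation of cell ij
    Y_nbhd : 3x3 array with the outputs y_kl of the radius-1 neighborhood
    U_nbhd : 3x3 array with the gray levels u_kl of the radius-1 neighborhood
    A, B   : 3x3 self-feedback and control templates
    z      : bias parameter
    """
    return -x_ij + np.sum(A * Y_nbhd) + np.sum(B * U_nbhd) + z

# One forward-Euler step with unit time step (an assumption, not fixed by the paper):
# x_ij = x_ij + 1.0 * activation_rate(x_ij, Y_nbhd, U_nbhd, A, B, z)
```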

This activation equation is similar to the CNN activation equation, but the coefficients $A_{kl}$ and $B_{kl}$ are functions of time $t_n$. Similarly, $x_{kl}(t_n)$ and $y_{kl}(t_n)$ are the activation and output of the cells kl in the neighborhood of cell ij at time $t_n$. At time $t_n$ we want to minimize the error function

$$e_{ij}(t_n) = \frac{1}{2}\left[y_{ij}(t_n) - y^{d}_{ij}\right]^{2},$$

referred to the parameters $A_{kl}(t_n)$, $B_{kl}(t_n)$ and $z(t_n)$ associated with the radius 1 neighborhood of cell ij, using a stochastic gradient descent method. The parameters' learning proceeds according to the relations

$$\Delta A_{kl}(t_n) = -\eta\,\frac{\partial e_{ij}(t_n)}{\partial A_{kl}(t_n)}, \qquad \Delta B_{kl}(t_n) = -\eta\,\frac{\partial e_{ij}(t_n)}{\partial B_{kl}(t_n)}, \qquad \Delta z(t_n) = -\eta\,\frac{\partial e_{ij}(t_n)}{\partial z(t_n)},$$

where $\eta$ is the learning rate. We have

$$\frac{\partial e_{ij}(t_n)}{\partial A_{kl}(t_n)} = \left[y_{ij}(t_n) - y^{d}_{ij}(t_n)\right]\frac{\partial y_{ij}(t_n)}{\partial A_{kl}(t_n)}$$

with

$$\frac{\partial y_{ij}(t_n)}{\partial A_{kl}(t_n)} = \frac{2\,e^{-x_{ij}(t_n)}}{\left[1 + e^{-x_{ij}(t_n)}\right]^{2}}\,\frac{\partial x_{ij}(t_n)}{\partial A_{kl}(t_n)},$$

so

$$\frac{\partial y_{ij}(t_n)}{\partial A_{kl}(t_n)} = \frac{1}{2}\left[1 + y_{ij}(t_n)\right]\left[1 - y_{ij}(t_n)\right]\frac{\partial x_{ij}(t_n)}{\partial A_{kl}(t_n)}. \qquad (2)$$
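The step from the exponential form to (2) is not spelled out in the extracted text; it follows directly from the output approximation, since with $y = \frac{2}{1+e^{-x}} - 1$ one has $1 + y = \frac{2}{1+e^{-x}}$ and $1 - y = \frac{2e^{-x}}{1+e^{-x}}$, so that

$$\frac{dy}{dx} = \frac{2e^{-x}}{\left(1+e^{-x}\right)^{2}} = \frac{1}{2}\,\frac{2}{1+e^{-x}}\cdot\frac{2e^{-x}}{1+e^{-x}} = \frac{1}{2}(1+y)(1-y).$$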

The change rate $dx_{ij}(t_n)/dt$ depends on 19 parameters for radius 1 neighborhoods, so the variation produced by a single parameter must be small. The partial derivative $\frac{\partial}{\partial A_{kl}(t_n)}\left[\frac{dx_{ij}(t_n)}{dt}\right]$ must therefore be small and approximately zero. Using the supposition $\frac{\partial}{\partial A_{kl}(t_n)}\left[\frac{dx_{ij}(t_n)}{dt}\right] \approx 0$ in activation equation (1), we find


$$\frac{\partial x_{ij}(t_n)}{\partial A_{kl}(t_n)} = y_{kl}(t_n) + A_{kl}(t_n)\,\frac{\partial y_{kl}(t_n)}{\partial A_{kl}(t_n)}.$$


If we use the approximation

$$\frac{\partial y_{kl}(t_n)}{\partial A_{kl}(t_n)} \approx \frac{y_{kl}(t_n) - y_{kl}(t_{n-1})}{A_{kl}(t_n) - A_{kl}(t_{n-1})},$$

we get that equation (2) is transformed into

$$\frac{\partial y_{ij}(t_n)}{\partial A_{kl}(t_n)} = \frac{1}{2}\left(1 + y_{ij}(t_n)\right)\left(1 - y_{ij}(t_n)\right)\left[y_{kl}(t_n) + A_{kl}(t_n)\,\frac{y_{kl}(t_n) - y_{kl}(t_{n-1})}{A_{kl}(t_n) - A_{kl}(t_{n-1})}\right].$$

The error gradient is

$$\frac{\partial e_{ij}(t_n)}{\partial A_{kl}(t_n)} = \left[y_{ij}(t_n) - y^{d}_{ij}(t_n)\right]\frac{1}{2}\left(1 + y_{ij}(t_n)\right)\left(1 - y_{ij}(t_n)\right)\left[y_{kl}(t_n) + A_{kl}(t_n)\,\frac{y_{kl}(t_n) - y_{kl}(t_{n-1})}{A_{kl}(t_n) - A_{kl}(t_{n-1})}\right].$$

The learning rule for $A_{kl}$ is

$$A_{kl}(t_{n+1}) = A_{kl}(t_n) - \eta\left[y_{ij}(t_n) - y^{d}_{ij}(t_n)\right]\frac{1}{2}\left[1 + y_{ij}(t_n)\right]\left[1 - y_{ij}(t_n)\right]\left[y_{kl}(t_n) + A_{kl}(t_n)\,\frac{y_{kl}(t_n) - y_{kl}(t_{n-1})}{A_{kl}(t_n) - A_{kl}(t_{n-1})}\right], \qquad (3)$$

where n = 1, 2, ... This rule must be applied to every kl-cell contained in the neighborhood of cell ij. At the beginning, the values $y_{kl}(t_0)$, $A_{kl}(t_0)$ and $A_{kl}(t_1)$ can be set equal to zero.

In a similar way we find

$$B_{kl}(t_{n+1}) = B_{kl}(t_n) - \eta\left[y_{ij}(t_n) - y^{d}_{ij}(t_n)\right]\frac{1}{2}\left[1 + y_{ij}(t_n)\right]\left[1 - y_{ij}(t_n)\right]u_{kl}, \qquad (4)$$

$$z(t_{n+1}) = z(t_n) - \eta\left[y_{ij}(t_n) - y^{d}_{ij}(t_n)\right]\frac{1}{2}\left[1 + y_{ij}(t_n)\right]\left[1 - y_{ij}(t_n)\right]. \qquad (5)$$

Equations (3), (4) and (5) are the learning rules for the CNN cloning template.
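A minimal Python sketch of one per-pixel application of rules (3)-(5) is given below; the variable names, the data layout and the small epsilon guarding the finite-difference denominator are assumptions added for illustration, not part of the paper.

```python
import numpy as np

def template_update(A, B, z, A_prev, Y_nbhd, Y_nbhd_prev, U_nbhd,
                    y_ij, y_d, eta=0.1, eps=1e-6):
    """One stochastic-gradient step of rules (3)-(5) for a single pixel ij.

    A, B        : current 3x3 feedback and control templates
    z           : current bias
    A_prev      : template values at the previous learning step, A_kl(t_{n-1})
    Y_nbhd      : 3x3 neighbor outputs y_kl(t_n)
    Y_nbhd_prev : 3x3 neighbor outputs y_kl(t_{n-1})
    U_nbhd      : 3x3 neighbor gray levels u_kl
    y_ij, y_d   : actual and desired output of cell ij
    eta         : learning rate
    eps         : guard against a zero denominator in the finite difference (assumption)
    """
    # Common factor (y_ij - y_d) * 1/2 * (1 + y_ij) * (1 - y_ij) shared by (3)-(5)
    common = (y_ij - y_d) * 0.5 * (1.0 + y_ij) * (1.0 - y_ij)

    # Finite-difference approximation of dy_kl/dA_kl used in rule (3)
    dy_dA = (Y_nbhd - Y_nbhd_prev) / (A - A_prev + eps)

    A_new = A - eta * common * (Y_nbhd + A * dy_dA)   # rule (3)
    B_new = B - eta * common * U_nbhd                 # rule (4)
    z_new = z - eta * common                          # rule (5)
    return A_new, B_new, z_new
```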

III. TRAINING ALGORITHM

Rules (3), (4) and (5) must be applied to every input-output training pixel pair. For an image with MxN pixels there are MxN learning steps in every iteration. The number of iterations equals the number of times the input and output images are presented and is determined by a stop criterion. There are two stop criteria:

1) Compute the norm between the cloning templates of two consecutive learning steps. If the norm is lower than a constant TOL, stop the algorithm.
2) If the iteration number is greater than an integer NMAX, stop the algorithm.

Training Algorithm
Input: input and output images.
Output: matrices A, B and the bias parameter z.
1) Initialize A, B and z to zero.
2) While the stop criterion is not satisfied, do step 2.1:
2.1) For every pixel apply the learning rules (3)-(5) of the previous section.
3) Stop.

The first stop criterion is desirable, but convergence is not always present, so the second criterion must also be used. Lack of convergence occurs when the algorithm is asked to learn a task that a single CNN cannot realize, or when there are incompatibilities in the training images, for example, two different desired output-pixel sets specified, in the output and input images respectively, for the same set of input pixels.
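The iteration structure of this algorithm, with both stop criteria, can be sketched in Python as below; the per-pixel update is passed in as a function (for example, the sketch given after equation (5)), and the names TOL, NMAX, update_pixel and pixels are illustrative assumptions.

```python
import numpy as np

def train(update_pixel, pixels, TOL=1e-3, NMAX=200):
    """Iteration skeleton of the training algorithm of Section III.

    update_pixel(A, B, z, pixel) -> (A, B, z) applies learning rules (3)-(5)
    at one pixel; `pixels` is the sequence of input/desired-output pixel data.
    """
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); z = 0.0       # step 1: start from zero
    for iteration in range(NMAX):                             # stop criterion 2
        A_old, B_old, z_old = A.copy(), B.copy(), z
        for pixel in pixels:                                  # step 2.1: every pixel
            A, B, z = update_pixel(A, B, z, pixel)
        # stop criterion 1: norm between two consecutive cloning templates
        change = (np.linalg.norm(A - A_old) + np.linalg.norm(B - B_old)
                  + abs(z - z_old))
        if change < TOL:
            break
    return A, B, z
```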

IV. EXPERIMENTAL RESULTS

Using the procedure described in the previous section, we found cloning templates for the following tasks:

1) Move a pixel to the right.
2) Move a pixel to the left.
3) Move a pixel to the right and down.
4) Isolated dots extraction.

5) Isolated dots removal.
6) Image complement.

7) Contour detection.
8) Right edge detection.
9) Erosion.
10) Dilation.
11) Diagonal lines detection.

[Each template consists of a 3x3 matrix A, a 3x3 matrix B and a bias z; the numeric entries printed in the paper are not recoverable from this extraction.]

On the following page some images processed using the previous templates are shown. The left image is the input image and the right image is the output image of the CNN. Every case was processed using zero initial conditions, zero fixed boundary conditions and a time step of 1. All entries of every template are nonzero.
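As an illustration of how a learned template could be applied under the stated conditions (zero initial state, zero fixed boundary conditions, time step 1), here is a hedged Python sketch; the settling length `steps` and the loop-based implementation are assumptions, not taken from the paper.

```python
import numpy as np

def settle_outputs(A, B, z, U, steps=50, dt=1.0):
    """Run the CNN of equation (1) on a gray-level image U, starting from zero
    initial state with zero (fixed) boundary conditions, and return the output
    image.  Forward Euler with unit time step follows the conditions stated in
    the text; the number of steps is an assumption."""
    M, N = U.shape
    X = np.zeros((M, N))
    Up = np.pad(U, 1)                       # zero boundary for the control term
    for _ in range(steps):
        Y = 2.0 / (1.0 + np.exp(-X)) - 1.0  # output approximation
        Yp = np.pad(Y, 1)                   # zero boundary for the feedback term
        dX = -X + z
        for di in range(3):                 # accumulate the 3x3 template sums
            for dj in range(3):
                dX += A[di, dj] * Yp[di:di + M, dj:dj + N]
                dX += B[di, dj] * Up[di:di + M, dj:dj + N]
        X = X + dt * dX
    return 2.0 / (1.0 + np.exp(-X)) - 1.0
```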

V. CONCLUSIONS

In this paper we have shown a set of alternative templates for some image processing tasks and a method for extracting them. Cloning templates for the considered tasks are presented in [3]-[4], so the cloning templates presented in this paper are alternative templates for those processing tasks. There are many image processing tasks with unknown cloning templates, so the method described in this paper is an alternative to the existing template design methods [5]-[8]. One criterion for selecting a cloning template is its performance in VLSI implementations of CNN nets [9]. It is hoped that templates with more nonzero entries, like the ones presented in this paper, perform better image processing because they carry more information about the way images are processed. The extraction method described in this paper can be applied to extract cloning templates for tasks with unknown templates.

ACKNOWLEDGEMENTS

The authors would like to thank the National Council for Science and Technology (CONACYT) and the Aguascalientes Autonomous University (UAA) for their funding.


REFERENCES

[1] L. Chua and L. Yang, "Cellular neural networks: Theory", IEEE Trans. Circuits Syst., vol. 35, pp. 1257-1272, Oct. 1988.

[2] L. Chua and L. Yang, "Cellular neural networks: Applications", IEEE Trans. Circuits Syst., vol. 35, pp. 1273-1290, Oct. 1988.

[3] T. Roska, L. Kek, L. Nemes, A. Zarandy, M. Brendel, and P. Szolgay, "CNN software library", in CADETWin, Budapest, Hungary: Computer and Automation Institute of the Hungarian Academy of Sciences, 1998.

[4] L. Chua, CNN: A Paradigm for Complexity, World Scientific, 1998.

[5] T. Kozek, T. Roska, and L. O. Chua, "Genetic algorithm for CNN template learning", IEEE Trans. Circuits Syst. I, vol. 40, pp. 392-402, Mar. 1993.

[6] J. A. Nossek, "Design and learning with cellular neural networks", Int. J. Circuit Theory Applicat., vol. 24, pp. 15-24, 1996.

[7] A. Zarandy, "The art of CNN template design", Int. J. Circuit Theory Applicat., vol. 27, no. 1, pp. 5-23, 1999.

[8] M. Hänggi and G. S. Moschytz, "An exact and direct analytical method for the design of optimally robust CNN templates", IEEE Trans. Circuits Syst., vol. 46, pp. 304-311, Feb. 1999.

[9] P. Foldesy, L. Kek, A. Zarandy, G. Bartfai, and T. Roska, "Fault-tolerant design of analogic CNN templates and algorithms-Part I: The binary output case", IEEE Trans. Circuits Syst., vol. 46, pp. 312-322, Feb. 1999.


Fig. 1. Move a pixel to the right.
Fig. 2. Move a pixel to the top.
Fig. 3. Move a pixel to the right and down.
Fig. 4. Isolated dots extraction.
Fig. 5. Isolated dots removal.
Fig. 6. Image complement.
Fig. 7. Contour detection.
Fig. 9. Right edge detection.
Fig. 10. Dilation.
Fig. 11. Diagonal lines detection.

[Each figure shows the original input image (left, "Imagen Original") and the corresponding CNN output image (right); the images themselves are not reproduced here.]
