


Biol. Cybern. 64, 15-23 (1990). Biological Cybernetics © Springer-Verlag 1990

Creative dynamics approach to neural intelligence M. Zak

Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA

Received December 23, 1989/Accepted in revised form May 23, 1990

Abstract. The thrust of this paper is to introduce and discuss a substantially new type of dynamical system for modelling biological behavior. The approach was motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks: their rigid behavior compared with even the simplest biological systems. This approach exploits a novel paradigm in nonlinear dynamics based upon the concept of terminal attractors and repellers. It was demonstrated that non-Lipschitzian dynamics based upon the failure of the Lipschitz condition exhibits a new qualitative effect: a multi-choice response to periodic external excitations. Based upon this property, a substantially new class of dynamical systems, the unpredictable systems, was introduced and analyzed. These systems are represented in the form of coupled activation and learning dynamical equations whose ability to be spontaneously activated is based upon two pathological characteristics. Firstly, such systems have a zero Jacobian. As a result of that, they have an infinite number of equilibrium points which occupy curves, surfaces or hypersurfaces. Secondly, at all these equilibrium points the Lipschitz condition fails, so the equilibrium points become terminal attractors or repellers depending upon the sign of the periodic excitation. Both of these pathological characteristics result in the multi-choice response of unpredictable dynamical systems. It has been shown that the unpredictable systems can be controlled by sign strings which uniquely define the system behaviors by specifying the direction of the motions at the critical points. By changing the combinations of signs in the code strings, the system can reproduce any prescribed behavior to a prescribed accuracy. That is why the unpredictable systems driven by sign strings are extremely flexible and are highly adaptable to environmental changes.
It was also shown that such systems can serve as a powerful tool for temporal pattern memories and complex pattern recognition. It has been demonstrated that a new architecture of neural networks based upon non-Lipschitzian dynamics can be utilized for modelling more complex patterns of behavior, which can be associated with phenomenological models of creativity and neural intelligence.

1 Introduction

The biggest promise of artificial neural networks as computational tools lies in the hope that they will resemble the information processing in biological systems. Notwithstanding many successes in this direction, it is rapidly becoming evident that current models are characterized by a number of limitations. We will analyze these limitations using the Hopfield additive model as a typical representative of artificial neural networks:

\dot{u}_i + u_i = \sum_{j=1}^{n} T_{ij} V(u_j) + I_i , \quad i = 1, 2, \ldots, n , (1)

in which u_i(t) is the mean soma potential of the ith neuron, T_ij are constant synaptic interconnections, V(u) is a sigmoid function, and I_i is an external input.

Firstly, the neurons' performance in this model is collective, but not parallel: any small change in the activity of the ith neuron instantaneously affects the other neurons:

\frac{\partial \dot{u}_i}{\partial u_j} = T_{ij} \frac{dV(u_j)}{du_j} \neq 0 , \quad i \neq j . (2)

In contrast to that, the biological systems exhibit both collective and parallel performances. For instance, the right and the left hands are mechanically independent (i.e., their performance is parallel), but at the same time their activity is coordinated by the brain; that makes their performance collective.

Secondly, the performance of the model (1) is fully prescribed by the initial conditions. The system never "forgets" these conditions: it carries their "burden" up to t → ∞. In order to change the system performance, the external input must overpower the "inertia of the past". In contrast to that, biological systems are much more flexible: they can forget (if necessary) the past, adapting their behavior to environmental changes.

Thirdly, the features characterizing the system (1) involve a single scale: they are insulated from the real world by a large range of scales. At the same time, biological systems involve mechanisms that span the



entire range from the molecular to the macroscopic: Harth et al. (1970); Anninos et al. (1970); Nicolis (1985). Can these limitations be removed within the framework of classical neurodynamics? The answer is no. Indeed, all neural systems considered heretofore are based on classical dynamics and satisfy the Lipschitz condition, which guarantees the uniqueness of the solutions subject to prescribed sets of initial conditions. For the system (1) this condition requires that all the derivatives ∂u̇_i/∂u_j be bounded:

\left| \frac{\partial \dot{u}_i}{\partial u_j} \right| < \infty . (3)

The uniqueness of the solution

u_i = u_i(t, u_1^0, \ldots, u_n^0) , \quad i = 1, 2, \ldots, n , (4)

subject to the initial conditions u_i^0 (i = 1, 2, ..., n) can be considered as a mathematical interpretation of the rigid, predictable behavior of the corresponding dynamical system.

Actually, all the limitations of the current neural net models mentioned above are inevitable consequences of the Lipschitz condition (3). In this paper we will discuss a new architecture for neural nets which is free of these limitations. The architecture is based upon some properties of non-Lipschitzian dynamics introduced by Zak (1990a, b). Due to the failure of the Lipschitz condition (3) at certain critical points of phase space, the neural net forgets its past as soon as it approaches these points. In addition to that, it acquires the ability to be activated not only by external inputs but also by internal periodic rhythms. Such a spontaneous activity resembles the brain activity. Due to the existence of the critical points mentioned above, the neural network becomes a weakly coupled dynamical system, Zak (1990b): its neurons (or groups of neurons) are uncoupled (and therefore can perform parallel tasks) within the periods between the critical points, while the coordination between the independent units (i.e., the collective part of the performance) is carried out at the critical points, at which the neural network is fully coupled. As shown by Zak (1989a), any infinitesimal input applied at a critical point causes a finite response of the neural network. This property appeared to be an important tool for creating a chain of coupled subsystems of different scales whose range is theoretically unlimited. More sophisticated versions of the new neural network architecture were introduced by Zak (1990c): there the activation and learning dynamics are coupled in such a way that the system can spontaneously change the locations and types of its attractors.

In this paper we will discuss some relevance of neural networks with this new architecture to biological systems.

2 Unpredictable neural nets

Can a man-made dynamical system create new information? The answer to this question would significantly

contribute to a better understanding of the evolution of intelligence. So far, man-made systems such as artificial neural networks can only process information: they can learn by examples, they can memorize and recognize patterns, etc. But all these performances are fully predictable, since they are prescribed in advance by the original dynamical structure.

In this section we will introduce a new type of man-made dynamical system which can acquire a large number of different structures without a specific interference from the outside: driven by a vanishingly small noise, identical dynamical systems perform significantly different patterns of behavior. In other words, these systems extract information from "nothing".

Let us start with the following one-neuron dynamical system:

\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t , \qquad \gamma = \mathrm{const} , \ \omega = \mathrm{const} , \ \alpha = \mathrm{const} . (5)

It is easily verifiable that at the equilibrium points

u_k = \frac{\pi k \alpha}{\omega} , \qquad k = \ldots, -2, -1, 0, 1, 2, \ldots (6)

the Lipschitz condition is violated:

\partial \dot{u} / \partial u \to \infty \quad \text{at} \quad u \to u_k . (7)

If u = 0 at t = 0, then during the first period

0 < t < \frac{\pi}{\omega} , (8)

the point u = 0 is a terminal repeller since sin ωt > 0, and the solution at this point splits into two (positive and negative) branches whose divergence is characterized by an unbounded Lyapunov exponent. Consequently, with equal probability u can move in the positive or the negative direction. For the sake of concreteness, we will assume that it moves in the positive direction. Then the solution will approach the second equilibrium point u₁ = πα/ω at

t_1 = \frac{1}{\omega} \arccos \left[ 1 - \frac{\alpha}{\gamma} B\!\left( \frac{1}{3}, \frac{1}{2} \right) \right] , (9)

in which B is the Beta function. It is easy to verify that the point u₁ will be a terminal attractor at t = t₁ if

t_1 < \frac{\pi}{\omega} , \quad \text{i.e., if} \quad \frac{\gamma}{\alpha} \geq \frac{1}{2} B\!\left( \frac{1}{3}, \frac{1}{2} \right) . (10)

Therefore, u will remain at the point u₁ until it becomes a terminal repeller, i.e., until t > t₁. Then the solution splits again: one of the two possible branches approaches the next equilibrium point u₂ = 2πα/ω, while the other returns to the point u₀ = 0, etc. The periods of transition from one equilibrium point to another are all the same and are given by (9) (Fig. 1).
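The multi-choice walk generated by (5)-(10) can be illustrated numerically. The sketch below is not from the paper: it uses a plain Euler scheme, illustrative parameter values chosen to satisfy condition (10), and NumPy's signed cube root (np.cbrt) for sin^{1/3}; the vanishingly small noise is modelled by a tiny Gaussian term that only matters at the terminal repellers.

```python
import numpy as np

def simulate(gamma=3.0, omega=1.0, alpha=1.0, noise=1e-9,
             t_end=40.0, dt=1e-3, seed=0):
    """Euler integration of (5) with a tiny additive noise:
        u' = gamma * sin^{1/3}(omega*u/alpha) * sin(omega*t) + noise.
    np.cbrt takes the cube root with sign, keeping the right-hand side real.
    At the points u_k = pi*k*alpha/omega the Lipschitz condition fails, and
    the noise decides which branch is taken when a point becomes a repeller."""
    rng = np.random.default_rng(seed)
    u, traj = 0.0, []
    for t in np.arange(0.0, t_end, dt):
        du = gamma * np.cbrt(np.sin(omega * u / alpha)) * np.sin(omega * t)
        u += dt * (du + noise * rng.standard_normal())
        traj.append(u)
    return np.array(traj)

traj = simulate()
# between transitions the trajectory rests on equilibrium points u_k = pi*k
print(traj[-1] / np.pi)
```

Rerunning with a different seed can produce a different sign sequence and hence a different path: this is the multi-choice response described above.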

It is important to notice that these transition periods are bounded only because of the failure of the Lipschitz condition at the equilibrium points, Zak (1987). Otherwise they would be unbounded, since the time of approaching a regular attractor (as well as the time of escaping a regular repeller) is infinite, Zak (1989a, b).

Fig. 1. Oscillations about the attractor u = 0

Thus, the evolution of u prescribed by (5) is totally unpredictable: it has 2^m different scenarios, where m = E(t/t*) and E denotes the integer function (Fig. 2).

Fig. 2. Information born in chaos

Let us assume that the dynamical system (5) is driven by a vanishingly small noise ε(t):

\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t + \varepsilon(t) , \quad |\varepsilon(t)| \ll \gamma . (11)

The noise can be ignored when u̇ ≠ 0, or when u = 0 but the system is stable. However, it becomes significant during the instants of instability.

Since actually a vanishingly small noise is always present, one can interpret the unpredictability discussed above as a consequence of small random inputs to which the one-neuron dynamical system (11) is extremely sensitive.

Despite the fact that the system (11) is characterized by unpredictable behavior, the locations of its critical points (terminal attractors and repellers) are prescribed in advance. Indeed, the coordinates of these points are given by (6). However, one can remove this limitation by introducing the following one-neuron, one-synapse dynamical system (Zak 1990a):

\dot{u} = \gamma \sin^{1/3} \frac{\omega T u}{\alpha} \sin \omega t + \varepsilon(t) , (12)

\dot{T} = \gamma \sin^{1/3} \frac{\omega T u}{\alpha} \sin \omega t + \eta(t) , (13)

\omega = \mathrm{const} ; \quad \varepsilon(t) \to 0 , \ \eta(t) \to 0 ,

which is represented by coupled activation and learning equations, while ε and η represent a vanishingly small noise.

The system (12), (13) possesses two pathological properties.

Firstly, it has zero Jacobian:

J = \begin{vmatrix} \partial \dot{u}/\partial u & \partial \dot{u}/\partial T \\ \partial \dot{T}/\partial u & \partial \dot{T}/\partial T \end{vmatrix} = 0 . (14)

Because of that, the system has an infinite number of equilibrium points which occupy a family of hyperbolas in the configuration space u, T (Fig. 3):

T_k u_k = \frac{\pi k \alpha}{\omega} , \qquad k = \ldots, -2, -1, 0, 1, 2, \ldots (15)

Fig. 3. Spontaneous changes of dynamical structure



Secondly, at all the equilibrium points, the Lipschitz condition fails since:

\frac{\partial \dot{u}}{\partial u} \to \infty , \quad \frac{\partial \dot{u}}{\partial T} \to \infty \quad \text{if} \quad T u \to T_k u_k . (16)

As a result of that, the characteristic roots of the Jacobian (14) at the equilibrium points (15) are:

\lambda_1 = 0 , \qquad \lambda_2 \to \begin{cases} +\infty & \text{if } \sin \omega t > 0 \\ -\infty & \text{if } \sin \omega t < 0 \end{cases} (17)

One should note that, strictly speaking, the formula for λ₂ in (17) can be applied only if the explicit time t in (12), (13) is considered as a slowly changing parameter, i.e., if

\omega \ll |\lambda_2| . (18)

However, since |λ₂| → ∞, the inequality (18) holds for all bounded ω. Global properties of the solutions to (12), (13) were analyzed in the paper by Zak (1990a).
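The coupled pair (12), (13) can be sketched with the same Euler scheme and caveats as before (illustrative parameters, not the paper's code). Because both equations share the same right-hand side, the Jacobian vanishes and the whole hyperbola family (15) consists of equilibrium points; the product s = Tu walks between the levels πkα/ω. The run below is kept to one half-period of sin ωt so that the outcome is deterministic; longer runs branch unpredictably at each critical point.

```python
import numpy as np

def coupled(gamma=3.0, omega=1.0, alpha=1.0, noise=1e-9,
            u0=0.5, T0=2.0, t_end=3.0, dt=1e-3, seed=1):
    """Euler sketch of the coupled activation/learning pair (12), (13).
    Both equations share the right-hand side
        gamma * sin^{1/3}(omega*T*u/alpha) * sin(omega*t),
    so every point of the hyperbolas T*u = pi*k*alpha/omega is an
    equilibrium, and the product s = T*u drifts between these levels."""
    rng = np.random.default_rng(seed)
    u, T, s_hist = u0, T0, []
    for t in np.arange(0.0, t_end, dt):
        f = gamma * np.cbrt(np.sin(omega * T * u / alpha)) * np.sin(omega * t)
        u += dt * (f + noise * rng.standard_normal())   # activation, eq. (12)
        T += dt * (f + noise * rng.standard_normal())   # learning,  eq. (13)
        s_hist.append(T * u)
    return np.array(s_hist)

s = coupled()
print(s[0], s[-1] / np.pi)  # s climbs from T0*u0 = 1 to the terminal level pi
```

Both the state u and the "synapse" T change spontaneously here, even though no external input is applied: only the shared rhythm sin ωt and a vanishingly small noise drive the motion.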

One can introduce more general unpredictable dynamical systems as the following:

\dot{u}_i = \gamma_i \sin^{1/3} \left( \frac{\omega}{\alpha_i} \sum_{j=1}^{n} T_{ij} V(u_j) \right) \sin \omega t + \varepsilon_i(t) , \quad i = 1, 2, \ldots, n , (19)

Fig. 4. Unpredictable motions


\dot{T}_{ij} = \gamma_{ij} \sin^{1/3} \left( \frac{\omega}{\alpha_i} \sum_{j=1}^{n} T_{ij} V(u_j) \right) \sin \omega t + \eta_{ij}(t) , (20)

in which γ_i, γ_ij, α_i = const and ε_i, η_ij → 0.

All the critical points of this system occupy a family of hypersurfaces:

\sum_{j=1}^{n} T_{ij} V(u_j) = \frac{\pi k \alpha_i}{\omega} , \qquad k = \ldots, -2, -1, 0, 1, 2, \ldots (21)

Thus, as in the previous case, the solutions as well as the locations of the critical points of this system are unpredictable while the degree of the unpredictability now has the order of

(2^n)^m , \quad \text{where} \quad m = E(t/t^*) , (22)

and t* is the period of transition from one critical point (21) to another. Actually, (22) corresponds to the degree of unpredictability for the vanishingly small noises ε_i(t) and η_ij(t), or, to be more precise, for sgn ε_i and sgn η_ij at the critical points.

So far, the unpredictable systems (11), (12), (13) were presented in a non-autonomous form, since they contain time in an explicit form via the periodic excitation sin ωt. However, one can represent them in an autonomous form by introducing a new variable:

\sin \omega t \equiv v_2 ,

while

\dot{v}_1 = -\omega v_2 + v_1 (1 - v_1^2 - v_2^2) ,
\dot{v}_2 = \omega v_1 + v_2 (1 - v_1^2 - v_2^2) .

This dynamical system has a stable limit cycle

v_1 = \cos \omega t , \quad v_2 = \sin \omega t ,

which generates the periodic excitations. This limit cycle, which activates the dynamical systems (11), (12), and (13), can be associated with brain rhythms, Basar (1980), circadian pacemakers, Daan and Berde (1978), or with environmentally specified movement patterns, Schöner and Kelso (1988).
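The autonomous form can be checked directly. The sketch below (Euler scheme, illustrative ω and initial state, both my assumptions) integrates the v₁, v₂ system and verifies that an arbitrary starting point is attracted to the unit circle, i.e., to the limit cycle v₁ = cos ωt, v₂ = sin ωt that supplies the internal rhythm.

```python
import numpy as np

def limit_cycle(omega=2.0, v=(0.1, 0.0), t_end=20.0, dt=1e-3):
    """Euler sketch of the autonomous oscillator
        v1' = -omega*v2 + v1*(1 - v1^2 - v2^2)
        v2' =  omega*v1 + v2*(1 - v1^2 - v2^2),
    whose stable limit cycle v1 = cos(omega*t), v2 = sin(omega*t) supplies
    the periodic excitation sin(omega*t) without explicit time dependence."""
    v1, v2 = v
    for _ in range(int(t_end / dt)):
        r2 = v1 * v1 + v2 * v2          # squared distance from the origin
        dv1 = -omega * v2 + v1 * (1.0 - r2)
        dv2 = omega * v1 + v2 * (1.0 - r2)
        v1, v2 = v1 + dt * dv1, v2 + dt * dv2
    return v1, v2

v1, v2 = limit_cycle()
print(np.hypot(v1, v2))  # amplitude of the generated rhythm, close to 1
```

The radial part r' = r(1 - r²) contracts every nonzero initial condition onto r = 1, which is what makes this an attracting cycle rather than a neutrally stable oscillation.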

Numerical simulations describing unpredictable spontaneous changes of the locations of equilibrium points were performed by Zak and Barhen (1989) for the following equations:

\dot{u} = -(u - T \tanh \beta u + \varepsilon_0 \cos \Omega t)^{1/3} v_2 ,

\dot{T} = (u - T \tanh \beta T + \eta_0 \cos \Omega t)^{2/3} v_2 ,

in which ε₀ → 0 (but |ε₀ Ω²| → ∞) and

\Omega = \omega + \zeta , \quad \zeta = 1, 2, \ldots, \text{etc.}

The additional bias can be ignored when the system is stable, but it becomes significant during the periods of instability. Different patterns of initial disturbances are simulated by different ζ, while in a real physical model they occur randomly. Figure 4a-e demonstrates different scenarios of behavior of the system for different ζ, while in all the cases ε₀ = 10⁻², β = 10. In Fig. 4a, with ω = 8·10³, the motion starts with u = 0, T = 5 and


approaches the terminal attractor B. After resting at this point it escapes through the trajectory BC (for ζ = 1) or CD (for ζ = 2). Similar behaviors are demonstrated in Fig. 4b, c, d, e with

ω = 5·10³, 5·10³, 9·10³, 5·10³,
u = 1.0, 0, 0, 0,
T = 5.0, 0, 3.0, 4.0.

3 Unpredictability and creativity

Let us return now to the simplest unpredictable dynamical system (11) and consider a set of such systems:

\dot{u}_i = \gamma \sin^{1/3} \frac{\omega u_i}{\alpha} \sin \omega t + \varepsilon_i(t) , \quad \varepsilon_i(t) \to 0 , \quad i = 1, 2, \ldots, n . (23)

These equations describe the dynamics of n identical uncoupled neurons driven by a vanishingly small random noise. But since the components ε_i(t) of this noise are not necessarily identical, the motions of each neuron will, in general, be different from the motions of the others, while these differences will be finite rather than infinitesimal. Such a phenomenon can be interpreted as an "emergence from chaos" of a "society" of neurons with different "personalities", despite the fact that the initial conditions for all of them are almost identical. It is remarkable that the "character" of each "personality" has a random nature coming from the random origin of the noise ε_i(t).

The next step in the development of the neurons' "society" (23) can be based upon the following scenario: let us assume that there exists an objective function to be extremized by the neurons. It can be, for instance, a functional Φ to be minimized:

\Phi = \int_0^{t'} [u(t) - f(t)]^2 \, dt \to \min , (24)

in which t' = const > 0 and f(t) is a prescribed function. Suppose that the performance of each neuron is measured by the value

\Phi_i = \int_0^{t'} [u_i(t) - f(t)]^2 \, dt , (25)

and let

\dot{\omega}_i = \omega_0 + \Phi_i - \frac{1}{n} \sum_{j=1}^{n} \Phi_j - \omega_i , (26)

in which ω_i is a new frequency of the periodic excitations, which will be different for each neuron.

As follows from (26), those neurons whose performance is above the average,

\Phi_i > \frac{1}{n} \sum_{j=1}^{n} \Phi_j , (27)



will receive a higher energy input through the frequency ω_i and will act faster. On the contrary, those neurons whose performance is below the average,

\Phi_i < \frac{1}{n} \sum_{j=1}^{n} \Phi_j , (28)

will slow down their activity and eventually they will be out of competition.

This primitive scenario of natural selection is only an illustration of a possible development of discrimination in the neurons' "society" (23). In addition to that, the neurons can learn from their experience, memorizing useful behavior and therefore improving their performances; they can also cooperate and compete with each other, etc. But it is worth emphasizing that all these activities can be based upon conventional tools of information processing such as learning by examples, memorizing and recognition of patterns, local and global minimization, etc. The most important step is the "act of creation", when identical neurons in the same environment acquire different patterns of behavior without a special interference from outside.
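The selection scenario (24)-(28) can be sketched numerically. Everything in the snippet below is an illustrative assumption, not from the paper: the target f(t), the candidate trajectories u_i(t), the parameter values, and in particular the relaxation form chosen here as one possible reading of (26).

```python
import numpy as np

# Performances (25) of three hypothetical neuron motions against a target f(t),
# followed by relaxation of the excitation frequencies under one reading of (26):
#   d(omega_i)/dt = omega_0 + Phi_i - (1/n)*sum_j Phi_j - omega_i,
# so a neuron above the average (27) settles at a higher frequency and a neuron
# below the average (28) at a lower one.

t = np.linspace(0.0, 1.0, 1001)
f = np.sin(2 * np.pi * t)                                   # prescribed f(t) in (24)
us = [c * np.sin(2 * np.pi * t) for c in (0.2, 0.9, 1.5)]   # candidate motions u_i(t)

phi = np.array([np.mean((u - f) ** 2) for u in us])         # performances, eq. (25)
omega0, dt = 1.0, 0.01
omega = np.full(3, omega0)
for _ in range(2000):                                       # Euler steps of eq. (26)
    omega += dt * (omega0 + phi - phi.mean() - omega)

print(phi)
print(omega)   # ordering of the frequencies follows the ordering of phi
```

At the steady state each frequency settles at ω₀ plus the neuron's deviation from the average performance, so the mean frequency of the "society" stays at ω₀ while the individuals differentiate.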

Let us now discuss the connection between the unpredictability and creativity of dynamical systems. We have defined earlier the unpredictability as a property of a dynamical system to have a multi-choice response to a periodic parametrical excitation. However, the ability to have such a multi-choice is a necessary, but not a sufficient, condition for a dynamical system to be creative, since any chosen behavior must be useful. In other words, the unpredictable dynamical system, in addition, must have an ability to evaluate the usefulness of different choices and to select the best (or, at least, a good one). As a possible tool for such an evaluation, a functional of the type (24) can be utilized. In this respect, we will discuss two extreme cases.

At first, we assume that the unpredictable dynamical system does not "know" the analytical structure of the functional (24), and it can only compute the values of (24) for each particular behavior. (Actually, biological systems face such a problem in real life.) Then, the only way the system can minimize the functional (24) is to sort out all the possible behaviors by direct computations. Clearly, this strategy inevitably leads to a combinatorial explosion of the number of possible behaviors (see (22)), and therefore it is unacceptable for practical applications.

As an alternative to this situation, one can come up with another extreme by assuming that the dynamical system has the complete analytical model of the functional (24). (This situation is as unrealistic as the previous one.) Then, before acting, the system must "think": it should find the best behavior by minimizing the functional (24). Let us perform such a minimization.

As shown above, the unpredictable dynamical systems are driven by a vanishingly small noise. However, in practice, the only important part of this noise is the sign of ε(t) at the critical points. Indeed, consider, for example, (11) and suppose that

\operatorname{sgn} \varepsilon(t_k) = +, +, -, +, -, -, \ldots \quad \text{at} \quad t_k = \frac{\pi k}{\omega} , \quad k = 0, 1, 2, \ldots (29)

The values of ε(t) in between the critical points are not important since, by our assumption, they are small in comparison with the values of the derivative u̇, and therefore can be ignored. Hence, the only part of the noise ε(t) which is significant in determining the motion of the neuron (11) is the sign string (29): specification of this string fully determines the dynamics (11). Figure 5 demonstrates three different scenarios of motions for three different sets of strings. Obviously, the number of possible scenarios is equal to the number of different combinations of the signs in (29), which is 2^n (where n is the number of critical points).
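Since a sign string fully determines the motion, decoding one is a purely bookkeeping exercise. The helper below is an illustrative sketch (not from the paper): it maps a string of the form (29) to the sequence of equilibrium points (6) visited by (11).

```python
import numpy as np

def decode(signs, alpha=1.0, omega=1.0, u0=0.0):
    """Decode a sign string (29) into the sequence of equilibrium points (6)
    visited by the one-neuron system (11): at each critical instant t_k the
    solution branches, and sgn(eps(t_k)) selects the direction, so
        u_{k+1} = u_k + sgn_k * pi*alpha/omega."""
    step = np.pi * alpha / omega
    u, path = u0, []
    for s in signs:
        u += step if s == '+' else -step
        path.append(u)
    return path

print(decode('++-+--'))  # -> [pi, 2*pi, pi, 2*pi, pi, 0]
```

Conversely, a recorded trajectory of (11) compresses losslessly into its string: one sign per critical point, which is exactly the "code" controlling the unpredictable system.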

Let us consider now the procedure of encoding a prescribed motion ū(t) by exploiting the system (11). This means that we would like to find a string which would force the solution of (11) to trace the motion ū(t). For this purpose let us modify (11) as follows:

\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t + \varepsilon_0 (\bar{u} - u) , \quad 0 < \varepsilon_0 \ll 1 . (30)

Suppose that at the beginning

u < \bar{u} . (31)

Then

\operatorname{sgn} [\varepsilon_0 (\bar{u} - u)] = + , (32)

i.e., u(t) will increase until

u > \bar{u} . (33)

Fig. 5. Temporal patterns and their codes


After that u(t) will decrease, thereby tracing the motion ū(t).
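The encoding mechanism (30)-(33) can be simulated directly. The sketch below uses an Euler scheme, a linear ramp as the prescribed motion ū(t), and illustrative parameter values; none of these come from the paper. The weak bias ε₀(ū - u) is negligible between the critical points, but it selects the branch at each of them, so u(t) staircases after ū(t).

```python
import numpy as np

def track(gamma=3.0, omega=1.0, alpha=1.0, eps0=0.05,
          t_end=60.0, dt=1e-3, ubar=lambda t: 0.2 * t):
    """Euler sketch of the encoding equation (30):
        u' = gamma*sin^{1/3}(omega*u/alpha)*sin(omega*t) + eps0*(ubar(t) - u).
    At each terminal repeller the sign of eps0*(ubar - u) decides the branch,
    so u advances in steps of pi*alpha/omega after the prescribed motion."""
    u = 0.0
    for t in np.arange(0.0, t_end, dt):
        du = (gamma * np.cbrt(np.sin(omega * u / alpha)) * np.sin(omega * t)
              + eps0 * (ubar(t) - u))
        u += dt * du
    return u, ubar(t_end)

u, target = track()
print(u, target)  # u stays within about one step, pi, of the ramp target
```

Because u can only move in quanta of πα/ω, the tracking error oscillates within roughly one step of the target; that finite resolution is exactly the "prescribed accuracy" of the finite-dimensional representation discussed below.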

Let us now modify (11) as follows (Zak 1990c):

\dot{u}_i = \gamma \sin^{1/3} \frac{\omega u_i}{\alpha_i} \sin \omega t + \varepsilon_0 \sum_{j=1}^{n} T_{ij} V(u_j) . (34)

Suppose that this dynamical system should reproduce some prescribed behavior:

u_i = \bar{u}_i(t) , \quad i = 1, 2, \ldots, n . (35)

Then the synaptic interconnections T_ij must satisfy the following conditions:

\operatorname{sgn} \sum_{j=1}^{n} T_{ij} V\!\left( u_j\!\left( \frac{\pi k}{\omega} \right) \right) = \operatorname{sgn} \left[ \bar{u}_i\!\left( \frac{\pi k}{\omega} \right) - u_i\!\left( \frac{\pi k}{\omega} \right) \right] , (36)

i = 1, 2, \ldots, n ; \quad k = 1, 2, \ldots, m .

Now one can introduce the energies

E_k = \sum_{i=1}^{n} \left\{ \operatorname{sgn} \sum_{j=1}^{n} T_{ij} V\!\left( u_j\!\left( \frac{\pi k}{\omega} \right) \right) - \operatorname{sgn} \left[ \bar{u}_i\!\left( \frac{\pi k}{\omega} \right) - u_i\!\left( \frac{\pi k}{\omega} \right) \right] \right\}^2 , (37)

E = \sum_{k=1}^{m} E_k , (38)

and the learning dynamics for the sought synaptic interconnections T_ij:

\dot{T}_{ij} = -\frac{\partial E}{\partial T_{ij}} , \quad i, j = 1, 2, \ldots, n . (39)

Hence, the synaptic interconnections T_ij, defined as point attractors of (39), memorize the functions (in the form of a finite-dimensional approximation through the sign strings (36)) which minimize a functional of the type (24).
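The learning dynamics (36)-(39) can be sketched numerically if the non-differentiable sgn in the energy is smoothed: the toy below replaces sgn(·) by tanh(β·) so that the gradient flow Ṫ_ij = -∂E/∂T_ij becomes computable. The matrix V of neuron outputs at the critical points, the generating matrix used to make the target sign string realizable, and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Smoothed sketch of the sign-string learning (36)-(39): the energy compares
# tanh(beta*z) against a prescribed sign string D, and T follows Euler steps
# of the gradient flow T' = -dE/dT until the learned signs match D.

beta, lr = 3.0, 0.05
V = np.array([[0.9, 0.1],      # row k: neuron outputs V(u_j(pi*k/omega))
              [-0.2, 0.8]])
T_true = np.array([[1.0, -0.5],
                   [0.3, 0.7]])
D = np.sign(T_true @ V.T)      # target sign string, realizable by construction

T = np.zeros((2, 2))
for _ in range(3000):
    Z = T @ V.T                # z_ik = sum_j T_ij * V(u_j(t_k))
    S = np.tanh(beta * Z)      # smooth surrogate for sgn
    grad = 2.0 * ((S - D) * beta * (1.0 - S ** 2)) @ V   # dE/dT
    T -= lr * grad

print(np.sign(T @ V.T))        # learned sign string
print(D)                       # prescribed sign string
```

Once the learned signs agree with D, condition (36) holds and the energy can only keep shrinking, which is the smoothed analogue of T_ij settling into a point attractor of (39).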

Thus, it has been demonstrated that a set of the sign strings (29) uniquely defines a motion of some dynamical system. But is the inverse problem unique? Obviously not. Indeed, one can easily find an infinite number of different curves ū(t) having the same values as u(t) at the critical points. Hence all of them have the same finite-dimensional representation u(t). In other words, the motion u(t) performed by the dynamical system (11) and the string (29) can be interpreted as a typical representative of the family of temporal patterns ū(t), in the same way in which a static attractor represents a family of static patterns located within its basin of attraction. The boundaries of the "basin" of the "temporal" attractor u(t) can be adjusted by the frequency ω. Actually, the non-uniqueness of the inverse problem demonstrated above has the same nature as the non-unique correspondence between the finite-dimensional representation of a continuum and the continuum itself. However, it can be exploited for temporal pattern recognition, signal processing, etc. in the same

way the non-unique correspondence between a static attractor and the patterns within its basin of attraction is utilized for static memories and static pattern recognition (Fig. 6).

Fig. 6. Temporal patterns and their "attractor"

4 Structured microworld

In the previous sections we have considered the environment as a source of infinitesimal random excitations ε_i(t) and η_ij(t) of unpredictable dynamical systems. Let us now take a deeper look at the environment and try to distinguish its "hidden" microstructure, whose scale is much smaller than the scale of the original dynamical system. The simplest ordered microdynamics can be represented by a periodic attractor (see (29)). Then (11) can be modified as follows:

\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t + \varepsilon_0 \operatorname{sgn} \sin \Omega t , \quad \varepsilon_0 \to 0 . (40)

The periodic motion ε₀ sin Ωt can be generated by a periodic attractor of the following dynamical system:

\dot{\omega}_1 = -\Omega \omega_2 + \omega_1 (\varepsilon_0^2 - \omega_1^2 - \omega_2^2) , (41)
\dot{\omega}_2 = \Omega \omega_1 + \omega_2 (\varepsilon_0^2 - \omega_1^2 - \omega_2^2) ,

where ω₁ = ε₀ cos Ωt, ω₂ = ε₀ sin Ωt on the limit cycle.

Obviously, the dynamical systems (40) and (41)

have different scales since by definition

\varepsilon_0 \ll 1 . (42)

Nevertheless, the microscale dynamical system (41) fully controls the behavior of the original dynamical system (40) via the sign of its last term, i.e., sgn sin Ωt. Indeed, the discrete critical values of u can be found as

u_k = \frac{\pi \alpha}{\omega} \sum_{m=1}^{k} \operatorname{sgn} \sin \left( \pi m \frac{\Omega}{\omega} \right) , (43)

and consequently, varying the ratio Ω/ω one can obtain different patterns of dynamical behavior. For instance,



three different behaviors in Fig. 4 can be reproduced by the limit cycle (41) with three different values of the ratio Ω/ω. Further details of this architecture can be found in the paper by Zak (1990c).
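The control of the macroscale pattern by the ratio Ω/ω can be sketched as follows. This is one reading of (43): the sign of the micro-rhythm is sampled at the critical instants t_k = πk/ω and accumulated into the sequence of visited equilibrium points; the two irrational ratios used are illustrative choices, not the paper's values.

```python
import numpy as np

def microworld_path(ratio, n=20, alpha=1.0, omega=1.0):
    """Sequence of critical values of u under one reading of (43):
        u_k = (pi*alpha/omega) * sum_{m=1..k} sgn(sin(pi*m*ratio)),
    where ratio = Omega/omega. The microscale rhythm sgn(sin(Omega*t)),
    sampled at t_k = pi*k/omega, deterministically steers the walk."""
    signs = np.sign(np.sin(np.pi * ratio * np.arange(1, n + 1)))
    return (np.pi * alpha / omega) * np.cumsum(signs)

p2 = microworld_path(np.sqrt(2.0))
p3 = microworld_path(np.sqrt(3.0))
print(p2[:5] / np.pi)   # path for Omega/omega = sqrt(2)
print(p3[:5] / np.pi)   # a different path for Omega/omega = sqrt(3)
```

Two incommensurate ratios generate visibly different sign sequences, and hence different macroscale trajectories, even though both micro-oscillations are vanishingly small in amplitude.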

5 Invariant pattern representations

In this section we will introduce a new approach to pattern representation using the new neural net architecture. The traditional approach is based upon decomposition of a pattern, as an ordered set of physical parameters, into features represented by some functionals over selected subsets of the pattern elements. The features are usually chosen heuristically, based upon an insight into the problem. The resulting set of feature values for a pattern is then defined as the pattern vector. Obviously this is a symbolic way of pattern representation: the pattern vector "does not look" like the original pattern.

A more "natural" pattern representation can be performed by the new architecture discussed above. Let us start with the following system:

\dot{u}_1 = \gamma \sin^{1/3} \frac{\omega u_1}{\alpha} \sin \omega t + \varepsilon_1 , \quad |\varepsilon_1| \to 0 , (44)

\dot{u}_2 = \gamma \sin^{1/3} \frac{\omega u_2}{\alpha} \sin \omega t + \varepsilon_2 , \quad |\varepsilon_2| \to 0 ,

in which ε₁ and ε₂ are sign strings of the type (29). This dynamical system performs a finite-dimensional representation of a curve:

u_2 = \Phi(u_1) , (45)

but since (45) does not contain any variable parameters, this representation is invariant with respect to translations, rotations and affine transforms. This means that (44) represents a broad class of curves with the same topological properties. Such a pattern can result from a generalization procedure.

Let us turn now to the opposite of generalization, i.e., to a specification procedure, and introduce variable metric coefficients T₁₁ and T₂₂:

\dot{u}_1 = \sin^{1/3} (T_{11} u_1 + T_{12} u_2) \sin \omega t + \varepsilon_1 ,

\dot{u}_2 = \sin^{1/3} (T_{21} u_1 + T_{22} u_2) \sin \omega t + \varepsilon_2 ,

\dot{T}_{ij} = \sin^{1/3} (T_{ii} u_i + T_{ij} u_j) \sin \omega t + \varepsilon_{ij} , \quad i, j = 1, 2 , (46)

in which ε_ij are sign strings of the type (29). In contrast to the strings ε₁ and ε₂, which control the topology, the strings ε_ij control the metrics of the curve (45) through the appropriate changes of the synaptic interconnections T_ij. This means that (46) represents the same subclass of curves, but with prescribed metrics.

Let us introduce now the following system:

\dot{u}_i = \sin^{1/3}\Bigl(\sum_{j=1}^{n} T_{ij}u_j\Bigr)\sin\omega t + \varepsilon_i,

\dot{T}_{ij} = \sin^{1/3}\Bigl(\sum_{j=1}^{n} T_{ij}u_j\Bigr)\sin\omega t + \varepsilon_{ij}.   (47)

One can interpret this system as a collection of curves:

u_{i+1} = \Phi(u_i),   (48)

whose topologies and metrics are prescribed by the strings \varepsilon_i, \varepsilon_{i+1} and \varepsilon_{i,j}, \varepsilon_{i+1,j}, \varepsilon_{i,j+1}, \varepsilon_{i+1,j+1}, respectively. But in addition to that, the rest of the strings, i.e. \varepsilon_{j,j+2}, \varepsilon_{j+2,j}, etc., control the metrical correlations between different curves in (48), which turns a random collection of curves into a "picture".

All the strings \varepsilon_i and \varepsilon_{ij} can be represented in the form of weakly coupled neural nets (Zak 1990b), for instance (see (34)):

\varepsilon_i = \delta^2 \sum_{j=1}^{n} \tilde{T}_{ij} u_j, \quad \delta^2 \to 0,   (49)

where \tilde{T}_{ij} are weak synaptic interconnections which can be learned using the procedure described by (37)-(39).
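To make (49) concrete, here is a small sketch (all names are hypothetical) of how a weakly coupled net reads out a sign string: the magnitude \delta^2 \sum_j \tilde{T}_{ij} u_j is infinitesimal, but only its sign matters to the unpredictable system it steers.

```python
def sign_string_from_weak_net(Ttilde, u_samples, delta=1e-3):
    """Sign-string readout in the spirit of (49): for each activation sample u,
    compute the infinitesimal quantity delta**2 * sum_j Ttilde[i][j]*u[j] and
    keep only its sign, which selects the branch at each critical point."""
    signs = []
    for u in u_samples:
        e = [delta ** 2 * sum(row[j] * u[j] for j in range(len(u)))
             for row in Ttilde]
        signs.append([1 if x >= 0 else -1 for x in e])
    return signs
```

Note that rescaling delta changes the magnitudes of the outputs of the weak net but never the resulting sign string, which is exactly why the coupling can be made arbitrarily weak.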

Analogously,

\varepsilon_{ij} = \delta^2 \sum_{p=1}^{n} \sum_{q=1}^{n} \tilde{T}_{ijpq} T_{pq}.   (50)

Let us assume now that only the \varepsilon_i strings are prescribed, while the \varepsilon_{ij} are represented by an infinitesimal noise. Then the system (47) will "create" different pictures consisting of randomly combined curves (48) with the prescribed topology. It is important to emphasize that all these pictures with randomly distorted metrical relationships will still belong to the class (44), although none of them has been previously learned. In other words, the dynamical system (47) can "create" new pictures which belong to a certain class.
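This "creation within a class" can be caricatured in a few lines: the prescribed sign string fixes the topology (which equilibrium the trajectory visits next, each step moving by roughly \pi), while infinitesimal noise on the metric strings randomizes the distances between visits. The function name and the metric_noise parameter are illustrative assumptions, not from the paper.

```python
import math
import random

def create_curves(topology_signs, n_curves, metric_noise=0.1, seed=0):
    """Generate n_curves curves sharing one topology (the sign string) but with
    randomly distorted metrics: every step moves by +/- pi, scaled by a random
    factor near 1. Each curve is 'new', yet all belong to the same class."""
    rng = random.Random(seed)
    curves = []
    for _ in range(n_curves):
        u, curve = 0.0, [0.0]
        for s in topology_signs:
            u += s * math.pi * (1.0 + rng.uniform(-metric_noise, metric_noise))
            curve.append(u)
        curves.append(curve)
    return curves
```

With the sign string [1, 1, -1], every generated curve rises twice and then falls, whatever the random metric distortions; the class membership is guaranteed by the topology, the novelty by the noise.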

6 Summary and discussion

The thrust of this paper is to introduce and discuss a substantially new type of dynamical system for modelling biological behavior. The approach was motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks - their rigid behavior compared with even the simplest biological systems. This approach exploits a novel paradigm in nonlinear dynamics based upon the concept of terminal attractors and repellers. Incorporation of these new types of equilibrium points into dynamical systems required a revision of some fundamental concepts in the theory of differential equations associated with the failure of the Lipschitz condition, such as uniqueness of solutions, infinite time of approaching or escaping the attractors, bounded Lyapunov exponents, etc. In the course of this revision it was demonstrated that non-Lipschitzian dynamics based upon the failure of the Lipschitz condition exhibits a new qualitative effect - a multi-choice response to periodic external excitations. Based upon this property, a substantially new class of dynamical systems - the unpredictable systems - was introduced and analyzed. These systems are represented in the form of coupled activation and learning dynamical equations whose ability to be spontaneously activated is based upon two pathological characteristics.

Firstly, such systems have zero Jacobian. As a result of that, they have an infinite number of equilibrium points which occupy curves, surfaces or hypersurfaces. Secondly, at all these equilibrium points, the Lipschitz condition fails, so the equilibrium points become terminal attractors or repellers depending upon the sign of the periodic excitation. Both of these pathological characteristics result in the multi-choice response of unpredictable dynamical systems.

It has been shown that the unpredictable systems can be controlled by sign strings which uniquely define the system behaviors by specifying the direction of the motions at the critical points. By changing the combinations of signs in the code strings the system can reproduce any prescribed behavior to a prescribed accuracy. That is why the unpredictable systems driven by sign strings are extremely flexible and are highly adaptable to environmental changes. It was also shown that such systems can serve as a powerful tool for temporal pattern memories and complex pattern recognition.

Two different types of microdynamical systems generating the code strings for the unpredictable systems were discussed. The first type is a hierarchy of master-slave dynamical systems of different scales, in which each of the microdynamical systems is represented by a combination of periodic attractors. As a result of the dynamical learning, the parameters of these attractors acquire values providing a prescribed code string, and therefore, a prescribed behavior of the unpredictable system. The second type of microdynamical device generating a prescribed code string is represented by weakly coupled dynamical systems. In this neural network architecture the interconnections between the neurons are performed on the level of microdynamics: these interconnections play the role of parameters generating a prescribed code string. The required values of these interconnections are found by means of the dynamical learning.

Thus, it has been demonstrated that the new architecture of neural networks based upon non-Lipschitzian dynamics can be utilized for modelling more complex patterns of behavior which can be associated with phenomenological models of creativity and neural intelligence.

Acknowledgement. This research was carried out at the Center for Space Microelectronic Technology, Jet Propulsion Laboratory, California Institute of Technology. Support for the work came from Agencies of the U.S. Department of Defense, including the Innovative Science and Technology Office of the Strategic Defense Initiative Organization, and the Department of Energy, through an agreement with the National Aeronautics and Space Administration.

References

Anninos PA, Beek B, Csermely TJ, Harth E, Pertile G (1970) Dynamics of neural structures. J Theor Biol 26:121-148

Basar E (1980) EEG-brain dynamics. Elsevier, Amsterdam

Daan S, Berde C (1978) Two coupled oscillators: simulations of the circadian pacemaker in mammalian activity rhythms. J Theor Biol 70:297-313

Harth E, Csermely TJ, Beek B, Lindsay RD (1970) Brain functions and neural dynamics. J Theor Biol 26:93-120

Nicolis JS (1985) Chaotic dynamics of information processing with relevance to cognitive brain functions. Kybernetes 14:167-172

Schöner G, Kelso JAS (1988) A synergetic theory of environmentally-specified and learned patterns of movement coordination. Biol Cybern 58:81-89

Zak M (1987) Deterministic representation of chaos with application to turbulence. Math Model 9:599-612

Zak M (1988) Terminal attractors for associative memory in neural networks. Phys Lett A 133:18-22

Zak M (1989a) Non-Lipschitzian dynamics for neural net modelling. Appl Math Lett 2:69-74

Zak M (1989b) Terminal attractors in neural networks. Neural Networks 2(3)

Zak M (1990a) Spontaneously activated systems in neurodynamics. Complex Syst 3

Zak M (1990b) Weakly connected neural nets. Appl Math Lett 3(3)

Zak M (1990c) Creative dynamics approach to neural intelligence. Proceedings of the Fourth Annual Parallel Processing Symposium, April 4-6, 1990, pp 262-288

Zak M, Barhen J (1989) Neural networks with creative dynamics. 7th Int Conference on Math and Computer Modelling, Aug 2-5, 1989, Chicago, Ill

Dr. Michail Zak
Center for Microelectronics Technology
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109, USA