
[IEEE 2007 IEEE International Workshop on Haptic, Audio and Visual Environments and Games - Ottawa, ON, Canada (2007.10.12-2007.10.14)] 2007 IEEE International Workshop on Haptic,


HAVE 2007 - IEEE International Workshop on Haptic Audio Visual Environments and their Applications
Ottawa, Canada, 12-14 October 2007

Entertainment Oriented Intelligent Virtual Environment with Agent and Neural Networks

Li Jia, Miao Zhenjiang
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, P. R. China
E-mail: 06120386A@bjtu.edu.cn

Abstract - Intelligent Virtual Environment (IVE) combines Virtual Reality and Artificial Intelligence (AI): it incorporates AI technology into the virtual environment, which makes the virtual environment more interactive and credible. Using the IVE concept, we design and implement an entertainment IVE application platform with intelligent agents and neural networks. The humanlike intelligent objects in the virtual environment are implemented with intelligent agents, which are endowed with intelligence for self-learning and for collaborating with each other by applying BP and ESP neural networks to the agent controllers.

Keywords - Intelligent Virtual Environment, agent, neural network, Backpropagation, neuroevolution, Neuroevolution with Enforced Sub-Populations

I. INTRODUCTION

Virtual Reality (VR) is an interactive, computer-generated simulated environment that engages multiple human senses, such as the visual, auditory, and haptic senses. Users can immerse themselves in the virtual environment through interactive devices and interact with it.

Environments that make use of VR techniques are referred to as Virtual Environments (VEs). Nowadays, the use of Artificial Intelligence (AI) in VEs has been widely explored. A VE that employs AI technologies is called an Intelligent Virtual Environment (IVE), and it attracts great interest and attention.

An IVE is the integration of virtual reality and artificial intelligence. It endows the virtual environment with intelligence, for example by implementing humanlike intelligent objects as intelligent agents. The virtual environment therefore becomes more interactive and credible, and its contents are enriched.

IVE draws attention from many fields. Its applications include the military, engineering training, entertainment, commerce, education, etc. Its entertainment application in particular has great development potential [1]. Many efforts have been made in the field of IVE: a team of researchers at the University of California at Los Angeles developed a PC-based software system that supports an active, responsive, networked virtual world [2]; in this virtual world, they use a newly developed scripting language to design behaviors and relationships between characters and objects. Johnson at the University of Southern California implemented a system for developing virtual environments in which pedagogical capabilities are incorporated into autonomous agents that interact with trainees and with simulated objects in the environment [3]. Cavazza at the University of Bradford in Britain discusses the framework of IVE from the perspective of the knowledge representation layer [4]. Vosinakis at the University of Piraeus in Greece provides an architecture for implementing systems that integrate Logic Programming, Object-Oriented Programming and VR [5]; this architecture supports the participation of many intelligent agents in a shared world. Additional efforts have been made in the field of multi-agent virtual environments [6][7].

In this paper, we provide an entertainment IVE platform. Our platform can be considered a multi-agent system composed of homogeneous agents. These agents are endowed with intelligence for self-learning and collaboration by BP and ESP neural networks. Enforced Sub-Populations (ESP), one of the neuroevolution algorithms, has been proved powerful for the task-assignment problem arising during the collaboration of a team of homogeneous agents [15].

In Section 2, the intelligent agents are introduced. The BP and ESP neural networks used in our IVE are presented in Section 3. In Section 4, we describe our entertainment IVE application platform and its design considerations. Section 5 presents some operating results of the platform. Finally, we give our conclusions.

II. INTELLIGENT AGENTS FOR IVE

An IVE implements humanlike intelligent objects with intelligent agents. The agent is an important concept in AI; with the development of AI technologies, the research and application of agents has drawn wide attention. At present, agent applications span many fields, such as personal digital assistants, network management, long-distance conferencing, CAD, and medical consultation.

The behavior of an agent with independent intelligence should markedly reflect the characteristics of that intelligence. It is generally considered that, in a given circumstance, an agent should possess the following characteristics [11]:

(1) Autonomy: agents are able to perform certain behaviors without direct intervention or supervision from humans or other agents, according to their own intentions, beliefs or habits.
(2) Reactivity: agents not only perceive and influence the environment, but also react to changes in it.
(3) Sociality: agents should be able to communicate with other agents by information transmission, and should have strong collaboration ability.
(4) Independence: agents execute their missions independently, without needing intervention from humans or other agents.
(5) Evolution: during interaction, agents should be able to adapt themselves to the environment, self-learn, and self-evolve step by step.
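The five characteristics above amount to an interface contract for any agent implementation. The following is an illustrative sketch only; the class and method names are our own, not the paper's:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent contract mirroring the five characteristics:
    autonomy/independence (act), reactivity (perceive -> act),
    sociality (communicate), evolution (learn)."""

    @abstractmethod
    def perceive(self, environment):
        """Reactivity: observe the current state of the environment."""

    @abstractmethod
    def act(self, observation):
        """Autonomy/independence: choose a behavior without outside control."""

    @abstractmethod
    def communicate(self, message, other):
        """Sociality: exchange information with another agent."""

    @abstractmethod
    def learn(self, feedback):
        """Evolution: adapt the internal model from interaction."""
```

A concrete agent (such as the monsters described later) would then fill in these four methods; the abstract base class only fixes the contract.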

978-1-4244-1571-7/07/$25.00 ©2007 IEEE


In this paper, we stress the sociality and evolution characteristics, namely, the collaboration and self-learning ability of the agents.

There are many approaches that can endow agents with intelligence for self-learning and collaboration. In our IVE we resort to artificial neural networks (ANNs), which are a good implementation choice in general for several reasons: they are a universal computing architecture, they are fast, they generalize and tolerate noisy inputs, and they are robust in the face of damage or incomplete inputs. There are many ANN learning algorithms, such as evolutionary algorithms (EA) and Backpropagation (BP). Neuroevolution is a method that combines neural networks with evolutionary algorithms; in the variant used here, single neurons are subject to evolution rather than complete networks. Enforced Sub-Populations (ESP) is one such neuroevolution algorithm, and it has been shown to work well for learning control tasks.

In this paper, we apply the BP and ESP algorithms to the agent controllers to endow the agents with the intelligence of self-learning and collaboration respectively.

A. Backpropagation (BP) Neural Network

A BP network is a feed-forward network comprising an input layer, one or more hidden layers and an output layer. Figure 1 shows the structure of a commonly used three-layer BP network with a Sigmoid-type hidden layer. If the numbers of neurons in the input layer, hidden layer and output layer are respectively $n$, $q$ and $m$, then the network can realize the non-linear mapping from the $n$-dimensional input vector $x = (x_1, \dots, x_n)^T$ to the $m$-dimensional output vector $y = (y_1, \dots, y_m)^T$.

[Figure 1: three-layer neural network structure]

The input vector first propagates to the hidden layer; through the activation function $g_1$ we get the output of the hidden layer:

$$H_i = g_1\Big(\sum_{j=1}^{n} w^1_{ij} x_j - \theta^1_i\Big), \quad i = 1, \dots, q \qquad (1)$$

Here $W^1$ is the connection strength matrix between the input layer and the hidden layer, and $\theta^1$ is the activation threshold vector of the hidden layer. $g_1(u)$ is the activation function of the hidden layer, which generally is an S-type function, namely

$$g_1(u) = \frac{1}{1 + \exp(-u)} \qquad (2)$$

Then the output of the hidden layer propagates to the output layer, and the final output comes out:

$$y_k = g_2\Big(\sum_{i=1}^{q} w^2_{ki} H_i - \theta^2_k\Big), \quad k = 1, \dots, m \qquad (3)$$

where $W^2$ is the connection strength matrix between the hidden layer and the output layer, and $\theta^2$ is the activation threshold vector of the output layer. The activation function of the output layer, $g_2$, can be a linear function or an S-type function, depending on the specific problem.

The differences between the actual outputs and the desired outputs are propagated backward, and the update formula for the connection strengths in the basic BP algorithm is:

$$\Delta w(t+1) = -\eta \frac{\partial E}{\partial w} \qquad (4)$$

where $\eta$ is the learning rate and the error function is $E = \frac{1}{2}\sum_k (d_k - y_k)^2$, with $d_k$ and $y_k$ respectively the desired output and the actual output of the network.

In practical applications, however, the prototype BP algorithm has some problems: the learning speed of the BP network is slow, plateau regions exist on the error surface, and the algorithm tends to get trapped in local minima. An often-used improvement is to add a momentum term [13]; the modified update formula for the connection strengths is:

$$\Delta w(t+1) = -\eta \nabla E + \alpha \Delta w(t) \qquad (5)$$

where $\alpha \in (0, 1)$ is the momentum factor and $\nabla E = \frac{\partial E}{\partial w}$. Adding the momentum term makes the change of the weight vector reflect not only the local gradient information but also the current trend of the error surface; thus it can avoid the local-minimum problem to a certain extent and gives better convergence speed.

B. Neuroevolution with Enforced Sub-Populations (ESP)

Enforced Sub-Populations (ESP) is a direct-encoding neuroevolutionary algorithm, i.e. it specifies a network's weights directly in the chromosomes [15]. In ESP the chromosomes encode the weights of individual neurons rather than of complete networks, and a separate sub-population is maintained for each neuron in the network. The sub-populations are kept separate during breeding.
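Equations (1)-(5) fit in a short script. The following is our own minimal NumPy illustration of a momentum-based BP update, not the authors' code; for brevity it trains only the output-layer weights, and the layer sizes n = 6, q = 4, m = 2 are borrowed from the collaboration network described later.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))                  # eq. (2)

n, q, m = 6, 4, 2                                     # layer sizes
rng = np.random.default_rng(0)
W1, th1 = rng.normal(size=(q, n)), np.zeros(q)        # input -> hidden
W2, th2 = rng.normal(size=(m, q)), np.zeros(m)        # hidden -> output

def forward(x):
    H = sigmoid(W1 @ x - th1)                         # eq. (1)
    y = sigmoid(W2 @ H - th2)                         # eq. (3), S-type output
    return H, y

def bp_step(x, d, vel, eta=0.1, alpha=0.8):
    """One update of the output-layer weights with momentum, eqs. (4)-(5).
    vel holds the previous weight change (the momentum item); a full BP
    step would also back-propagate the error to W1."""
    global W2
    H, y = forward(x)
    E = 0.5 * np.sum((d - y) ** 2)                    # error function E
    delta = (y - d) * y * (1.0 - y)                   # dE/d(net) for sigmoid
    grad = np.outer(delta, H)                         # dE/dW2
    vel = -eta * grad + alpha * vel                   # eq. (5)
    W2 = W2 + vel
    return E, vel

# fit one sample: the error shrinks as the weights are updated
vel = np.zeros((m, q))
x, d = rng.normal(size=n), np.array([0.3, 0.7])
E_first, vel = bp_step(x, d, vel)
for _ in range(500):
    E_last, vel = bp_step(x, d, vel)
```

With the momentum factor at 0.8, the effective step size is roughly eta/(1 - alpha); this is exactly the plateau-crossing effect the momentum term is introduced for.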



Neural networks are assembled by drawing one neuronal chromosome at random from each sub-population. The assembled network is tested by evaluating its performance on the target task; finally, its evolutionary fitness is ascribed back to each neuron that participated in the evaluation. This process is repeated during a generation until all the neurons have been evaluated. As generations pass, the sub-populations co-evolve to produce neurons that work well with the others in a network.

In our IVE platform, we apply a fully-connected feed-forward network with a single hidden layer to the agent controllers for the collaboration intelligence of the agents (figure 2).

In ESP a separate breeding population is maintained for each neuron in a network. Our chromosomes represent both the input and the output weights of the neurons in the hidden layer. To represent the collaboration network we used four sub-populations of chromosomes, one for each neuron in the hidden layer. We used flat arrays of floating-point numbers for the chromosomes, each representing the concatenation of a single neuron's input and output weights. The chromosome of neuron $j$ in the hidden layer is shown in figure 3.

[Figure 2: Collaboration network. The values obtained by the agents are propagated through an artificial neural network to task the agents based on the network's output. $w^1_{ij}$ represents the weight of the link from input neuron $i$ to hidden neuron $j$, and $w^2_{jk}$ is the weight of the link from hidden neuron $j$ to output neuron $k$.]

[Figure 3: chromosome of hidden neuron $j$, $j = 1, 2, 3, 4$: $(w^1_{1j}, w^1_{2j}, w^1_{3j}, w^1_{4j}, w^1_{5j}, w^1_{6j}, w^2_{j1}, w^2_{j2}, \theta_j)$.]

IV. THE DESIGN AND IMPLEMENTATION OF OUR ENTERTAINMENT IVE

Paladin is a simple real-time entertainment platform based on the IVE concept. The entertainment environment requires the Paladin, maneuvered by the user, to barge into the dungeon alone, where the monsters assemble, and to recapture the crown, whilst avoiding the attacks of the monsters tailing after him.

The scene of the entertainment platform can be considered a virtual environment. The map of the environment is divided into cells by tiling the plane with squares. Each cell has a terrain property, which can be wall, road, block or tree. Neither the paladin nor the monsters can enter any cell designated as wall, tree or block.

Each character in the environment has a move-speed property. When caught by any monster, the paladin loses one of his lives and his move speed decreases because of the injury; the decrement of his move speed is computed from the hit power of the attacking monster. However, the paladin is able to eliminate monsters by firing at them with a gun.

Guns, apples and fires are items that can be picked up or consumed by characters; they are initially positioned at several selected road cells. A character that consumes an apple gains an extra life and increases its move speed; conversely, a character that carelessly walks through a fire loses one of its lives and decreases its move speed. Once the paladin gets the crown, the door to the next level opens, and the mission is accomplished when the paladin goes through the exit door.

Each monster in the environment is an intelligent agent. The monsters chase the paladin with a path-finding method. In order to endow the agents with intelligence for learning the paladin's move pattern and for collaborating with each other, we applied ANNs to their controllers.

A. Application of BP Neural Network in our IVE platform

In order to endow the agents with intelligence for learning Paladin's move pattern, we applied a BP neural network to their controllers. Thus, the agents are able to forecast Paladin's position and adapt their moves, on top of their pre-programmed behaviors.

We build a neural network that maps the time series of Paladin's positions to a forecast position and control the monsters with it. The neural network is a fully-connected feed-forward BP network with a single Sigmoid-type hidden layer; there are two neurons in its output layer, representing respectively the horizontal and vertical coordinates of the forecast position. In order to overcome the local-minimum and convergence-speed problems of the basic BP algorithm, we adopt one of its improved measures, the addition of a momentum term.

The network is trained online. After each of Paladin's moves, the network is trained with Paladin's current position as the ideal output. Having obtained a forecast position from the neural network, the agent judges the rationality of that forecast position before its next move: if Paladin cannot enter the forecast position according to the game rules, or if the direction in which Paladin is moving does not point toward the forecast position at all, the agent considers the forecast position illogical and moves toward Paladin's current position; otherwise, it moves toward the forecast position.
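The tiled map and its terrain rules can be captured in a few lines. This is an illustrative sketch: the enum values mirror the four terrain properties from the text, while the class and field names are our own assumptions.

```python
from enum import Enum

class Terrain(Enum):
    ROAD = 0
    WALL = 1
    BLOCK = 2
    TREE = 3

PASSABLE = {Terrain.ROAD}       # wall, block and tree are impassable

class GameMap:
    """Tile map: each square cell carries one terrain property."""

    def __init__(self, cells):
        self.cells = cells       # dict mapping (x, y) -> Terrain

    def can_enter(self, cell):
        """Neither the paladin nor a monster may enter wall/tree/block;
        cells outside the map are treated as wall."""
        return self.cells.get(cell, Terrain.WALL) in PASSABLE
```

Both the monsters' path-finding and the forecast-rationality check described above would query `can_enter` to respect the terrain rules.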

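The rationality check applied to the forecast position can be sketched as a small decision routine. The helpers `can_enter` and `moving_toward` (interpreted here as sign-agreement of Paladin's heading with the direction to the forecast, on both axes) are our own illustrative assumptions, not the paper's implementation:

```python
def moving_toward(paladin_pos, paladin_dir, target):
    """True if Paladin's current move direction points toward target:
    on each axis the heading either agrees in sign with the offset,
    is zero, or the offset is already zero (our assumed semantics)."""
    dx, dy = target[0] - paladin_pos[0], target[1] - paladin_pos[1]
    ok_x = paladin_dir[0] == 0 or dx == 0 or dx * paladin_dir[0] > 0
    ok_y = paladin_dir[1] == 0 or dy == 0 or dy * paladin_dir[1] > 0
    return ok_x and ok_y

def choose_target(forecast, paladin_pos, paladin_dir, can_enter):
    """Per-move decision from Section IV.A: chase the forecast position
    only if it is legal and consistent with Paladin's heading; otherwise
    fall back to Paladin's current position."""
    if not can_enter(forecast):                       # violates game rules
        return paladin_pos
    if not moving_toward(paladin_pos, paladin_dir, forecast):
        return paladin_pos                            # forecast illogical
    return forecast
```

The monster then path-finds toward whatever cell `choose_target` returns, so a bad forecast degrades gracefully to plain pursuit.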


B. Application of ESP in our IVE platform

For the collaboration of the agents in our IVE, we turn to ESP to solve the problem of tasking the agents. We use a fully-connected feed-forward network with a single S-type hidden layer; there are 6 neurons in the input layer, 4 neurons in the hidden layer and 2 neurons in the output layer. Each neuron in the hidden layer therefore has 8 connection weights and 1 threshold.

The evolutionary fitness of the collaboration network is evaluated in a separate testing virtual environment (figure 4). In this testing VE, there is only one path, with two exits. The length of the path is initialized with a random integer value between 3 and 9. The initial position of Paladin is chosen randomly among the cells of the path, and the positions of the two agents are initialized randomly in cells outside the path.

[Figure 4: the testing virtual environment]

If the two exits of the path are Exit0 and Exit1 respectively, the inputs of the collaboration network are:

Distance00 (Distance01): the length of the shortest path from Paladin to Exit0 (Exit1).
Distance10 (Distance11): the length of the shortest path from Monster1 to Exit0 (Exit1).
Distance20 (Distance21): the length of the shortest path from Monster2 to Exit0 (Exit1).

The outputs of the network represent the tasks that the agents should get. The structure of the collaboration network is shown in figure 2.

The weights of the network are tuned with ESP, so we need to model the movement of the paladin in order to repeat the fitness evaluation. When besieged by monsters, most users choose to evade them; therefore, our model of the paladin's movement only takes into account the distances between the paladin and the monsters. The evading strategy of the paladin is to move in the direction that makes the sum of his distances to the two monsters larger.

If the paladin flees from the red area in figure 4 successfully, we consider the besiegement a failure; if the paladin is caught by the monsters, the besiegement is successful. When the besiegement fails, what we care about is how close the monsters got to the paladin.

We use four sub-populations to represent the collaboration network, one for each neuron in the hidden layer. The size of each sub-population is 40. Each chromosome consists of 9 genes, which represent the 6 input weights, the 2 output weights and the 1 threshold of a single neuron in the hidden layer.

Each network was evaluated 20 times in the testing virtual environment, and we take the average fitness over these 20 evaluations as the evolutionary fitness of the evaluated network. The fitness of a single evaluation is:

$$f = \begin{cases} 1, & \text{besiegement successful} \\ 1 - d_e/d_0, & \text{besiegement failing} \end{cases}$$

where $d_0$ is the initial average distance between the monsters and the paladin, and $d_e$ is the ultimate average distance between the monsters and the paladin at the end of the test.

V. SOME RESULTS OF THE IVE PLATFORM

To show the self-learning intelligence of the agents, we first simplified the environment so that only one monster is present in the map. Figure 5 shows a screenshot of the initial position in the entertainment environment.

[Figure 5: A screenshot of the initial position in the game. The icon representing a human body is Paladin.]

We maneuvered Paladin to move in the environment. The monster can work out the shortest way to Paladin according to the position of Paladin it detects, and move correspondingly (figure 6).

If we maneuvered Paladin to circle a block clockwise over and over, a monster without a neural network just follows Paladin as closely as it can, trying to catch up with him, while a monster with a neural network learns the pattern of Paladin's moves and turns back to circle the block anticlockwise, heading Paladin off at the other side (figure 7). At this point, if Paladin turns back and circles the block anticlockwise to escape the wise monster, the monster learns Paladin's new move pattern after several circles.



[Figure 6: The black line is the path of the agent.]

[Figure 7: The red ring represents the position the monster forecast. The broken black line is the path of an agent without a neural network, while the solid black line is the path of an agent with a neural network.]

[Figure 8: comparison of the agent assignment before and after the optimization of the collaboration network.]

Based on the VE shown in Figure 5, in Experiment 2 we inserted another agent into the IVE to show the collaboration intelligence of the agents. When users maneuver the paladin into any path with just two exits (such as the area in the red square in figure 8), the agents collaborate with each other to besiege the paladin, and the collaboration network tasks the agents more efficiently.

As shown in Figure 8, before the optimization of the collaboration network the assignment of the agents is stochastic, and the situation shown in Figure 8(a) often appears; obviously, in this situation the assignment is illogical and inefficient. After the optimization of the collaboration network with ESP, the assignment becomes more logical (as shown in figure 8(b)).

These results allow us to conclude that the monsters in Paladin have, to a certain extent, the intelligence to learn from the interaction with users and to collaborate with each other.

VI. CONCLUSIONS

In this paper, we introduce the Intelligent Virtual Environment and provide an entertainment IVE platform called Paladin, whose agents are endowed with intelligence for self-learning and collaboration by BP and ESP neural networks. Compared with the predictability and repeatability of common games, Paladin allows users to interact with evolving agents that can also collaborate with each other, which makes the game more unpredictable and interesting; the intelligent agents also make the VE more vivid.

In addition, our IVE platform shows that artificial neural networks are a feasible approach to endowing the agents in an IVE with intelligence for self-learning and collaboration, thanks to their many excellent features. There are many kinds of neural networks to choose from; for our IVE platform we chose a BP neural network to implement the self-learning intelligence and ESP to implement the collaboration intelligence of the agents. In this application we have used a 2D virtual environment, and the

Page 6: [IEEE 2007 IEEE International Workshop on Haptic, Audio and Visual Environments and Games - Ottawa, ON, Canada (2007.10.12-2007.10.14)] 2007 IEEE International Workshop on Haptic,

environment is limited. In the next stage of our research, we will convert the 2D virtual environment into 3D, and further improve the intelligence of the agents.

ACKNOWLEDGEMENT: This work is supported by National Basic Research Program (973) 2006CB303105 and 2004CB318110, and University Key Research Fund 2004SZ002.

REFERENCES

[1] Daniel J, Janet W. "Computer Game with Intelligence". The 10th IEEE International Conference on Fuzzy Systems, vol. 3, 2-5 Dec. 2001, p. 1355-1358.

[2] FAN Hui, HUA Zhen, LI Jin-jiang, YUAN Da. "Study of Intelligent Virtual Environment". Microelectronics and Computers, vol. 21, no. 6, 2004, p. 100-103.

[3] Johnson W L, Rickel J, Stiles R. "Integrating pedagogical agents into virtual environments". Presence: Teleoperators and Virtual Environments, 1998, 7(6): 523-546.

[4] Cavazza M. "High-level interpretation in dynamic virtual environments". ECAI'98 Workshop on IVE, UK, 1998.

[5] Vosinakis S, Anastassakis G. "DIVA: Distributed Intelligent Virtual Agents". Proceedings of Intelligent Virtual Agents, UK, 1999.

[6] Reignier P, Harrouet F, Morvan S. "AReVi: A Virtual Reality Multiagent Platform". In Heudin J-C (Ed.): Virtual Worlds 98, LNAI 1434, 1998, p. 229-240.

[7] Calderoni S, Marcenac P. "MUTANT: a multiagent toolkit for artificial life simulation". Proceedings of Technology of Object-Oriented Languages, TOOLS 26, 1998, p. 218-229.

[8] Han Yong C, Miikkulainen R. "Cooperative coevolution of multi-agent systems". Technical Report AI01-287, The University of Texas, USA, 2001.

[9] PAN Zhi-geng, XU Wei-wei, ZHANG Ming-min. "Intelligent Virtual Environment". Journal of System Simulation, vol. 13, suppl., Nov. 2001.

[10] FAN Hui, HUA Zhen, LI Jin-jiang, YUAN Da. "Study of Intelligent Virtual Environment". Microelectronics and Computers, vol. 21, no. 6, 2004, p. 100-103.

[11] HUANG Guan-shan, XU Dong-mei. "Analysis of Theories and Structure of Agent". Microcomputer and Development, Nov. 2003, p. 32-34, 98.

[12] ZHAO Long-wen, HOU Yi-bin. "The Concept Model and Application Technology of Agent". Computer Engineering and Science, vol. 22, no. 6, 2000, p. 75-79.

[13] GAO Xue-peng, CONG Shuang. "Comparative study on fast learning algorithms of BP networks". Control and Decision, Mar. 2001, p. 167-171.

[14] ZHOU Zhi-hua, CAO Cun-gen. "Neural Network and Its Application". Beijing: Tsinghua University Press, 2004.

[15] Bryant B D, Miikkulainen R. "Neuroevolution for adaptive teams". The 2003 Congress on Evolutionary Computation, 2003, vol. 3, p. 2194-2201.

[16] ZHANG Zhi-peng, LAO Qi-cheng. "Overview and Application of Virtual Reality". IT in Manufacture, Apr. 2005, p. 79-81.
