
Computing with neural networks



Why the interest, and some of the advantages of solving problems like humans do

Modern digital computers have made great leaps in both speed and processing power. In spite of this, there are certain tasks that cannot be performed sufficiently well by computers. This is because almost all computers today are based on John von Neumann's computing concept, which requires computers to perform tasks by following an algorithm (a set of instructions). However, there are several tasks for which algorithms do not exist or for which it is very difficult, if not impossible, to come up with a series of logical instructions that will lead to solutions. Such tasks, referred to as unstructured computations, are being handled by neural networks. Neural nets, also called connectionist systems, neurocomputers, and parallel distributed processing models, are a class of systems that have many simple processors - neurons - which are highly interconnected.

The idea of neural networks was inspired by the structure of the human brain and by envy of what the brain can do. Although human beings compare poorly to the simplest calculator when it comes to multiplying ten-digit numbers, we are much better at recognizing the face of a person, the sound of a voice, or the smell of an odor. We are good at quickly recalling associated facts or past events in our lives and, when confronted with new situations, we learn how to fit these situations into our existing knowledge harmoniously. How are such powers of pattern recognition, associative recall, and learning possible? The answers are beginning to emerge from the resurgence of interest in neural networks. This interest is prompted by two facts. First, the nervous system of simple animals can easily solve problems that are very difficult for conventional computers. Such problems include machine or computer vision, pattern recognition, speech recognition, signal processing in the presence of noise and uncertainty, and adaptive learning. Second, the ability to model biological nervous system functions using man-made machines increases understanding of that biological function.

Fig. 1 Feed-forward neural net

The 1980s witnessed a resurgence of interest in neural networks. This new interest is due to the development of new net topologies and algorithms, new analog VLSI implementations, and a growing fascination with the functioning of the human brain. The recent interest is also driven by the realization that human-like performance in the areas of speech and image recognition will require enormous amounts of processing. Neural nets provide one technique for obtaining the required processing capacity using large numbers of simple processing elements operating in parallel.

The field of neural networks and the development of neural network tools for personal computers have expanded almost unbelievably in the past few years. The list of applications has grown from one highlighting biological and psychological uses to ones as diverse as biomedical waveform classification, music composition, and prediction of the commodity futures market.

Neurocomputing

There are two popular models of neural networks - the feed-forward model and the feedback model. In the feed-forward model, the neurons are arranged in layers, as shown in Fig. 1. The inputs are applied to the first layer, and the outputs are collected from the last layer. Thus, the connection is loop-free. In the feedback model, illustrated in Fig. 2, the architecture can be described as an undirected graph. Stability and convergence are important issues to be addressed in the feedback model.

The operation of the feed-forward model is that of a combinational circuit, where the inputs propagate and interact in one direction to produce the output. The operation of the feedback model is closer to that of a sequential or asynchronous computer, where the system is initialized to a state and evolves in time to a final state.
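The article stops at this qualitative description. As one hedged illustration of the feedback idea, the sketch below uses a small Hopfield-style network (a classic feedback model, not one named in the article); the stored pattern, the Hebbian weights, and the sign-threshold update rule are assumptions chosen only to show a state evolving, update by update, to a stable final state.

```python
# Minimal sketch (details assumed, not from the article) of the feedback idea:
# a small Hopfield-style network is initialized to a state and updated neuron
# by neuron until it settles into a stable final state.

def feedback_settle(weights, state, max_steps=100):
    """Asynchronously update +/-1 neurons until no neuron changes sign."""
    state = list(state)
    for _ in range(max_steps):
        changed = False
        for i in range(len(state)):
            total = sum(weights[i][j] * state[j] for j in range(len(state)))
            new_value = 1 if total >= 0 else -1
            if new_value != state[i]:
                state[i] = new_value
                changed = True
        if not changed:            # no neuron changed: a stable final state
            return state
    return state

# Store one pattern with a Hebbian outer-product rule (symmetric weights, zero diagonal).
pattern = [1, -1, 1, -1]
n = len(pattern)
weights = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
           for i in range(n)]

# Initialize with a corrupted version of the pattern; the network evolves back to it.
print(feedback_settle(weights, [1, 1, 1, -1]))   # -> [1, -1, 1, -1]
```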

The neuron in both models performs the same way. As shown in Fig. 3, a neuron sums the inputs it receives from other neurons. The output depends on the inputs $x_1, x_2, \ldots, x_N$ through a set of real numbers called the weights $w_1, w_2, \ldots, w_N$, one weight for every input variable. The neuron turns on, or "fires," by sending a series of voltage spikes down the axon if the sum

$$\sum_{i=1}^{N} w_i x_i$$

is greater than an internal threshold $t$. If the sum is less than the threshold, the neuron does not fire and no output is generated. As evident in Fig. 3, every connection entering a processing element, or neuron, has an adaptive coefficient called a weight assigned to it. This weight, which is stored in the local memory of the processing element, is generally used to amplify, attenuate, and possibly change the sign of the signal on the incoming connection. Often, the transfer function sums this and other weighted input signals to determine the value of the processing element's next output signal. Thus the weights determine the strength of the connections from neighboring processing elements.
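As a minimal sketch of this computation (the function and variable names are mine, not the article's), a single neuron can be written in a few lines: it forms the weighted sum of its inputs and fires only when that sum exceeds the threshold t.

```python
# Minimal sketch of the Fig. 3 computation (names assumed for illustration):
# the neuron fires only if the weighted sum of its inputs exceeds the
# internal threshold t.

def neuron_output(inputs, weights, threshold):
    """Return 1 (fire) if sum_i w_i * x_i exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Three inputs with weights w = (0.4, 0.9, 0.3) and threshold t = 0.5:
print(neuron_output([1, 0, 1], [0.4, 0.9, 0.3], 0.5))   # 0.7 > 0.5, so it fires -> 1
```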



Fig. 2 Feedback neural net

It is evident that the set of functions that can be implemented using a single neuron is the set of threshold functions. The potential benefits of neural nets extend beyond the high computation rates provided by massive parallelism. Neural nets typically provide a greater degree of fault tolerance than von Neumann sequential computers because there are many more processing elements, each with primarily local connections. Damage to a few elements or links thus need not impair the overall performance significantly.

Most of the interest in neural nets arose from their use in performing useful computations. Roughly speaking, these computations fall into two categories: natural problems and optimization problems. Natural problems, such as pattern recognition, are typically implemented on a feed-forward network. Optimization problems, such as the famous Traveling Salesman problem, are typically implemented on a feedback network.

Learning

In general, neural nets do not perform well at precise numerical computation; this form of computation is highly complex for the human brain, too. Neural nets can, however, be taught to determine whether a visual image of a face is that of a man or a woman, or to recognize a particular person's face, even with a different facial expression or hairdo, just as humans can.

Adaptation or learning is a major focus of neural net research. Computer systems that learn and adapt to changing environments have been a dream of computer scientists for many years. The ability to adapt and continue learning is essential in areas such as speech recognition, where training data is limited and new talkers, new words, new dialects, new phrases, and new environments are continuously encountered.

An often-quoted advantage of neural nets is their ability to learn a mapping simply by being presented with examples of that mapping. A neural network learns by modifying the weights on its synapses.

Fig. 3 Neural net computation unit

Most transfer functions include a learning law, an equation that modifies all or some of the weights in the local memory in response to the input signals and the values supplied by the transfer function. Thus the learning law is an expression that specifies how to compute the appropriate weight changes. It is the means by which the network adapts itself and learns.

A neural network learns either by supervised training or by graded training. In either case, it runs through a series of trials. In supervised training, the network is supplied with both input data and desired output data. After each trial, the network compares its own output with the right answers, corrects any differences, and tries again, iterating until the output error reaches an acceptable level. In graded training, the network is given input data but no desired output data. Instead, after each trial it is given a grade or performance score that tells it how well it is doing. In either supervised or graded training, once training is done, the network is ready to process genuine input.
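The article describes supervised training in general terms without prescribing a particular learning law. As one hedged illustration, the sketch below trains a single threshold neuron on the logical AND mapping using the classic perceptron learning rule; the learning rate, the number of trials, and the use of a trainable bias in place of the fixed threshold are assumptions made for the example.

```python
# Sketch of supervised training for one threshold neuron using the classic
# perceptron learning rule - one possible learning law; the article does not
# prescribe a specific one. A trainable bias stands in for the threshold t.

def train_perceptron(samples, learning_rate=0.1, epochs=20):
    """Run repeated trials, nudging the weights toward the desired outputs."""
    n_inputs = len(samples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):                        # each epoch is one round of trials
        for inputs, target in samples:
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output                # compare output with the right answer
            # learning law: adjust each weight in proportion to its input and the error
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Supervised training data: inputs paired with desired outputs (logical AND).
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(and_samples)

# After training, the neuron reproduces the desired output for every pattern.
for inputs, target in and_samples:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", output, "desired:", target)
```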

Advice

It must be mentioned in concluding that neurocomputing does not replace algorithmic computing; the two complement each other nicely. While algorithmic computing is ideal for precise numerical computation, neurocomputing is ideal for pattern recognition and adaptive control problems. For example, potential defense applications include aircraft detection and multisensor fusion. The potential for neural nets to solve these kinds of complex and exciting military problems is the stimulus for the DARPA Neural Network Program. As hardware and simulator technology improve, we can only expect major advances in our ability to capitalize on this powerful, newly reborn technology. The capabilities of neurocomputing are likely to grow with every successive generation, and so is the demand for more experts in this area.

Neurocomputing is an interdisciplinary area of research. People around the world with a variety of backgrounds have been involved. Researchers are most likely to have biological or psychological backgrounds as well as electrical engineering degrees. If you are interested in this area, you may need to plan on taking some courses in neurobiology and psychology in addition to pursuing a graduate degree in electrical engineering. In the meantime, you can learn more about neural networks by reading IEEE Communications Magazine for November 1989 and IEEE Engineering in Medicine and Biology Magazine for September 1990. These two issues are specially devoted to neural networks and provide wide coverage of the area.


Also consider becoming a member of the International Neural Network Society. The Society's journal, Neural Networks, began bimonthly publication in January 1989, and a subscription is included as part of your annual dues.

Read more about it

J. C. Lupo, "Defense Applications of Neural Networks," IEEE Communications Magazine, November 1989, pp. 82-88.

Y. S. Abu-Mostafa, "Information Theory, Complexity, and Neural Networks," IEEE Communications Magazine, November 1989, pp. 25-28.

J. Alspector, "Neural-Style Microsystems that Learn," IEEE Communications Magazine, November 1989, pp. 29-36.

R. Hecht-Nielsen, "Neurocomputing: picking the human brain," IEEE Spectrum, March 1988, pp. 36-41.

R. Hecht-Nielsen, Neurocomputing. Reading, MA: Addison-Wesley, 1990.

M. Caudill, "Neural Networks Primer - Part I," AI Expert, December 1987, pp. 46-52.

R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, April 1987, pp. 4-22.

DARPA Neural Network Study. Fairfax, VA: AFCEA International Press, 1988.

R. C. Eberhart and R. W. Dobbins (eds.), Neural Network PC Tools: A Practical Guide. San Diego, CA: Academic Press, 1990.

About the authors

Matthew N.O. Sadiku received his B.Sc. degree in 1978 from Ahmadu Bello University, Zaria, Nigeria and his M.Sc. and Ph.D. degrees from Tennessee Technological University, Cookeville, TN in 1982 and 1984, respectively. From 1984 to 1988, he was an assistant professor at Florida Atlantic University, where he did graduate work in computer science. Since August 1988, he has been with Temple University, Philadelphia, PA, where he is presently an associate professor. He is the author of over forty professional papers and four books, including Elements of Electromagnetics (Saunders, 1989) and Numerical Techniques in Electromagnetics (CRC, 1991).

His current research interests are in the areas of numerical techniques in electromagnetics and computer communication networks. He is a registered professional engineer and a member of ASEE and IEEE. He was the IEEE Region 2 Student Activities Committee Chairman.

Maria Mazzara received the B.S. and M.S. degrees in Electrical Engineering from Temple University, Philadelphia. She is currently pursuing the doctoral program in human factors engineering at the Florida Institute of Technology. She was the student editor of IEEE Potentials from 1989 to 1991 and the IEEE Region 2 Student Representative.
