Transcript

It is an honor and a pleasure for me to accept this award from ACM. It is especially gratifying to share this award with Ed Feigenbaum, who has been a close friend and helpful colleague for nearly 30 years.

As a second-generation artificial intelligence (AI) researcher, I was fortunate to have known and worked with many of the founding fathers of AI. By observing John McCarthy, my thesis advisor, during the golden age of AI labs at Stanford in the 1960s, I have learned the importance of fostering and nurturing diversity in research far beyond one's own personal research agenda. Although his own primary interest was in common-sense reasoning and epistemology, under John's leadership, research in speech, vision, robotics, language, knowledge systems, game playing, and music thrived at the AI labs. In addition, a great deal of path-breaking systems research flourished in areas such as Lisp, time-sharing, video displays, and a precursor to Windows called "pieces of glass."

From Marvin Minsky, who was visiting Stanford and helping to build the Mars Rover in '66, I learned the importance of pursuing bold visions of the future. And from Allen Newell and Herb Simon, my colleagues and mentors at Carnegie Mellon University (CMU) for over 20 years, I learned how one can turn bold visions into practical reality by careful design of experiments and following the scientific method.

I was also fortunate to have known and worked with Alan Perlis, a giant in the '50s and '60s computing scene and the first recipient of the Turing Award in 1966, presented at the ACM conference held in Los Angeles, which I attended while still a graduate student at Stanford.

While I did not know Alan Turing, I may be one of the select few here who used a computer designed by him. In the late 1950s, I had the pleasure of using a mercury delay-line computer (English Electric Deuce Mark II) based on Turing's original design of ACE. Given his early papers on "Intelligent Machines," Turing can be reasonably called one of the grandfathers of AI, along with early pioneers such as Vannevar Bush.

That brings me to the topic of this talk, "To dream the possible dream." AI is thought to be an impossible dream by many. But not to us in AI. It is not only a possible dream but, from one point of view, AI has been a reality that has been demonstrating results for nearly 40 years. And the future promises to generate an impact greater by orders of magnitude than progress to date. In this talk I will attempt to demystify the process of what AI researchers do and explore the nature of AI and its relationship to algorithms and software systems research. I will discuss what AI has been able to accomplish to date and its impact on society. I will also conclude with a few comments on the long-term grand challenges.

Human and Other Forms of Intelligence

Can a computer exhibit real intelligence? Simon provides an incisive answer: "I know of only one operational meaning for 'intelligence.' A (mental) act or series of acts is intelligent if it accomplishes something that, if accomplished by a human being, would be called intelligent. I know my friend is intelligent because he plays pretty good chess (can keep a car on the road, can diagnose symptoms of a disease, can solve the problem of the missionaries and cannibals, etc.). I know that computer A is intelligent because it can play excellent chess (better than all but about 200 humans in the entire world). I know that Navlab is intelligent because it can stay on the road. The trouble with those people who think that computer intelligence is in the future is that they have never done serious research on human intelligence. Shall we write a book on 'What Humans Can't Do?' It will be at least as long as Dreyfus' book. Computer intelligence has been a

• Turing Award •

To Dream the Possible Dream

Raj Reddy


106 May 1996/Vol. 39, No. 5 COMMUNICATIONS OF THE ACM

fact at least since 1956, when the Logic Theory machine found a proof that was better than the one found by Whitehead and Russell, or when the engineers at Westinghouse wrote a program that designed electric motors automatically. Let's stop using the future tense when talking about computer intelligence."

Can AI equal human intelligence? Some philosophers and physicists have made successful lifetime careers out of attempting to answer this question. The answer is, AI can be both more and less than human intelligence. It doesn't take large tomes to prove that they cannot be 100% equivalent. There will be properties of human intelligence that may not be exhibited in an AI system (sometimes because we have no particular reason for doing so or because we have not yet gotten around to it). Conversely, there will be capabilities of an AI system that will be beyond the reach of human intelligence. Ultimately, what will be accomplished by AI will depend more on what society needs and where AI may have a comparative advantage than on philosophical considerations.

Let me illustrate the point by two analogies that are not AI problems in themselves but whose solutions require some infusion of AI techniques. These problems, currently at the top of the research agenda within the information industry, are digital libraries and electronic commerce.

The basic unit of a digital library is an electronic book. An electronic book provides the same information as a real book. One can read and use the information just as one can in a real book. However, it is difficult to lie in bed and read an electronic book. With expected technological advances, it is conceivable a subnotebook computer will weigh less than 12 ounces and have a 6" x 8" high-resolution color screen, making it look and feel like a book that you might read in bed. However, the analogy stops there. An electronic book cannot be used as part of your rare book collection, nor can it be used to light a fire on a cold night to keep you warm. You can probably throw it at someone, but it would be expensive. On the other hand, using an electronic book, you can process, index, and search for information; open the right page; highlight information; change font size if you don't have your reading glasses; and so on. The point is, an electronic book is not the same as a real book. It is both more and less.

A key component of electronic commerce is the electronic shopping mall. In this virtual mall, you can walk into a store, try on some virtual clothing, admire the way you look, place an order, and have the real thing delivered to your home in 24 hours. Obviously, this does not give you the thrill of going into a real mall, rubbing shoulders with real people, and trying on real clothing before you make your purchase. However, it also eliminates the problems of getting dressed, fighting the traffic, and waiting in line. More importantly, you can purchase your dress in Paris, your shoes in Milan, and your Rolex in Hong Kong without ever leaving your home. Again, the point is that an electronic shopping mall is not the same as a real shopping mall. It is both more and less.

Similarly, AI is both more and less than human intelligence. There will be certain human capabilities that might be impossible for an AI system to reach. The boundary of what can or cannot be done will continue to change with time. More important, however, it is clear that some AI systems will have superhuman capabilities that would extend the reach and functionality of individuals and communities. Those who possess these tools will make the rest of us look like primitive tribes. By the way, this has been true of every artifact created by the human species, such as the airplane. It just so happens that AI is about creating artifacts that enhance the mental capabilities of the human being.

AI and Algorithms

Isn't AI just a special class of algorithms? In a sense it is, albeit a very rich class of algorithms, which have not yet received the attention they deserve. Second, a major part of AI research is concerned with problem definition rather than just problem solution. Like complexity theorists, AI researchers also tend to be concerned with NP-complete problems. But unlike their interest in the complexity of a given problem, the focus of research in AI tends to revolve around finding algorithms that provide approximate, satisficing solutions with no guarantee of optimality.


The concept of satisficing solutions comes from Simon's pioneering research on decision making in organizations, leading to his Nobel Prize. Prior to Simon's work on human decision making, it was assumed that, given all the relevant facts, the human being is capable of rational choice weighing all the facts. Simon's research showed that "computational constraints on human thinking" lead people to be satisfied with "good enough" solutions rather than attempting to find rational optimal solutions weighing all the facts. Simon calls this the principle of "bounded rationality." When people have to make decisions under conditions that overload human thinking capabilities, they don't give up, saying the problem is NP-complete. They use strategies and tactics of optimal-least-computation search and not those of optimal-shortest-path search.

Optimal-least-computation search is the study of approximate algorithms that can find the best possible solution given certain constraints on the computation, such as limited memory capacity, limited time, or limited bandwidth. This is an area worthy of serious research by future complexity theorists!
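As a concrete, purely illustrative sketch of the idea, the following Python fragment runs a best-first search under a fixed budget of node expansions and returns the best goal found within that budget rather than guaranteeing the optimum. The graph, heuristic, and budget are invented for the example, not drawn from any system mentioned here.

```python
import heapq

def budgeted_search(start, neighbors, is_goal, heuristic, max_expansions=100):
    """Best-first search that stops after a fixed computation budget and
    returns the best (lowest-cost) goal found so far, if any. It trades a
    guarantee of optimality for a bound on work, in the spirit of
    optimal-least-computation search."""
    frontier = [(heuristic(start), 0, start, [start])]
    best = None          # (cost, path) of the best goal seen so far
    expansions = 0
    while frontier and expansions < max_expansions:
        _, cost, node, path = heapq.heappop(frontier)
        expansions += 1
        if is_goal(node):
            if best is None or cost < best[0]:
                best = (cost, path)
            continue
        for nxt, step in neighbors(node):
            heapq.heappush(
                frontier,
                (cost + step + heuristic(nxt), cost + step, nxt, path + [nxt]))
    return best
```

With a generous budget this finds the cheapest path; with a tight budget it simply returns the best answer it managed to reach, which is the point.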

Besides finding solutions to exponential problems, AI algorithms often have to satisfy one or more of the following constraints: exhibit adaptive goal-oriented behavior, learn from experience, use vast amounts of knowledge, tolerate error and ambiguity in communication, interact with humans using language and speech, and respond in real time.

Algorithms that exhibit adaptive goal-oriented behavior. Goals and subgoals arise naturally in problems where algorithm specification is in the form of "What" is to be done rather than "How" it is to be done. For example, consider the simple task of asking an agent, "Get me Ken." This requires converting this goal into subgoals, such as look up the phone directory, dial the number, talk to the answering agent, and so on. Each subgoal must then be converted from What to How and executed. Creation and execution of plans has been studied extensively within AI. Other systems, such as report generators, 4GL systems, and database query-by-example methods, use simple template-based solutions to solve the "What to How" problem. In general, to solve such problems, an algorithm must be capable of creating for itself an agenda of goals to be satisfied using known operations and methods, the so-called "GOMs approach." Means-ends analysis, a form of goal-oriented behavior, is used in most expert systems.
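The "Get me Ken" decomposition can be sketched as a toy What-to-How expander. The method table below is entirely hypothetical; a real planner would also handle ordering constraints, failure, and execution.

```python
# Each abstract goal ("What") maps to an ordered list of subgoals;
# goals absent from the table are treated as primitive operations ("How").
METHODS = {
    "get me Ken": ["look up number", "dial number", "talk to answering agent"],
    "look up number": ["open directory", "find entry 'Ken'"],
}

def expand(goal, plan=None):
    """Recursively expand a goal into an ordered list of primitive steps."""
    if plan is None:
        plan = []
    for sub in METHODS.get(goal, []):
        expand(sub, plan)
    if goal not in METHODS:      # primitive: would be executed directly
        plan.append(goal)
    return plan
```

Calling `expand("get me Ken")` flattens the goal tree into executable steps in left-to-right order.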

Algorithms that learn from experience. Learning from experience implies the algorithm has built-in mechanisms for modifying internal structure and function. For example, in the "Get me Ken" task, suppose Ken is ambiguous and you help the agent to call the right Ken; next time you ask the agent to "Get me Ken," it should use the same heuristic that you used to resolve the ambiguity. This implies the agent is capable of acquiring, representing, and using new knowledge and engaging in a clarification dialog, where necessary, in the learning process. Dynamic modification of internal structure and function is considered to be dangerous in some computer science circles because of the potential for accidental overwriting of other structures. Modifying probabilities and modifying contents of tables (or data structures) have been used successfully in learning tasks where the problem structure permits such a formulation of learning. The Soar architecture developed by Newell et al., which uses a rule-based system architecture, is able to discover and add new rules (actually "productions") and is perhaps the most ambitious undertaking to date to create a program that improves with experience.
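A minimal sketch of table-based learning in the "Get me Ken" setting, assuming a hypothetical agent that remembers the user's disambiguation choice (the names and structure are invented for illustration, not taken from Soar or any real system):

```python
class Agent:
    """Toy agent that learns a disambiguation preference by updating
    an internal table, the simplest form of 'modifying contents of
    tables' mentioned above."""
    def __init__(self, directory):
        self.directory = directory   # name -> list of full names
        self.preferences = {}        # learned: name -> preferred full name

    def resolve(self, name, clarify=None):
        matches = self.directory.get(name, [])
        if len(matches) == 1:
            return matches[0]
        if name in self.preferences:          # reuse the learned heuristic
            return self.preferences[name]
        if clarify is not None:               # ask the user, then remember
            choice = clarify(matches)
            self.preferences[name] = choice   # modify the internal table
            return choice
        return None
```

After one clarification dialog, subsequent requests resolve without asking again.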

Algorithms that interact with humans using language and speech. Algorithms that can effectively use speech and language in the human-computer interface will be essential as we move toward a society where nonexperts use computers in their day-to-day problems. In the previous example of the "Get me Ken" task, unless the agent can conduct the clarification dialog with the human master using language and speech or some other natural form of communication, such as "form filling," widespread use of agents will be a long time coming. Use of language and speech involves creating algorithms that can deal not only with ambiguity and nongrammaticality but also with parsing and interpreting of natural language with a large, dynamic vocabulary.

Algorithms that can effectively use vast amounts of knowledge. Large amounts of knowledge not only require large memory capacity but also create the more difficult problem of selecting the right knowledge to apply for a given task and context. There is an illustration that John McCarthy is fond of using. Suppose one asks the question, "Is Reagan sitting or standing right now?" A system with a large database of facts might proceed to systematically search the terabytes of data before finally coming to the conclusion that it does not know the answer. A person faced with the same problem would immediately say, "I don't know," and might even say, "and I don't care." The question of designing algorithms "that know what they do not know" is currently an unsolved problem. With the prospect of very large knowledge bases looming around the corner, "knowledge search" will become an important algorithm design problem.
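One illustrative way to give a system a crude sense of what it does not know is to keep an index of the topics the knowledge base covers and consult that index before searching. This toy sketch is an assumption-laden illustration of the idea, not McCarthy's proposal or a real knowledge-search algorithm:

```python
class KnowledgeBase:
    """Toy KB with meta-knowledge: an explicit index of covered topics.
    The topics and facts here are invented for the example."""
    def __init__(self, facts):
        self.facts = facts            # topic -> {question: answer}
        self.covered = set(facts)     # meta-knowledge: what we know about

    def answer(self, topic, question):
        # Check the index first: out-of-scope questions get an immediate
        # "I don't know" instead of an exhaustive search of all the facts.
        if topic not in self.covered:
            return "I don't know"
        return self.facts[topic].get(question, "I don't know")
```

The point is only that the "I don't know" answer is produced in constant time, before any search of the underlying data.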

Algorithms that tolerate error and ambiguity in communication. Error and ambiguity are a way of life in human-to-human communication. Warren Teitelman developed, nearly 25 years ago, an interface called "Do What I Mean" (DWIM). Given the requirements of efficiency and getting the software done in time, all such ideas were deemed to be frivolous. Today, with the prospect of Giga PCs around the corner, we need to revisit such error-forgiving concepts and algorithms. Rather than saying "illegal syntax," we need to develop algorithms that can detect ambiguity (i.e., multiple possible interpretations, including null) and resolve it where possible by simultaneous parallel evaluation or by engaging in a clarification dialog.
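A small sketch of the idea, assuming a hypothetical command set: instead of rejecting a mistyped token as "illegal syntax," the interpreter enumerates plausible interpretations and flags ambiguity for a clarification dialog. This uses Python's difflib for approximate matching; Teitelman's actual DWIM worked quite differently.

```python
import difflib

KNOWN_COMMANDS = ["print", "plot", "put", "quit"]   # hypothetical command set

def interpret(token):
    """DWIM-style dispatch: ('ok', cmd) for an exact or unique close match,
    ('ambiguous', candidates) when several interpretations survive (so a
    clarification dialog can be started), and ('unknown', []) otherwise."""
    if token in KNOWN_COMMANDS:
        return ("ok", token)
    candidates = difflib.get_close_matches(token, KNOWN_COMMANDS, n=3, cutoff=0.6)
    if len(candidates) == 1:
        return ("ok", candidates[0])
    if candidates:
        return ("ambiguous", candidates)
    return ("unknown", [])
```

Note the null case is reported as "unknown" rather than an error, leaving the caller free to ask the user what was meant.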

Algorithms that have real-time constraints. Many software systems already cope with this problem, but only through careful, painstaking analysis of the code. Not much work has been done on how to create algorithms that can accept a "hurry-up" command! One approach to this problem appears in the Prodigy system, which generates an approximate plan immediately and gradually improves and replaces that plan with a better one as the system finds more time to think. Thus, it always has an answer, but the quality of the answer improves with time. It is interesting to note that many of the iterative algorithms in numerical analysis can be recast in this form of "anytime answers."

Algorithms with self-awareness. Algorithms that can explain their capabilities (e.g., a Help command capable of answering "how-to" and "what-if" questions) and monitor, diagnose, and repair themselves in the presence of viruses require internal mechanisms that can be loosely called "self-awareness." Online hypertext manuals and checksums are simpler examples of such mechanisms. Answering how-to and what-if questions is harder. Monitoring and diagnosis of invading viruses implies interpreting incoming action requests rather than just executing them.
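The checksum, the simplest self-monitoring mechanism mentioned above, can be sketched in a few lines. The digest scheme and payload here are illustrative choices, not anything prescribed by the lecture:

```python
import hashlib

def self_check(payload: bytes, expected_digest: str) -> bool:
    """Minimal self-monitoring: verify a stored checksum of a payload
    (e.g., a program image) before trusting or executing it."""
    return hashlib.sha256(payload).hexdigest() == expected_digest
```

A program that checks its own image this way can at least detect that it has been tampered with, even if diagnosis and repair remain much harder problems.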

Such algorithm design issues seem to arise naturally in AI, and numerous solutions have been created by AI researchers in specific contexts. Further development and generalization of such solutions will enrich all of computer science.

Software Systems and AI

Isn't AI just software? Isn't a TV set just electronics? In a general sense, AI is just software, although AI systems tend to get large and complex. Attempting to build AI systems often implies building the necessary software tools. For example, AI researchers were responsible for many of the early advances in programming languages, such as list structures, pointers, virtual memory, dynamic memory allocation, and garbage collection, among others.

Large AI systems, especially those that are deployed and in daily use, share many of the common problems that are observed in developing other complex software systems, i.e., the problems of "not on time," "over budget," and "brittle." The "mythical man-month" principle affects AI systems with a vengeance. Not only do these systems tend to be large and complex, but they often cannot fall back on a learning curve based on having designed similar systems before. Thus, it is difficult to formulate requirement specifications or testing procedures as is usual in conventional software engineering tasks.

There are a number of new concepts emerging within the context of AI that suggest new approaches and methodologies for all of computer science:

• Plug-and-play architectures. To produce an integrated concept demonstration of an AI system that routinely uses components that learn from experience, use knowledge, tolerate error, use language, and operate in real time, one cannot start from scratch each time. The application component of such systems can be less than 10% of the total effort. If the system is to be operational in a reasonable amount of time, one needs to rely on interfaces and software architectures that consist of components that plug and play together. This idea is similar to the old blackboard concept: cooperating agents that can work together but don't require each other explicitly.

• The 80/20 rule. The last 40 years of attempting to build systems that exhibit intelligent behavior have shown us how difficult the task really is. A recent paradigm shift is to move away from autonomous systems that attempt to entirely replace a human capability to systems that support a human master as intelligent agents. For example, rather than trying to create a machine translation system, create a translation assistant that provides the best alternative interpretations to a human translator. Thus, the goal would be to create a system that does 80% of a task and leaves the remaining 20% to the user. Once such a system is operational, the research agenda would target the remaining 20% and repeat the 80/20 rule on the next version. Thus, the system would incrementally approach human performance, while at all times providing a usable artifact that improves human productivity by factors of 3 to 5 after every iteration. However, the task is not as simple as it may appear. With this new paradigm, the old problem of "how can a system know what it does not know" raises its ugly head. For an intelligent agent to be able to say, "Please wait, I will call my supervisor," it must be self-aware! It must know what it can and cannot do. Not an easy task either, but one that needs to be solved anyway.

• Fail-fast strategies. In engineering design there are two accepted practices: "get it right the first time" and "fail fast." The former is used when one is designing a product that has been produced many times before. In building systems or robots that have never been built before, attempting to "get it right the first time" may not be the best strategy. Our recent experiment for NASA in building the Dante robot for exploring volcanic environments is one such example. Most NASA experiments are required to be fail-safe, because in many missions human lives are at risk. This requirement to be error-free leads to 15-year planning cycles and billion-dollar budgets. The fail-fast strategy says that if you are trying to build a complex system that has never been built before, it is prudent to build a series of throwaway systems and improve the learning curve. Since such systems will fail in unforeseeable ways, rather than viewing failure as an unacceptable outcome, one should view failure as a stepping-stone to success. Our first Dante failed after taking a few steps. The second one lasted a week. We are now working on a third-generation model. Both the time and cost expended on this experiment are at least an order of magnitude smaller than those for comparable conventional missions.

• The scientific method. At an NRC study group meeting, a physicist asked, "Where is the science in computer science?" I am happy to say that we are beginning to have examples of the "hypothesis, experiment, validation, and replication" paradigm within some AI systems. Much of the speech-recognition research has been following this path for the past 10 years. Using common databases, competing models are evaluated within operational systems. The successful ideas then seem to appear magically in other systems within a few months, leading to validation or refutation of specific mechanisms for modeling speech. ARPA deserves a lot of the credit for requiring the community to use such validation procedures. All of experimental computer science could benefit from such disciplined experiments. As Newell used to say, "Rather than argue, let us design an experiment to prove or disprove the idea!"

AI and Society

Why should society support AI research and, more broadly, computer science research? The information technology industry, which was nonexistent 50 years ago, has grown to be over 10% of the GNP and is responsible for over 5% of the total employment in the country. While AI has not played a substantial role in the growth of this industry, except for expert system technology, it appears possible that we are poised on the threshold of a number of important applications that could have a major impact on society. Two such applications are an accident-avoiding car and a reading coach:

The Navlab: The Carnegie Mellon Navlab project brings together computer vision, advanced sensors, high-speed processors, planning, and control to build robot vehicles that drive themselves on roads and cross country.

The project began in 1984 as part of ARPA's Autonomous Land Vehicle (ALV) program. In the early 1980s, most robots were small, slow indoor vehicles tethered to big computers. The Stanford Cart took 15 minutes to map obstacles, plan a path, and move each meter. The CMU Imp and Neptune improved on the Cart's top speed, but still moved in short bursts separated by long periods of looking and thinking. In contrast, ARPA's 10-year goals for the ALV were to achieve 80 kph on roads and to travel long distances across open terrain.

The Navlab, built in 1986, was our first self-contained testbed. It had room for on-board generators, on-board sensors, on-board computers, and, most importantly, on-board graduate students. Researchers riding on board were in a much better position to observe its behavior and to debug or modify the programs. They were also highly motivated to get things working correctly.

The latest is the Navlab II, an army ambulance HMMWV. It has many of the sensors used on earlier vehicles, plus cameras on pan/tilt mounts and three aligned cameras for trinocular stereo vision. The HMMWV has high ground clearance for driving on rough terrain, and a 110-kph top speed for highway driving. Computer-controlled motors turn the steering wheel and control the brake and throttle.

Perception and planning capabilities have evolved with the vehicles. ALVINN is the current main road-following vision system. ALVINN is a neural network, which learns to drive by watching a human driver. ALVINN has driven as far as 140 km, and at speeds over 110 kph.

Ranger finds paths through rugged terrain. It takes range images, projects them onto the terrain, and builds Cartesian elevation maps. Ranger has driven the vehicle for 16 km on our test course.

SMARTY and D* find and follow cross-country routes. D* plans a route using A* search. As the vehicle drives, SMARTY finds obstacles using GANESHA's map, steers the vehicle around them, and passes the obstacles to D*. D* adds the new obstacles to its global map and replans the optimal path.
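For readers unfamiliar with A*, here is a compact illustrative sketch on a 4-connected grid. Real D* replans incrementally rather than from scratch, so rerunning a full A* after a new obstacle is observed only approximates its behavior; the grid and costs are invented for the example.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = obstacle) with unit step
    costs and a Manhattan-distance heuristic. Returns a shortest path as
    a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None
```

When a new obstacle is added to the grid, calling astar again yields the replanned route, which is the effect (though not the mechanism) of D*'s incremental update.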

Other Navlab modules include RACCOON, which follows a lead vehicle by tail-light tracking; YARF, which tracks lines and detects intersections; and PANACEA, which controls a pan/tilt mount to see around curves.

The 10-year goals of the ALV program have been met. Navlab technology is being integrated with specialized planners and sensors to demonstrate practical missions, and a new project funded by the Department of Transportation is investigating Navlab technology for preventing highway accidents. As basic driving capabilities mature, the Navlab continues to provide new opportunities both for applications and for continued research in perception, planning, and intelligent robot systems.

There are six million automotive accidents in the U.S. each year, costing over $50 billion to repair. These accidents result in 40,000 fatalities, 96% of which are caused by driver error. Navlab technology can eliminate a significant fraction of those accidents, given appropriate sensors and computation integrated into each car. AI and computer science researchers can justifiably be proud of such a contribution to society.

The LISTEN Project: At Carnegie Mellon, Project LISTEN is taking a novel approach to the problem of illiteracy. We have developed a prototype automated reading coach that listens to a child read aloud and helps when needed. The system is based on the CMU Sphinx II speech-recognition technology. The coach provides a combination of reading and listening, in which the child reads wherever possible and the coach helps wherever necessary, a bit like training wheels on a bicycle.

The coach is designed to emphasize comprehension and ignore minor mistakes, such as false starts or repeated words. When the reader gets stuck, the coach jumps in, enabling the reader to complete the sentence. When the reader misses an important word, the coach rereads the words that led up to it, just like the expert reading teachers after whom the coach is modeled. This context often helps the reader correct the word on the second try. When the reader runs into more difficulty, the coach rereads the sentence to help the reader comprehend it. The coach's ability to listen enables it to detect when and where the reader needs help. Further research is needed to turn this prototype into robust educational software. Experiments to date suggest that it has the potential to reduce children's reading mistakes by a factor of five and enable them to comprehend material at least six months more advanced than they can read on their own.

Illiteracy costs the U.S. over $225 billion annually in corporate retraining, industrial accidents, and lost competitiveness. If we can reduce illiteracy by just 20%, Project LISTEN could save the nation over $45 billion a year.

Other AI research to have a major impact on society includes Feigenbaum's early research on knowledge systems; Kurzweil's work on reading machines for the blind; Papert, Simon, Anderson, and Schank's contributions to learning by doing; and the robotics work at MIT, Stanford, and CMU.

Several new areas where advances in AI are likely to have an impact on society are:

• Creation of intelligent agents to monitor and manage the information glut by filtering, digesting, abstracting, and acting as an agent of a human master;

• Making computers easier to use: use of multimodal interfaces, DWIM techniques, intelligent help, advice-giving agents, and cognitive models;

• Aids for the disabled: Development of aids for people with sensory, cognitive, or motor disabilities based on AI technologies appears promising, especially the area of creating aids for cognitive disabilities such as memory and reasoning deficiencies; and

• Discovery techniques: Deriving knowledge and information from a large amount of data, whether the data is scientific or financial, has the potential for a major impact.

Whether it is to save lives, improve education, improve productivity, overcome disabilities, or foster discovery, AI has produced and will continue to produce research results with the potential to have a profound impact on the way we live and work in the future.

Grand Challenges in AI

What's next for AI? There are several seemingly reasonable problems which are exciting and challenging, and yet are currently unsolvable. Solutions to these problems will require major new insights and fundamental advances in computer science and artificial intelligence. Such challenges include a world-champion chess machine, a translating telephone, and discovery of a major mathematical result by a computer, among others. Here, I will present two such grand challenges which, if successful, can be expected to have a major impact on society:

Self-Organizing Systems: There has been a long and continuing interest in systems that learn and discover from examples, from observations, and from books. Currently, there is a lot of interest in neural networks that can learn from signals and symbols through an evolutionary process. Two long-term grand challenges for systems that acquire capability through development are: read a chapter in a college freshman text (say, physics or accounting) and answer the questions at the end of the chapter; and learn to assemble an appliance (such as a food processor) from observing a person doing the same task. Both are extremely hard problems, requiring advances in vision, language, problem-solving techniques, and learning theory. Both are essential to the demonstration of a self-organizing system that acquires capability through (possibly unsupervised) development.

Self-Replicating Systems: There have been several theoretical studies in this area since the 1950s. The problem is of some practical interest in areas such as space manufacturing. Rather than uplifting a whole factory, is it possible to have a small set of machine tools that can produce, say, 80% of the parts needed for the factory (using the 80/20 rule discussed earlier), using locally available raw materials, and assemble them in situ? The solution to this problem of manufacturing on Mars involves many different disciplines, including materials and energy technologies. Research problems in AI include knowledge capture for replication; design for manufacturability; and design of systems with self-monitoring, self-diagnosis, and self-repair capabilities.

Conclusion

Let me conclude by saying that AI continues to be a possible dream worthy of dreaming. Advances in AI have been significant. AI will continue to be the generator of interesting problems and solutions. However, ultimately the goals of AI will be realized only with developments within the broader context of computer science, such as the availability of a Giga-PC (i.e., a billion-operations-per-second computer at PC prices) and advances in software tools, environments, and algorithms.

Acknowledgment

I was fortunate to have been supported by ARPA for nearly 30 years. I would like to thank Lick Licklider, Ivan Sutherland, Larry Roberts, and Bob Kahn for their vision in fostering AI research even though it was not part of their own personal research agendas, and the current leadership at ARPA, Duane Adams, Ed Thompson, and Allen Sears, for continuing the tradition.

I am grateful to Herb Simon, Ed Feigenbaum, Jaime Carbonell, Tom Mitchell, Jack Mostow, Chuck Thorpe, and Jim Kocher for suggestions, comments, and help in the preparation of this lecture.

RAJ REDDY, Herbert A. Simon University Professor, is Dean of the School of Computer Science at CMU. His mailing address is Carnegie Mellon University, School of Computer Science, 5325 Wean Hall, Pittsburgh, PA 15213-3890. email: [email protected].

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

© ACM 0002-0782/96/0500 $3.50
