notes OF ARTIFICIAL INTELLIGENCE


  • 8/7/2019 notes OF ARTIFICIAL INTELLIGENCE


INTRODUCTION TO ARTIFICIAL INTELLIGENCE

Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert systems. The element that the fields of AI have in common is the creation of machines that can "think".

In order to classify machines as "thinking", it is necessary to define intelligence. To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and relationships? And what about perception and comprehension? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent machines. One of the most challenging approaches facing experts is building systems that mimic the behavior of the human brain, made up of billions of neurons, and arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.

Artificial Intelligence has come a long way from its early roots, driven by dedicated researchers. The beginnings of AI reach back before electronics, to philosophers and mathematicians such as Boole and others, theorizing on principles that were used as the foundation of AI logic. AI really began to intrigue researchers with the invention of the computer in 1941. The technology was finally available, or so it seemed, to simulate intelligent behavior. Over the next four decades, despite many stumbling blocks, AI grew from a dozen researchers to thousands of engineers and specialists, and from programs capable of playing checkers to systems designed to diagnose disease.

AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well as computer interfaces and word processors, owe their existence to research into artificial intelligence. The theory and insights brought about by AI research will set the trend in the future of computing. The products available today are only bits and pieces of what are soon to follow, but they are a movement towards the future of artificial intelligence. The advancements in the quest for artificial intelligence have affected, and will continue to affect, our jobs, our education, and our lives.

Evidence of Artificial Intelligence folklore can be traced back to ancient Egypt, but with the development of the electronic computer in 1941, the technology finally became available to create machine intelligence. The term artificial intelligence


was first coined in 1956, at the Dartmouth conference, and since then Artificial Intelligence has expanded because of the theories and principles developed by its dedicated researchers. Through its short modern history, advancement in the field of AI has been slower than first estimated, but progress continues to be made. From its birth four decades ago, there have been a variety of AI programs, and they have impacted other technological advancements.

    The Era of the Computer:

In 1941 an invention revolutionized every aspect of the storage and processing of information. That invention, developed in both the US and Germany, was the electronic computer. The first computers required large, separate air-conditioned rooms, and were a programmer's nightmare, involving the separate configuration of thousands of wires to even get a program running.

The 1949 innovation, the stored-program computer, made the job of entering a program easier, and advancements in computer theory led to computer science, and eventually Artificial Intelligence. With the invention of an electronic means of processing data came a medium that made AI possible.

    The Beginnings of AI:

Although the computer provided the technology necessary for AI, it was not until the early 1950's that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This discovery influenced much of the early development of AI.
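Wiener's sense-compare-act loop can be sketched in a few lines of Python. The room model, deadband width, and heating rates below are invented purely for illustration:

```python
def thermostat_step(actual_temp, desired_temp, heater_on):
    """One cycle of Wiener-style feedback: sense, compare, act."""
    if actual_temp < desired_temp - 0.5:   # too cold: turn heat on
        return True
    if actual_temp > desired_temp + 0.5:   # too warm: turn heat off
        return False
    return heater_on                       # within the deadband: no change

# A crude room that warms while the heater runs and cools otherwise.
temp, heater = 15.0, False
for _ in range(20):
    heater = thermostat_step(temp, desired_temp=20.0, heater_on=heater)
    temp += 0.8 if heater else -0.3
print(round(temp, 1))  # settles near 20
```

The loop never "knows" the desired temperature is 20; it only reacts to the sign of the error each cycle, which is exactly the point Wiener was making about feedback.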

In late 1955, Newell and Simon developed The Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion. The impact that the Logic Theorist made on both the


public and the field of AI has made it a crucial stepping stone in the development of the field.
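The Logic Theorist's strategy of always extending the branch most likely to reach the conclusion can be sketched as a best-first search. The toy arithmetic problem and distance heuristic below are illustrative only, not the program's actual theorem-proving logic:

```python
import heapq

def best_first_search(start, goal, expand, score):
    """Explore a problem tree, always extending the branch the
    heuristic rates most promising (lowest score first)."""
    frontier = [(score(start), start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for child in expand(node):
            heapq.heappush(frontier, (score(child), child, path + [child]))
    return None

# Toy problem: reach 10 from 1 by doubling or adding 1;
# the heuristic prefers numbers closer to the goal.
path = best_first_search(
    1, 10,
    expand=lambda n: [n * 2, n + 1] if n < 10 else [],
    score=lambda n: abs(10 - n),
)
print(path)  # [1, 2, 4, 8, 9, 10]
```

Unlike a blind exhaustive search, the heuristic prunes attention toward promising branches, which is the idea the text attributes to the Logic Theorist.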

In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to New Hampshire for "The Dartmouth summer research project on artificial intelligence." From that point on, because of McCarthy, the field would be known as Artificial Intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI, and served to lay the groundwork for the future of AI research.

    Knowledge Expansion

In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist did; and second, making systems that could learn by themselves.

In 1957, the first version of a new program, The General Problem Solver (GPS), was tested. The program was developed by the same pair who developed the Logic Theorist. The GPS was an extension of Wiener's feedback principle, and was capable of solving a greater range of common-sense problems. A couple of years after the GPS, IBM contracted a team to research artificial intelligence. Herbert Gelernter spent three years working on a program for solving geometry theorems.

While more programs were being produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development: the LISP language, which is still used today. LISP stands for LISt Processing, and was soon adopted as the language of choice among most AI developers.

In 1963 MIT received a 2.2 million dollar grant from the United States government to be used in researching Machine-Aided Cognition (artificial intelligence). The grant, made by the Department of Defense's Advanced Research Projects Agency (ARPA), was intended to ensure that the US would stay ahead of the Soviet Union in technological advancements. The project served to increase the pace of development in AI


research, by drawing computer scientists from around the world, and continued funding.

    The Multitude of programs

The next few years produced a multitude of programs, one notable example being SHRDLU. SHRDLU was part of the microworlds project, which consisted of research and programming in small worlds (such as with a limited number of geometric shapes). The MIT researchers, headed by Marvin Minsky, demonstrated that when confined to a small subject matter, computer programs could solve spatial problems and logic problems. Other programs which appeared during the late 1960's were STUDENT, which could solve algebra story problems, and SIR, which could understand simple English sentences. The result of these programs was a refinement in language comprehension and logic.

Another advancement in the 1970's was the advent of the expert system. Expert systems predict the probability of a solution under set conditions.

Because of the large storage capacity of computers at the time, expert systems had the potential to interpret statistics and formulate rules. The applications in the marketplace were extensive, and over the course of ten years, expert systems had been introduced to forecast the stock market, aid doctors in diagnosing disease, and direct miners to promising mineral locations. This was made possible by the systems' ability to store conditional rules and a store of information.

During the 1970's many new methods in the development of AI were tested, notably Minsky's frames theory. Also, David Marr proposed new theories about machine vision: for example, how it would be possible to distinguish an image based on its shading, basic information on shapes, color, edges, and texture. With analysis of this information, frames of what an image might be could then be referenced. Another development during this time was the PROLOG language, proposed in 1972.

During the 1980's AI moved at a faster pace, and further into the corporate sector. In 1986, US sales of AI-related hardware and software surged to $425 million. Expert systems were in particular demand because of their efficiency. Companies such as Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computers. DuPont, General Motors, and Boeing relied heavily on expert systems. Indeed, to keep up with the demand for computer experts, companies such as Teknowledge and Intellicorp, specializing in creating software to aid in producing expert systems, were formed. Other expert systems were designed to find and correct flaws in existing expert systems.


    The Transition from Lab to Life

The impact of computer technology, AI included, was now being felt. No longer was computer technology the province of a select few researchers in laboratories. The personal computer made its debut, along with many technological magazines. Foundations such as the American Association for Artificial Intelligence also started. There was also, with the demand for AI development, a push for researchers to join private companies. 150 companies, such as DEC, which employed an AI research group of 700 personnel, spent $1 billion on internal AI groups.

Other fields of AI also made their way into the marketplace during the 1980's. One in particular was machine vision. The work by Minsky and Marr was now the foundation for the cameras and computers on assembly lines performing quality control. Although crude, these systems could distinguish differences in the shapes of objects using black-and-white contrast. By 1985 over a hundred companies offered machine vision systems in the US, and sales totaled $80 million.

The 1980's were not totally good for the AI industry. In 1986-87 the demand for AI systems decreased, and the industry lost almost half a billion dollars. Companies such as Teknowledge and Intellicorp together lost more than $6 million, about a third of their total earnings. The large losses convinced many research leaders to cut back funding. Another disappointment was the so-called "smart truck" financed by the Defense Advanced Research Projects Agency. The project's goal was to develop a robot that could perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the Pentagon cut funding for the project.

Despite these discouraging events, AI slowly recovered. New technology was being developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make decisions under uncertain conditions. Neural networks were also being reconsidered as possible ways of achieving Artificial Intelligence. The 1980's introduced AI to the corporate marketplace, and showed that the technology had real-life uses, ensuring it would be a key in the 21st century.

    AI put to the Test

The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements. AI has also made the transition to the home. With the popularity of the AI computer growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple using fuzzy logic. With a greater demand for


AI-related technology, new advancements are becoming available. Inevitably, Artificial Intelligence has affected, and will continue to affect, our lives.

APPROACHES

    METHODS USED TO CREATE ARTIFICIAL INTELLIGENCE

In the quest to create intelligent machines, the field of Artificial Intelligence has split into several different approaches, based on opinions about the most promising methods and theories. These rival theories have led researchers to one of two basic approaches: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.

    Neural Networks and Parallel Computation

The human brain is made up of a web of billions of cells called neurons, and understanding its complexities is seen as one of the last frontiers in scientific research. It is the aim of AI researchers who prefer the bottom-up approach to construct electronic circuits that act as neurons do in the human brain. Although much of the working of the brain remains unknown, the complex network of neurons is what gives humans intelligent characteristics. By itself, a neuron is not intelligent, but when grouped together, neurons are able to pass electrical signals through networks.

    The neuron "firing", passing a signal to the next in the chain.

Research has shown that a signal received by a neuron travels through the dendrite region and down the axon. Separating nerve cells is a gap called the synapse. In order for the signal to be transferred to the next neuron, it must be converted from electrical to chemical energy. The signal can then be received by the next neuron and processed.


Warren McCulloch, after completing medical school at Yale, along with Walter Pitts, a mathematician, proposed a hypothesis to explain the fundamentals of how neural networks made the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. An important part of mathematical logic, binary numbers (represented as 1's and 0's, or true and false) were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.

A century earlier, the true/false nature of binary numbers was theorized in 1854 by George Boole in his postulates concerning the Laws of Thought. Boole's principles make up what is known as Boolean algebra, the collection of logic concerning the AND, OR, and NOT operands. For example, according to the Laws of Thought (for this example, consider all apples red):

    Apples are red -- is True
    Apples are red AND oranges are purple -- is False
    Apples are red OR oranges are purple -- is True
    Apples are red AND oranges are NOT purple -- is also True
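The four statements translate directly into Python's boolean operators:

```python
apples_red = True       # "consider all apples red"
oranges_purple = False  # oranges are not purple

print(apples_red)                         # True
print(apples_red and oranges_purple)      # False
print(apples_red or oranges_purple)       # True
print(apples_red and not oranges_purple)  # True
```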

Boole also assumed that the human mind works according to these laws: that it performs logical operations that could be reasoned about. Ninety years later, Claude Shannon applied Boole's principles to circuits, the blueprint for electronic computers. Boole's contribution to the future of computing and Artificial Intelligence was immeasurable, and his logic is the basis of neural networks.

McCulloch and Pitts, using Boole's principles, wrote a paper on neural network theory. The thesis dealt with how the networks of connected neurons could perform logical operations. It also stated that, on the level of a single neuron, the release or failure to release an impulse was the basis by which the brain makes true/false decisions. Using the idea of feedback theory, they described the loop which existed between the senses ---> brain ---> muscles, and likewise concluded that memory could be defined as the signals in a closed loop of neurons. Although we now know that logic in the brain occurs at a level higher than McCulloch and Pitts theorized, their contributions were important to AI because they showed how the firing of signals between connected neurons could cause the brain to make decisions. McCulloch and Pitts's theory is the basis of artificial neural network theory.
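A single McCulloch-Pitts unit (fire if the weighted sum of binary inputs reaches a threshold) is simple enough to sketch directly. The weight and threshold settings below are one conventional way to realize logic gates with such a unit:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: output 1 (fire) iff the weighted sum
    of binary inputs reaches the threshold, otherwise 0 (silent)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same unit yields different logic gates under different settings:
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    weights=[-1],   threshold=0)

print(AND(1, 1), OR(1, 0), NOT(1))  # 1 1 0
```

This is exactly the link the text describes: one firing/not-firing unit behaves like a device for true/false decisions.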

Using this theory, McCulloch and Pitts then designed electronic replicas of neural networks, to show how electronic networks could generate logical processes. They also stated that neural networks may, in the future, be able to learn and recognize patterns. The results of their research, and two of Wiener's books, served to increase enthusiasm, and laboratories of computer-simulated neurons were set up across the country.


Two major factors have inhibited the development of full-scale neural networks. The first is the expense of constructing a machine to simulate neurons: it was expensive even to construct neural networks with the number of neurons in an ant. Although the cost of components has decreased, the computer would have to grow thousands of times larger to be on the scale of the human brain. The second factor is current computer architecture. The standard von Neumann computer, the architecture of nearly all computers, lacks an adequate number of pathways between components. Researchers are now developing alternate architectures for use with neural networks.

Even with these inhibiting factors, artificial neural networks have presented some impressive results. Frank Rosenblatt, experimenting with computer-simulated networks, was able to create a machine that could mimic the human thinking process and recognize letters. But, with new top-down methods becoming popular, parallel computing was put on hold. Now neural networks are making a return, and some researchers believe that with new computer architectures, parallel computing and the bottom-up theory will be a driving factor in creating artificial intelligence.
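Rosenblatt's perceptron learning rule can be sketched on a toy letter-recognition task. The 3x3 pixel "letters", learning rate, and epoch count below are invented for illustration; a real perceptron of the era worked on far larger inputs:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights whenever the unit's
    prediction disagrees with the desired label."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Tiny 3x3 "letters" as flat pixel lists: recognize L (label 1) vs T (label 0).
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]
T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]
w, b = train_perceptron([(L, 1), (T, 0)])
print(classify(L, w, b), classify(T, w, b))  # 1 0
```

Because the two patterns are linearly separable, the perceptron convergence theorem guarantees the rule finds separating weights.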

Top-Down Approaches: Expert Systems

Because of the large storage capacity of computers, expert systems had the potential to interpret statistics in order to formulate rules. An expert system works much like a detective solving a mystery: using the information, and logic or rules, an expert system can solve the problem. For example, if the expert system was designed to distinguish birds, it might store a chart of if-then rules linking observed features to species.

Such charts represent the logic of expert systems. Using a similar set of rules, expert systems can have a variety of applications. With improved interfacing, computers may begin to find a larger place in society.
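A rule chart of this kind can be sketched as a minimal rule-based classifier. The features and species below are illustrative, not drawn from a real ornithology knowledge base:

```python
# Each rule pairs a condition on observed features with a conclusion,
# mimicking one row of an expert system's if-then chart.
RULES = [
    (lambda f: f["swims"] and f["quacks"],   "duck"),
    (lambda f: f["flies"] and f["sings"],    "songbird"),
    (lambda f: not f["flies"] and f["tall"], "ostrich"),
]

def identify(features):
    """Fire the first rule whose condition holds for the evidence."""
    for condition, species in RULES:
        if condition(features):
            return species
    return "unknown bird"

print(identify({"swims": True, "quacks": True, "flies": False,
                "sings": False, "tall": False}))  # duck
```

Adding expertise means adding rows to `RULES`; the inference procedure never changes, which is why expert systems scaled so readily in the 1980's.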

    Chess

AI-based game-playing programs combine intelligence with entertainment. One game with strong AI ties is chess. World-champion chess-playing programs can see


ahead twenty-plus moves in advance for each move they make. In addition, the programs have the ability to get progressively better over time because of their ability to learn. Chess programs do not play chess as humans do. In three minutes, Deep Thought (a master program) considers 126 million moves, while a human chess master on average considers less than 2 moves. Herbert Simon suggested that human chess masters are familiar with favorable board positions, and the relationships between thousands of pieces in small areas. Computers, on the other hand, do not take hunches into account. The next move comes from exhaustive searches into all moves, and the consequences of the moves, based on prior learning. Chess programs running on Cray supercomputers have attained a rating of 2600 (senior master), in the range of Garry Kasparov, the Russian world champion.
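At its core, the exhaustive search described above is minimax: score every line of play assuming each side picks the move best for itself. A sketch on an invented toy game tree (the tree shape and leaf scores are illustrative, nothing like a real chess evaluator):

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Exhaustively score a game tree, alternating between a side
    that maximizes the score and one that minimizes it."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy tree: a tuple of subtrees; leaves are plain scores.
tree = ((3, 5), (2, 9), (0, 1))
children = lambda n: n if isinstance(n, tuple) else []
evaluate = lambda n: n  # leaves already carry their score

best = max(minimax(branch, 1, False, children, evaluate)
           for branch in children(tree))
print(best)  # 3: pick the branch whose worst reply is least bad
```

Note the machine does not guess which reply the opponent "would" make; it assumes the worst, which is why the search must be exhaustive rather than hunch-driven.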

    Frames

One method that many programs use to represent knowledge is frames. Pioneered by Marvin Minsky, frame theory revolves around packets of information. For example, say the situation was a birthday party. A computer could call on its birthday frame, and use the information contained in the frame to apply to the situation. The computer knows that there is usually cake and presents because of the information contained in the knowledge frame. Frames can also overlap, or contain sub-frames. The use of frames also allows the computer to add knowledge. Although not embraced by all AI developers, frames have been used in comprehension programs such as SAM.
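The birthday-party frame can be sketched as nested slot tables with inheritance between frames. All slot and frame names here are illustrative:

```python
# A hypothetical frame store: each frame holds default slots, and
# "is_a" links let frames overlap by inheriting from a parent frame.
FRAMES = {
    "event": {"location": None, "attendees": []},
    "birthday_party": {
        "is_a": "event",       # sub-frame of the generic event frame
        "has_cake": True,      # default knowledge: usually cake...
        "has_presents": True,  # ...and presents
    },
}

def lookup(frame_name, slot):
    """Fetch a slot, falling back to the parent frame's defaults."""
    frame = FRAMES[frame_name]
    if slot in frame:
        return frame[slot]
    if "is_a" in frame:
        return lookup(frame["is_a"], slot)
    raise KeyError(slot)

print(lookup("birthday_party", "has_cake"))   # True
print(lookup("birthday_party", "attendees"))  # [] (inherited default)

# Adding knowledge is just filling or extending slots:
FRAMES["birthday_party"]["has_balloons"] = True
```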

    Conclusion

This page touched on some of the main methods used to create intelligence. These approaches have been applied to a variety of programs. As we progress in the development of Artificial Intelligence, other theories will become available, in addition to building on today's methods.

    APPLICATIONS

    What we can do with AI

We have been studying this issue of AI application for quite some time now and know all the terms and facts. But what we all really need to know is what we can do to get our hands on some AI today. How can we as individuals use our own technology? We hope to discuss this in depth (but as briefly as possible) so that you, the consumer, can use AI as it is intended.

First, we should be prepared for a change. Our conservative ways stand in the way of progress. AI is a new step that is very helpful to society. Machines can do jobs that require detailed instructions and mental alertness. AI, with its learning capabilities, can accomplish those tasks, but only if the world's conservatives are ready to change and allow this to be a possibility. It makes us think about how early man finally accepted the wheel as a good invention, not something taking away from his heritage or tradition.

Secondly, we must be prepared to learn about the capabilities of AI. The more use we get out of the machines, the less work is required of us; in turn, there are fewer injuries and less stress for human beings. Human beings are a species that learns by trying, and we must be prepared to give AI a chance, seeing AI as a blessing, not an inhibition.

Finally, we need to be prepared for the worst of AI. Something as revolutionary as AI is sure to have many kinks to work out. There is always the fear that, if AI is learning-based, machines will learn that being rich and successful is a good thing, then wage war against economic powers and famous people. There are so many things that can go wrong with a new system, so we must be as prepared as we can be for this new technology.

However, even though the fear of the machines is there, their capabilities are infinite. Whatever we teach AI, they will suggest in the future if a positive outcome arises from it. AI systems are like children that need to be taught to be kind, well-mannered, and intelligent. If they are to make important decisions, they should be wise. We as citizens need to make sure AI programmers are keeping things on the level. We should be sure they are doing the job correctly, so that no future accidents occur.

    AIAI Teaching Computers Computers

Does this sound a little redundant? Or maybe a little redundant? Well, just sit back and let me explain. The Artificial Intelligence Applications Institute has many projects that it is working on to make its computers learn how to operate themselves with less human input. More functionality with less input is a job for AI technology. I will discuss just two of these projects: AUSDA and EGRESS.

AUSDA is a program which will examine software to see if it is capable of handling the tasks you need performed. If it isn't able, or isn't reliable, AUSDA will instruct you on finding alternative software which would better suit your needs. According to AIAI, the software will try to provide solutions to problems like "identifying the root causes of incidents in which the use of computer software is involved, studying different software development approaches, and identifying aspects of these which are relevant to those root causes producing guidelines for using and improving the development approaches studied, and providing support in the integration of these approaches, so that they can be better used for the development and maintenance of safety critical software."

Sure, for the computer buffs this program is definitely good news. But what about the average person, who thinks the mouse is just the computer's foot pedal? Where do they fit into computer technology? Well, don't worry, because us nerds are looking out for you too! Just ask AIAI what they have for you, and it turns out that EGRESS is right up your alley. This is a program which is studying


human reactions to accidents. It is trying to make a model of how people's reactions in panic moments save lives. Although it seems like in tough situations humans would fall apart and have no idea what to do, it is in fact the opposite. Quick decisions are usually made, and are effective but not flawless. These computer models will help rescuers make smart decisions in times of need. AI can't be positive all the time, but it can suggest actions which we can act out, and therefore lead to safe rescues.

So AIAI is teaching computers to be better computers and better people. AI technology will never replace man, but it can be an extension of our bodies which allows us to make more rational decisions faster. And with institutes like AIAI, we continue each day to step forward into progress.

    No worms in these Apples

    by Adam Dyess

Apple Computers may not have ever been considered the state of the art in Artificial Intelligence, but a second look should be given. Not only are today's PC's becoming more powerful, but AI influence is showing up in them. From macros to voice recognition technology, PC's are becoming our talking buddies. Who else would go surfing with you on short notice, even if it is the net? Who else would care to tell you that you have a business appointment scheduled at 8:35 and 28 seconds, and would notify you about it every minute till you told it to shut up? Even with all the abuse we give today's PC's, they still plug away to make us happy. We use PC's more not because they do more or are faster, but because they are getting so much easier to use. And their ease of use comes from their use of AI.

All Power Macintoshes come with speech recognition. That's right: you tell the computer to do what you want without it having to learn your voice. This application of AI in personal computers is still very crude, but it does work given the correct conditions and a clear voice, not to mention the requirement of at least 16 MB of RAM for quick use. Also, Apple's Newton and other handheld note pads have script recognition. Cursive or print can be recognized by these notepad-sized devices. With the pen that accompanies your silicon note pad, you can write a little note to yourself which magically changes into computer text if desired. No more complaining about sloppily written reports if your computer can read your handwriting. If it can't read it, though, perhaps in the future you can correct it by dictating your letters instead.

Macros provide a huge stress relief, as your computer does faster what you could do more tediously. Macros are old, but they are, to an extent, intelligent: you have taught the computer to do something by doing it only once. In businesses, many


times applications are upgraded, but the files must be converted: all of the business's records must be changed into the new software's format. Macros save a human the work of converting hundreds of files, by teaching the computer to mimic the actions of the operator, thus teaching the computer a task that it can repeat whenever ordered to do so.
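A toy model of such a conversion macro: each step is recorded once while it is defined, and the whole sequence can then be replayed over every file. The record format and step names are invented for illustration:

```python
macro = []

def step(fn):
    """Record a step into the macro at the moment it is taught."""
    macro.append(fn)
    return fn

@step
def uppercase_name(rec):
    rec["name"] = rec["name"].upper()

@step
def add_version(rec):
    rec["format"] = "v2"

def run_macro(record):
    """Replay every recorded step, in order, on one record."""
    for action in macro:
        action(record)
    return record

# Teach once, then convert hundreds of files mechanically:
old_files = [{"name": "ledger"}, {"name": "payroll"}]
converted = [run_macro(dict(r)) for r in old_files]
print(converted[0])  # {'name': 'LEDGER', 'format': 'v2'}
```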

AI is all around us, but get ready for a change. And don't think the change will be hard on us, because AI has been developed to make our lives easier.

    The Scope of Expert Systems

As stated in the 'approaches' section, an expert system is able to do the work of a professional. Moreover, a computer system can be trained quickly, has virtually no operating cost, never forgets what it learns, never calls in sick, retires, or goes on vacation. Beyond those, intelligent computers can consider a large amount of information that may not be considered by humans.

But to what extent should these systems replace human experts? Or should they at all? For example, some people once considered an intelligent computer a possible substitute for human control over nuclear weapons, citing that a computer could respond more quickly to a threat. And many AI developers were afraid of programs like Eliza, the computer psychiatrist, and the bond that humans were making with the computer. We cannot, however, overlook the benefits of having a computer expert. Forecasting the weather, for example, relies on many variables, and a computer expert can more accurately pool all of its knowledge. Still, a computer cannot rely on the hunches of a human expert, which are sometimes necessary in predicting an outcome.

In conclusion, in some fields, such as forecasting weather or finding bugs in computer software, expert systems are sometimes more accurate than humans. But in other fields, such as medicine, computers aiding doctors will be beneficial, but the human doctor should not be replaced. Expert systems have the power and range to aid, and in some cases replace, humans; if used with discretion, computer experts will benefit humankind.

PEOPLE:

    PETER ROSS

    Currently Senior Lecturer in AI (from Oct 96)

    Head of the Department of AI, University of Edinburgh

    http://library.thinkquest.org/2705/programs.html#elizahttp://library.thinkquest.org/2705/programs.html#eliza

    Do you think computers will ever be able to think and talk like humans? Yes, but it's a long way off.

    What is the most exciting part of AI that encourages you to stay in the field? Two things: the developing study of complex dynamical systems, and the exploration of evolutionary computing ideas.

    AUSTIN TATE

    Professor of Knowledge-Based Systems, University of Edinburgh; Technical Director of the Artificial Intelligence Applications Institute

    You will see deep space probes with advanced automation and AI travel out from our planet; you will see autonomous sea and land vehicles exploring parts of our own planet too inhospitable for man to travel to. You will be able to have a personal assistant or co-worker who will work alongside you and get to know your tasks, processes, and preferences. It will do those things you wish you had time to do yourself but which are never at the top of your agenda. The same system will adapt itself to becoming an active aid as you and your family age. Someday, it might even be able to draft an answer to an email message like this one, as it will know the subject well enough.

    David Waltz

    Vice President, Computer Science Research, NEC Research Institute

    What do you see as some fundamental ways that AI in general will impact people's lives in the future?

    Systems will be smarter -- or perhaps just less stupid. Many Web applications will use AI to tailor system behavior to match your patterns and tastes; houses, cars, appliances, etc. will be smarter, saving energy and adapting their behavior to your needs and the current situation; automatic accident avoidance for cars will be followed by self-driving cars; household robots are possible in 15 years, likely in 30; education will become much more geared toward teaching students to find and use Web resources, and less toward memorizing anything. Work as we now know it may become unnecessary, and the overall productivity and wealth of societies can become vastly greater.


    I think that technology will move toward processors and memories on the same chip, leading to intelligent memory. An intelligent memory could search and compare each action you take with all the items it stores. Matched items can be used to suggest shortcuts, remind you of things you've done or need to do, etc. Computers will be much more proactive, though they can become unobtrusive if requested. People will have continuous portable Web access, and will depend heavily on it for work, entertainment, communication, education, etc.


    What is Artificial Intelligence?

    Intelligence is the ability to think, to imagine, to create, to memorize, to understand, to recognize patterns, to make choices, to adapt to change, and to learn from experience. Artificial intelligence is a human endeavor to create a non-organic, machine-based entity that has all the above abilities of natural organic intelligence. Hence it is called 'Artificial Intelligence' (AI).

    It is the ultimate challenge for an intelligence to create an equal, another intelligent being. It is the ultimate form of art, where the artist's creation not only inherits the impressions of his thoughts, but also his ability to think!

    According to Alan Turing (whose Turing machines of 1936 were the first abstract models of today's computers), if you question a human and an artificially intelligent being and, from their answers, cannot tell which is the artificial one, then you have succeeded in creating artificial intelligence.

    AAAI: American Association for Artificial Intelligence. The AAAI is a nonprofit scientific society devoted to the promotion and advancement of AI.

    ACM: Association for Computing Machinery. ACM is an international scientific society dedicated to advancing information technology.

    AIAI: Artificial Intelligence Applications Institute. AIAI is maintaining and improving its position for the application of knowledge-based techniques.

    AIAI is a technology transfer organisation that promotes the application of Artificial Intelligence research for the benefit of commercial, industrial, and government clients. AIAI has considerable experience of working with small innovative companies, and with research groups in larger corporations.

    AT&T Bell Labs: The main page for AT&T Bell Labs, where new Artificial Intelligence is being researched and applied.

    Carnegie Mellon University Artificial Intelligence Repository: A collection of files, programs, and publications of interest to Artificial Intelligence research.

    http://www.aaai.org/
    http://www.acm.org/
    http://www.aiai.ed.ac.uk/
    http://www.research.att.com/
    http://www.cs.cmu.edu/Web/Groups/AI/html/repository.html

    Applications of AI

    Artificial intelligence, in the form of expert systems and neural networks, has applications in every field of human endeavor. These systems combine precision and computational power with pure logic to solve problems and reduce error in operation. Already, robot expert systems are taking over many jobs in industries that are dangerous for or beyond human ability. Some of the applications, divided by domain, are as follows:

    Heavy Industries and Space: Robotics and cybernetics have taken a leap forward in combination with artificially intelligent expert systems. An entire manufacturing process is now totally automated, controlled, and maintained by a computer system in car manufacture, machine tool production, computer chip production, and almost every other high-tech process. Robots carry out dangerous tasks like handling hazardous radioactive materials, and robotic pilots carry out the complex maneuvering of unmanned spacecraft sent into space. Japan is the leading country in the world in terms of robotics research and use.

    Finance: Banks use intelligent software applications to screen and analyze financial data. Software that can predict trends in the stock market has been created, and has been known to beat humans in predictive power.

    Computer Science: Researchers in the quest for artificial intelligence have created spin-offs like dynamic programming, object-oriented programming, symbolic programming, intelligent storage management systems, and many more such tools. The primary goal of creating an artificial intelligence still remains a distant dream, but people are getting an idea of the ultimate path that could lead to it.
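    One of those spin-offs, dynamic programming, can be shown in miniature: computing the edit distance between two strings by storing and reusing the solutions to overlapping subproblems instead of recomputing them.

    ```python
    # Dynamic programming in miniature: the edit distance between two strings.
    # Each cell dp[i][j] reuses the answers to smaller subproblems.

    def edit_distance(a: str, b: str) -> int:
        # dp[i][j] = minimum edits to turn a[:i] into b[:j]
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = i                      # delete all of a[:i]
        for j in range(len(b) + 1):
            dp[0][j] = j                      # insert all of b[:j]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution
        return dp[len(a)][len(b)]

    print(edit_distance("kitten", "sitting"))  # → 3
    ```

    The table-filling trick is what makes the problem tractable: without it, the naive recursion would re-solve the same subproblems exponentially many times.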

    Aviation: Airlines use expert systems in planes to monitor atmospheric conditions and system status. The plane can be put on autopilot once a course is set for the destination.

    Weather Forecast: Neural networks are used for predicting weather conditions. Previous data is fed to a neural network, which learns the pattern and uses that knowledge to predict weather patterns.
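    The idea above can be sketched with a tiny neural network trained on past readings. The data here is synthetic (a noisy seasonal temperature cycle, purely for illustration), and the network is a minimal one-hidden-layer model trained by plain gradient descent, not any production forecasting system:

    ```python
    # A minimal sketch: feed the previous week of (synthetic) temperatures to a
    # small neural network and let it learn to predict the next day.
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(400)
    temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
    temps = (temps - temps.mean()) / temps.std()   # standardize for training

    # Use the last 7 days of temperature to predict the next day.
    window = 7
    X = np.array([temps[i:i + window] for i in range(len(temps) - window)])
    y = temps[window:]

    # One hidden layer, trained by gradient descent on squared error.
    W1 = rng.normal(0, 0.1, (window, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.1, (16, 1));      b2 = np.zeros(1)

    for _ in range(2000):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = (h @ W2 + b2).ravel()          # predicted next-day temperature
        err = pred - y
        # Backpropagate the mean-squared-error gradient.
        g_pred = (2 * err / len(y))[:, None]
        g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
        g_h = g_pred @ W2.T * (1 - h ** 2)
        g_W1 = X.T @ g_h; g_b1 = g_h.sum(0)
        for p, g in [(W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)]:
            p -= 0.01 * g

    mse = float(np.mean(err ** 2))
    print(f"training MSE: {mse:.3f}")
    ```

    After training, the mean squared error falls well below that of a trivial always-predict-the-mean forecaster, which is exactly the "learns the pattern from previous data" behavior described above.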

    Swarm Intelligence: This is both an approach to and an application of artificial intelligence, similar to a neural network. Here, programmers study how intelligence emerges in natural systems like swarms of bees, even though on an individual level a bee just follows simple rules. They study relationships in nature, like predator-prey relationships, that give an insight into how intelligence emerges in a swarm or collection from simple rules at the individual level. They develop intelligent systems by creating agent programs that mimic the behavior of these natural systems!

    Is artificial intelligence really possible? Can an intelligence like the human mind surpass itself and create its own image? The depth and the powers of the human mind are only just being tapped. Who knows, it might be possible; only time can tell! Even if such an intelligence is created, will it share our sense of morals and justice? Will it share our idiosyncrasies? This will be the next step in the evolution of intelligence. I hope I have succeeded in conveying to you the excitement and possibilities this subject holds!

    http://www.buzzle.com/articles/robotics/
    http://www.buzzle.com/articles/industrial-automation.html
    http://www.buzzle.com/articles/weather-forecast/