
    AI Unplugged

    Unplugging Artificial Intelligence

    Annabel Lindner, Stefan Seegerer

    Activities and teaching material on artificial intelligence

    Preface

    AI (Artificial Intelligence) is becoming a topic of increasing social importance. Political reactions like the publication of the AI strategy of the German Government in late 2018 are one indicator of this. But more importantly, we are already interacting with AI systems as if it were the most natural thing in the world, for example when using language assistants such as Siri or Alexa. Nevertheless, according to surveys, over 50% of Germans do not know what artificial intelligence is.

    To address this issue, we have put together a collection of Unplugged Activities related to the topic of AI. Unplugged Activities provide approaches that help learners of all ages to actively experience the ideas and concepts of computer science, without the use of a computer.

    This brochure contains five activities you can use to teach ideas and concepts of artificial intelligence to learners of all ages.

    Nowadays, AI is primarily realized through machine learning, but artificial intelligence is far more than that: AI is not only about technical aspects, but also raises questions of social relevance. This brochure shows how these topics can be discussed with children and adults.

    If you have any questions, comments or remarks about this material, please do not hesitate to contact us at [email protected].

    Some activities require additional material. Print templates can be found here:

    https://aiunplugged.org

    https://ddi.cs.fau.de/schule/ai-unplugged

    Contents

    Classification with Decision Trees

    #deeplearning

    Reinforcement Learning

    Back to the Roots

    The Turing Test

    Further Ideas

    Classification with Decision Trees
    The Good-Monkey-Bad-Monkey Game

    Target group Primary School Level, Secondary School Level

    That’s what it’s all about

    How does a computer make decisions independently? How does a computer decide whether a person is athletic, should acquire a loan, etc.? Such classification processes are a frequent application of AI. In this activity, students have the opportunity to create their own classification model using a decision tree. In the end, the best one of the students’ models is selected for further classification tasks.

    These ideas are behind it

    • AI classifies data based on patterns.

    • AI uses the classification model that best fits the given data.

    • Classification models are not perfect.

    • Certain combinations of characteristics indicate a certain category.

    What you need • Monkey cards (preparation: cutting out the template; alternatively use the digital version)

    • Blackboard with magnets or pinboard

    Here’s how it works

    Students examine how a series of sample elements (training data) belongs to a category. To do this, they develop criteria that can be used to classify new elements. Subsequently, the resulting models are tested with new examples (test data) and the accuracy of the prediction is determined.


    Context

    We are animal keepers in a zoo and responsible for feeding the monkeys. The monkeys look very cute, but we have to be careful because some monkeys bite. We already know whether the monkeys in the zoo bite. However, new monkeys will be joining the group soon, and we need to consider how to find out which new monkeys bite and which don't - preferably without getting too close to their teeth.

    Activity Description

    Depending on the target group, you choose the elementary game version with 20 picture cards (blue) or the advanced version with 40 picture cards (blue and green). These 20 or 40 monkeys are all animals of the zoo, i.e. we already know if they are going to bite. They are split into training and test data. Based on the training data, we think of criteria that determine whether the monkeys bite and check their reliability based on the test data. The training data - subdivided into the two categories biting and not biting - is pinned on the board. The test data is not revealed at first. You can think of rules by which to distinguish the monkeys yourself or use one of the proposals below (using reduced subsets is also possible). The rules that apply in the examples are illustrated with decision trees. First, make your students aware of the details they could focus on by illustrating the procedure with an example. For example, compare the monkey cards 01 to 04 and 05 to 08. In this example, the shape of the mouth is an indication of biting monkeys, but the eyes are not (Fig. 1). Alternatively, with older students, you can also use the simple version of the game (Version 1) to demonstrate the rules and the necessary procedures.

    Fig. 1: In this simple example all monkeys with bared teeth are biting.

    The students form teams of two and use the training data to develop criteria for distinguishing biting from non-biting monkeys. These must be clearly noted so that they can be applied to new examples by another team afterwards. One possibility to record the criteria is a decision tree. The goal should be that the existence or absence of a particular characteristic permits a clear assignment to one of the groups. The use of decision trees is optional; alternatively, it is also possible to write down decision rules explicitly.

    Version 1 (blue)

    Version 1 training data - biting: 6, 7, 8, 15; non-biting: 1, 2, 4, 9, 12, 14, 17, 18

    Version 1 test data - biting: 3, 5, 11, 19; non-biting: 10, 13, 16, 20

    [Figure: a sample decision tree for Version 1, using questions such as "Bared teeth?", "Eyes wide open?", "X-shaped eyes?" and "Does it smile?" to decide between biting and non-biting.]

    At the end of the training phase, the criteria formulated are exchanged with another team. Now, the students are shown the pictures of the remaining monkeys (test data) one after the other. For each image, the teams decide whether the monkey will bite or not, using the scheme of rules developed by their classmates. Each team notes down its decisions. After showing all monkeys, it is evaluated which team has best assessed the biting behaviour of the monkeys. It comes to the students' attention that many classification models categorize most monkeys correctly, but that it is difficult to properly classify all animals. For us as animal keepers it is therefore clever to use the most successful model when feeding the new monkeys, even if it doesn't guarantee that we will never get bitten.

    In the advanced version, image no. 21 (see Fig. 2) can be used to illustrate the problems of an AI system when the characteristic values of an element differ significantly from the training data. We have no experience with the characteristics of image no. 21, because this monkey has a new, unknown mouth shape. Accordingly, an appropriate assignment of the monkey is not possible. In practice, the behaviour of an AI system is very difficult to predict in this case. Instead of image no. 21, you can also use the image of a different animal to emphasize the differing characteristics of the new element. Subsequently, these examples can be applied to reality: a bank unexpectedly does not grant a loan to a specific customer, or a self-driving car recognizes leaves on the road as a dangerous situation and mistakenly slams on the brakes. In such situations, an AI system can even be dangerous if it is not comprehensible how its decisions were made.

    Fig. 2: For monkey 21, no explicit criteria can be derived from the data.

    Version 2 (blue & green)

    Version 2 training data - biting: 1, 2, 5, 9, 10, 14, 15, 16, 17, 28, 33, 35, 36; non-biting: 4, 7, 12, 19, 22, 23, 24, 25, 30, 32, 37, 38, 39, 40

    Version 2 test data - biting: 6, 13, 18, 34; non-biting: 3, 8, 11, 20, 21, 26, 27, 29, 31

    [Figure: a sample decision tree for Version 2, using the questions "Open mouth?", "Accessory?" and "At least one eye open?" to decide between biting and non-biting.]


    Background

    Category formation is made possible by recognizing repetitive patterns in individual elements. But how do these aspects relate to artificial intelligence?

    In so-called supervised learning, the AI system observes a series of input and output pairs (training data) and learns how they relate to each other, as well as which patterns are typical for which category. This knowledge is then used to classify new elements into the categories. Test data, whose categories we know but the AI system doesn't, is used to determine the quality of the learned classification model.

    The same principle is used for neural networks and other AI applications. This procedure can lead to various problems, because no model is perfect. Depending on the training data, the classification model can overweight or neglect certain characteristics of the training data, so that no general statements and, thus, no correct classification of unknown elements are possible. More training data can help to reduce these effects, but does not always lead to more accurate results: if the model is fitted too closely to the training data, overfitting occurs. In this case, the AI system learns the training data "by heart" and is no longer able to generalize to new data.

    It makes sense to address these aspects of machine learning as part of the activity. When applying their rules in the test phase, let the students explain which characteristics they used to classify the monkeys. This will illustrate that the students have created different sets of rules. Point out that a classification model is unlikely to be 100% accurate and that the model that best classifies the test data will be chosen in the end. Have students describe their own "learning process" and then compare it to that of a computer.

    Fig. 3: Training data split into two categories
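    The train/test procedure the students carry out by hand mirrors what happens in code. Below is a minimal sketch using scikit-learn; the feature encoding (bared teeth, X-shaped eyes, smiling) and the labels are invented for illustration and do not correspond to the actual monkey cards.

        # Minimal sketch of supervised classification with a decision tree.
        # Features per card: [bared_teeth, x_shaped_eyes, smiling] with 1 = yes, 0 = no.
        # The values and labels are invented; they do not reproduce the real cards.
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score

        X_train = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 0, 0]]
        y_train = ["biting", "biting", "non-biting", "non-biting", "biting", "non-biting"]

        # Cards the model has never seen, but whose labels we know
        X_test = [[1, 1, 1], [0, 1, 0]]
        y_test = ["biting", "non-biting"]

        model = DecisionTreeClassifier()      # the classification model
        model.fit(X_train, y_train)           # "training phase": derive splitting criteria
        predictions = model.predict(X_test)   # "test phase": classify unseen monkeys

        print(predictions)
        print("accuracy:", accuracy_score(y_test, predictions))

    Just like the students' rule sets, the learned tree is chosen because it fits the data at hand; it is not guaranteed to classify every new monkey correctly.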

    #deeplearning
    Image recognition with neural networks

    Target group Secondary School Level

    That’s what it’s all about

    How can a computer “recognize” things? How does a computer decide whether a photo shows a cat? How can it distinguish buildings from people? Recognizing objects based on their shape or their appearance is very easy for people. For the computer, which, for example, can be used in a self-driving car to recognize the objects in its environment, this represents a complex task. In this activity, pupils have the opportunity to find out how computers recognize the content of images.

    These ideas are behind it

    • Neural networks assign inputs to specific outputs: raw data, such as images, are classified, for example, by assigning terms to the objects in the image.

    • Neural networks consist of different abstraction layers that can identify increasingly complex features.

    • The classes of objects to be recognized must already be known to the AI system.

    What you need • Photo cards of houses, cats and cars for each group

    Here’s how it works

    The students recreate the image recognition process of a (simplified) neural network. They take on the roles of the different layers within such a network. They extract features from a photograph and classify the image. They recognize the limits of the system and consider which modifications to the network are necessary to achieve better results.


    Context

    As humans, we rely heavily on what we see. If we see a cat, we know immediately that it is a cat; if we see a dog, we recognize it at once. A computer, on the other hand, can't detect this as easily, but it can learn it - just like we did in infancy. The computer is shown a lot of pictures of dogs, but also of other animals. With this information, it learns which patterns in an image can be used to distinguish a dog from a cat. Properly trained, the computer can not only automatically label images, but also detect skin cancer or - built into a car - react to obstacles on the road.

    Activity Description

    Start by discussing how a computer might recognize the contents of an image. Answers will often refer to defined rules or a comparison with an image database, but nowadays computers do it differently. Divide the students into groups of three; each group receives a stack of photo cards. In every group there are three roles, each representing one layer of a neural network (see Fig. 4). The tasks of the roles are the following:

    A picks an image from the stack of photo cards (B and C should not see the image!), creates two different sketches of it (30 seconds each) and passes them on to B. It is important that C does not see the sketches.

    B receives the sketches from A and checks whether rectangular shapes, triangular shapes or round shapes are included. Then B passes the collected information on to C.

    C evaluates the information received using the following table and announces whether the original picture is a house, a car or a cat.

                Rectangular shape?   Triangular shape?   Round shape?
    House       Yes                  Yes                 No
    Car         Yes                  No                  Yes
    Cat         No                   Yes                 Yes

    Finally, A determines whether the solution is correct.

    Fig. 4: The students’ roles - A sketches the chosen photo, B extracts the shapes contained in the sketches, C maps the shape combination to a category.

    Let the students try the game in different roles. It is also possible to have a role played by two students or to explicitly assign each role to three students, who each represent a neuron (i.e. a node in the layer) and no longer a whole layer.

    After a short trial, hand out some images to the students that either do not fit into the categories that the net can recognize or have features that do not allow a clear classification. For example, a picture of a dog is not correctly recognized by the net - simply because the net does not know the category "dog". Based on this finding, students now consider how the net can be changed and expanded to recognize dogs or other objects in the future.


    First of all, a new output category must be introduced. At the same time, however, the number of characteristics identified by the net is no longer sufficient. Consequently, either further characteristics must be added that allow the categories to be distinguished, or several characteristics must be combined to form a more complex pattern. This merging ultimately involves the addition of further layers in the neural network. Fig. 6, for example, can be used to help students understand how to combine simple features into more complex patterns.
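    The classification step that role C performs is essentially a table lookup over fixed, known categories. The following is a hypothetical sketch of that step in code; the feature table is the one above, and the "dog" example shows what happens when a shape combination has no matching category.

        # Sketch of the paper "network": B extracts three shape features,
        # C looks the combination up in a fixed table of known categories.
        # The table is the one from the activity; everything else is illustrative.
        CATEGORIES = {
            # (rectangular, triangular, round) -> category
            (True, True, False): "house",
            (True, False, True): "car",
            (False, True, True): "cat",
        }

        def classify(rectangular, triangular, round_shape):
            """Role C: map the shape combination to a category."""
            key = (rectangular, triangular, round_shape)
            return CATEGORIES.get(key, "unknown - this combination was never trained")

        print(classify(False, True, True))   # a cat sketch -> "cat"
        print(classify(True, True, True))    # e.g. a dog sketch -> unknown

    To handle dogs as well, the table would need both a new output category and additional or combined features, which is exactly what adding further layers to a real network achieves.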

    Background

    When it is difficult to translate a given problem into logical rules, artificial neural networks are often used in problem-solving. Typical examples include the understanding of texts or the recognition of objects in images. The design idea for artificial neural networks originates from neurobiology and is based on the structure of the human brain. Analogous to a human nerve cell, which processes various stimuli and transmits an impulse, an artificial neuron also deals with various inputs and can transmit a signal. A weight is assigned to the input edges, i.e. they have a varying influence on the output of the neuron. An artificial neuron is thus somewhat similar to a human neuron, but functions more like a simple calculator: it multiplies the edge weights and input values, adds them up and passes on a result.
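    This "simple calculator" can be written down in a few lines. The weights, inputs and threshold below are invented for illustration; real networks learn the weights and usually use smoother activation functions.

        # A single artificial neuron: weighted sum of the inputs, then an activation.
        # Weights, inputs and threshold are invented values for illustration.
        def neuron(inputs, weights, threshold=1.0):
            weighted_sum = sum(x * w for x, w in zip(inputs, weights))
            return 1 if weighted_sum >= threshold else 0   # fire / do not fire

        # 0.5*0.8 + 1.0*0.9 + 0.2*(-0.4) = 1.22, which is above the threshold
        print(neuron(inputs=[0.5, 1.0, 0.2], weights=[0.8, 0.9, -0.4]))   # -> 1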

    Just like in the human nervous system, many artificial neurons are connected and form a network in this way. The neurons are organized in layers. Depending on the complexity of a problem, a net can consist of two or more layers. In the initial situation of this activity, there are three layers. If a net has further layers between the input and the output layers, this is called deep learning.

    In practice, image recognition and classification usually work by means of so-called convolutional neural networks, which are specialized in recognizing patterns and are therefore very suitable for classifying images. This type of neural network is characterized by the fact that it uses so-called convolutions to extract characteristics and patterns from input data. Nowadays, these networks can classify images faster than humans.

    How exactly does this work? Digital photos are composed of small colour elements - pixels - arranged in a grid. Each pixel has a certain colour value. For the computer, photos are - as opposed to humans - only numerical values. Initially, networks for image recognition try to recognize simple features. For this purpose, filters are placed over the image. This is similar to what we do in photo editing programs, for example, when we use a high pass filter (see Fig. 7). Ultimately, this is a mathematical calculation, which captures several pixels and calculates a new pixel.

    Fig. 7: Applying a filter

    Fig. 5: Examples of artificial neural networks - left: a simple network, right: a "deeper" network

    Depending on the filter, it can be detected, for instance, that pixels with similar brightness values can be brought together to form edges. On a subsequent level, features such as horizontal and vertical lines, circles or corners are extracted. Common image processing programs such as Gimp allow entering such filters as a matrix - a good way to explore their effects yourself.
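    Such a filter is just a small calculation slid across the pixel grid. The sketch below applies a 3x3 kernel to a tiny grayscale image; the image values and the kernel are illustrative, not taken from a real photo.

        # Sliding a 3x3 filter (convolution kernel) over a small grayscale image.
        # Each output value is the weighted sum of the 3x3 neighbourhood under the kernel.
        image = [
            [0, 0, 0, 10, 10],
            [0, 0, 0, 10, 10],
            [0, 0, 0, 10, 10],
            [0, 0, 0, 10, 10],
        ]
        kernel = [           # a simple vertical edge detector (illustrative values)
            [-1, 0, 1],
            [-1, 0, 1],
            [-1, 0, 1],
        ]

        def convolve(img, k):
            out = []
            for r in range(len(img) - 2):              # stop so the 3x3 window still fits
                row = []
                for c in range(len(img[0]) - 2):
                    value = sum(img[r + i][c + j] * k[i][j]
                                for i in range(3) for j in range(3))
                    row.append(value)
                out.append(row)
            return out

        for row in convolve(image, kernel):
            print(row)   # large values mark the vertical edge between dark and bright pixels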

    In the game, the drawings made by the students also serve as filters, as they extract the central elements of the object depicted in the photo. The sketches are then used to identify geometric shapes in the pictures. However, using only three features is a great simplification compared to a real neural network, which has several million neurons in a multitude of layers.

    On the first levels of a neural network, there is a multitude of simple and rather geometric filters. These patterns are then (again by the application of filters) combined into more complex patterns. On "deeper" levels not only corners and edges can be recognized, but also parts of objects, like eyes or fur, and finally even complete objects, like dogs or cats. During processing, (superfluous) information is repeatedly discarded, since, for example, the exact position of a diagonal line in an image is in many cases of little interest for the recognition of an object. In the end, a probability value indicates how likely it is that an image belongs to a specific category.

    However, a neural network cannot easily recognize the content of any image. Rather, its range of application is very limited: the neural net must first be "trained" with a very large number of images (several thousand). It learns which characteristics are decisive for images belonging to a certain category. Thus, the neural network can only correctly classify images whose category it already knows. For example, a net that is supposed to distinguish between dogs and cats cannot recognize other animals, but rather sorts them into one of the two familiar categories. However, a trained neural net can fulfil its task much faster than humans could ever do. Therefore, image recognition methods are already being used, for example, in self-driving cars to detect various objects in road traffic (oncoming traffic, pedestrians, etc.) or in skin cancer detection.

    Fig. 6: Neural nets identify increasingly complex features - first edges and simple geometric shapes, then complex shapes and object parts such as tyres or ears, and finally whole objects recognized on the basis of these shapes and parts.

    Reinforcement Learning
    Beat the Crocodile

    Target group Primary School Level, Secondary School Level

    That’s what it’s all about

    We are familiar with computers that can play chess and beat human players with superior ease. The Chinese board game Go, on the other hand, has long been considered so complex that only humans can master it - until Google's AlphaGo stunned professional human players. In this activity, we see how computers can learn strategies for games, even though they only know the rules of the game.

    These ideas are behind it

    • Computers can learn by "reward" and "punishment".

    • Computers evaluate the benefits of random actions based on reward and punishment.

    • Computers learn strategies or action sequences by striving for maximum reward.

    What you need • Per pair of students: 1 "mini chess" field, 3 monkey and 3 crocodile cards, 1 overview of possible moves

    • Colourful sweets (e.g. chocolate tokens) or paper tokens to evaluate the moves in 4 different colours (yellow, red, orange, blue; approx. 20 per colour)

    Here’s how it works

    Two students each play a game of "mini chess" against each other. One student assumes the role of a "paper" computer. At first, the computer selects its moves randomly, but gradually learns with a candy token system which moves help it to win and which ones end in defeat. Using the strategy that develops this way, the computer gets better and better over time.


    This activity is based on an idea from CS4FN (http://www.cs4fn.org/machinelearning/sweetlearningcomputer.php).


    Context

    How do humans learn to play board or video games? Maybe we watch others playing or try out how certain actions or moves influence the game. The more frequently we win, the better we get in a game. We develop strategies to determine which moves are most successful in certain game situations. In the same way, a computer learns to play games.

    Activity Description

    The game follows simple chess rules: each piece moves like a pawn, i.e. it can only move straight ahead and can only capture opposing pieces diagonally. One student takes over the monkeys and acts as a human player. Another student assumes the role of the computer in the form of the crocodiles.

    Fig. 8: Possible movements of a game piece

    One side has won if it manages...

    • to lead a piece to the other end of the playing field.

    • to capture all opposing pieces.

    • to ensure that the opponent cannot make any more moves in the next round.

    Fig. 9: Board before the game starts

    As preparation, printouts of the computer's move options are spread out in front of the player taking over the crocodiles. Then, chocolate tokens (or other coloured tokens) are distributed onto these moves: for each coloured arrow, you place a token of the corresponding colour in the area to the right of the game situation (see Fig. 11).

    The human player starts. He or she can move freely according to the rules of the game. Then it's the crocodiles' turn. The player compares the current playing field with the possible moves and selects the appropriate playing situation from the given alternatives. For quicker orientation, the turn each playing situation belongs to is indicated. In the first round only the two possibilities for turn 1 have to be considered, in the second round the 10 moves for turn 2, and in round 3 the 7 moves for turn 3. Symmetric game situations are not listed twice. The crocodile player then closes his or her eyes, randomly picks one of the tokens placed next to the respective game situation and shifts it to the depicted board. The colour of the token determines which move is made, and the player moves the piece according to the arrow of the same colour. If, for example, a red token is drawn, the crocodile is moved following the red arrow.

    Fig. 10: Crocodile is moved along the red arrow

    This procedure is repeated until the winner of the round has been determined. Before a new round is played, the computer adjusts its strategy as follows:

    • Crocodiles have won: An additional token in the colour of the last winning turn is placed on the square of that turn.

    • Monkeys have won: The chocolate token that determined the last move of the crocodile player is removed. The player of the monkeys may eat it.

    In addition, all tokens are placed to the right of the playing field again.

    Optional: In order to simplify the rules, you can also skip adding additional tokens when the crocodiles win.

    Background

    At first, the computer will hardly have a chance to win because it chooses its actions randomly (by picking a token with its eyes closed). The more games the computer finishes, the better it gets: it "learns" which moves help to win and which moves should be avoided because they ended in defeat in the past. In this way, the computer's strategy is refined gradually.

    Since the computer is punished for losing and rewarded for winning, we also speak of reinforcement learning - learning through reward and punishment:

    • Punishment = taking away a piece of candy from the move that led to defeat.

    • Reinforcement = adding a piece of candy to the turn that led to victory.

    This procedure "sorts out" those moves that resulted in defeat, so that at some point only "good" moves remain. In practice, strategies that do not lead to success would not be eliminated immediately, but only the probability of their occurrence would be reduced. In this way, the AI system gradually learns which strategy is most suitable in which situations, but does not immediately exclude individual strategies that have not led to success in every case. Although this procedure is simplified in the game by immediately removing moves that have led to defeat, it can never happen that all possible moves are eliminated for a game situation. For each situation there is at least one possible action that does not lead to an immediate defeat.
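    The token mechanics can be sketched directly in code. The following is a minimal sketch of the "paper computer": the game situations and colours are hypothetical placeholders for the printed move sheets, random.choice stands in for drawing a token with closed eyes, and the update mirrors the reward and punishment rules described above.

        import random

        # Tokens per game situation; the situations and colours are placeholders
        # for the printed boards and the coloured chocolate tokens.
        tokens = {
            "turn 1, situation A": ["red", "blue"],
            "turn 2, situation B": ["red", "blue", "yellow", "orange"],
        }

        def choose_move(situation):
            """Drawing a token with closed eyes: pick a random token for this situation."""
            return random.choice(tokens[situation])

        def learn(history, crocodiles_won):
            """Adjust the strategy after a game; history is a list of (situation, colour) pairs."""
            last_situation, last_colour = history[-1]
            if crocodiles_won:
                tokens[last_situation].append(last_colour)      # reward: add a token
            elif len(tokens[last_situation]) > 1:
                tokens[last_situation].remove(last_colour)      # punishment: the token is "eaten"

        # One made-up game in which the crocodiles lose after playing "red" in situation B:
        played = [("turn 1, situation A", choose_move("turn 1, situation A")),
                  ("turn 2, situation B", "red")]
        learn(played, crocodiles_won=False)
        print(tokens["turn 2, situation B"])   # "red" is gone and will no longer be drawn here

    Over many games, the colours that remain next to each situation encode the learned strategy, just as the distribution of chocolate tokens does on paper.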

    In this way, computers can learn to win a game simply by knowing the rules of the game or its possible inputs. For example, if a computer learns to play the video game Super Mario, it will initially only hit the keys randomly. This could lead to the computer stopping for minutes or running into the same opponent several times. It analyzes the objects or pixels in the image and reacts with inputs. Its goal is to maximize the points scored in the game, which act as a reward. The further the computer can move to the right, the greater the positive gain. Over time, it will learn, for example, that jumping increases its reward if an opponent is immediately to its right, as it advances further in the level by jumping over the opponent. In this way, an AI system's performance improves bit by bit in a game, whereby the system always tries to maximize its reward (or, more precisely, a certain reward function).

    As part of the decontextualisation, have students analyse how the computer's behaviour develops. It should become clear that the computer arrives at an efficient game strategy by assessing purely random actions. Afterwards, for example, a video about the game Super Mario (see website) can be used to show how reinforcement learning takes place in a neural network. Let the students reflect on the limits of the strategies learned by the system. You can combine this activity very well with the Back to the Roots: Crocodile Chess and Classic AI activity to highlight the contrast between learning systems and traditional AI applications such as rule-based systems.

    Fig. 11: Game setup: The distribution of the chocolate tokens shows the strategies learned

    Back to the Roots
    Crocodile Chess and Classic AI

    That’s what it’s all about

    The previous exercises deal extensively with learning AI systems. But that's not all AI has to offer: the origins of AI lie in logic and in the idea of formalizing knowledge in a mathematical description, thereby making it available to machines. This activity shows the differences between learning AI and traditional approaches, as well as the limits of these systems. For this purpose, the preceding Reinforcement Learning activity is performed with an expert system, which illustrates the very different approaches.

    These ideas are behind it

    • Knowledge must be representable in a formal way in order to be processed automatically.

    • Expert systems can combine rules and facts to generate new knowledge.

    • Such AI systems do not make independent decisions but work according to the rules of logic.

    • AI systems have processing mechanisms for automatically deducing information from existing knowledge.

    What you need • Per pair of students: 1 "mini chess" field, 3 monkey and 3 crocodile cards, 1 overview of rules for the next move

    Here’s how it works

    Just as in the Reinforcement Learning activity, two students play a game of "mini chess" against each other. One student assumes the role of a "paper" computer. If, as recommended, this activity is combined with the previous Reinforcement Learning version, it is a good idea to exchange roles. Instead of randomly choosing its moves, however, the computer now works according to predefined rules, which are made available as copies.

    Target group Primary School Level, Secondary School Level


    Context

    How can a computer be programmed to play board or video games? Computers can only "understand" the rules of a game and act accordingly if the rules are presented in such a way that a computer can process them. Knowledge must therefore be formally represented (e.g. by mathematical terms) to be available for machine processing. In this case, the computer can evaluate it with the help of logic and derive its actions from it. AI systems are therefore not really intelligent, but skillfully use different possibilities to derive their behaviour from the knowledge available.

    Activity Description

    The game follows simple chess rules and has the same basic conditions as described in the Reinforcement Learning activity: each piece moves like a pawn, i.e. it can only move straight ahead and can only capture opposing pieces diagonally. One student takes over the monkeys and acts as a human player. Another student assumes the role of the computer in the form of the crocodiles.

    Fig. 12: Board before the game starts

    One side has won if it manages...

    • to lead a piece to the other end of the playing field.

    • to capture all opposing pieces.

    • to ensure that the opponent cannot make any more moves in the next round.

    In preparation, the player taking over the crocodiles is given a printout of the rules for the computer's moves. These replace the move options and tokens from the Reinforcement Learning activity. The human player starts. He or she can move freely according to the rules of the game. Then it's the crocodiles' turn. The player compares the current game situation with the "rule table" and selects the appropriate scenario from all 10 options; symmetrical situations are not listed twice. Then he or she makes the move that the rule demands.

    Fig. 13: Crocodile is moved along the red arrow

    This procedure is repeated until a winner is determined. Several rounds can be played to check whether the computer is always able to win with the help of its rules.

    Afterwards, present the extended version of the crocodile chess game with 4x4 fields (see website) to the students. Highlight the fact that the rules available to the computer are no longer sufficient. Let the students draw a comparison to the mini-chess variant in the Reinforcement Learning activity, in which the computer is learning: even here the "knowledge" of the computer is no longer sufficient. This is where both systems reach their limits.

    The students now think about what procedures are necessary to adapt the computer's "knowledge" to the 4x4 version of the game. They realize that the rule-based computer has to be adjusted manually by humans by adding new rules about the best move. In comparison, the learning system can learn the best behaviour for the 4x4 field in the same way as it did in the previous activity, i.e. by assessing random behaviour, as soon as all new possible moves have been added. The learning system thus needs a new training phase in which the knowledge for the extended game is implicitly acquired, while the new rules must be explicitly added to the rule-based expert system. Here, the human being has the task of first determining the best move for each game situation from all possible moves and then formalizing all rules completely and comprehensively. This is not necessary for the learning system. Such an explicit formal representation of rules that control the system's actions is, however, hardly possible, if at all, for complex and multi-step problems. In this context, learning AI systems offer - in comparison to expert systems - the great advantage that they can determine such procedures "independently". Moreover, they can even detect correlations in data that cannot be identified by humans, e.g. because the consideration of many thousands of data records is necessary to find them.

    Background

    Logic and knowledge processing play an important role in many areas of computer science and are, furthermore, core topics of artificial intelligence. Since natural language is ambiguous and too diverse to be an appropriate medium for making knowledge accessible to machines, the question of the best possible representation for machines has been of decisive importance since the beginnings of AI.

    In this context, approaches of this traditional form of AI rely on symbolic knowledge representation, i.e. the explicit representation of knowledge in computer systems, for example with the help of logic. This enables the unambiguous, uniform and precise representation of knowledge, which is necessary for processing with a computer. Such representation methods are used, for example, in rule-based expert systems, which still play a role in commercial applications today. In such systems, logical statements that represent facts and rule knowledge are automatically used to draw conclusions about how the computer has to act. In this activity, the facts correspond to the current game situation, and the rule knowledge to the instructions about which move has to be made.

    The factual basis represents valid statements. A set of rules represented in "if ... then" form forms the rule base. Formally, this "if ... then" form can be expressed, for example, by propositional logic. A control system (inference engine) selects suitable rules based on the facts, evaluates them and acts accordingly. In complex expert systems, the conclusions drawn from rules can also serve as input facts for further rules and thus contribute to the expansion of the factual basis. In this activity, the task of the control system is taken over by the student who plays the role of the computer. With the help of the move specifications, which represent the rule base, he or she has to deduce the next move from the current game situation.
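    The inference step the student performs, matching the current game situation against the rule base and executing the move that the matching rule prescribes, can be sketched as follows. The situations and moves are hypothetical placeholders, not the contents of the actual printed rule sheet.

        # Sketch of the rule-based "paper computer": facts describe the current game
        # situation, the rule base maps situations to moves ("if ... then ..."),
        # and the inference step simply selects the rule whose condition matches.
        # Situations and moves are invented placeholders.
        RULE_BASE = {
            # if the board looks like this ...        ... then make this move
            "monkey advanced on the left file": "capture diagonally with the left crocodile",
            "monkey advanced in the centre":    "capture diagonally with the centre crocodile",
            "no piece is threatened":           "advance the centre crocodile",
        }

        def next_move(current_situation):
            """Inference engine: look up the rule matching the current facts."""
            if current_situation in RULE_BASE:
                return RULE_BASE[current_situation]
            return "no rule applies - a human has to add one"   # e.g. on the 4x4 board

        print(next_move("monkey advanced in the centre"))
        print(next_move("a situation from the 4x4 game"))

    The contrast with the learning system shows in the last line: where the rule base has no entry, the expert system simply stops, whereas the token-based computer could acquire a suitable move through further training.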

    In a rule-based system, this procedure is called data-driven or forward chaining, because the system starts from the facts and works towards a goal that is not yet known. In contrast to this, there is backward chaining, which attempts to prove a hypothesis. In the game, we are playing "backwards": starting with a game situation in which the computer has won, we try to deduce the moves necessary to get there.

    Even machine learning, which is the dominant method in artificial intelligence today and has already replaced expert systems and other traditional AI applications in many areas, cannot do without knowledge representation. In technologies such as neural networks, however, this is done implicitly, so that one speaks of sub-symbolic systems in this case: a systematic behaviour is trained and thus a kind of implicit knowledge about underlying correlations is acquired. However, it is difficult to gain insight into the concrete solution processes in these networks, since general rules underlying the data are only indirectly represented in the neural network, for example in edge weights and activation thresholds of the neurons. More about neural networks can be found in the activity #deeplearning. CS4FN also offers another activity that models an expert system for noughts and crosses in an unplugged way.

    Fig. 14: Game setup: The rules for the person taking over the computer are clearly defined.

    The Turing Test
    "And oh! I am glad that nobody knew I'm a computer!"

    Target group Secondary School Level

    That’s what it’s all about

    How does a machine have to behave in order to be considered intelligent? What exactly does artificial intelligence mean? Researchers have been working on these questions since the beginnings of artificial intelligence. With the Turing test, Alan Turing came up with an idea in 1950 on how to determine whether a machine is intelligent. This activity reenacts the Turing test with students and aims to stimulate discussion about whether computers can actually show something like human intelligence. It also reveals how easy it is to be misled by a machine through carefully chosen examples of "intelligence".

    These ideas are behind it

    • Intelligent systems use certain strategies to imitate human behaviour.

    • Special methods are needed to evaluate the intelligence of machines.

    • The definition of (artificial) intelligence is not clear.

    What you need • Worksheets/slides with the given Turing test questions for the whole class

    • A copy of the answers to the Turing test questions

    • 4 volunteer students in the roles of computer (1x), human (1x) and runners (2x)

    Here’s how it works

    In this activity, students play a question-and-answer game in which they try to distinguish a computer from a human being by asking questions and analysing the answers. One student assumes the role of a computer, another one simply reacts as a human being. They are questioned by their classmates, and the class has to determine who represents which role based on their answers.

    This activity originates from the original CS Unplugged materials. These are licensed under Creative Commons CC-BY-SA by Bell, Witten, and Fellows. The original material has been adapted in this description, which is licensed under CC-BY-SA as well.


    Context

    For centuries, philosophers have disputed whether a machine is capable of human intelligence or whether the human brain is perhaps just a very good machine. Some people think that artificial intelligence is an absurd idea, others believe that we will eventually develop machines that are as intelligent as we are. Artificial intelligence has a lot of potential, but on the other hand, the idea of intelligent machines also fuels fears.

    Activity Description

    Before starting the game, discuss with the students whether they consider computers to be intelligent, or whether they think computers will ever be intelligent. Ask them how to decide whether a computer is intelligent and briefly introduce the Turing test, which is simulated in the activity.

    To prepare the activity, four volunteers are selected to take on the roles of a computer and a human being (see Fig. 15). In addition, there are two runners who ensure the fair course of the game and are equipped with a piece of paper and a pen to note down the answers. The roles of 'human' and 'computer' are secretly assigned by the teacher before these two students leave the classroom and head into two separate rooms (alternatively you can use partition walls, but make sure that the students do not see each other). The student assuming the role of the computer receives a copy of the answers to the Turing test questions. Each of the runners is responsible for one role; which one is also kept secret.

    Now, the class has to find out which student has assumed the role of the computer. To do this, they select one question per round from the worksheet distributed, which is to be asked to the computer and the person. After a question has been chosen, the students should explain why they consider this question suitable for distinguishing the computer from the human being. This argumentation is the central element of the task, as the class reflects on how the answers of a person and an "intelligent" computer might differ.

    Next, the runners pose the question to their classmates in the other rooms and the answers are brought back to the class. The human being is obliged to answer the question briefly and honestly - in other words, to give a human answer. The computer, on the other hand, selects the appropriate answer from the worksheet. If the instructions are written in italics, the computer has to work out an answer itself (e.g. the current time). In transmitting the answers given, the runners must be particularly careful not to reveal with whom they are interacting.

    The class now discusses which answer is likely to come from a computer. Repeat the process with a few more questions, if possible until the class can make a clear decision about who the computer is. If the class cannot reliably distinguish between human and computer, the computer has passed the Turing test.

    Fig. 15: Design of the Turing test - the class chooses questions, two runners carry them to the hidden human and computer and bring the answers back, and the class decides: human or computer?

    Background

    Although no current computer program possesses anything like general intelligence, the question of whether computers are basically capable of it is still unanswered. This is mainly due to the fact that the very definition of intelligence is still controversial.

    Against this background, in 1950 the British mathematician Alan Turing proposed a method for determining the intelligence of a machine without needing an exact definition of intelligence. This so-called Turing test lets the computer demonstrate its "intelligence". The scenario of the test is similar to the activity described above: a questioner interacts both with a person and a computer via chat. If he or she cannot reliably distinguish between the two, the computer has passed the Turing test. Since communication takes place via chat, the computer cannot reveal itself through physical characteristics, such as voice pitch. A well-known example of such an interaction system is the chatbot Eliza.

    The answers given by a student in the role of the computer are not unlike those given by an "intelligent" computer program. Some of the answers will very quickly expose the computer: a human will hardly be able to give the square root of 2 to 20 digits. Other questions, in which the computer always uses a certain answer pattern, will reveal it only after some time. For example, answers to "Do you like XY?" questions are not conspicuous when viewed independently. However, if you combine several questions of this type, it becomes clear that the computer works formulaically to generate answers from the questions. The answers can also show that the computer has misinterpreted a question, although this could also happen to a human being. Many answers are vague, and further inquiry would make clear that the computer did not really understand the content of the question. Moreover, it is often safer for the computer to answer with "I don't know" (e.g. to the question about the root of 2). This feigns human traits, but can also lead to unmasking if this tactic is used too often or for questions that are too simple. Delayed and erroneous answers, for example to arithmetic problems, can also mislead the questioner for longer. Computers are thus able to feign their ability to talk, for example through formulaic answers, mirroring the statements of the interlocutor, reacting to keywords, using idioms and returning to earlier topics, but this is only a facade that is easy to see through.
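    The formulaic strategies described above, such as fixed answer patterns and reactions to keywords, can be imitated in a few lines of code. The patterns below are invented examples in the spirit of Eliza and of the prepared answer sheet; they are not the answers from the actual worksheet.

        # A tiny keyword-based responder in the spirit of Eliza: it does not understand
        # the question, it only matches patterns. The patterns are invented examples.
        def answer(question):
            q = question.lower().strip()
            if q.startswith("do you like"):
                topic = q[len("do you like"):].strip(" ?")
                return f"I'm not sure about {topic}. What do you think about {topic}?"
            if "square root" in q or "root of" in q:
                return "I don't know, I was never good at maths."   # feigning a human weakness
            if "how are you" in q:
                return "Fine, thanks. And you?"
            return "That is an interesting question."               # generic fallback

        print(answer("Do you like football?"))
        print(answer("What is the square root of 2?"))
        print(answer("Why is the sky blue?"))

    Asked once, such answers can pass as human; asked several questions of the same type, the fixed pattern quickly gives the computer away, exactly as described above.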


    Further Ideas

    Face Recognition

    Our front door can distinguish us from the postman, our photo management software automatically tags our friends: face recognition is a common application of AI. In doing so, the technology should be flexible enough to recognize us even in winter with a cap and in summer with sunglasses. This activity conveys this principle through cartoon characters.

    Links and details about these activities can be found on our website.

    Monkey, Sherlock Monkey

    How can knowledge be represented in such a way that a computer can "understand" it and draw logical conclusions from it? Logic and formal knowledge representation are of great importance here! AI systems are therefore not really "intelligent", but cleverly use different possibilities to represent knowledge. This kind of knowledge representation can also be mapped in logic puzzles: corresponding puzzles require the combination of different facts according to certain rules in order to then find a solution.

    Unsupervised Learning

    In addition to Supervised and Reinforcement Learning, there are also so-called Unsupervised Learning procedures: computers learn without previously known target values and without rewards. From a set of data points alone, categories (e.g. customers with high purchasing potential in webshops) or anomalies (e.g. suspicious activities on web servers) can be identified. Use chalk to draw a grid of coordinates (e.g. in the schoolyard) and ask your students to position themselves appropriately in the grid using the two axes. Depending on the axes selected, not only clusters but also outliers or anomalies can be identified (see the sketch at the end of this section).

    Brain-in-a-Bag

    In this activity, students simulate the functioning of a neural net themselves with cords and toilet paper rolls. The final net is then able to play a game.
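    For the Unsupervised Learning idea above, the grouping of data points can be sketched with k-means clustering. The coordinates below are invented and stand for the students' positions in the chalk grid; scikit-learn is assumed to be available.

        # Minimal sketch of unsupervised learning: k-means groups 2D points
        # (e.g. the students' positions in the chalk grid) into clusters without
        # any labels or rewards. The coordinates are invented for illustration.
        from sklearn.cluster import KMeans

        positions = [
            [1.0, 1.2], [0.8, 0.9], [1.1, 1.0],   # one group of students
            [5.0, 4.8], [5.2, 5.1], [4.9, 5.3],   # a second group
            [9.5, 0.5],                           # a possible outlier / anomaly
        ]

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions)
        print(kmeans.labels_)            # cluster index assigned to each position
        print(kmeans.cluster_centers_)   # the centre of each discovered cluster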

    Imprint

    Editor:
    Professorship for Computer Science Education
    Friedrich-Alexander-Universität Erlangen-Nürnberg
    Martensstraße 3
    91058 Erlangen
    https://aiunplugged.org

    Editing and design: Annabel Lindner, Stefan Seegerer

    All text and graphics in this brochure (except the FAU logo or where otherwise stated) are licensed under CC BY NC 3.0, which means that you may edit, reproduce and distribute the material in any format or medium, but not commercially. All you have to do is name the author.

    https://ddi.cs.fau.de/schule/ai-unplugged