Artificial Intelligence: Search in Problem Solving (Chapter 3)


Problem Solving Agents
- Looking to satisfy some goal
- Wants the environment to be in a particular state
- Has a number of possible actions
- An action changes the environment
- Searching means finding sequences of actions that lead to a desirable goal

What sequence of actions reaches the goal? There may be many possible sequences.

Examples of Search Problems
- Chess: each turn, search the moves for a win
- Route finding: search routes for one that reaches the destination
- Theorem proving: search chains of reasoning for a proof
- Machine learning: search through concepts for one which achieves the target categorisation

Search Terminology
- States: places the search can visit
- Search space: the set of possible states
- Search path: the sequence of states the agent actually visits
- Solution: a state which solves the given problem; either known in advance or with a checkable property; there may be more than one solution
- Strategy: how to choose the next state in the path at any given state

Specifying a Search Problem
1. Initial state: where the search starts
2. Operators: functions taking one state to another state; how the agent moves around the search space
3. Goal test: how the agent knows if a solution state has been found
Search strategies apply operators to chosen states.

Example: Route Finding
- Initial state: the city the journey starts in
- Operators: driving from city to city
- Goal test: is the current location the destination city?
(Map: Liverpool, London, Nottingham, Leeds, Birmingham, Manchester)

General Search Considerations

1. Artefact or Path?
- Are we interested in the solution only, or in the path which got there?
- Route finding: known destination, must find the route (path)
- Anagram puzzle: it doesn't matter how you find the word; only the word itself (artefact) is important
- Machine learning: usually only the concept (artefact) is important
- Theorem proving: the proof is a sequence (path) of reasoning steps

2. Completeness
- A task may require one, many or all solutions, e.g. how many different ways are there to get from A to B?
- A complete search space contains all solutions
- An exhaustive search explores the entire space (assuming it is finite)

- A complete search strategy will find a solution if one exists

- Pruning rules out certain operators in certain states
- The space is still complete if no solutions are pruned
- The strategy is still complete if not all solutions are pruned

3. Soundness
- A sound search contains only correct solutions
- An unsound search contains incorrect solutions, caused by unsound operators or an unsound goal check
- Dangers: finding solutions to problems with no solutions (e.g. a route to an unreachable destination, or a "proof" of a theorem which is actually false), or producing an incorrect solution to a problem
- (The first danger is not a problem if all your problems have solutions)

4. Time & Space Tradeoffs
- Fast programs can be written, but they often use up too much memory
- Memory-efficient programs can be written, but they are often slow
- Different search strategies have different memory/speed tradeoffs

5. Additional Information
- Given the initial state, operators and goal test, can you give the agent additional information?
- Uninformed search strategies have no additional information

- Informed search strategies use problem-specific information, e.g. a heuristic measure (a guess at how far a state is from the goal)

Graph and Agenda Analogies
- Graph analogy: states are nodes in a graph and operators are edges; expanding a node adds edges to new states; the strategy chooses which node to expand next
- Agenda analogy: new states are put onto an agenda (a list); the top of the agenda is explored next by applying operators to generate new states; the strategy chooses where to put new states on the agenda

Example Search Problem
- A genetics professor wants to name her new baby boy using only the letters D, N & A
- Search through possible strings (states): D, DN, DNNA, NA, AND, DNAN, etc.
- 3 operators: add D, N or A onto the end of the string
- Initial state is the empty string
- Goal test: look up the state in a book of boys' names, e.g. DAN

Uninformed Search Strategies
- Breadth-first search
- Depth-first search
- Iterative deepening search
- Bidirectional search
- Uniform-cost search

- Also known as blind search

Breadth-First Search
- Every time a new state is reached, the new states are put on the bottom of the agenda
- E.g. when state NA is reached, new states NAD, NAN and NAA are added to the bottom; these get explored later (possibly much later)

- Graph analogy: each node at depth d is fully expanded before any node at depth d+1 is looked at
- Branching rate: the average number of edges coming from a node (3 in the naming example)
- Uniform search: every node has the same number of branches (as in the naming example)
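The agenda analogy and the naming example can be sketched together as a small breadth-first search. This is a minimal sketch, not the lecture's own code; the set NAMES stands in for the "book of boys' names".

```python
from collections import deque

NAMES = {"DAN"}            # assumption: a stand-in for the book of boys' names
LETTERS = ["D", "N", "A"]  # the three operators: add D, N or A onto the end

def breadth_first_search(goal_names, max_length=5):
    agenda = deque([""])              # initial state: the empty string
    while agenda:
        state = agenda.popleft()      # explore the top of the agenda
        if state in goal_names:       # goal test
            return state
        if len(state) < max_length:
            for letter in LETTERS:    # apply each operator
                agenda.append(state + letter)  # new states go on the bottom
    return None

print(breadth_first_search(NAMES))  # → DAN
```

Because new states go on the bottom of the agenda, every string of length d is tried before any string of length d+1, matching the graph analogy above.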

Depth-First Search
- Same as breadth-first search, but new states are put at the top of the agenda
- Graph analogy: expand the deepest and leftmost node next
- The search can go on indefinitely down one path: D, DD, DDD, DDDD, DDDDD, ...
- One solution is to impose a depth limit on the search
- Sometimes the limit is not required, because branches end naturally (i.e. cannot be expanded)

Depth-First Search (Depth Limit 4)
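The same agenda sketch turns into depth-limited depth-first search by putting new states on the top of the agenda (a stack) and cutting off each branch at the limit. Again a sketch under the same assumptions as before, not the lecture's code:

```python
LETTERS = ["D", "N", "A"]  # the three operators: add D, N or A onto the end

def depth_first_search(goal_names, depth_limit=4):
    agenda = [""]                         # agenda used as a stack
    while agenda:
        state = agenda.pop()              # explore the top of the agenda
        if state in goal_names:           # goal test
            return state
        if len(state) < depth_limit:      # the depth limit ends each branch
            for letter in reversed(LETTERS):   # reversed so D is tried first
                agenda.append(state + letter)  # new states go on the top
    return None

print(depth_first_search({"DAN"}))  # → DAN
```

Without the `depth_limit` check, the search would descend the leftmost path D, DD, DDD, ... forever.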

State- or Action-Based Definition?
- An alternative way to define strategies: the agenda stores (state, action) pairs rather than states
- This records the actions to perform, not the nodes expanded
- Only necessary actions are performed, which changes the node order

Depth- v. Breadth-First Search
Suppose the branching rate is b.
- Breadth-first is complete (guaranteed to find a solution), but requires a lot of memory: at depth d it needs to remember up to b^(d-1) states
- Depth-first is not complete, because of indefinite paths or the depth limit, but is memory efficient: it only needs to remember up to b*d states
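The gap between the two memory bounds is easy to see numerically. A quick check, using b = 10 and d = 5 as an illustrative choice:

```python
# Memory bounds from the comparison above: breadth-first must remember on
# the order of b^(d-1) states, depth-first only about b*d.
b, d = 10, 5
print(b ** (d - 1), b * d)  # → 10000 50
```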

Iterative Deepening Search
- Idea: do repeated depth-first searches, increasing the depth limit by one every time: DFS to depth 1, DFS to depth 2, etc.
- Completely re-does the previous search each time, but most DFS effort is in expanding the last line of the tree
- E.g. to depth five with a branching rate of 10: DFS expands 111,111 states, IDS 123,456 states, a repetition of only 11%
- Combines the best of BFS and DFS: complete and memory efficient, but slower than either

Bidirectional Search
- If you know the solution state, work forwards and backwards and look to meet in the middle
- Only need to search to half the depth
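The node counts quoted for iterative deepening can be verified directly. A single DFS to depth d visits every node in the tree down to depth d; IDS re-runs that search for each limit from 0 up to d:

```python
# Node counts for branching rate b = 10 and depth limit 5, as in the slide.
def nodes_to_depth(b, d):
    return sum(b ** i for i in range(d + 1))  # 1 + b + b^2 + ... + b^d

dfs_nodes = nodes_to_depth(10, 5)
ids_nodes = sum(nodes_to_depth(10, limit) for limit in range(6))
print(dfs_nodes, ids_nodes)  # → 111111 123456
```

The ratio 123,456 / 111,111 is about 1.11, hence the "repetition of only 11%".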

- Difficulties: do you really know the solution state? Is it unique? You must be able to reverse the operators, and record all paths to check whether they meet, which is memory intensive
(Map: Liverpool, London, Nottingham, Leeds, Birmingham, Manchester, Peterborough)

Action and Path Costs
- Action cost: a particular value associated with an action; examples: distance in route planning, power consumption in circuit board construction
- Path cost: the sum of all the action costs in the path
- If the action cost is always 1, then path cost = path length

Uniform-Cost Search
- Breadth-first search is guaranteed to find the shortest path to a solution, but not necessarily the least costly path
- Uniform path cost search chooses to expand the node with the least path cost
- Guaranteed to find a solution with the least cost, if we know that path cost increases with path length
- This method is optimal and complete, but can be very slow

Informed Search Strategies
- Greedy search
- A* search
- IDA* search
- Hill climbing
- Simulated annealing
- Also known as heuristic search: these require a heuristic function

Best-First Search
- An evaluation function f gives a cost for each state
- Choose the state with the smallest f(state) (the best)
- Agenda analogy: f decides where new states are put; graph analogy: f decides which node to expand next
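Uniform-cost search can be sketched by making the agenda a priority queue ordered by path cost g, so the node with the least path cost is always expanded next. The graph and edge costs below are a hypothetical example, not taken from the lecture:

```python
import heapq

GRAPH = {                      # assumption: a small graph for illustration
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 2)],
    "D": [],
}

def uniform_cost_search(start, goal):
    agenda = [(0, start, [start])]       # entries are (path cost, state, path)
    visited = set()
    while agenda:
        cost, state, path = heapq.heappop(agenda)  # least path cost first
        if state == goal:
            return cost, path            # the least-cost path is found first
        if state in visited:
            continue
        visited.add(state)
        for nxt, step_cost in GRAPH[state]:
            heapq.heappush(agenda, (cost + step_cost, nxt, path + [nxt]))
    return None

print(uniform_cost_search("A", "D"))  # → (4, ['A', 'B', 'C', 'D'])
```

Note that the shortest path A → C → D (2 edges) costs 6, while the longer path through B costs only 4, which is exactly the breadth-first v. uniform-cost distinction above.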

- Many different strategies depending on f: for uniform-cost search f = path cost; informed search strategies define f based on a heuristic function

Heuristic Functions
- h is an estimate of the path cost from a state to the nearest solution
- h(state) >= 0 and h(solution) = 0
- Strategies can use this information
- Example: straight-line ("as the crow flies") distance in route finding
- Where does h come from? Maths, introspection, inspection or programs (e.g. ABSOLVER)
(Map: Liverpool, Nottingham, Leeds, Peterborough, London, with distances 120, 75, 155, 135)

Greedy Search
- Always take the biggest bite: f(state) = h(state)
- Choose the smallest estimated cost to a solution, ignoring the path cost
- Blind alley effect: early estimates can be very misleading; one solution is to delay the use of greedy search
- Not guaranteed to find an optimal solution: remember we are only estimating the path cost to a solution

A* Search
- Path cost is g and the heuristic function is h: f(state) = g(state) + h(state)
- Choose the smallest overall path cost (known + estimate)
- Combines uniform-cost and greedy search
- It can be proved that A* is complete and optimal, but only if h is admissible, i.e. it never overestimates the true path cost from a state to a solution
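A* is a one-line change to the uniform-cost sketch: order the agenda by f = g + h instead of by g alone. The graph and heuristic values below are hypothetical; the chosen h is admissible (it never overestimates the true remaining cost), which is the condition for completeness and optimality stated above:

```python
import heapq

GRAPH = {                              # assumption: a small graph for illustration
    "S": [("A", 2), ("B", 3)],
    "A": [("G", 4)],
    "B": [("G", 2)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 2, "G": 0}   # assumption: admissible estimates of cost to G

def a_star(start, goal):
    agenda = [(H[start], 0, start, [start])]   # entries are (f, g, state, path)
    while agenda:
        f, g, state, path = heapq.heappop(agenda)  # smallest f = g + h first
        if state == goal:
            return g, path
        for nxt, cost in GRAPH[state]:
            g2 = g + cost                          # known path cost so far
            heapq.heappush(agenda, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(a_star("S", "G"))  # → (5, ['S', 'B', 'G'])
```

Here greedy search on h alone would also reach G, but A* is the variant with the optimality guarantee, because g keeps the known cost honest while h only guides the ordering.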

A* Example: Route Finding
- First states to try: Birmingham, Peterborough
- f(state) = distance from London + crow-flies distance from the state, i.e. solid + dotted line distances on the map
- f(Peterborough) = 120 + 155 = 275; f(Birmingham) = 130 + 150 = 280
- Hence expand Peterborough first
- But the route must go through Leeds from Nottingham, so later Birmingham turns out to be better
(Map: Liverpool, Nottingham, Leeds, Peterborough, Birmingham, London, with distances 120, 155, 135, 130, 150)

Search Strategies
Uninformed:
- Breadth-first search
- Depth-first search
- Iterative deepening
- Bidirectional search
- Uniform-cost search

Informed:
- Greedy search
- A* search
- IDA* search
- Hill climbing
- Simulated annealing
- (SMA*, covered in the textbook)