ARTIFICIAL INTELLIGENCE
Pathfinding and search
Lecturer: Silja Renooij
Utrecht University, The Netherlands
These slides are part of the INFOB2KI 2019-2020 Course Notes, available from www.cs.uu.nl/docs/vakken/b2ki/schema.html
Pathfinding
'Physical' world: open space or structured?
Contents
• Dijkstra, BFS, DFS, DLS, IDS
• Differences with the elective course 'Algoritmiek':
– theory all by example: Dijkstra in detail, others more superficial; some more in depth in Project B
– emphasis on the difference between tree search and graph search
– emphasis on comparison of properties rather than proofs of properties
Pathfinding in Romania
Requirements for search
1. States S: abstract representation of (a set of) real states
– e.g. all cities on a map, or all possible paths in a grid or (waypoint) graph
– a problem typically designates an initial state (e.g. "in Arad") and a goal state (e.g. "in Zerind")
2. Actions (operators / production rules) S → S': abstract representation of a combination of real actions
– e.g. change from one path to another by changing links
– each abstract action should be "easier" than the original problem; e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
State space for search
The state space represents all states reachable from the initial state by any sequence of actions.
A tree or graph with nodes S and (un)directed edges S → S' is used as the state space representation.
A search algorithm is applied to the state space representation to find a solution (an abstract representation of a solution in the real world):
– for guaranteed realizability, a sequence of actions maps the initial state to a goal state
– e.g. a list of connections = a real path
Searching the state space
3. Strategy
– defines how to search through the state space
– e.g. systematic (desirable, not always possible):
• all states are visited
• no state is visited more than once
Search algorithms:
• differ in the employed strategy
• build a search tree to explore the state space
• often "create" the state space while searching (the space is often too large to represent all solutions)
• create (portions of) all paths while finding a good path between two points
Different representations
The real world is absurdly complex.
We often represent the world in a simplified way. In addition, the search space is abstracted for problem solving in the world.
An abstract representation of the world is not the same as a state space representation suitable for problem solving!
The map of Romania is represented as a graph with cities as nodes and edges representing existing connections between cities. We can also use grids or waypoint graphs to represent the real world.
As state space representation for pathfinding we can use, e.g., the same structure as the domain representation (nodes in the grid / (waypoint) graph), or nodes that represent entire paths (lists of connections) on this grid/graph, …
Dijkstra's algorithm
• Solves the single-source shortest path problem using GRAPH-SEARCH
• Source state = start node of the search
• Each connection is associated with a cost, often called the step cost
• The cost-so-far is the cost to reach a node n along a given path from the start node (often called path cost, denoted g(n))
• Result: for each n, a path with minimal g(n)
[Portrait: Edsger W. Dijkstra, 1930-2002]
Dijkstra's algorithm
Bookkeeping:
• open list (seen, but not yet processed)
• closed list (completely processed)
• cost-so-far & connections (path) followed to get here
Init: open list = { (start node, cost-so-far = 0) }
Iteration: process the node from the open list with the smallest cost-so-far
Terminate: when the open list is empty; follow back connections to retrieve a path
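The bookkeeping above can be sketched in Python. This is a minimal sketch: the dict-of-lists encoding of the connections and all names (`connections`, `best_g`, `retrieve_path`, …) are assumptions for illustration, not the slides' notation.

```python
import heapq

def dijkstra(start, connections):
    # connections: dict mapping a node to a list of (neighbour, step_cost) pairs
    open_list = [(0, start)]        # priority queue, smallest cost-so-far first
    best_g = {start: 0}             # cheapest cost-so-far seen per node
    parent = {start: None}          # connection followed to get here
    closed = {}                     # completely processed: node -> g(n)
    while open_list:
        g, node = heapq.heappop(open_list)
        if node in closed:          # stale duplicate entry: skip
            continue
        closed[node] = g
        for succ, step in connections.get(node, []):
            new_g = g + step
            if succ not in closed and new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_list, (new_g, succ))
    return closed, parent

def retrieve_path(parent, goal):
    path = []                       # follow back connections to retrieve path
    while goal is not None:
        path.append(goal)
        goal = parent[goal]
    return path[::-1]
```

On (a directed fragment of) the Romania map this reproduces the costs from the example on the next slides, e.g. g(Oradea) = 146 and g(Bucharest) = 418 via Sibiu, Rimnicu and Pitesti.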
Dijkstra's algorithm: example
Notation: (X,g)P = state X with cost-so-far g, reached via predecessor P. Closed lists are abbreviated with "…,"; each step adds the node just expanded.

1. Open: (Arad,0); Closed: (none)
2. Open: (Z,75)A, (S,140)A, (T,118)A; Closed: (Arad,0)
3. Open: (S,140)A, (T,118)A, (O,146)Z; Closed: (Arad,0), (Z,75)A
4. Open: (S,140)A, (O,146)Z, (L,229)T; Closed: …, (T,118)A
5. Open: (O,146)Z, (L,229)T, (R,220)S, (F,239)S; Closed: …, (S,140)A   (O not updated: 140 + 151 > 146)
6. Open: (L,229)T, (R,220)S, (F,239)S; Closed: …, (O,146)Z
7. Open: (L,229)T, (F,239)S, (C,366)R, (P,317)R; Closed: …, (R,220)S
8. Open: (F,239)S, (C,366)R, (P,317)R, (M,299)L; Closed: …, (L,229)T
9. Open: (C,366)R, (P,317)R, (M,299)L, (B,450)F; Closed: …, (F,239)S
10. Open: (C,366)R, (P,317)R, (B,450)F, (D,374)M; Closed: …, (M,299)L
11. Open: (C,366)R, (B,450)F, (D,374)M, (B,418)P; Closed: …, (P,317)R   (C not updated: 317 + 138 > 366)
12. Open: (D,374)M, (B,418)P; Closed: …, (C,366)R   (D not updated: 366 + 120 > 374)
13. Open: (B,418)P; Closed: …, (D,374)M
Etc. (Dijkstra continues; we stop here.)

Retrieve path to… Bucharest:
(Arad,0), (Z,75)A, (T,118)A, (S,140)A, (O,146)Z, (R,220)S, (L,229)T, (F,239)S, (M,299)L, (P,317)R, (C,366)R, (D,374)M, (B,418)P
Dijkstra's algorithm: summary
• Start with paths of length 1 and expand the one with the lowest (non-negative!) cost first
• All paths will be found (and no path will be found more than once)
• GRAPH-SEARCH algorithm: i.e. uses the closed-list concept to avoid loops
• Recall: solves the single-source shortest path problem
• Not aimed at finding a path to one specific goal…
TREE-SEARCH algorithms
Simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding):
– initial state → start node of the tree
– expand: function that generates leaves for successors
– fringe (= frontier = open list) with all leaves
– goal state: used in the goal test
Note: goal test only when a node is considered for expansion, not already upon generation!
Pseudocode: general TREE-SEARCH
# fringe = frontier = open list
Note: goal test upon expansion; sometimes more efficient upon generation!
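The pseudocode itself is shown as an image on the original slide; a minimal Python sketch of the loop, under assumed helper names (`strategy` selects and removes a node from the fringe, `successor_fn` maps a state to its successors), might look like:

```python
def tree_search(initial_state, goal_test, successor_fn, strategy):
    # fringe = frontier = open list; each node is a (state, path-to-state) pair
    fringe = [(initial_state, [initial_state])]
    while fringe:
        state, path = strategy(fringe)      # select & remove a node to expand
        if goal_test(state):                # goal test upon expansion
            return path
        for succ in successor_fn(state):    # expand: generate successor leaves
            fringe.append((succ, path + [succ]))
    return None                             # fringe empty: failure
```

The `strategy` parameter is where the search strategies discussed later plug in, e.g. a FIFO pop for breadth-first behaviour.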
Implementation: states vs. nodes
(As before) a state S is an abstract representation of a physical configuration in the problem domain; it corresponds to a node in the state space representation.
A node x in a search tree constructed upon exploring the state space is a data structure containing info on:
– state S
– parent node
– action
– path cost g(x)
– depth, …
Expand: function that
– creates new nodes
– fills in the various node fields
– uses the problem-associated Successor-Fn to generate successors
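These fields can be sketched as a small data structure. Field and helper names are assumptions following the list above; `successor_fn` is assumed to yield (action, state, step-cost) triples.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # state S in the problem domain
    parent: Optional["Node"] = None   # node this one was generated from
    action: Any = None                # action that led to this node
    path_cost: float = 0.0            # g(x)
    depth: int = 0

def expand(node, successor_fn):
    # creates new nodes and fills in the fields, using the problem's Successor-Fn
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + cost, depth=node.depth + 1)
            for (a, s, cost) in successor_fn(node.state)]
```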
TREE-SEARCH example
The node for the initial state "Arad" is in the fringe.
The node for state "Arad" is expanded and removed from the fringe; nodes for states "Sibiu" (~ path from Arad to Sibiu), "Timisoara" and "Zerind" are generated and added to the fringe.
Suppose the strategy selects the node for state "Sibiu" for expansion: among others, a node for state "Arad" is generated and added to the fringe.
Oops… there's a loop!
Repeated states
Failure to detect repeated states can turn a linear problem into an exponential one!
Avoiding loops
Method 1:
• don't add a node to the fringe if we generated a node for the associated state before
What if there are multiple paths to a node and we want to be sure to get the shortest? (e.g. to Oradea)
Method 2: GRAPH-SEARCH
• don't add a node to the fringe if we expanded a node for the associated state before
• keep a closed (= explored = visited) list of expanded states (cf. Dijkstra)
This is not for free: it takes up time and memory!
GRAPH-SEARCH
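The GRAPH-SEARCH pseudocode on this slide is likewise an image in the original; a sketch of the same loop as the general TREE-SEARCH, extended with the closed list (helper names are assumptions), might look like:

```python
def graph_search(initial_state, goal_test, successor_fn, strategy):
    fringe = [(initial_state, [initial_state])]
    closed = set()                          # explored (= expanded) states
    while fringe:
        state, path = strategy(fringe)
        if goal_test(state):                # goal test upon expansion
            return path
        if state not in closed:             # expand each state at most once
            closed.add(state)
            for succ in successor_fn(state):
                fringe.append((succ, path + [succ]))
    return None
```

Unlike plain tree search, this terminates on a map with cycles (e.g. Arad ↔ Sibiu), because an already-expanded state is never expanded again.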
TREE-SEARCH vs GRAPH-SEARCH
GRAPH-SEARCH = TREE-SEARCH + a closed list of explored states.
GRAPH-SEARCH is used to prevent searching redundant paths; search algorithms can be implemented as either one.
NB "tree" vs "graph" in this context refers to the structure underlying the state space.
It does not refer to:
– the structure built during search (a tree in both cases)
– the simplified representation of the problem domain, e.g. the road map
Search strategies
A search strategy defines the order of node expansion.
Strategies are evaluated along the following dimensions:
Solution quality:
– completeness: does it always find a solution if one exists?
– optimality: does it always find a least-cost solution?
Search complexity:
– time complexity: how long does it take to find a solution?
– space complexity: how much memory is needed?
Time and space complexity are measured in terms of:
– b: maximum branching factor of the search tree (max # successors)
– d: depth of the optimal (least-cost) solution (starting with d = 0)
– m: maximum length of a path in the state space (may be ∞)
– time = total number of nodes generated
– space = maximum number of nodes in memory
Uninformed search strategies
Use only the information available in the problem definition (a.k.a. blind search): they can only generate successors and distinguish a goal state from a non-goal state.
• Breadth-first search (BFS)
• Uniform-cost search (UCS)
• Depth-first search (DFS)
• Depth-limited search (DLS)
• Iterative deepening search (IDS)
Breadth-first search
Expand the shallowest unexpanded node.
Implementation:
– the fringe is a FIFO queue, i.e., new successors go at the end
– ties: (in this case) queued in alphabetical order (→ subsequently expanded in alphabetical order too)
Trace on the example tree:
fringe = A → fringe = BC → fringe = CDE → fringe = DEFG → …
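The FIFO bookkeeping can be sketched as follows; the dict encoding of the example tree and the function name are assumptions for illustration.

```python
from collections import deque

def bfs(start, goal, successors):
    fringe = deque([[start]])             # FIFO queue of paths
    while fringe:
        path = fringe.popleft()           # shallowest unexpanded node first
        node = path[-1]
        if node == goal:                  # goal test upon expansion
            return path
        for succ in sorted(successors.get(node, [])):  # ties: alphabetical
            fringe.append(path + [succ])
    return None
```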
Properties of BFS
Assumption: goal test upon generation, finite d
Complete? Yes, as long as b is finite
Optimal? Yes, if step costs are equal (shallowest == optimal)
Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d) (if the goal test is done upon expansion: + b(b^d − 1) = O(b^(d+1)))
Space? Same as time; in GRAPH-SEARCH these costs dominate the costs for the closed list
Exponential space is an even bigger problem than exponential time → works only for smaller instances
Uniform-cost search
Incorporates step costs.
Expand the least-cost unexpanded node: like Dijkstra, but now with a goal test.
Implementation:
– fringe = priority queue, ordered by path cost g(n)
Equivalent to breadth-first search if all step costs are equal.
! But step costs need not be equal (even though the name may suggest otherwise) !
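A sketch: the Dijkstra loop with the fringe as a priority queue on g(n), plus a goal test upon expansion. The (neighbour, step_cost) dict encoding and all names are assumptions, as before.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    # successors: node -> list of (neighbour, step_cost) pairs
    fringe = [(0, start, [start])]        # priority queue ordered by g(n)
    closed = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)   # least-cost unexpanded node
        if node == goal:                  # goal test upon expansion
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, step in successors.get(node, []):
            if succ not in closed:
                heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None
```

Because the goal test happens upon expansion, the cheaper (418) route to Bucharest is returned even if a more expensive route (450) entered the fringe first.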
Properties of UCS
Assumptions: goal test upon expansion, finite d; ε = minimal step cost; C* = cost of the cheapest solution
Complete? Yes, if ε > 0 and b is finite
Optimal? Yes: nodes are expanded in increasing order of g(n)
Time? O(b^⌈C*/ε⌉) (the # of nodes with path cost g ≤ C*)
Space? O(b^⌈C*/ε⌉); in GRAPH-SEARCH these typically dominate the costs for the closed list
Depth-first search
Expand the deepest unexpanded node.
Implementation:
– fringe = LIFO queue (a stack), i.e., put successors at the front
– ties: (in this case) stacked in reverse alphabetical order (→ subsequently expanded in alphabetical order!)
Trace on the example tree:
fringe = A → BC → DEC → HIEC → IEC (done at H) → EC (done at I, D) → JKC → KC (done at J) → C (done at K, E, B) → FG → LMG → MG (done at L) → etc.
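The LIFO bookkeeping, sketched in the same style (dict encoding of the example tree assumed):

```python
def dfs(start, goal, successors):
    fringe = [[start]]                    # LIFO: a plain Python list as stack
    while fringe:
        path = fringe.pop()               # deepest unexpanded node first
        node = path[-1]
        if node == goal:                  # goal test upon expansion
            return path
        # ties stacked in reverse alphabetical order, so that they are
        # subsequently popped (expanded) in alphabetical order
        for succ in sorted(successors.get(node, []), reverse=True):
            fringe.append(path + [succ])
    return None
```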
Properties of DFS
Complete? No, unless GRAPH-SEARCH in a finite state space with finite d
Optimal? No: it finds the "left-most" solution, regardless of cost
Time? O(b^m)
– terrible if m is much larger than d
– but if solutions are dense, may be much faster than breadth-first
Space? O(bm), in the case of tree search ("black" nodes are removed from memory)
This space advantage may be lost in GRAPH-SEARCH due to the costs for the closed list!
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l do not generate successors
Solves the infinite-path problem (m = ∞)
Recursive implementation:
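The recursive pseudocode is an image on the original slide; a sketch in the same spirit, distinguishing a cutoff (the limit was hit) from outright failure (all names are assumptions):

```python
def depth_limited_search(node, goal_test, successors, limit):
    # returns a path to a goal, "cutoff" if the limit was hit, or None (failure)
    if goal_test(node):
        return [node]
    if limit == 0:                    # nodes at the limit generate no successors
        return "cutoff"
    cutoff_occurred = False
    for succ in successors.get(node, []):
        result = depth_limited_search(succ, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None
```

Distinguishing "cutoff" from None matters for iterative deepening: "cutoff" means a deeper search might still succeed, while None means the whole reachable space was exhausted.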
Properties of DLS
Note: DFS is the special case of DLS with l = m (possibly ∞)
Complete? Not if l < d (and do we know d??)
Optimal? Not if d < l
Time? O(b^l)
Space? O(bl)
Again, the advantage may be lost in GRAPH-SEARCH due to the costs for the closed list!
Iterative deepening search
• Repeats depth-limited search for increasing depth limits
• Finds the best depth limit
• Combines the benefits of BFS and DFS
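A sketch: an outer loop over increasing limits around a DLS that reports "cutoff" when the limit was hit. A compact DLS is inlined so the sketch is self-contained; all names are assumptions.

```python
from itertools import count

def dls(node, goal_test, successors, limit):
    # depth-limited DFS: returns a path, "cutoff", or None (failure)
    if goal_test(node):
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for succ in successors.get(node, []):
        result = dls(succ, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff else None

def ids(start, goal_test, successors):
    for limit in count():                # repeat DLS for limit = 0, 1, 2, ...
        result = dls(start, goal_test, successors, limit)
        if result != "cutoff":           # goal found, or search space exhausted
            return result
```

Like BFS, this returns a shallowest solution; like DFS, each iteration only needs memory linear in the current limit.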
Iterative deepening search: limit = 0, limit = 1, limit = 2, limit = 3
Iterative deepening search
Number of nodes generated in a depth-limited search (DLS) to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
Number of nodes generated in an iterative deepening search (IDS) to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d
For b = 10, d = 5:
– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111) / 111,111 ≈ 11%
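The counts above can be checked directly:

```python
b, d = 10, 5

# DLS to depth d generates each node once: b^0 + b^1 + ... + b^d
n_dls = sum(b**i for i in range(d + 1))

# IDS regenerates shallow nodes: depth-i nodes are generated (d + 1 - i) times
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

overhead = (n_ids - n_dls) / n_dls
print(n_dls, n_ids, round(100 * overhead))   # 111111 123456 11
```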
Properties of IDS
Assumption: finite d
Complete? Yes (inherited from BFS; the same assumptions apply)
Optimal? Yes (inherited from BFS; the same assumptions apply)
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
Space? O(bd) (inherited from DFS, but with max depth restricted to d; the same observations w.r.t. GRAPH-SEARCH apply)
Summary of search algorithms
This overview assumes TREE-SEARCH, with the goal test upon expansion, and finite solution depth d.
Recall (!):
• most yes's and no's depend on additional assumptions
• space complexity may be different if GRAPH-SEARCH is employed
Is there one best algorithm?
Summary: pathfinding and uninformed search
Algorithms find a path to a goal.
Problem-specific ingredients are used only for:
• the goal test
• the path cost (used in the solution, and sometimes for expansion)