CS 416 Artificial Intelligence, Lecture 5: Informed Searches


Page 1: CS 416 Artificial Intelligence
Lecture 5: Informed Searches

Page 2: Administrivia

Visual Studio
• If you’ve never been granted permission before, you should receive email shortly

Assign 1
• Set up your submit account password
• Visit: www.cs.virginia.edu/~cs416/submit.html

Toolkit
• You should see that you’re enrolled via Toolkit access to the class gradebook


Page 3: Example

[Figure: example search graph with start node A, successors B and C, and a Goal node; edge costs c(n, n') and heuristic values h(n) are shown, with f(n) = g(n) + h(n).]

Page 4: A* w/o Admissibility

[Figure: the example graph, with heuristic values h(B) and h(C) shown.]

B = goal, f(B) = 10. Are we done? No: we must still explore C.

Page 5: A* w/ Admissibility

[Figure: the example graph again, now with admissible heuristic values.]

B = goal, f(B) = 10. Are we done? Yes: h(C) indicates a best path cost of 95.
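
To make the stopping rule in these two examples concrete, here is a minimal A* sketch in Python. The graph, costs, and heuristic values below are my own illustrative stand-ins, not the exact numbers from the figures: A* pops the node with the smallest f(n) = g(n) + h(n), and stopping at the first goal popped is only safe when h never overestimates.

import heapq

def a_star(start, goal_test, successors, h):
    """Best-first search on f(n) = g(n) + h(n).
    successors(state) yields (next_state, step_cost) pairs.
    With an admissible h, the first goal popped off the frontier is optimal."""
    frontier = [(h(start), 0, start, [start])]          # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g                              # safe to stop only if h is admissible
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative graph in the spirit of the slides (made-up numbers):
edges = {"A": [("B", 10), ("C", 5)], "B": [("G", 10)], "C": [("G", 95)]}
h_vals = {"A": 0, "B": 0, "C": 90, "G": 0}              # admissible: never overestimates
path, cost = a_star("A", lambda s: s == "G",
                    lambda s: edges.get(s, []), lambda s: h_vals[s])
print(path, cost)                                       # ['A', 'B', 'G'] 20

If h were allowed to overestimate at C, A* could dismiss a route through C that is actually cheaper and stop at a worse goal, which is why the slide without admissibility cannot stop early.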

Page 6: A* w/o Consistency

[Figure: a four-node graph A, B, C, D with node values 0, 101, 200, and 100 and edge costs 4 and 5.]

Page 7: A* w/ Consistency

[Figure: the same graph, with C's value changed from 200 to 105.]
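
Consistency (monotonicity) means h(n) <= c(n, n') + h(n') for every edge, so f never decreases along a path. A small checker makes the difference between these two figures concrete; the edge structure below is an assumption of mine, and the node values are treated as h-values.

def is_consistent(h, edges):
    """Check h(n) <= c(n, n') + h(n') for every directed edge (n, n', cost)."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# Illustrative edge structure; the h-values mirror the two figures above.
edges = [("A", "B", 5), ("B", "C", 4), ("C", "D", 5)]
print(is_consistent({"A": 0, "B": 101, "C": 200, "D": 100}, edges))   # False: 200 > 5 + 100
print(is_consistent({"A": 0, "B": 101, "C": 105, "D": 100}, edges))   # True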

Page 8: Pros and Cons of A*

A* is optimal and optimally efficient

A* is still slow and bulky (space kills first)
• The number of nodes grows exponentially with the length of the path to the goal
  – This is actually a function of the heuristic, but all heuristics make mistakes
• A* must search all nodes within this goal contour
• Finding suboptimal goals is sometimes the only feasible solution
• Sometimes, better heuristics are non-admissible


Page 9: Memory-bounded Heuristic Search

Try to reduce memory needs

Take advantage of the heuristic to improve performance
• Iterative-deepening A* (IDA*)
• Recursive best-first search (RBFS)
• SMA*


Page 10: Iterative Deepening A*

Iterative Deepening
• Remember, in uninformed search this was a depth-first search where the max depth was iteratively increased
• As an informed search, we again perform depth-first search, but expand only nodes with f-cost less than or equal to a cutoff; the cutoff for the next iteration is the smallest f-cost of any node that exceeded the cutoff at the last iteration (see the sketch below)
  – What happens when the f-cost is real-valued?

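
A compact IDA* sketch under the same conventions as the earlier A* snippet (successors(state) yields (next_state, step_cost) pairs). It exists only to make the cutoff-raising rule concrete; note that with real-valued f-costs the cutoff may rise by tiny amounts, so many iterations can be needed.

import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: repeated depth-first searches bounded by an f-cost cutoff."""
    def dfs(path, g, cutoff):
        node = path[-1]
        f = g + h(node)
        if f > cutoff:
            return None, f                       # report the f-cost that exceeded the cutoff
        if goal_test(node):
            return path, f
        next_cutoff = math.inf
        for nxt, cost in successors(node):
            if nxt in path:                      # cheap cycle check along the current path
                continue
            found, t = dfs(path + [nxt], g + cost, cutoff)
            if found is not None:
                return found, t
            next_cutoff = min(next_cutoff, t)
        return None, next_cutoff

    cutoff = h(start)
    while True:
        found, t = dfs([start], 0, cutoff)
        if found is not None:
            return found
        if t == math.inf:
            return None                          # no solution
        cutoff = t                               # smallest f-cost that exceeded the old cutoff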

Page 11: Recursive best-first search

Depth-first combined with best alternative
• Keep track of options along the fringe
• As soon as the current depth-first exploration becomes more expensive than the best fringe option
  – back up to the fringe, but update node costs along the way (sketched below)

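
A sketch of RBFS in the textbook's style, using the same successors/h conventions as above (cycle checking omitted for brevity). The key line is the one that writes the backed-up f-value into the child's record before retrying: that is the "update node costs along the way" step.

import math

def rbfs(start, goal_test, successors, h):
    """Recursive best-first search: depth-first, but unwind whenever the current
    subtree looks worse than the best alternative available from an ancestor."""
    def search(state, g, f_state, f_limit):
        if goal_test(state):
            return [state], f_state
        # children stored as mutable [f, state, g] records
        children = [[max(g + cost + h(s2), f_state), s2, g + cost]
                    for s2, cost in successors(state)]
        if not children:
            return None, math.inf
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]                    # fail; report backed-up f-value
            alternative = children[1][0] if len(children) > 1 else math.inf
            result, best[0] = search(best[1], best[2], best[0],
                                     min(f_limit, alternative))
            if result is not None:
                return [state] + result, best[0]

    path, _ = search(start, 0, h(start), math.inf)
    return path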

Page 12: Recursive best-first search

• The box contains the f-value of the best alternative path available from any ancestor
• First, explore the path to Pitesti
• Backtrack to Fagaras and update Fagaras
• Backtrack to Pitesti and update Pitesti


Page 13: Quality of Iterative Deepening A* and Recursive best-first search

RBFS
• O(bd) space complexity [if h(n) is admissible]
• Time complexity is hard to characterize
  – efficiency is heavily dependent on the quality of h(n)
  – the same states may be explored many times
• IDA* and RBFS use too little memory
  – even if you wanted to use more than O(bd) memory, these two could not take advantage of it


Page 14: Simple Memory-bounded A*

Use all available memory
• Follow the A* algorithm and fill memory with newly expanded nodes
• If a new node does not fit
  – free() the stored node with the worst f-value
  – propagate the f-value of the freed node to its parent
• SMA* will regenerate a subtree only when it is needed
  – the path through the deleted subtree is unknown, but its cost is known (a partial sketch follows)

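
Full SMA* needs careful bookkeeping of which successors have been generated and forgotten; the fragment below only sketches the two "does not fit" bullets above, on a hypothetical node record of my own design, and is not a complete implementation.

import math

class Node:
    """Minimal node record for the sketch: state, parent link, f-value, children."""
    def __init__(self, state, parent=None, f=0.0):
        self.state, self.parent, self.f = state, parent, f
        self.children = []
        self.best_forgotten_f = math.inf     # best f-value among children we have dropped

def drop_worst_leaf(stored_leaves):
    """Free the stored leaf with the worst f-value and back its f-value up to its parent,
    so the parent still knows how good the best path through the deleted subtree was."""
    worst = max(stored_leaves, key=lambda n: n.f)
    stored_leaves.remove(worst)
    parent = worst.parent
    if parent is not None:
        parent.children.remove(worst)
        parent.best_forgotten_f = min(parent.best_forgotten_f, worst.f)
        # Full SMA* would treat best_forgotten_f as the value of regenerating the
        # subtree, and only re-expand it if everything else looks worse.
    return worst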

Page 15: Thrashing

Typically discussed in OS w.r.t. memory
• The cost of repeatedly freeing and regenerating parts of the search tree comes to dominate the cost of the actual search
• time complexity grows significantly when thrashing occurs
  – So we saved space with SMA*, but if the problem is large, it will be intractable from the point of view of computation time


Page 16: Meta-foo

What does meta mean in AI?
• Frequently it means stepping back a level from foo
• Metareasoning = reasoning about reasoning
• These informed search algorithms have pros and cons regarding how they choose to explore new levels
  – a metalevel learning algorithm may learn how to combine techniques and parameterize the search


Page 17: Heuristic Functions

8-puzzle problem
• Average solution depth = 22
• Branching factor ≈ 3
• An exhaustive tree search to depth 22 examines about 3^22 ≈ 3.1 × 10^10 states
• Only 9!/2 = 181,440 distinct states are reachable, so each is repeated on the order of 170,000 times

Page 18: Heuristics

The number of misplaced tiles
• Admissible because at least n moves are required to fix n misplaced tiles

The distance from each tile to its goal position
• No diagonal moves, so use Manhattan Distance
  – As if walking around rectilinear city blocks
• also admissible (both heuristics are sketched below)

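
Both heuristics are a few lines of code; the state encoding here (a row-major tuple with 0 for the blank) is my own convention, not something fixed by the slides.

def misplaced_tiles(state, goal):
    """h1: number of tiles (ignoring the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """h2: sum over tiles of |row difference| + |column difference| to the goal slot."""
    goal_pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, width)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)       # tiles in row-major order, 0 = blank
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)       # two tiles out of place
print(misplaced_tiles(state, goal), manhattan(state, goal))   # 2 2

Note that each misplaced tile contributes at least 1 to the Manhattan sum, so Manhattan distance is never smaller than the misplaced-tile count; that is the dominance argument on the later "Compare these two heuristics" slides.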

Page 19: Compare these two heuristics

Effective Branching Factor, b*
• If A* explores N nodes to find the goal at depth d
  – b* = the branching factor such that a uniform tree of depth d contains N+1 nodes
    N+1 = 1 + b* + (b*)^2 + … + (b*)^d
• b* close to 1 is ideal
  – because this means the heuristic guided the A* search nearly linearly
  – If b* were 100, on average the heuristic had to consider 100 children for each node
  – Compare heuristics based on their b* (a numerical sketch follows)

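
The defining equation has no closed form in b*, so in practice b* is found numerically; a small bisection sketch (the 52-node, depth-5 case is just an illustration):

def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def tree_size(b):
        return sum(b ** i for i in range(depth + 1))    # nodes in a uniform tree
    lo, hi = 1.0, float(n_nodes)                        # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# If A* explored 52 nodes to find a goal at depth 5:
print(round(effective_branching_factor(52, 5), 2))      # approximately 1.92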

Page 20: Compare these two heuristics

Page 21: Compare these two heuristics

h2 is always better than h1
• for any node n, h2(n) >= h1(n)
• h2 dominates h1
• Recall that all nodes with f(n) < C* will be expanded
  – This means all nodes with h(n) + g(n) < C* will be expanded, i.e., all nodes where h(n) < C* - g(n) will be expanded
  – Every node h2 expands will also be expanded by h1, and because h1 is smaller, other nodes will be expanded as well (see the experiment sketched below)

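
Dominance is easy to observe empirically. The sketch below scrambles the goal with random slides and counts A* expansions under both heuristics (the instance depth is whatever the scramble produces); with the dominating Manhattan heuristic the count should not be larger, up to tie-breaking at f = C*, and is typically far smaller.

import heapq, random

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)                      # row-major, 0 = blank
goal_pos = {t: divmod(i, 3) for i, t in enumerate(goal)}
h1 = lambda s: sum(t != 0 and t != goal[i] for i, t in enumerate(s))      # misplaced tiles
h2 = lambda s: sum(abs(divmod(i, 3)[0] - goal_pos[t][0]) +                # Manhattan distance
                   abs(divmod(i, 3)[1] - goal_pos[t][1])
                   for i, t in enumerate(s) if t != 0)

def successors(state):
    """Slide one tile into the blank; every move costs 1."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def expansions(start, h):
    """Number of nodes an A* graph search expands before popping the goal."""
    frontier, best_g, count = [(h(start), 0, start)], {start: 0}, 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            return count
        if g > best_g[state]:
            continue                                    # stale queue entry
        count += 1
        for nxt in successors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

random.seed(1)
start = goal
for _ in range(30):                                     # scramble with 30 random slides
    start = random.choice(list(successors(start)))
print("h1 expansions:", expansions(start, h1), " h2 expansions:", expansions(start, h2))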

Page 22: Inventing admissible heuristic funcs

How can you create h(n)?
• Simplify the problem by reducing restrictions on actions
  – Allow 8-puzzle pieces to sit atop one another
  – Call this a relaxed problem
  – The cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem
    The original problem is at least as expensive to solve, so the relaxed cost never overestimates


Page 23: Examples of relaxed problems

Original rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank

• A tile can move from A to B if A is adjacent to B (overlap)
• A tile can move from A to B if B is blank (teleport)
• A tile can move from A to B (teleport and overlap)

Solutions to these relaxed problems can be computed without search, and therefore the heuristic is easy to compute


Page 24: Multiple Heuristics

If multiple heuristics are available:
• h(n) = max{h1(n), h2(n), …, hm(n)} (see below)

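
Since each admissible hi is a lower bound on the true cost, their pointwise maximum is still a lower bound, so composing them is a one-liner. Here h1 and h2 stand for whatever heuristics you already have, e.g. the misplaced-tiles and Manhattan sketches earlier.

def combine(*heuristics):
    """Pointwise maximum of several heuristics; admissible if each one is."""
    return lambda n: max(h(n) for h in heuristics)

# e.g. h = combine(h1, h2), then pass h to A* as before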

Page 25: Use solution to subproblem as heuristic

What is the optimal cost of solving some portion of the original problem?
• the subproblem solution cost is a heuristic for the original problem


Page 26: Pattern Databases

Store optimal solutions to subproblems in a database
• We use an exhaustive search to solve every permutation of the 1,2,3,4-piece subproblem of the 8-puzzle
• During solution of the 8-puzzle, look up the optimal cost of solving the 1,2,3,4-piece subproblem and use it as a heuristic (a build sketch follows)

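
One way to build such a table, under my own assumptions about representation: breadth-first search backward from the goal over an abstracted puzzle in which only tiles 1-4 and the blank are distinguished and every other tile is a wildcard. Every slide still costs 1, so the stored cost of an abstract state never overestimates the cost of any concrete state that maps to it.

from collections import deque

def build_pattern_db(goal, pattern_tiles=(1, 2, 3, 4), width=3):
    """Map each placement of the pattern tiles (plus blank) to its optimal solution cost."""
    def abstract(state):
        return tuple(t if t == 0 or t in pattern_tiles else "*" for t in state)

    def successors(state):
        i = state.index(0)
        r, c = divmod(i, width)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= r + dr < width and 0 <= c + dc < width:
                j = (r + dr) * width + (c + dc)
                s = list(state)
                s[i], s[j] = s[j], s[i]
                yield tuple(s)

    start = abstract(goal)
    db, frontier = {start: 0}, deque([start])
    while frontier:                                      # backward BFS from the goal pattern
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in db:
                db[nxt] = db[state] + 1
                frontier.append(nxt)
    return db

db = build_pattern_db((1, 2, 3, 4, 5, 6, 7, 8, 0))
print(len(db), "abstract states stored")
# During the real search, h(n) = db[<abstraction of n>]; it can also be combined
# with other heuristics via max, as on the previous slide.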

Page 27: Learning

Could also build the pattern database while solving cases of the 8-puzzle
• Must keep track of intermediate states and the true final cost of the solution
• Inductive learning builds a mapping of state -> cost
• Because there are too many permutations of actual states
  – Construct important features to reduce the size of the space (a toy sketch follows)

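
A toy version of the idea: suppose each solved practice case yields pairs of state features and the true remaining solution cost; fitting h(n) ≈ c1·x1(n) + c2·x2(n) by least squares gives a learned heuristic. The feature choice here (x1 = misplaced tiles, x2 = Manhattan distance) and the numbers below are purely illustrative, not recorded data.

import numpy as np

# (x1, x2) features of states seen while solving practice cases, paired with the
# true cost-to-go measured once each case was solved (made-up numbers).
features  = np.array([[5, 8], [4, 6], [3, 5], [2, 2], [1, 1], [0, 0]], dtype=float)
true_cost = np.array([12, 9, 7, 3, 1, 0], dtype=float)

coef, *_ = np.linalg.lstsq(features, true_cost, rcond=None)

def learned_h(x1, x2):
    """Learned heuristic c1*x1 + c2*x2; nothing guarantees admissibility."""
    return coef[0] * x1 + coef[1] * x2

print(coef, learned_h(3, 5))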