Searching techniques in AI


    Simple Hill Climbing

The first successor state that is better than the current state is selected as the next state.


    Simple Hill Climbing

    Algorithm

1. Evaluate the initial state. If it is a goal state, quit; otherwise make it the current state.

2. Loop until a solution is found or there are no new operators left to be applied:

   - Select and apply a new operator.

   - Evaluate the new state:

     - If it is a goal state, quit.

     - If it is better than the current state, make it the new current state.
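A minimal Python sketch of this loop, assuming the caller supplies `successors`, `value` and `is_goal` helpers (these names are illustrative, not from the slides):

```python
def simple_hill_climbing(initial, successors, value, is_goal):
    """Simple hill climbing: take the FIRST successor that is better
    than the current state; stop at a goal or when nothing improves."""
    current = initial
    if is_goal(current):
        return current
    while True:
        improved = False
        for nxt in successors(current):        # apply operators one at a time
            if is_goal(nxt):
                return nxt                     # goal found: quit
            if value(nxt) > value(current):    # first better state becomes current
                current = nxt
                improved = True
                break
        if not improved:                       # no new operator helped
            return current                     # stuck (possibly a local maximum)
```

Note that only the current state is kept; rejected successors are never reconsidered.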


    Steepest-Ascent Hill Climbing

    (Gradient Search)

Considers all the moves from the current state and selects the best one as the next state.
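By contrast, a sketch of the steepest-ascent variant (same illustrative helper functions as above):

```python
def steepest_ascent_hill_climbing(initial, successors, value, is_goal):
    """Steepest-ascent (gradient) hill climbing: examine ALL moves from the
    current state and move to the best one, but only if it is an improvement."""
    current = initial
    while not is_goal(current):
        neighbours = list(successors(current))
        if not neighbours:
            return current                     # nothing left to try
        best = max(neighbours, key=value)      # best of all available moves
        if value(best) <= value(current):      # no move improves the current state
            return current                     # local maximum, plateau or ridge
        current = best
    return current
```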


Difference between Hill Climbing and Best-First Search:

In Hill Climbing, one move is selected and all others are rejected, never to be reconsidered.

In Best-First Search, one move is selected but the others are kept around so that they can be revisited later if the selected path becomes less promising.
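A sketch of how best-first search keeps those alternatives around on a priority queue ordered by a heuristic h (illustrative code, not from the slides):

```python
import heapq
import itertools

def greedy_best_first_search(initial, successors, h, is_goal):
    """Best-first search: the most promising state is expanded next, but the
    remaining candidates stay on the frontier and can be revisited later."""
    counter = itertools.count()                # tie-breaker so states are never compared
    frontier = [(h(initial), next(counter), initial)]
    seen = {initial}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:                # alternatives are kept, not rejected
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), next(counter), nxt))
    return None
```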


    Hill Climbing: Disadvantages

    Local maximum

A state that is better than all of its neighbours, but not better than some other states farther away.


    Hill Climbing: Disadvantages

    Plateau

A flat area of the search space in which all neighbouring states have the same value.


    Hill Climbing: Disadvantages

    Ridge

The orientation of the high region, compared to the set of available moves, makes it impossible to climb up.

However, two moves executed serially may increase the height.


    Hill Climbing: Disadvantages

    Ways Out

Backtrack to some earlier node and try going in a different direction.

Make a big jump to try to get into a new section of the search space.

Move in several directions at once.


    Hill Climbing: Disadvantages

Hill climbing is a local method: it decides what to do next by looking only at the immediate consequences of its choices.

Global information might be encoded in heuristic functions.


    Hill Climbing: Disadvantages

[Figure: Blocks World example. Start state: A on D, D on C, C on B, B on the table. Goal state: D on C, C on B, B on A, A on the table.]


    Hill Climbing: Disadvantages

[Figure: the same Blocks World start and goal states as above.]

Local heuristic:

+1 for each block that is resting on the thing it is supposed to be resting on.

-1 for each block that is resting on a wrong thing.

Under this heuristic the start state scores 0 and the goal state scores 4.
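A small sketch of this local heuristic, assuming a state is encoded as a dict mapping each block to what it rests on ('table' or another block); the start/goal encoding below is one reading of the figure that reproduces the scores 0 and 4:

```python
def local_heuristic(state, goal):
    """+1 for each block resting on the right thing, -1 otherwise."""
    return sum(1 if state[b] == goal[b] else -1 for b in state)

# Hypothetical encoding of the slide's stacks (A on D on C on B vs. D on C on B on A).
start = {'A': 'D', 'D': 'C', 'C': 'B', 'B': 'table'}
goal  = {'D': 'C', 'C': 'B', 'B': 'A', 'A': 'table'}
print(local_heuristic(start, goal))   # 0
print(local_heuristic(goal, goal))    # 4
```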


    Hill Climbing: Disadvantages

[Figure: the start state (local heuristic value 0) and the state obtained by moving A onto the table (value 2).]


    Hill Climbing: Disadvantages

[Figure: the state with A on the table has value 2, but all three of its successors (moving D onto the table, moving D onto A, or putting A back on D) have value 0 under the local heuristic, so hill climbing is stuck at a local maximum.]


    Hill Climbing: Disadvantages

[Figure: the same Blocks World start and goal states.]

Global heuristic:

For each block that has the correct support structure: +1 to every block in the support structure.

For each block that has a wrong support structure: -1 to every block in the support structure.

Under this heuristic the start state scores -6 and the goal state scores +6.
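The global heuristic can be sketched the same way; here the support structure of a block is taken to be the chain of blocks underneath it (the table excluded), and the same hypothetical start/goal encoding as above reproduces the scores -6 and +6:

```python
def support_structure(state, block):
    """Chain of blocks underneath `block`, top-down, excluding the table."""
    chain, below = [], state[block]
    while below != 'table':
        chain.append(below)
        below = state[below]
    return chain

def global_heuristic(state, goal):
    """+1 per block in every correct support structure, -1 per block in a wrong one."""
    total = 0
    for b in state:
        chain = support_structure(state, b)
        if chain == support_structure(goal, b):
            total += len(chain)
        else:
            total -= len(chain)
    return total

start = {'A': 'D', 'D': 'C', 'C': 'B', 'B': 'table'}   # hypothetical encoding, as before
goal  = {'D': 'C', 'C': 'B', 'B': 'A', 'A': 'table'}
print(global_heuristic(start, goal))   # -6
print(global_heuristic(goal, goal))    # 6
```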


    Hill Climbing: Disadvantages

[Figure: under the global heuristic the state with A on the table has value -3; its successors have values -6 (A back on D), -2 (D on A) and -1 (D on the table), so the search can make progress by moving D onto the table.]


    Hill Climbing: Conclusion

Can be very inefficient in a large, rough problem space.

A global heuristic may pay for its power with increased computational complexity.

Often useful when combined with other methods that get the search started in the right general neighbourhood.


A* Method

The A* (A-star) method (Hart, 1972) is a combination of branch & bound and best-first search, combined with the dynamic programming principle.

The heuristic (or evaluation) function for a node N is defined as f(N) = g(N) + h(N).

The function g is a measure of the cost of getting from the Start node (initial state) to the current node: it is the sum of the costs of applying the rules that were applied along the best path to the current node.

The function h is an estimate of the additional cost of getting from the current node to the Goal node (final state). Here knowledge about the problem domain is exploited.

The A* algorithm is an OR-graph / tree search algorithm.
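A compact sketch of A* under these definitions; `successors(state)` is assumed to yield (next_state, step_cost) pairs, and all names are illustrative:

```python
import heapq
import itertools

def a_star(start, successors, h, is_goal):
    """A* search with f(N) = g(N) + h(N): g = cost so far, h = estimated cost to goal."""
    counter = itertools.count()                        # tie-breaker for the heap
    open_list = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                                # cheapest known cost to each state
    while open_list:
        f, _, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return path, g                             # optimal if h never overestimates
        if g > best_g.get(state, float('inf')):
            continue                                   # stale queue entry
        for nxt, step_cost in successors(state):
            g_next = g + step_cost
            if g_next < best_g.get(nxt, float('inf')): # found a cheaper path to nxt
                best_g[nxt] = g_next
                heapq.heappush(open_list,
                               (g_next + h(nxt), next(counter), g_next, nxt, path + [nxt]))
    return None, float('inf')                          # no path exists
```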


    Behavior of A* Algorithm

Underestimation

If we can guarantee that h never overestimates the actual cost from the current node to the goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists.


Example: Underestimation (f = g + h; here h is underestimated)

[Figure: A is expanded to B (f = 1+3), C (1+4) and D (1+5). B is expanded to E (2+3), which is actually 3 moves away from the goal; E is expanded to F (3+3), also 3 moves away from the goal.]


Explanation: Example of Underestimation

Assume the cost of all arcs to be 1. A is expanded to B, C and D, and the f value of each node is computed.

B is chosen to be expanded to E. We notice that f(E) = f(C) = 5.

Suppose we resolve in favour of E, the path we are currently expanding. E is expanded to F.

Expansion of node F is stopped, as f(F) = 6, and so we will now expand C.

Thus we see that by underestimating h(B) we have wasted some effort, but eventually discovered that B was farther away from the goal than we thought. We then go back, try another path, and find the optimal path.


Example: Overestimation (here h is overestimated)

[Figure: A is expanded to B (f = 1+3), C (1+4) and D (1+5). B is expanded to E (2+2), E to F (3+1), and F to G (4+0), reaching a goal.]


Explanation: Example of Overestimation

A is expanded to B, C and D. Now B is expanded to E, E to F and F to G, giving a solution path of length 4.

Consider a scenario in which there is a direct path from D to G, giving a solution path of length 2. We will never find it, because of overestimating h(D). Thus we may find some other, worse solution without ever expanding D.

So by overestimating h, we cannot be guaranteed to find the cheapest solution path.


    Admissibility of A*

A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the initial state to a goal state, whenever such a path exists.

If the heuristic function h is an underestimate of the actual cost from the current state to a goal state, then it is called an admissible heuristic function.

Equivalently, A* always terminates with the optimal path when h(x) is an admissible heuristic function.
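Stated compactly (a LaTeX restatement; h*(n) here denotes the true cheapest cost from n to a goal, a symbol not used on the slide):

```latex
\[
  0 \le h(n) \le h^{*}(n) \ \text{for every node } n
  \quad\Longrightarrow\quad
  \text{A* terminates with a minimum-cost path whenever one exists.}
\]
```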


Example: Solve the Eight Puzzle Problem using the A* algorithm

Start state      Goal state
 3 7 6            5 3 6
 5 1 2            7   2
 4   8            4 1 8

Evaluation function: f(X) = g(X) + h(X)

h(X) = the number of tiles not in their goal position in a given state X

g(X) = depth of node X in the search tree

The initial node has f(initial_node) = 0 + 4 = 4.

Apply the A* algorithm to solve it. The choice of evaluation function critically determines the search results.


Example: Eight Puzzle Problem (EPP)

Start state      Goal state
 3 7 6            5 3 6
 5 1 2            7   2
 4   8            4 1 8


Evaluation function f for the EPP

The choice of evaluation function critically determines the search results. Consider the evaluation function

f(X) = g(X) + h(X)

h(X) = the number of tiles not in their goal position in a given state X

g(X) = depth of node X in the search tree

For the initial node, f(initial_node) = 4.

Apply the A* algorithm to solve it.
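A sketch of this h for the instance above, assuming the board is encoded as a 3x3 tuple of tuples with 0 for the blank:

```python
START = ((3, 7, 6),
         (5, 1, 2),
         (4, 0, 8))          # 0 marks the blank
GOAL  = ((5, 3, 6),
         (7, 0, 2),
         (4, 1, 8))

def misplaced_tiles(state, goal=GOAL):
    """h(X) = number of tiles (blank excluded) that are not in their goal position."""
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

print(misplaced_tiles(START))   # 4, so f(initial) = g + h = 0 + 4
```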


Search Tree (start state: f = 0+4)

[Figure: A* search tree for the eight puzzle. From the start state (f = 0+4), moving the blank up gives f = 1+3, while moving it left or right gives f = 1+5. Expanding the 1+3 node: up gives 2+3, left gives 2+3, right gives 2+4. Expanding the upward 2+3 node: left gives 3+2, right gives 3+4. From the 3+2 node, moving the blank down gives 4+1, and moving it right from there reaches the goal state.]


Harder Problem

Harder 8-puzzle instances cannot be solved efficiently with the heuristic function defined earlier.

Initial state      Goal state
 2 1 6              1 2 3
 4   8              8   4
 7 5 3              7 6 5

A better estimate function is needed:

h(X) = the sum of the distances of the tiles from their goal positions in a given state X

The initial node has h(initial_node) = 1+1+2+2+1+3+0+2 = 12.
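A sketch of this improved h on the instance above (same 3x3 tuple encoding, with 0 for the blank):

```python
INITIAL = ((2, 1, 6),
           (4, 0, 8),
           (7, 5, 3))
GOAL    = ((1, 2, 3),
           (8, 0, 4),
           (7, 6, 5))

def manhattan_distance(state, goal=GOAL):
    """h(X) = sum over tiles of |row - goal row| + |column - goal column|."""
    goal_pos = {goal[i][j]: (i, j) for i in range(3) for j in range(3)}
    return sum(abs(i - goal_pos[t][0]) + abs(j - goal_pos[t][1])
               for i, row in enumerate(state)
               for j, t in enumerate(row) if t != 0)

print(manhattan_distance(INITIAL))   # 1+1+2+2+1+3+0+2 = 12
```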


    Problem Reduction

AND-OR Graphs

Goal: Acquire TV set. This goal can be reduced either to the single subgoal "Steal TV set", or, via an AND arc, to the pair of subgoals "Earn some money" and "Buy TV set".

Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980)
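One possible (hypothetical) encoding of this AND-OR graph, where each goal maps to a list of alternatives and each alternative is a list of subgoals that must all be achieved; this only sketches the data structure, not the AO* cost-revision procedure itself:

```python
# Each key is a goal; each value lists the ways (OR choices) to achieve it,
# and each way is a list of subgoals that must ALL hold (an AND arc).
and_or_graph = {
    "Acquire TV set": [
        ["Steal TV set"],                       # single-goal alternative
        ["Earn some money", "Buy TV set"],      # AND arc: both subgoals required
    ],
    "Steal TV set": [],                         # primitive goals: no further reduction
    "Earn some money": [],
    "Buy TV set": [],
}
```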


    Problem Reduction: AO*

[Figure: successive snapshots of AO* expanding an AND-OR graph rooted at A, with successors B, C, D and, after further expansion, E, F, G, H; the cost estimates at the leaf nodes are revised and propagated back up to the root after each expansion.]


    Problem Reduction: AO*

[Figure: an AO* example on a graph with nodes A through H in which a revised estimate at node H (value 9) must be propagated backward through the AND-OR graph to its ancestors: necessary backward propagation.]


    Constraint Satisfaction

Many AI problems can be viewed as problems of constraint satisfaction.

Cryptarithmetic puzzle:

  SEND
+ MORE
------
 MONEY


    Constraint Satisfaction

As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can substantially reduce the amount of search.


    Constraint Satisfaction

Operates in a space of constraint sets.

The initial state contains the original constraints given in the problem.

A goal state is any state that has been constrained "enough".


    Constraint Satisfaction

Two-step process:

1. Constraints are discovered and propagated as far as possible.

2. If there is still not a solution, then search begins, adding new constraints.
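A generic sketch of this two-step procedure (propagate, then guess and recurse); all function parameters are placeholders supplied by the caller, not part of the slides:

```python
def constraint_solve(state, propagate, solved, guesses):
    """Two-step constraint satisfaction: propagate as far as possible,
    and if that is not enough, add a guessed constraint and recurse."""
    state = propagate(state)            # step 1: discover and propagate constraints
    if state is None:                   # contradiction reached
        return None
    if solved(state):                   # constrained "enough"
        return state
    for guess in guesses(state):        # step 2: search by adding new constraints
        result = constraint_solve(guess, propagate, solved, guesses)
        if result is not None:
            return result
    return None
```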


Initial state:

  SEND
+ MORE
------
 MONEY

No two letters have the same value, and the sums of the digits must be as shown.

Constraints discovered by propagation:

M = 1
S = 8 or 9
O = 0
N = E + 1
C2 = 1
N + R > 8
E ≠ 9

Guess E = 2. Propagation then gives:

N = 3
R = 8 or 9
2 + D = Y or 2 + D = 10 + Y

Case C1 = 0: 2 + D = Y, N + R = 10 + E, R = 9, S = 8.
Case C1 = 1: 2 + D = 10 + Y, D = 8 + Y, D = 8 or 9;
             D = 8 gives Y = 0, and D = 9 gives Y = 1.
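The consequences of such guesses can be cross-checked by brute-force enumeration; the sketch below simply tries every assignment of distinct digits rather than propagating constraints, so it is a baseline check, not the method described above:

```python
from itertools import permutations

def solve_send_more_money():
    """Yield digit assignments with SEND + MORE = MONEY, all letters distinct,
    and no leading zeros."""
    letters = "SENDMORY"
    for digits in permutations(range(10), len(letters)):
        val = dict(zip(letters, digits))
        if val['S'] == 0 or val['M'] == 0:
            continue
        send  = 1000 * val['S'] + 100 * val['E'] + 10 * val['N'] + val['D']
        more  = 1000 * val['M'] + 100 * val['O'] + 10 * val['R'] + val['E']
        money = (10000 * val['M'] + 1000 * val['O'] + 100 * val['N']
                 + 10 * val['E'] + val['Y'])
        if send + more == money:
            yield val

for solution in solve_send_more_money():
    print(solution)   # the puzzle has a unique solution
```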


    Constraint Satisfaction

Two kinds of rules:

1. Rules that define valid constraint propagation.

2. Rules that suggest guesses when necessary.