
Learning Objectives - jbnu.ac.kr (nlp.jbnu.ac.kr/AI2018/slides/ch04_all.pdf, 2018-09-03)


Learning Objectives

At the end of the class you should be able to:

recognize and represent constraint satisfaction problems

show how constraint satisfaction problems can be solved with search

implement and trace arc-consistency of a constraint graph

show how domain splitting can solve constraint problems

© D. Poole and A. Mackworth 2009-2010, Artificial Intelligence, Lectures 4.1-4.3


Posing a Constraint Satisfaction Problem

A CSP is characterized by

A set of variables V1,V2, . . . ,Vn.

Each variable Vi has an associated domain DVi of possible values.

There are hard constraints on various subsets of the variables which specify legal combinations of values for these variables.

A solution to the CSP is an assignment of a value to each variable that satisfies all the constraints.


Example: scheduling activities

Variables: A, B, C, D, E that represent the starting times of various activities.

Domains: DA = {1, 2, 3, 4}, DB = {1, 2, 3, 4}, DC = {1, 2, 3, 4}, DD = {1, 2, 3, 4}, DE = {1, 2, 3, 4}

Constraints:
(B ≠ 3) ∧ (C ≠ 2) ∧ (A ≠ B) ∧ (B ≠ C) ∧ (C < D) ∧ (A = D) ∧ (E < A) ∧ (E < B) ∧ (E < C) ∧ (E < D) ∧ (B ≠ D).
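Concretely, this CSP could be written down as data, for example as follows. This is a minimal Python sketch; the names `domains`, `constraints`, and `satisfies` are illustrative choices, not from the slides.

```python
# The scheduling CSP: domains as a dict, constraints as (scope, predicate) pairs.
domains = {v: {1, 2, 3, 4} for v in "ABCDE"}

constraints = [
    (("B",), lambda b: b != 3),
    (("C",), lambda c: c != 2),
    (("A", "B"), lambda a, b: a != b),
    (("B", "C"), lambda b, c: b != c),
    (("C", "D"), lambda c, d: c < d),
    (("A", "D"), lambda a, d: a == d),
    (("E", "A"), lambda e, a: e < a),
    (("E", "B"), lambda e, b: e < b),
    (("E", "C"), lambda e, c: e < c),
    (("E", "D"), lambda e, d: e < d),
    (("B", "D"), lambda b, d: b != d),
]

def satisfies(assignment, scope, predicate):
    """True if the assignment satisfies this constraint, or if some variable
    in the scope is still unassigned (so the constraint cannot be checked yet)."""
    if any(v not in assignment for v in scope):
        return True
    return predicate(*(assignment[v] for v in scope))

# A total assignment satisfying every constraint:
assignment = {"A": 4, "B": 2, "C": 3, "D": 4, "E": 1}
print(all(satisfies(assignment, scope, pred) for scope, pred in constraints))  # True
```

The same (scope, predicate) encoding is reused in the sketches that follow.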


Generate-and-Test Algorithm

Generate the assignment space D = DV1 × DV2 × ... × DVn. Test each assignment with the constraints.

Example:

D = DA × DB × DC × DD × DE
  = {1, 2, 3, 4} × {1, 2, 3, 4} × {1, 2, 3, 4} × {1, 2, 3, 4} × {1, 2, 3, 4}
  = {⟨1, 1, 1, 1, 1⟩, ⟨1, 1, 1, 1, 2⟩, ..., ⟨4, 4, 4, 4, 4⟩}.

How many assignments need to be tested for n variables, each with domain size d?
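A generate-and-test sketch for the scheduling example, using the same illustrative encoding as before (not the authors' code). It enumerates all d^n = 4^5 = 1024 candidate assignments.

```python
from itertools import product

domains = {v: [1, 2, 3, 4] for v in "ABCDE"}
constraints = [
    (("B",), lambda b: b != 3), (("C",), lambda c: c != 2),
    (("A", "B"), lambda a, b: a != b), (("B", "C"), lambda b, c: b != c),
    (("C", "D"), lambda c, d: c < d), (("A", "D"), lambda a, d: a == d),
    (("E", "A"), lambda e, a: e < a), (("E", "B"), lambda e, b: e < b),
    (("E", "C"), lambda e, c: e < c), (("E", "D"), lambda e, d: e < d),
    (("B", "D"), lambda b, d: b != d),
]

def generate_and_test():
    """Enumerate the whole assignment space DA x ... x DE (d**n = 4**5 = 1024
    candidates here) and test each one against every constraint."""
    names = list(domains)
    for values in product(*(domains[v] for v in names)):
        assignment = dict(zip(names, values))
        if all(pred(*(assignment[x] for x in scope)) for scope, pred in constraints):
            yield assignment

print(list(generate_and_test()))
# -> [{'A': 4, 'B': 2, 'C': 3, 'D': 4, 'E': 1}], the only consistent assignment
```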


Backtracking Algorithms

Systematically explore D by instantiating the variables one at a time.

Evaluate each constraint predicate as soon as all its variables are bound.

Any partial assignment that doesn't satisfy the constraint can be pruned.

Example: the assignment A = 1 ∧ B = 1 is inconsistent with the constraint A ≠ B regardless of the values of the other variables.
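A minimal backtracking sketch along these lines (illustrative code, not the authors' implementation):

```python
# Assign variables one at a time and check a constraint as soon as all of its
# variables are bound, backtracking when a partial assignment fails.
def backtrack(domains, constraints, assignment=None):
    """domains: {var: iterable of values}; constraints: list of (scope, pred)."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment                       # every variable assigned
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check only the constraints whose scope is now fully bound.
        ok = all(pred(*(assignment[x] for x in scope))
                 for scope, pred in constraints
                 if all(x in assignment for x in scope))
        if ok:
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                     # undo and try the next value
    return None                                 # no value works: backtrack

# Tiny example: A != B with A, B in {1, 2}.
print(backtrack({"A": [1, 2], "B": [1, 2]},
                [(("A", "B"), lambda a, b: a != b)]))   # -> {'A': 1, 'B': 2}
```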


CSP as Graph Searching

A CSP can be solved by graph-searching:

A node is an assignment of values to some of the variables.

Suppose node N is the assignment X1 = v1, ..., Xk = vk. Select a variable Y that isn’t assigned in N. For each value yi ∈ dom(Y), the assignment X1 = v1, ..., Xk = vk, Y = yi is a neighbour if it is consistent with the constraints.

The start node is the empty assignment.

A goal node is a total assignment that satisfies the constraints.


Consistency Algorithms

Idea: prune the domains as much as possible before selecting values from them.

A variable is domain consistent if no value of the domain of the node is ruled impossible by any of the constraints.

Example: Is the scheduling example domain consistent?

DB = {1, 2, 3, 4} isn’t domain consistent, as B = 3 violates the constraint B ≠ 3.
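A sketch of domain-consistency pruning, restricted to unary constraints for simplicity (the helper names are hypothetical, not from the slides):

```python
# Remove from each domain any value ruled out by a unary constraint on that variable.
def make_domain_consistent(domains, unary_constraints):
    """domains: {var: set of values}; unary_constraints: {var: [predicates]}."""
    return {
        var: {val for val in values
              if all(pred(val) for pred in unary_constraints.get(var, []))}
        for var, values in domains.items()
    }

domains = {v: {1, 2, 3, 4} for v in "ABCDE"}
unary = {"B": [lambda b: b != 3], "C": [lambda c: c != 2]}
print(make_domain_consistent(domains, unary))
# B's domain becomes {1, 2, 4} and C's becomes {1, 3, 4}, matching the pruned
# domains shown in the constraint-network figure below.
```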


Constraint Network

There is an oval-shaped node for each variable.

There is a rectangular node for each constraint.

There is a domain of values associated with each variable node.

There is an arc from variable X to each constraint that involves X.


Example Constraint Network

[Figure: constraint network for the scheduling example. Variable nodes and domains: A {1,2,3,4}, B {1,2,4}, C {1,3,4}, D {1,2,3,4}, E {1,2,3,4}. Constraint nodes: A ≠ B, B ≠ C, B ≠ D, C < D, A = D, E < A, E < B, E < C, E < D.]


Arc Consistency

An arc ⟨X, r(X,Y)⟩ is arc consistent if, for each value x ∈ dom(X), there is some value y ∈ dom(Y) such that r(x, y) is satisfied.

A network is arc consistent if all its arcs are arc consistent.

What if arc ⟨X, r(X,Y)⟩ is not arc consistent?

All values of X in dom(X) for which there is no corresponding value in dom(Y) can be deleted from dom(X) to make the arc ⟨X, r(X,Y)⟩ consistent.
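One way this pruning step could look in code, assuming binary constraints given as predicates (an illustrative sketch; `revise` is a hypothetical name):

```python
# Make a single arc <X, r(X, Y)> arc consistent: delete from dom(X) every
# value with no supporting value in dom(Y).
def revise(dom_x, dom_y, r):
    """Return (new_dom_x, changed), where r(x, y) is the binary constraint."""
    new_dom_x = {x for x in dom_x if any(r(x, y) for y in dom_y)}
    return new_dom_x, new_dom_x != dom_x

# Example: X < Y with dom(X) = dom(Y) = {1, 2, 3, 4}.
print(revise({1, 2, 3, 4}, {1, 2, 3, 4}, lambda x, y: x < y))
# -> ({1, 2, 3}, True): the value 4 has no support in dom(Y)
```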


Arc Consistency Algorithm

The arcs can be considered in turn, making each arc consistent.

When an arc has been made arc consistent, does it ever need to be checked again?

An arc ⟨X, r(X,Y)⟩ needs to be revisited if the domain of one of the Y’s is reduced.

Three possible outcomes when all arcs are made arc consistent (is there a solution?):
- One domain is empty =⇒ no solution
- Each domain has a single value =⇒ unique solution
- Some domains have more than one value =⇒ there may or may not be a solution
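A sketch of the full loop in the spirit of AC-3, assuming binary constraints stored as a dict from directed arcs to predicates (illustrative, not the book's pseudocode):

```python
from collections import deque

def arc_consistency(domains, constraints):
    """constraints: {(X, Y): predicate r(x, y)}, with both directions listed
    so each arc <X, r(X, Y)> can be made consistent."""
    domains = {v: set(vals) for v, vals in domains.items()}
    to_do = deque(constraints)                       # all arcs <X, r(X, Y)>
    while to_do:
        x, y = to_do.popleft()
        r = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(r(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # dom(X) shrank, so every arc <Z, r'(Z, X)> must be revisited.
            to_do.extend(arc for arc in constraints if arc[1] == x)
    return domains

doms = {"A": {1, 2, 3, 4}, "B": {1, 2, 3, 4}}
cons = {("A", "B"): lambda a, b: a < b, ("B", "A"): lambda b, a: a < b}
print(arc_consistency(doms, cons))   # A: {1, 2, 3}, B: {2, 3, 4}
# Afterwards: an empty domain means no solution; all singleton domains give a
# unique solution; otherwise there may or may not be one, and we search/split.
```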


Finding solutions when AC finishes

If some domains have more than one element =⇒ search

Split a domain, then recursively solve each half.

It is often best to split a domain in half.

Do we need to restart from scratch?
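A sketch of domain splitting layered on arc consistency, again assuming binary constraints as directed-arc predicates (illustrative only; `prune` and `solve` are invented names):

```python
from collections import deque

def prune(domains, constraints):
    """Make the network arc consistent (AC-3 style), returning new domains."""
    domains = {v: set(d) for v, d in domains.items()}
    to_do = deque(constraints)
    while to_do:
        x, y = to_do.popleft()
        r = constraints[(x, y)]
        dropped = {vx for vx in domains[x]
                   if not any(r(vx, vy) for vy in domains[y])}
        if dropped:
            domains[x] -= dropped
            to_do.extend(arc for arc in constraints if arc[1] == x)
    return domains

def solve(domains, constraints):
    domains = prune(domains, constraints)
    if any(not d for d in domains.values()):
        return None                                  # empty domain: no solution
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    var = next(v for v, d in domains.items() if len(d) > 1)
    values = sorted(domains[var])
    for half in (values[:len(values) // 2], values[len(values) // 2:]):
        # Recurse on the already-pruned domains, so we don't restart from scratch.
        solution = solve({**domains, var: set(half)}, constraints)
        if solution is not None:
            return solution
    return None

# Example: A, B, C pairwise different over {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "ABC"}
cons = {}
for x, y in [("A", "B"), ("B", "C"), ("A", "C")]:
    cons[(x, y)] = lambda u, v: u != v
    cons[(y, x)] = lambda u, v: u != v
print(solve(doms, cons))   # {'A': 1, 'B': 2, 'C': 3}
```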


Example: Crossword Puzzle

[Figure: crossword grid with word positions 1–4.]

Words:

ant, big, bus, car, has; book, buys, hold, lane, year; beast, ginger, search, symbol, syntax


Hard and Soft Constraints

Given a set of variables, assign a value to each variable that either
- satisfies some set of constraints: satisfiability problems — “hard constraints”
- minimizes some cost function, where each assignment of values to variables has some cost: optimization problems — “soft constraints”

Many problems are a mix of hard and soft constraints (called constrained optimization problems).


Local Search

Local Search (Greedy Descent):

Maintain an assignment of a value to each variable.

Repeat:

- Select a variable to change
- Select a new value for that variable

Until a satisfying assignment is found


Local Search for CSPs

Aim: find an assignment with zero unsatisfied constraints.

Given an assignment of a value to each variable, a conflict is an unsatisfied constraint.

The goal is an assignment with zero conflicts.

Heuristic function to be minimized: the number of conflicts.


Greedy Descent Variants

To choose a variable to change and a new value for it:

Find a variable-value pair that minimizes the number of conflicts.

Select a variable that participates in the most conflicts. Select a value that minimizes the number of conflicts. (A sketch of this variant appears below.)

Select a variable that appears in any conflict. Select a value that minimizes the number of conflicts.

Select a variable at random. Select a value that minimizes the number of conflicts.

Select a variable and value at random; accept this change if it doesn’t increase the number of conflicts.
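A hedged sketch of the second variant above (a most-conflicted variable, then a value minimizing the number of conflicts). Ties among variables are broken randomly, which the slide leaves unspecified; the helper names are invented for illustration.

```python
import random

def conflicts(assignment, constraints):
    """constraints: list of (scope, predicate); return the unsatisfied ones."""
    return [(scope, pred) for scope, pred in constraints
            if not pred(*(assignment[v] for v in scope))]

def greedy_descent_step(assignment, domains, constraints):
    conflicted = conflicts(assignment, constraints)
    if not conflicted:
        return assignment                            # zero conflicts: a solution
    # Count how many conflicts each variable participates in.
    counts = {}
    for scope, _ in conflicted:
        for v in scope:
            counts[v] = counts.get(v, 0) + 1
    most = max(counts.values())
    var = random.choice([v for v, c in counts.items() if c == most])
    # Pick a value for var that minimizes the number of conflicts.
    def n_conflicts(value):
        return len(conflicts(dict(assignment, **{var: value}), constraints))
    best = min(domains[var], key=n_conflicts)
    return dict(assignment, **{var: best})

# Tiny usage example: A != B, B < C over {1, 2, 3}.
doms = {v: [1, 2, 3] for v in "ABC"}
cons = [(("A", "B"), lambda a, b: a != b), (("B", "C"), lambda b, c: b < c)]
a = {v: random.choice(doms[v]) for v in doms}
for _ in range(50):
    if not conflicts(a, cons):
        break
    a = greedy_descent_step(a, doms, cons)
print(a, conflicts(a, cons))
# Usually reaches zero conflicts; it can also get stuck, which is exactly the
# local-minimum problem discussed on the next slide.
```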


Complex Domains

When the domains are small or unordered, the neighbors of an assignment can correspond to choosing another value for one of the variables.

When the domains are large and ordered, the neighbors of an assignment are the adjacent values for one of the variables.

If the domains are continuous, gradient descent changes each variable proportionally to the gradient of the heuristic function in that direction. The value of variable Xi goes from vi to vi − η ∂h/∂Xi, where η is the step size.
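For the continuous case, a small numerical sketch of that update, using a finite-difference gradient (illustrative, not from the book):

```python
def gradient_descent_step(values, h, eta=0.1, delta=1e-6):
    """values: {var: float}; h: heuristic function of a dict of values.
    Each variable moves by -eta times an estimate of dh/dX_i."""
    new_values = {}
    for var, v in values.items():
        bumped = dict(values, **{var: v + delta})
        dh_dvar = (h(bumped) - h(values)) / delta    # finite-difference estimate
        new_values[var] = v - eta * dh_dvar          # v_i <- v_i - eta * dh/dX_i
    return new_values

# Example: minimise h = (X - 3)^2 + (Y + 1)^2.
h = lambda a: (a["X"] - 3) ** 2 + (a["Y"] + 1) ** 2
point = {"X": 0.0, "Y": 0.0}
for _ in range(100):
    point = gradient_descent_step(point, h)
print(point)    # close to {'X': 3.0, 'Y': -1.0}
```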


Problems with Greedy Descent

a local minimum that is not a global minimum

a plateau where the heuristic values are uninformative

a ridge is a local minimum where n-step look-ahead might help

[Figure: sketches of a ridge, a local minimum, and a plateau.]


Randomized Algorithms

Consider two methods to find a minimum value:
- Greedy descent: starting from some position, keep moving down & report minimum value found
- Pick values at random & report minimum value found

Which do you expect to work better to find a global minimum?

Can a mix work better?


Randomized Greedy Descent

As well as downward steps we can allow for:

Random steps: move to a random neighbor.

Random restart: reassign random values to all variables.

Which is more expensive computationally?


1-Dimensional Ordered Examples

Two 1-dimensional search spaces; step right or left:

[Figure: two 1-dimensional search landscapes, (a) and (b).]

Which method would most easily find the global minimum?

What happens in hundreds or thousands of dimensions?

What if different parts of the search space have different structure?


Stochastic Local Search

Stochastic local search is a mix of:

Greedy descent: move to a lowest neighbor

Random walk: taking some random steps

Random restart: reassigning values to all variables


Random Walk

Variants of random walk:

When choosing the best variable-value pair, sometimes choose a random variable-value pair instead.

When selecting a variable then a value:
- Sometimes choose any variable that participates in the most conflicts.
- Sometimes choose any variable that participates in any conflict (a red node).
- Sometimes choose any variable.

Sometimes choose the best value and sometimes choose a random value.


Comparing Stochastic Algorithms

How can you compare three algorithms when
- one solves the problem 30% of the time very quickly but doesn’t halt for the other 70% of the cases,
- one solves 60% of the cases reasonably quickly but doesn’t solve the rest,
- one solves the problem in 100% of the cases, but slowly?

Summary statistics, such as mean run time, median run time, and mode run time, don’t make much sense.


Runtime Distribution

Plots runtime (or number of steps) and the proportion (or number) of the runs that are solved within that runtime.

[Figure: an example runtime distribution; the x-axis shows runtime from 1 to 1000 (log scale), the y-axis the proportion of runs solved, from 0 to 1.]


Variant: Simulated Annealing

Pick a variable at random and a new value at random.

If it is an improvement, adopt it.

If it isn’t an improvement, adopt it probabilistically depending on a temperature parameter, T.

- With current assignment n and proposed assignment n′, we move to n′ with probability e^((h(n) − h(n′))/T).

Temperature can be reduced.

Probability of accepting a change:

  Temperature   1-worse   2-worse   3-worse
  10            0.91      0.81      0.74
  1             0.37      0.14      0.05
  0.25          0.02      0.0003    0.000006
  0.1           0.00005   2 × 10^-9   9 × 10^-14
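A sketch of this acceptance rule (the variable/value proposal mechanism and the cooling schedule are left out; the names are illustrative):

```python
import math
import random

def accept(h_current, h_proposed, temperature):
    """Always adopt improvements; adopt a worse proposal with
    probability e^((h(n) - h(n')) / T)."""
    if h_proposed <= h_current:
        return True
    return random.random() < math.exp((h_current - h_proposed) / temperature)

# Acceptance probabilities at T = 10 for proposals that are 1, 2, 3 worse:
for worse in (1, 2, 3):
    print(worse, math.exp(-worse / 10))   # roughly the first row of the table
```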


Tabu lists

To prevent cycling we can maintain a tabu list of the k last assignments.

Don’t allow an assignment that is already on the tabu list.

If k = 1, we don’t allow an assignment of the same value to the variable chosen.

We can implement it more efficiently than as a list of complete assignments.

It can be expensive if k is large.
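A possible way to keep such a tabu list of the k most recent assignments (an illustrative sketch; as noted above, a real implementation would avoid storing and scanning complete assignments):

```python
from collections import deque

class TabuList:
    def __init__(self, k):
        self.recent = deque(maxlen=k)       # oldest entries fall off automatically

    def allowed(self, assignment):
        # Membership test is O(k), which is why large k can get expensive.
        return frozenset(assignment.items()) not in self.recent

    def record(self, assignment):
        self.recent.append(frozenset(assignment.items()))

tabu = TabuList(k=2)
tabu.record({"A": 1, "B": 2})
print(tabu.allowed({"A": 1, "B": 2}))   # False: just visited
print(tabu.allowed({"A": 2, "B": 2}))   # True
```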


Parallel Search

A total assignment is called an individual.

Idea: maintain a population of k individuals instead of one.

At every stage, update each individual in the population.

Whenever an individual is a solution, it can be reported.

Like k restarts, but uses k times the minimum number of steps.


Beam Search

Like parallel search, with k individuals, but choose the k best out of all of the neighbors.

When k = 1, it is greedy descent.

When k = ∞, it is breadth-first search.

The value of k lets us limit space and parallelism.


Stochastic Beam Search

Like beam search, but it probabilistically chooses the k individuals at the next generation.

The probability that a neighbor is chosen is proportional to its heuristic value.

This maintains diversity amongst the individuals.

The heuristic value reflects the fitness of the individual.

Like asexual reproduction: each individual mutates and the fittest ones survive.


Genetic Algorithms

Like stochastic beam search, but pairs of individuals are combined to create the offspring:

For each generation:
- Randomly choose pairs of individuals where the fittest individuals are more likely to be chosen.
- For each pair, perform a cross-over: form two offspring, each taking different parts of their parents.
- Mutate some values.

Stop when a solution is found.


Crossover

Given two individuals:

X1 = a1, X2 = a2, ..., Xm = am

X1 = b1, X2 = b2, ..., Xm = bm

Select i at random.

Form two offspring:

X1 = a1, ..., Xi = ai, Xi+1 = bi+1, ..., Xm = bm

X1 = b1, ..., Xi = bi, Xi+1 = ai+1, ..., Xm = am

The effectiveness depends on the ordering of the variables.

Many variations are possible.
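A one-point crossover sketch over assignments stored as value lists under a fixed variable ordering (illustrative only):

```python
import random

def crossover(parent_a, parent_b):
    """Split both parents at a random index i and swap the tails."""
    assert len(parent_a) == len(parent_b)
    i = random.randrange(1, len(parent_a))           # split point 1..m-1
    child1 = parent_a[:i] + parent_b[i:]
    child2 = parent_b[:i] + parent_a[i:]
    return child1, child2

a = [1, 1, 1, 1, 1]
b = [2, 2, 2, 2, 2]
print(crossover(a, b))   # e.g. ([1, 1, 2, 2, 2], [2, 2, 1, 1, 1])
# Because the swapped tail is a contiguous block, which traits travel together
# depends on the variable ordering, as noted above.
```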


Constraint satisfaction revisited

A Constraint Satisfaction problem consists of:
- a set of variables
- a set of possible values (a domain) for each variable
- a set of constraints amongst subsets of the variables

The aim is to find a set of assignments that satisfies all constraints, or to find all such assignments.


Example: crossword puzzle

[Figure: crossword grid with word positions 1–6.]

at, be, he, it, on; eta, hat, her, him, one; desk, dove, easy, else, help, kind, soon, this; dance, first, fuels, given, haste, loses, sense, sound, think, usage


Dual Representations

Two ways to represent the crossword as a CSP

First representation:
- nodes represent word positions: 1-down ... 6-across
- domains are the words
- constraints specify that the letters on the intersections must be the same

Dual representation:
- nodes represent the individual squares
- domains are the letters
- constraints specify that the words must fit


Representations for image interpretation

First representation:
- nodes represent the chains and regions
- domains are the scene objects
- constraints correspond to the intersections and adjacency

Dual representation:
- nodes represent the intersections
- domains are the intersection labels
- constraints specify that the chains must have the same marking


Variable Elimination

Idea: eliminate the variables one-by-one, passing their constraints to their neighbours.

Variable Elimination Algorithm:

If there is only one variable, return the intersection of the (unary) constraints that contain it.

Select a variable X

Join the constraints in which X appears, forming constraint R1

Project R1 onto its variables other than X , forming R2

Replace all of the constraints in which X appears by R2

Recursively solve the simplified problem, forming R3

Return R1 joined with R3
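To make the join and project steps concrete, a small sketch over table constraints, represented as a scope tuple plus a set of rows (an assumed representation, not the book's); it reproduces the r1 to r4 tables from the "Example: eliminating C" slide below:

```python
from itertools import product

def join(c1, c2):
    """Natural join of two table constraints (scope, rows)."""
    (s1, rows1), (s2, rows2) = c1, c2
    shared = [v for v in s1 if v in s2]
    scope = list(s1) + [v for v in s2 if v not in s1]
    rows = set()
    for r1, r2 in product(rows1, rows2):
        a1, a2 = dict(zip(s1, r1)), dict(zip(s2, r2))
        if all(a1[v] == a2[v] for v in shared):       # rows agree on shared vars
            rows.add(tuple({**a1, **a2}[v] for v in scope))
    return tuple(scope), rows

def project(c, keep):
    """Project a table constraint onto the variables in keep."""
    scope, rows = c
    idx = [scope.index(v) for v in keep]
    return tuple(keep), {tuple(r[i] for i in idx) for r in rows}

# Eliminating C, as in the example below:
r1 = (("C", "E"), {(3, 2), (3, 4), (4, 2), (4, 3)})   # C != E over the pruned domains
r2 = (("C", "D"), {(3, 2), (4, 2), (4, 3)})           # C > D
r3 = join(r1, r2)
r4 = project(r3, ["D", "E"])
print(sorted(r4[1]))    # [(2, 2), (2, 3), (2, 4), (3, 2), (3, 3)]
```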


Variable elimination (cont.)

When there is a single variable remaining, if it has no values, the network was inconsistent.

The variables are eliminated according to some elimination ordering.

Different elimination orderings result in different-size intermediate constraints.


Example network

[Figure: example constraint network. Variables A, B, C, D, E, each with domain {1,2,3,4}. Constraints: A ≠ B, B < E, A < D, D < C, E ≠ C, E ≠ D, E − A is odd.]


Example: arc-consistent network

[Figure: the same network after arc consistency. Domains: A {1,2}, B {1,2,3}, C {3,4}, D {2,3}, E {2,3,4}. Constraints: A ≠ B, B < E, A < D, D < C, E ≠ C, E ≠ D, E − A is odd.]


Example: eliminating C

r1 : C ≠ E
  C E
  3 2
  3 4
  4 2
  4 3

r2 : C > D
  C D
  3 2
  4 2
  4 3

r3 : r1 ⋈ r2
  C D E
  3 2 2
  3 2 4
  4 2 2
  4 2 3
  4 3 2
  4 3 3

r4 : π{D,E} r3 (the new constraint)
  D E
  2 2
  2 3
  2 4
  3 2
  3 3


Resulting network after eliminating C

[Figure: the network after eliminating C. Domains: A {1,2}, B {1,2,3}, D {2,3}, E {2,3,4}. Constraints: A ≠ B, B < E, A < D, E ≠ D, r4(E, D), E − A is odd.]
