
MIT and James Orlin 1

NP-completeness

204.302 in 2005

MIT and James Orlin 2

Complexity

How can we show a problem is efficiently solvable?

– We can show it constructively, by providing an algorithm and showing that it solves the problem efficiently.

How can we show a problem is not efficiently solvable?

– Proving this negative is the aim of complexity theory.

MIT and James Orlin 3

What do we mean by a problem?

Consider: maximize 3x + 4y subject to 4x + 5y ≤ 23, x ≥ 0, y ≥ 0.

This is an “instance” of linear programming.

When we say the linear programming problem, we refer to the collection of all instances.

Similarly, the shortest path problem refers to the collection of all instances of finding shortest paths.

The traveling salesman problem refers to all instances of the traveling salesman problem.

MIT and James Orlin 4

Instances versus problem

• Complexity theory addresses the following question: when is a problem hard?

• Note: it does not deal with the question of whether any particular instance is hard.

MIT and James Orlin 5

General Fact

• As problem instances get larger, the time to solve the problem grows.

• But how fast?

• We say that a problem is solvable in polynomial time if there is a polynomial p(n) such that the time to solve an instance of size n is at most p(n).

MIT and James Orlin 6

Some examples

• Finding a word in a dictionary with n entries: time log n, depending on assumptions (see the sketch below).

• Sorting n items: time n log n.

• Finding the shortest path from s to t: time n^2.

All solvable in polynomial time
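A minimal sketch of the first item, the log n dictionary lookup: binary search on a sorted list of n entries uses on the order of log n comparisons. (The function name and the toy word list are purely illustrative.)

```python
def lookup(sorted_words, target):
    """Binary search: roughly log2(n) comparisons on a sorted list of n words."""
    lo, hi = 0, len(sorted_words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_words[mid] == target:
            return mid                 # found the word
        elif sorted_words[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # not in the dictionary

# Example: 8 entries, at most about 3 comparisons.
print(lookup(["ant", "bee", "cat", "dog", "elk", "fox", "gnu", "hen"], "fox"))
```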

MIT and James Orlin 7

Running times as n grows

[Chart: growth rates of log n, n log n, n^2, and 2^n as n increases; the vertical axis runs from 1.0E+00 to 1.0E+18 on a logarithmic scale, and 2^n quickly dwarfs the polynomial rates.]
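The comparison behind the chart can be reproduced with a short calculation; this sketch simply tabulates the four growth rates at a few values of n.

```python
import math

# Tabulate the growth rates plotted on the slide: log n, n log n, n^2, 2^n.
print(f"{'n':>4} {'log n':>10} {'n log n':>12} {'n^2':>12} {'2^n':>17}")
for n in (10, 20, 30, 40, 50):
    print(f"{n:>4} {math.log2(n):>10.1f} {n * math.log2(n):>12.1f}"
          f" {n**2:>12} {2**n:>17}")
# 2^50 is already about 10^15 steps, while 50^2 is only 2500.
```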

MIT and James Orlin 8

Easy Problems

Easy problems are those whose running time is guaranteed to grow no faster than some polynomial in the size of the input.

Everything else is a hard problem of some description.

MIT and James Orlin 9

Polynomial Time Algorithms

We consider a problem X to be “easy” or efficiently solvable, if there is a polynomial time algorithm A for solving X.

We let P denote the class of problems solvable in polynomial time.

Problems in the class P include linear programming; therefore the assignment, transportation, and minimum cost flow problems are in P as well.

So are:
– Finding a topological order (see the sketch below)
– Finding a critical path
– Finding an Eulerian cycle
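As a sketch of the first item in this list, here is Kahn's algorithm for finding a topological order; the input representation (a node list plus an arc list) is an assumption made for the example.

```python
from collections import deque

def topological_order(nodes, arcs):
    """Kahn's algorithm: a topological order of a DAG, or None if a cycle exists."""
    indegree = {v: 0 for v in nodes}
    succ = {v: [] for v in nodes}
    for (u, v) in arcs:
        succ[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in nodes if indegree[v] == 0)   # nodes with no predecessors
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order if len(order) == len(nodes) else None    # None means the graph has a cycle

print(topological_order([1, 2, 3, 4], [(1, 2), (1, 3), (2, 4), (3, 4)]))   # e.g. [1, 2, 3, 4]
```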

MIT and James Orlin 10

To determine whether M is prime, one can divide M by every integer less than M.

The number of steps taken by one variant of the simplex algorithm on the minimum cost flow problem is 1000 n log n pivots.

Linear programming can be solved by a technique called the ellipsoid algorithm in at most n log(M) iterations, where each iteration takes at most 1000 n^3 steps.

Which are polynomial time algorithms?
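To see why the first algorithm is not polynomial time, here is a sketch of trial division: it performs up to M - 2 divisions, which is exponential in the size of the input, i.e. the roughly log M digits needed to write M down.

```python
def is_prime_trial_division(M):
    """Divide M by every integer from 2 to M-1: up to M - 2 divisions,
    exponential in the number of digits of M."""
    if M < 2:
        return False
    for d in range(2, M):          # up to M - 2 trial divisions
        if M % d == 0:
            return False
    return True

print(is_prime_trial_division(97))   # True, after 95 divisions
```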

MIT and James Orlin 11

Can integer programming be solved in polynomial time?

• Every algorithm that has ever been developed for integer programming takes exponential time in the worst case.

• It is generally believed that no polynomial time algorithm for integer programming exists.

• Complexity theory can be used to prove that integer programming is hard.

MIT and James Orlin 12

Hard problems in practice

• What can you say to your manager if he or she hands you a problem that is too difficult for you to solve?

• (adapted from Garey and Johnson)

MIT and James Orlin 13

I can’t find an efficient algorithm. I guess I’m too dumb.

MIT and James Orlin 14

I can’t find an efficient algorithm, because no such algorithm is possible.

MIT and James Orlin 15

I can’t find an efficient algorithm, but neither can these famous researchers.

MIT and James Orlin 16

The class NP-easy

• Consider an optimization problem X in which, for any instance I, the goal is to find a feasible solution x for I with maximum (or minimum) value f_I(x).

• We say that X is NP-easy if there is a polynomial p( ) with the following properties. For every instance I of X:

1. There is an optimal solution x for I such that size(x) < p(size(I)). “There is a small sized optimum solution.”

2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps. “One can efficiently check feasibility.”

3. The number of steps to evaluate f(y) is fewer than p(size(I) + size(y)). “One can efficiently evaluate the objective function.”

MIT and James Orlin 17

The housing problem

• 400 students applied in the lottery for a wonderful new dorm that holds 100 students.

• You have a list of pairs of incompatible students.
– No two incompatible students may be among the 100 students chosen for the dorm.
– Is there an efficient procedure for finding the list of 100 students? (A checking sketch follows.)
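Checking a proposed list is easy even though finding one appears hard. A minimal sketch, assuming the incompatible pairs are given as a set of 2-tuples of student IDs:

```python
def is_compatible(chosen, incompatible_pairs):
    """Return True if no incompatible pair appears among the chosen students."""
    chosen_set = set(chosen)
    return not any(a in chosen_set and b in chosen_set
                   for (a, b) in incompatible_pairs)

# Example with hypothetical student IDs.
print(is_compatible([1, 2, 3], {(2, 7), (5, 6)}))   # True
print(is_compatible([1, 2, 7], {(2, 7), (5, 6)}))   # False: 2 and 7 are incompatible
```

The check runs in time proportional to the number of incompatible pairs; the hard part is searching among the enormous number of possible lists of 100 students.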

MIT and James Orlin 18

0-1 Integer Programming is NP-easy

• Checking whether 0-1 integer programming is NP-easy:

1. There is an optimal solution x for I such that size(x) < p(size(I)). “There is a small sized optimum solution.” -- Every solution is an n-vector of 0’s and 1’s.

2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps. “One can efficiently check feasibility.” -- Evaluating whether a 0-1 vector is feasible means checking each constraint.

3. The number of steps to evaluate f(y) is fewer than p(size(I) + size(y)). “One can efficiently evaluate the objective function.” -- Evaluating f(x) means computing cx for the linear objective function c.
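A minimal sketch of conditions 2 and 3 for a 0-1 integer program written as max cx subject to Ax <= b with x binary; the list-of-lists data layout is an assumption made for the example.

```python
def check_solution(A, b, c, x):
    """Check feasibility of a 0-1 vector x for Ax <= b and evaluate the objective cx."""
    assert all(xi in (0, 1) for xi in x)              # an n-vector of 0's and 1's
    feasible = all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                   for row, bi in zip(A, b))          # check each constraint
    objective = sum(cj * xj for cj, xj in zip(c, x))  # evaluate cx
    return feasible, objective

# max 3x1 + 4x2 subject to 4x1 + 5x2 <= 7, x binary
print(check_solution([[4, 5]], [7], [3, 4], [1, 0]))   # (True, 3)
```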

MIT and James Orlin 19

Some More NP-easy Problems

• TSP
• Is there a small sized optimum solution?
• Can one check feasibility efficiently?
• Can one evaluate the objective function efficiently?
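For the TSP the same three checks are straightforward. A sketch, assuming the instance is a symmetric distance matrix and a proposed solution is an ordering of the cities:

```python
def check_tour(dist, tour):
    """Feasibility: the tour visits each city exactly once. Objective: total tour length."""
    n = len(dist)
    feasible = sorted(tour) == list(range(n))                   # a permutation of 0..n-1
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return feasible, length

dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
print(check_tour(dist, [0, 1, 2]))   # (True, 17)
```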

MIT and James Orlin 20

On NP-easy problems

• Theorem. If problem X is NP-easy, and if Y is a special case of X, then Y is NP-easy.

• Example. 0-1 integer programming is NP-easy.
• Capital budgeting is a special case of 0-1 integer programming. Therefore, capital budgeting is NP-easy.

“If a problem is easier than an NP-easy problem it is NP-easy.”

MIT and James Orlin 21

Other problems that are NP-easy

• Set cover problem (fire station problem)
• Capital budgeting problem
• Determining the largest prime number less than n that divides integer n
– Solutions are any numbers that divide n.
– The size of any “solution x” is log x < log n.
– A solution can be checked as a divisor in polynomial time.
• Also, any problem that is a special case of an NP-easy problem is NP-easy.
– So determining if a number is prime is NP-easy.

MIT and James Orlin 22

On NP-easy optimization problems

• Almost any optimization problem that you will ever want to solve will be NP-easy. It’s a challenge to find optimization problems that are not NP-easy.

• The next slide illustrates a problem that is not NP-easy.

MIT and James Orlin 23

A problem that is not NP-easy.

• Problem:
• INPUT: an integer n
• Optimization: find the smallest integer k such that k > n and both k and k+2 are prime numbers.

• It is possible that the size of the optimum solution, which is log k, is exponentially large in the size of the problem instance, which is log n.

• So, this violates condition 1: the size of the optimum solution may be exponential in the size of the problem instance.

MIT and James Orlin 24

NP-easy

• Almost every optimization problem that you will ever see is NP-easy.

• Question. Can the NP-easy problems be solved in polynomial time? This is a very famous unsolved problem in mathematics. It is often represented as “Does P = NP?”

• Amazing Fact 1: If 0-1 integer programming can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time.

• Amazing Fact 2: If the traveling salesman problem (or the capital budgeting problem, or the independent set problem) can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time.

MIT and James Orlin 25

NP-equivalence and other classes

• We say that a problem is NP-equivalent if it is both NP-hard and NP-easy.

[Venn diagram relating the classes NP-hard, NP-easy, NP-equivalent, NP-complete, and P.]

MIT and James Orlin 26

NP-complete problems

There is a set of problems that are called NP-complete.

These problems are equivalent to each other in terms of their solvability.

No efficient algorithm is known for solving any of them.

If an efficient algorithm is found for one of them, it can be adapted to solve all of them.

MIT and James Orlin 27

No efficient algorithms

OK, suppose there is no known way to find an optimal solution efficiently.

We still want to obtain some solution, so we need strategies for finding good feasible solutions.

This is why heuristics exist.

MIT and James Orlin 28

Recognizing an NP-complete problem

MIT and James Orlin 29

The class NP-hard

• An oracle function is a “black box” for solving an optimization problem.

• An oracle function for integer programming would take an integer programming instance as input and produce an optimal solution in 1 time unit.

• Let X be an optimization problem. We say that X is NP-hard if every NP-easy problem can be solved in polynomial time if one is permitted to use an oracle function for X.

• Theorem: 0-1 integer programming is NP-hard.
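To illustrate the definition, here is a hypothetical sketch that solves the housing problem with a single call to an imagined 0-1 integer programming oracle; the oracle’s name, solve_01_ip, and its interface are assumptions made for this example.

```python
def housing_by_oracle(num_students, capacity, incompatible_pairs, solve_01_ip):
    """Solve the housing problem with one call to a hypothetical 0-1 IP oracle.

    Assumed oracle interface: solve_01_ip(A, b, c) returns an optimal 0-1 vector
    for max cx subject to Ax <= b, or None if the instance is infeasible.
    Variables: x_i = 1 if student i is chosen for the dorm.
    """
    A = [[1] * num_students, [-1] * num_students]   # sum x_i = capacity, as two inequalities
    b = [capacity, -capacity]
    for (i, j) in incompatible_pairs:               # x_i + x_j <= 1 for each incompatible pair
        row = [0] * num_students
        row[i] = 1
        row[j] = 1
        A.append(row)
        b.append(1)
    c = [0] * num_students                          # any feasible solution will do
    x = solve_01_ip(A, b, c)                        # counted as a single step
    return None if x is None else [i for i, xi in enumerate(x) if xi == 1]
```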

MIT and James Orlin 30

On NP-hardness

• Theorem. If problem X is NP-hard, and if X is a special case of Y, then Y is NP-hard.

• Example. 0-1 integer programming is NP-hard.
• 0-1 integer programming is a special case of integer programming. Therefore, integer programming is NP-hard.

“If a problem is harder than an NP-hard problem it is NP-hard.”

MIT and James Orlin 31

Some examples of NP-hard problems

• Traveling Salesman Problem
• Capital Budgeting Problem (knapsack problem)
• Independent Set Problem
• Fire Station Problem (set covering)
• 0-1 Integer Programming
• Integer Programming
• Project management with resource constraints
• and thousands more

MIT and James Orlin 32

Proving that a problem is hard

• “To prove that problem X is hard, find a problem Y that you know is hard, and show that Y is easier than X.”

• To prove that a problem X is NP-hard, start with a “similar” NP-hard problem Y. Then show that Y can be solved in polynomial time if one permits X to be used as a subroutine, counting each call to X as taking 1 step.

MIT and James Orlin 33

On Proving NP-hardness results

• Suppose that we know that the problem of determining whether a graph has a hamiltonian cycle is NP-hard. We will show that the problem of finding a hamiltonian path is also NP-hard.

• A hamiltonian cycle is a cycle that passes through each node exactly once.

• A hamiltonian path is a path that includes every node of G.

• Proof technique: Start with any instance of the hamiltonian cycle problem. We denote this instance as G = (N, A).

• Transformation proofs (these are standard). Create an instance G’ = (N’, A’) for the hamiltonian path problem from G with the following property: there is a hamiltonian path in G’ if and only if there is a hamiltonian cycle in G.

MIT and James Orlin 34

A transformation

[Figure: the original network and the transformed network. In the transformed network, node 1 of the original network was split into nodes 1 and 21, and nodes 0 and 22 were connected to the split nodes.]
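A sketch of the transformation itself, assuming the graph is given as a node list and an arc list; the labels 21, 0, and 22 follow the figure, and in general any three unused labels would do.

```python
def ham_cycle_to_ham_path(nodes, arcs, split=1, new_a=0, new_b=22):
    """Transform a hamiltonian-cycle instance G into a hamiltonian-path instance G'.

    Node `split` is copied as node 21; each arc touching `split` also touches the copy.
    New nodes `new_a` and `new_b` are attached only to `split` and to the copy, so
    G has a hamiltonian cycle if and only if G' has a hamiltonian path.
    """
    copy = 21                                         # label taken from the figure
    new_nodes = nodes + [copy, new_a, new_b]
    new_arcs = list(arcs)
    for (u, v) in arcs:                               # duplicate arcs incident to the split node
        if u == split:
            new_arcs.append((copy, v))
        elif v == split:
            new_arcs.append((u, copy))
    new_arcs += [(new_a, split), (copy, new_b)]       # the pendant arcs (0, 1) and (21, 22)
    return new_nodes, new_arcs
```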

MIT and James Orlin 35

Claim 1: If there is a hamiltonian cycle in the original graph then there is a hamiltonian path in the transformed graph.

[Figure: a hamiltonian cycle in the original graph G and the corresponding hamiltonian path in the transformed graph, with nodes 0, 1, 21, and 22.]

Take the two arcs in G incident to node 1. Connect one to node 1 and the other to node 21. Add in arcs (0, 1) and (21, 22).

MIT and James Orlin 36

Claim 2: If there is a hamiltonian path in the transformed graph then there is a hamiltonian cycle in the original graph.

[Figure: a hamiltonian path in the transformed graph and the corresponding hamiltonian cycle in the original graph G.]

Delete the two arcs (0, 1) and (21, 22). Then take the other arcs in G’ incident to 1 and 21, and make them incident to node 1 in G.

MIT and James Orlin 37

Proofs of NP-hardness transformations have two parts

• Original instance I, and transformed instance I’.

• Part 1. An optimal (or feasible) solution for I induces an optimal (or feasible) solution for I’.

• Part 2. An optimal (or feasible) solution for I’ induces an optimal (or feasible) solution for I.

• Formulating problems as integer programs illustrates the type of transformation.

• Note: transformations can be difficult to develop.
• Great reference: Garey and Johnson 1979.

MIT and James Orlin 38

Problem reduction

If a problem P1 polynomially reduces to problem P2, and some polynomial-time algorithm solves P2, then some polynomial-time algorithm solves P1.

Recall, for example, how the dual problem is constructed to help solve the primal problem in linear programming.

MIT and James Orlin 39

The real benefits of problem reduction 1

If you can reduce your problem, in polynomial time, to one that is easy, then the original problem is also easy.

You can of course fall into the trap of reducing your problem to one that is harder to solve.

That doesn’t necessarily mean the original problem was hard in the first place.

MIT and James Orlin 40

The real benefits of problem reduction 2

If, on the other hand, you can show that your problem is at least as hard as one that is known to be NP-complete, then you won’t waste time searching for an efficient algorithm that finds its optimal solution.

Rather, you will find a heuristic that does the job to the satisfaction of your employer and move on to the next job.

MIT and James Orlin 41

Complexity theory takes a worst case perspective.

For example, any problem solvable with an algorithm whose running time is O(n^100) is considered easy, despite this enormous running time.

It is still possible for an instance of an NP-complete problem to be solved faster than an instance of an easy problem of comparable size.