Ahuja, Network Flows


WORKING PAPER SLOAN SCHOOL OF MANAGEMENT

NETWORK FLOWS

Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin

Sloan W.P. No. 2059-88

August 1988 Revised: December, 1988

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 50 MEMORIAL DRIVE CAMBRIDGE, MASSACHUSETTS 02139


NETWORK FLOWS

Ravindra K. Ahuja*, Thomas L. Magnanti, and James B. Orlin
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139

* On leave from Indian Institute of Technology, Kanpur - 208016, INDIA


NETWORK FLOWS

OVERVIEW

1.  Introduction
    1.1  Applications
    1.2  Complexity Analysis
    1.3  Notation and Definitions
    1.4  Network Representations
    1.5  Search Algorithms
    1.6  Developing Polynomial Time Algorithms

2.  Basic Properties of Network Flows
    2.1  Flow Decomposition Properties and Optimality Conditions
    2.2  Cycle Free and Spanning Tree Solutions
    2.3  Networks, Linear and Integer Programming
    2.4  Network Transformations

3.  Shortest Paths
    3.1  Dijkstra's Algorithm
    3.2  Dial's Implementation
    3.3  R-Heap Implementation
    3.4  Label Correcting Algorithms
    3.5  All Pairs Shortest Path Algorithm

4.  Maximum Flows
    4.1  Labeling Algorithm and the Max-Flow Min-Cut Theorem
    4.2  Decreasing the Number of Augmentations
    4.3  Shortest Augmenting Path Algorithm
    4.4  Preflow-Push Algorithms
    4.5  Excess-Scaling Algorithm

5.  Minimum Cost Flows
    5.1  Duality and Optimality Conditions
    5.2  Relationship to Shortest Path and Maximum Flow Problems
    5.3  Negative Cycle Algorithm
    5.4  Successive Shortest Path Algorithm
    5.5  Primal-Dual and Out-of-Kilter Algorithms
    5.6  Network Simplex Algorithm
    5.7  Right-Hand-Side Scaling Algorithm
    5.8  Cost Scaling Algorithm
    5.9  Double Scaling Algorithm
    5.10 Sensitivity Analysis
    5.11 Assignment Problem

6.  Reference Notes

References

Network Flows

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research and applied mathematics.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories. Indeed, network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, primal-dual methods of linear and nonlinear programming, and polyhedral methods of combinatorial optimization. In addition, networks have served as the major prototype for several theoretical domains (for example, the field of matroids) and as the core model for a wide variety of min/max duality results in discrete mathematics.

Moreover, network optimization has served as a fertile meeting ground for ideas from optimization and computer science. Many results in network optimization are routinely used to design and evaluate computer systems, and ideas from computer science concerning data structures and efficient data manipulation have had a major impact on the design and implementation of many network optimization algorithms.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

Applications
Basic Properties of Network Flows
Shortest Path Problems
Maximum Flow Problems
Minimum Cost Flow Problems
Assignment Problems

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for the specialists, but also serves as an introduction and summary to the non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussions to the problems listed above. Some important generalizations of these problems such as (i) the generalized network flows; (ii) the multicommodity flows; and (iii) the network design, will not be covered in our survey. We, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and various ways to represent networks quantitatively; (iii) a few basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications

Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate a range of applications and to be suggestive of how network flow problems arise in practice; a more extensive survey would take us far beyond the scope of our discussion.

To illustrate the breadth of network applications, we consider some models requiring solution techniques that we will not describe in this chapter. For the purposes of this discussion, we will consider four different types of networks arising in practice:

Physical networks (streets, railbeds, pipelines, wires)
Route networks
Space-time networks (scheduling networks)
Derived networks (through problem transformations)

These four categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications.

Network flow models are also used for several purposes:

Descriptive modeling (answering "what is?" questions)
Predictive modeling (answering "what will be?" questions)
Normative modeling (answering "what should be?" questions, that is, performing optimization)

We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.

The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i). If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|. The minimum cost network flow problem can be formulated as follows:

    Minimize  ∑_{(i,j) ∈ A} c_ij x_ij                                      (1.1a)

subject to

    ∑_{j:(i,j) ∈ A} x_ij − ∑_{j:(j,i) ∈ A} x_ji = b(i),  for all i ∈ N,   (1.1b)

    l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.                               (1.1c)
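The formulation (1.1a)-(1.1c) can be made concrete with a small sketch. The instance below is hypothetical (the arc data and supplies are our own invention), and the two helper functions simply evaluate the objective and check a candidate flow x against the mass balance constraints (1.1b) and the flow bound constraints (1.1c):

```python
# Hypothetical three-node instance; arcs map (i, j) to (c_ij, l_ij, u_ij).
arcs = {(1, 2): (4, 0, 3), (1, 3): (2, 0, 2), (2, 3): (1, 0, 2)}
b = {1: 3, 2: 0, 3: -3}  # node 1 supplies 3 units, node 3 demands 3 units

def is_feasible(x):
    """True if flow x satisfies (1.1b) mass balance and (1.1c) flow bounds."""
    if any(not (l <= x[a] <= u) for a, (c, l, u) in arcs.items()):
        return False                      # (1.1c) violated on some arc
    for i in b:                           # (1.1b): outflow - inflow = b(i)
        outflow = sum(x[a] for a in arcs if a[0] == i)
        inflow = sum(x[a] for a in arcs if a[1] == i)
        if outflow - inflow != b[i]:
            return False
    return True

def total_cost(x):
    """Objective (1.1a): sum of c_ij * x_ij over all arcs."""
    return sum(arcs[a][0] * x[a] for a in arcs)

x = {(1, 2): 1, (1, 3): 2, (2, 3): 1}
print(is_feasible(x), total_cost(x))  # a feasible flow of cost 9
```

Finding a feasible flow of minimum cost is, of course, the hard part; the algorithms surveyed in Section 5 address it.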

Whenever we produce in period t (i.e., x_0t > 0), no matter how much or how little, we incur a fixed cost. In addition we may incur a per unit production cost c_t in period t and a per unit inventory cost h_t for carrying any unit of inventory from period t to period t + 1. Hence, the cost on each arc for this problem is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs). Consequently, the objective function for the problem is concave.

As we indicate in Section 2.2, any such concave cost network flow problem always has a special type of optimum solution known as a spanning tree solution. This problem's spanning tree solution decomposes into disjoint directed paths; the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to meet the demand for an integral number of contiguous periods. Moreover, in no period do we both carry inventory from the previous period and produce.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T+1 and, for every pair of nodes i and j with i < j, an arc (i, j).
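A sketch of this reduction in Python follows. The arc cost definition falls outside this excerpt, so we use the standard construction (an assumption on our part): arc (i, j) of G' carries the fixed plus per unit cost of producing, in period i, the demand of periods i through j-1, plus the holding cost of carrying each later period's demand in inventory until it is needed. All data below are hypothetical.

```python
T = 4
d = [0, 2, 3, 2, 4]      # d[t]: demand in period t (1-indexed; index 0 unused)
F = [0, 10, 10, 10, 10]  # F[t]: fixed production cost in period t (assumed symbol)
c = [0, 1, 1, 3, 1]      # c[t]: per unit production cost in period t
h = [0, 1, 1, 1, 1]      # h[t]: cost of holding one unit from period t to t+1

def arc_cost(i, j):
    """Cost of producing in period i to meet the demand of periods i..j-1."""
    total = F[i] + c[i] * sum(d[i:j])
    for t in range(i + 1, j):
        # a unit demanded in period t is held during periods i, ..., t-1
        total += d[t] * sum(h[i:t])
    return total

# Shortest path from node 1 to node T+1 in the acyclic network G';
# dist[j] is the cheapest way to satisfy the demand of periods 1..j-1.
INF = float("inf")
dist = [INF] * (T + 2)
dist[1] = 0
for j in range(2, T + 2):
    dist[j] = min(dist[i] + arc_cost(i, j) for i in range(1, j))
print(dist[T + 1])  # minimum total production plus inventory cost
```

Because every node pair with i < j gives one arc, G' has O(T²) arcs, and the acyclic shortest path computation above runs in O(T²) time.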

Dijkstra's algorithm begins by assigning the source node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source node along a path whose internal nodes are all permanently labeled. The algorithm selects a node i with the minimum temporary label, makes it permanent, and scans arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanently labeled. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following algorithmic representation is a basic implementation of Dijkstra's algorithm.

algorithm DIJKSTRA;
begin
    P := {s};  T := N - {s};
    d(s) := 0 and pred(s) := 0;
    d(j) := c_sj and pred(j) := s if (s, j) ∈ A, and d(j) := ∞ otherwise;
    while P ≠ N do
    begin
        (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
        P := P ∪ {i};  T := T - {i};
        (distance update) for each (i, j) ∈ A(i) do
            if d(j) > d(i) + c_ij then d(j) := d(i) + c_ij and pred(j) := i;
    end;
end;
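For readers who prefer executable code, here is a direct Python transcription of the pseudocode above. It is a sketch: the encoding of the network as a dict of arc lengths, and the use of float("inf") for the "otherwise" label, are our choices, not the paper's.

```python
def dijkstra(nodes, cost, s):
    """Basic O(n^2) Dijkstra; cost maps arcs (i, j) to nonnegative lengths."""
    INF = float("inf")
    adj = {i: [] for i in nodes}          # adjacency lists A(i)
    for (i, j) in cost:
        adj[i].append(j)
    d = {j: cost.get((s, j), INF) for j in nodes}
    pred = {j: (s if (s, j) in cost else None) for j in nodes}
    d[s], pred[s] = 0, None
    P, T = {s}, set(nodes) - {s}
    while P != set(nodes):
        # node selection: minimum temporary label
        i = min(T, key=lambda j: d[j])
        P.add(i)
        T.remove(i)
        # distance update over the arc list A(i)
        for j in adj[i]:
            if d[j] > d[i] + cost[(i, j)]:
                d[j] = d[i] + cost[(i, j)]
                pred[j] = i
    return d, pred

arc_lengths = {(1, 2): 2, (1, 3): 8, (2, 3): 5, (2, 4): 3, (4, 3): 1}
d, pred = dijkstra([1, 2, 3, 4], arc_lengths, 1)
# trace a shortest path to node 3 back through the predecessor indices
path, v = [3], 3
while pred[v] is not None:
    v = pred[v]
    path.append(v)
print(d[3], path[::-1])  # prints 6 [1, 2, 4, 3]
```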

The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N. The algorithm updates these indices to ensure that pred(i) is the last node prior to i on the (tentative) shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each node in the path (except j) belongs to P. Then it is possible to transfer the node i in T with the smallest label d(i) to P for the following reason: any path P from the source to node i must contain a first node k that is in T. However, node k must be at least as far away from the source as node i since its label is at least that of node i; furthermore, the segment of the path P between node k and node i has a nonnegative length because arc lengths are nonnegative. This observation shows that the length of path P is at least d(i) and hence it is valid to permanently label node i.

After the algorithm has permanently labeled node i, the temporary labels of some nodes in T - {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes. We must thus scan all of the arcs (i, j) in A(i); if d(j) > d(i) + c_ij, then setting d(j) := d(i) + c_ij updates the labels of nodes in T - {i}.

The computational time for this algorithm can be split into the time required by its two basic operations--selecting nodes and updating distances. In an iteration, the algorithm requires O(n) time to identify the node i with minimum temporary label and takes O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n²) time for selecting nodes and O(∑_{i ∈ N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n²) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, they have, using clever data structures, suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. In the following discussion, we describe Dial's algorithm, which is currently comparable to the best label setting algorithm in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the perspective of worst-case analysis. (A more complex version of R-heaps gives the best worst-case performance for most choices of the parameters n, m, and C.)

3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we must ask the following question. Instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in a sorted fashion? Dial's algorithm tries to accomplish this objective, and reduces the algorithm's computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with smallest temporary label d(i), and while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node since arc lengths are nonnegative.

FACT 3.1 suggests the following scheme for node selection. We maintain nC+1 buckets numbered 0, 1, 2, ..., nC. Bucket k stores each node whose temporary distance label is k. Recall that C represents the largest arc length in the network and, hence, nC is an upper bound on the distance labels of all the nodes. In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum. One by one, we delete these nodes from the bucket, making them permanent and scanning their arc lists to update distance labels of adjacent nodes. We then resume the scanning of higher numbered buckets in increasing order to select the next nonempty bucket.

By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket very efficiently; in fact, in O(1) time, i.e., a time bounded by some constant.

bounded by somelinkedlist.

One implemention

uses a data structure knov\T

a doubly

In this data structure,

we

order the content of each bucket arbitrarily, storingto its

two pointers

for each entry:

one pointer

immediate predecessor and one

to its

immediate successor. Doing so permitsthe topmostrelabel

us,

by rearranging the

pointers, to select easilya node.

node from the

list,

add

a

bottommost node, or deletelabel,

Now,it

as

we

nodes and decrease any node's temporary distance

we move

from a

higher index bucket to a lower index bucket; this transfer requires 0(1) time.

Consequently, this algorithm runs in O(m + nC) time and uses nC+1 buckets. The following fact allows us to reduce the number of buckets to C+1.

FACT 3.2. If d(i) is the distance label that the algorithm designates as permanent at the beginning of an iteration, then at the end of that iteration d(j) ≤ d(i) + C for each finitely labeled node j in T.
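A sketch of Dial's implementation in Python follows. Two simplifications relative to the text are assumptions of ours: for the O(1) bucket operations the paper uses a doubly linked list, whereas this sketch leaves stale entries behind when a label decreases and skips them on arrival; and it keeps all nC+1 buckets rather than the C+1 wrap-around buckets that FACT 3.2 permits.

```python
from collections import deque

def dial_shortest_paths(nodes, cost, s, C):
    """Dial's implementation of Dijkstra's algorithm with distance buckets.
    C must be an upper bound on the arc lengths in cost."""
    n = len(nodes)
    INF = n * C + 1                      # larger than any finite label
    d = {j: INF for j in nodes}
    d[s] = 0
    buckets = [deque() for _ in range(n * C + 1)]
    buckets[0].append(s)
    adj = {i: [] for i in nodes}
    for (i, j), c in cost.items():
        adj[i].append((j, c))
    permanent = set()
    for k in range(n * C + 1):           # scan buckets in increasing order
        while buckets[k]:
            i = buckets[k].popleft()
            if i in permanent or d[i] != k:
                continue                 # stale entry from an earlier relabel
            permanent.add(i)             # label d(i) = k becomes permanent
            for j, c in adj[i]:
                if d[j] > k + c:
                    d[j] = k + c
                    buckets[d[j]].append(j)  # move j to a lower indexed bucket
    return d

arc_lengths = {(1, 2): 2, (1, 3): 8, (2, 3): 5, (2, 4): 3, (4, 3): 1}
print(dial_shortest_paths([1, 2, 3, 4], arc_lengths, 1, C=8))
```

The lazy-deletion variant preserves the O(m + nC) bound, since each relabel appends at most one bucket entry and there are at most m relabels.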

The bottleneck operation is the number of nonsaturating pushes. Researchers have suggested improvements based on examining nodes in some specific order, or on using clever data structures. We describe one such improvement, called the wave algorithm. The wave algorithm is the same as the Improve-Approximation procedure, but it selects active nodes for the push/relabel step in a specific order. The algorithm uses the acyclicity of the admissible network. As is well known, the nodes of an acyclic network can be ordered so that for each arc (i, j) in the network, node i appears before node j in the ordering.
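Such an ordering is a topological order, and it can be computed in O(m) time. A minimal sketch follows (Kahn's algorithm, which repeatedly removes a node with no incoming arcs; this is our illustration of the ordering, not a procedure from the paper):

```python
from collections import deque

def topological_order(nodes, arcs):
    """Order the nodes of an acyclic network so that every arc (i, j)
    has node i before node j in the returned list."""
    indeg = {v: 0 for v in nodes}
    succ = {v: [] for v in nodes}
    for i, j in arcs:
        succ[i].append(j)
        indeg[j] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in succ[i]:                # removing i lowers each successor's in-degree
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order  # len(order) < len(nodes) would indicate a cycle
```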

The approach appears to be similar to the shortest augmenting path algorithm for the maximum flow problem; this algorithm also requires O(n) time on average to find each augmenting path. The double scaling algorithm uses the following Improve-Approximation procedure.

procedure IMPROVE-APPROXIMATION-II(ε, x, π);
begin
    set x := 0 and compute node imbalances;
    π(j) := π(j) + ε, for all j ∈ N2;
    Δ := 2^⌈log U⌉;
    while the network contains an active node do
    begin
        S(Δ) := { i ∈ N1 ∪ N2 : e(i) ≥ Δ };
        while S(Δ) ≠ ∅ do
        begin (Δ-scaling phase)
            select a node k in S(Δ) and delete it from S(Δ);
            determine an admissible path P from node k to some node l with e(l) < 0;

sparse and nondense. Further, this algorithm does not use any complex data structures. Scaling excesses by a factor of log U/log log U and pushing flow from a large excess node with the highest distance label, Ahuja, Orlin and Tarjan [1988] reduced the number of nonsaturating pushes to O(n² log U/log log U). Ahuja, Orlin and Tarjan [1988] obtained another variation of the original excess scaling algorithm which further reduces the number of nonsaturating pushes to O(n² √log U). The use of the dynamic tree data structure improves the running times of the excess-scaling algorithm and its variations, though the improvements are not as dramatic as they have been for Dinic's and the FIFO preflow push algorithms. For example, the O(nm + n² √log U) algorithm improves to O(nm log((n √log U)/m + 2)) by using dynamic trees, as shown in Ahuja, Orlin and Tarjan [1988].

Tarjan [1987] conjectures that any preflow push algorithm that performs p nonsaturating pushes can be implemented in O(nm log (2 + p/nm)) time using dynamic trees. Although this conjecture is true for all known preflow push algorithms, it is still open for the general case.

Developing a polynomial-time primal simplex algorithm for the maximum flow problem has been an outstanding open problem for quite some time. Recently, Goldfarb and Hao [1988] developed such an algorithm. This algorithm is essentially based on selecting pivot arcs so that flow is augmented along a shortest path from the source to the sink. As one would expect, this algorithm performs O(nm) pivots and can be implemented in O(n²m) time. Tarjan [1988] recently showed how to implement this algorithm in O(nm log n) time using dynamic trees.

Researchers have also investigated the following special cases of the maximum flow problems: (i) the maximum flow problem on unit capacity networks (i.e., U = 1); (ii) unit capacity simple networks (i.e., U = 1, and every node in the network, except source and sink, has one incoming arc or one outgoing arc); (iii) bipartite networks; and (iv) planar networks. Observe that the maximum flow value for unit capacity networks is less than n, and so the shortest augmenting path algorithm will solve these problems in O(nm) time. Thus, these problems are easier to solve than are problems with large capacities. Even and Tarjan [1975] showed that Dinic's algorithm solves the maximum flow problem on unit capacity networks in O(n^(2/3) m) time and on unit capacity simple networks in O(n^(1/2) m) time. Orlin and Ahuja [1987] have achieved the same time bounds using a modification of the shortest augmenting path algorithm. Both of these algorithms rely on ideas contained in Hopcroft and Karp's [1973] algorithm for maximum bipartite matching.

Fernandez-Baca and Martel [1987] have generalized these ideas for networks with small integer capacities.

Versions of the maximum flow algorithms run considerably faster on bipartite networks G = (N1 ∪ N2, A).

Finally, we discuss two important generalizations of the maximum flow problem: (i) the multi-terminal maximum flow problem; and (ii) the maximum dynamic flow problem.

In the multi-terminal flow problem, we wish to determine the maximum flow value between every pair of nodes. Gomory and Hu [1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow algorithm. These results, however, do not apply to the multi-terminal maximum flow problem on directed networks.

In the simplest version of the maximum dynamic flow problem, we associate with each arc (i, j) in the network a number t_ij denoting the time needed to traverse that arc. The objective is to send the maximum possible flow from the source node to the sink node within a given time period T. Ford and Fulkerson [1958] first showed that the maximum dynamic flow problem can be solved by solving a minimum cost flow problem. (Ford and Fulkerson [1962] give a nice treatment of this problem). Orlin [1983] has considered infinite horizon dynamic flow problems in which the objective is to minimize the average cost per period.

6.4 Minimum Cost Flow Problem

The minimum cost flow problem has a rich history. The classical transportation problem, a special case of the minimum cost flow problem, was posed and solved (though incompletely) by Kantorovich [1939], Hitchcock [1941], and Koopmans [1947]. Dantzig [1951] developed the first complete solution procedure for the transportation problem by specializing his simplex algorithm for linear programming. He observed the spanning tree property of the basis and the integrality property of the optimum solution. Later his development of the upper bounding technique for linear programming led to an efficient specialization of the simplex algorithm for the minimum cost flow problem. Dantzig's book [1962] discusses these topics.

Ford and Fulkerson [1956, 1957] suggested the first combinatorial algorithms for the uncapacitated and capacitated transportation problem; these algorithms are known as the primal-dual algorithms. Ford and Fulkerson [1962] describe the primal-dual algorithm for the minimum cost flow problem.

Jewell [1958], Iri [1960] and Busacker and Gowen [1961] independently discovered the successive shortest path algorithm. These researchers showed how to solve the minimum cost flow problem as a sequence of shortest path problems with arbitrary arc lengths. Tomizawa [1971] and Edmonds and Karp [1972] independently pointed out that if the computations use node potentials, then these algorithms can be implemented so that the shortest path problems have nonnegative arc lengths.

Minty [1960] and Fulkerson [1961] independently discovered the out-of-kilter algorithm. The negative cycle algorithm is credited to Klein [1967]. Helgason and Kennington [1977] and Armstrong, Klingman and Whitman [1980] describe the specialization of the linear programming dual simplex algorithm for the minimum cost flow problem (which is not discussed in this chapter).

Each of these algorithms performs iterations that can (apparently) not be polynomially bounded. Zadeh [1973a] describes one such example on which each of several algorithms -- the primal simplex algorithm with Dantzig's pivot rule, the dual simplex algorithm, the negative cycle algorithm (which augments flow along a most negative cycle), the successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter algorithm -- performs an exponential number of iterations. Zadeh [1973b] has also described more pathological examples for network algorithms.

The fact that one example is bad for many network algorithms suggests an insightful inter-relationship among the algorithms. The paper by Zadeh [1979] showed this relationship by pointing out that each of the algorithms just mentioned are indeed equivalent in the sense that they perform the same sequence of augmentations provided ties are broken using the same rule. All these algorithms essentially consist of identifying shortest paths between appropriately defined nodes and augmenting flow along these paths. Further, these algorithms obtain shortest paths using a method that can be regarded as an application of Dijkstra's algorithm.

The network simplex algorithm and its practical implementations have been most popular with operations researchers. Johnson [1966] suggested the first tree manipulating data structure for implementing the simplex algorithm. The first implementations using these ideas, due to Srinivasan and Thompson [1973] and Glover, Karney, Klingman and Napier [1974], significantly reduced the running time of the simplex algorithm. Glover, Klingman and Stutz [1974], Bradley, Brown and Graves [1977], and Barr, Glover, and Klingman [1979] subsequently discovered improved data structures. The book of Kennington and Helgason [1980] is an excellent source for references and background material concerning these developments.

Researchers have conducted extensive studies to determine the most effective pricing strategy, i.e., selection of the entering variable. These studies show that the choice of the pricing strategy has a significant effect on both solution time and the number of pivots required to solve minimum cost flow problems. The candidate list strategy we described is due to Mulvey [1978a]. Goldfarb and Reid [1977], Bradley, Brown and Graves [1978], Grigoriadis and Hsu [1979], Gibby, Glover, Klingman and Mead [1983] and Grigoriadis [1986] have described other strategies that have been effective in practice. It appears that the best pricing strategy depends both upon the network structure and the network size.

Experience with solving large scale minimum cost flow problems has established that more than 90% of the pivoting steps in the simplex method can be degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer [1977] and Grigoriadis [1986]). Thus, degeneracy is both a computational and a theoretical issue. The strongly feasible basis technique, proposed by Cunningham [1976] and independently by Barr, Glover and Klingman [1977a, 1977b, 1978], has contributed on both fronts. Computational experience has shown that maintaining a strongly feasible basis substantially reduces the number of degenerate pivots.

On the theoretical front, the use of this technique led to a finitely converging primal simplex algorithm. Orlin [1985] showed, using a perturbation technique, that for integer data an implementation of the primal simplex algorithm that maintains a strongly feasible basis performs O(nmCU) pivots when used with any arbitrary pricing strategy and O(nmC log (mCU)) pivots when used with Dantzig's pricing strategy.

The strongly feasible basis technique prevents cycling during a sequence of consecutive degenerate pivots, but the number of consecutive degenerate pivots may be exponential. This phenomenon is known as stalling. Cunningham [1979] described an example of stalling and suggested several rules for selecting the entering variable to avoid stalling. One such rule is the LRC (Least Recently Considered) rule, which orders the arcs in an arbitrary, but fixed, manner. The algorithm then examines the arcs in the wrap-around fashion, each iteration starting at a place where it left off earlier, and introduces the first eligible arc into the basis. Cunningham showed that this rule admits at most nm consecutive degenerate pivots. Goldfarb, Hao and Kai [1987] have described more anti-stalling pivot rules for the minimum cost flow problem.

Researchers have also been interested in developing polynomial-time simplex algorithms for the minimum cost flow problem or its special cases. The only polynomial time-simplex algorithm for the minimum cost flow problem is a dual simplex algorithm due to Orlin [1984]; this algorithm performs O(n³ log n) pivots for the uncapacitated minimum cost flow problem. Developing a polynomial-time primal simplex algorithm for the minimum cost flow problem is still open.

However, researchers have developed such algorithms for the shortest path problem, the maximum flow problem, and the assignment problem: Dial et al. [1979], Zadeh [1979], Orlin [1985], Akgul [1985a], Goldfarb, Hao and Kai [1986] and Ahuja and Orlin [1988] for the shortest path problem; Goldfarb and Hao [1988] for the maximum flow problem; and Roohy-Laleh [1980], Hung [1983], Orlin [1985], Akgul [1985b] and Ahuja and Orlin [1988] for the assignment problem.

The relaxation algorithms proposed by Bertsekas and his associates are other attractive algorithms for solving the minimum cost flow problem and its generalization. For the minimum cost flow problem, this algorithm maintains a pseudoflow satisfying the optimality conditions. The algorithm proceeds by either (i) augmenting flow from an excess node to a deficit node along a path consisting of arcs with zero reduced cost, or (ii) changing the potentials of a subset of nodes. In the latter case, it resets flows on some arcs to their lower or upper bounds so as to satisfy the optimality conditions; however, this flow assignment might change the excesses and deficits at nodes. The algorithm operates so that each change in the node potentials increases the dual objective function value and when it finally determines the optimum dual objective function value, it has also obtained an optimum primal solution. This relaxation algorithm has exhibited nice empirical behavior. Bertsekas [1985] suggested the relaxation algorithm for the minimum cost flow problem (with integer data). Bertsekas and Tseng [1985] extended this approach for the minimum cost flow problem with real data, and for the generalized minimum cost flow problem (see Section 6.6 for a definition of this problem).

A number of empirical studies have extensively tested minimum cost flow algorithms for a wide variety of network structures, data distributions, and problem sizes. The most common problem generator is NETGEN, due to Klingman, Napier and Stutz [1974], which is capable of generating assignment, and capacitated or uncapacitated transportation and minimum cost flow problems. Glover, Karney and Klingman [1974] and Aashtiani and Magnanti [1976] have tested the primal-dual and out-of-kilter algorithms.

Helgason and Kennington [1977] and Armstrong, Klingman

and Whitmanalgorithm.

[1980]

have reported on extensive studies of the dual simplexsubject of

The primal simplex algorithm has been a

more rigorous,

investigation; studies conducted

by Glover, Kamey, Klingman and Napier [1974]

Glover,

Kamey and Klingmanand Hsu[1988]

[1974], Bradley, Brov^Ti

and Graves

[1977],

Mulvey

[1978b], Grigoriadis

[1979]

and Grigoriadis [1986] are noteworthy. Bertsekasresults for the relaxation algorithm.

and Tseng

have presented computational


In view of Zadeh's [1979] result, we would expect that the successive shortest path algorithm, the primal-dual algorithm, the out-of-kilter algorithm, the dual simplex algorithm, and the primal simplex algorithm with Dantzig's pivot rule should have comparable running times. By using more effective pricing strategies that determine a good entering arc without examining all arcs, we would expect that the primal simplex algorithm should outperform the other algorithms. All the computational studies have verified this expectation, and until very recently the primal simplex algorithm has been a clear winner for almost all classes of network problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm is substantially faster than the primal simplex algorithm. However, Grigoriadis [1986] finds his new version of the primal simplex algorithm faster than the relaxation algorithm. At this time, it appears that the relaxation algorithm of Bertsekas and Tseng, and the primal simplex algorithm due to Grigoriadis, are the two fastest algorithms for solving the minimum cost flow problem in practice.

Computer codes for some minimum cost flow problems are available in the public domain. These include the primal simplex codes RNET and NETFLOW, developed by Grigoriadis and Hsu [1979] and Kennington and Helgason [1980], respectively, and the relaxation code RELAX developed by Bertsekas and Tseng [1988].

Polynomial-Time Algorithms

In the recent past, researchers have actively pursued the design of fast (weakly) polynomial and strongly polynomial-time algorithms for the minimum cost flow problem. Recall that an algorithm is strongly polynomial-time if its running time is polynomial in the number of nodes and arcs, and does not involve terms containing logarithms of C or U.

The table given in Figure 6.3 summarizes these theoretical developments in solving the minimum cost flow problem. The table reports running times for networks with n nodes and m arcs, m' of which are capacitated. It assumes that the integral cost coefficients are bounded in absolute value by C, and that the integral capacities, supplies and demands are bounded in absolute value by U. The term S() is the running time for the shortest path problem and the term M() represents the corresponding running time to solve a maximum flow problem.


Polynomial-Time Combinatorial Algorithms

#    Discoverers                                 Running Time
1    Edmonds and Karp [1972]                     O((n + m') log U S(n, m, C))
2    Rock [1980]                                 O((n + m') log U S(n, m, C))
3    Rock [1980]                                 O(n log C M(n, m, U))
4    Bland and Jensen [1985]                     O(n log C M(n, m, U))
5    Goldberg and Tarjan [1988a]                 O(nm log (n^2/m) log nC)
6    Bertsekas and Eckstein [1988]               O(n^3 log nC)
7    Goldberg and Tarjan [1987]                  O(n^3 log nC)
8    Gabow and Tarjan [1987]                     O(nm log n log U log nC)
9    Goldberg and Tarjan [1987, 1988b]           O(nm log n log nC)
10   Ahuja, Goldberg, Orlin and Tarjan [1988]    O(nm (log U/log log U) log nC) and
                                                 O(nm log log U log nC)

Strongly Polynomial-Time Combinatorial Algorithms

#    Discoverers                                 Running Time


For the sake of comparing the polynomial and strongly polynomial-time algorithms, we invoke the similarity assumption. For problems that satisfy the similarity assumption, the best bounds for the shortest path and maximum flow problems are:

Polynomial-Time Bounds                                    Discoverers
S(n, m, C) = min (m log log C, m + n sqrt(log C))         Johnson [1982], and Ahuja, Mehlhorn, Orlin and Tarjan [1988]
M(n, m, U) = nm log ((n/m) sqrt(log U) + 2)               Ahuja, Orlin and Tarjan [1987]

Strongly Polynomial-Time Bounds                           Discoverers
S(n, m) = m + n log n                                     Fredman and Tarjan [1984]
M(n, m) = nm log (n^2/m)                                  Goldberg and Tarjan [1986]

Using capacity and right-hand-side scaling, Edmonds and Karp [1972] developed the first (weakly) polynomial-time algorithm for the minimum cost flow problem. The RHS-scaling algorithm presented in Section 5.7, which is a variant of the Edmonds-Karp algorithm, was suggested by Orlin [1988]. The scaling technique initially did not capture the interest of many researchers, since they regarded it as having little practical utility. However, researchers gradually recognized that the scaling technique has great theoretical value as well as potential practical significance.

Rock [1980] developed two different bit-scaling algorithms for the minimum cost flow problem, one using capacity scaling and the other using cost scaling. This cost scaling algorithm reduces the minimum cost flow problem to a sequence of O(n log C) maximum flow problems. Bland and Jensen [1985] independently discovered a similar cost scaling algorithm.

The pseudoflow push algorithms for the minimum cost flow problem discussed in Section 5.8 use the concept of approximate optimality, introduced independently by Bertsekas [1979] and Tardos [1985]. Bertsekas [1986] developed the first pseudoflow push algorithm; this algorithm runs in pseudopolynomial time. Goldberg and Tarjan [1987] used a scaling technique on a variant of this algorithm to obtain the generic pseudoflow push algorithm described in Section 5.8. Tarjan [1984] proposed a wave algorithm for the maximum flow problem. The wave algorithm for the minimum cost flow problem described in Section 5.8, which was developed independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988], relies upon similar ideas.

Using a dynamic tree data structure in the generic pseudoflow push algorithm, Goldberg and Tarjan [1987] obtained a computational time bound of O(nm log n log nC). They also showed that the minimum cost flow problem can be solved using O(n log nC) blocking flow computations. (The description of Dinic's algorithm in Section 6.3 contains the definition of a blocking flow.) Using both finger tree (see Mehlhorn [1984]) and dynamic tree data structures, Goldberg and Tarjan [1988a] obtained an O(nm log (n^2/m) log nC) bound for the wave algorithm.

These algorithms, except the wave algorithm, require sophisticated data structures that impose a very high computational overhead. Although the wave algorithm is very practical, its worst-case running time is not very attractive. This situation has prompted researchers to investigate the possibility of improving the computational complexity of minimum cost flow algorithms without using any complex data structures. The first success in this direction was due to Gabow and Tarjan [1987], who developed a triple scaling algorithm running in O(nm log n log U log nC) time. The second success was due to Ahuja, Goldberg, Orlin and Tarjan [1988], who developed the double scaling algorithm. The double scaling algorithm, as described in Section 5.9, runs in O(nm log U log nC) time. Scaling costs by an appropriately larger factor improves the algorithm to O(nm (log U/log log U) log nC), and a dynamic tree implementation improves the bound further to O(nm log log U log nC).

For problems satisfying the similarity assumption, the double scaling algorithm is faster than all other algorithms for all network topologies except for very dense networks; in these instances, algorithms by Goldberg and Tarjan appear more attractive.

Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed other polynomial-time algorithms. Both of the algorithms are based on the negative cycle algorithm due to Klein [1967]. Goldberg and Tarjan [1988b] showed that if the negative cycle algorithm always augments flow along a minimum mean cycle (a cycle W for which (sum of cij over all (i,j) in W) / |W| is minimum), then it is strongly polynomial-time. Goldberg and Tarjan described an implementation of this approach running in O(nm (log n) min {log nC, m log n}) time.
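As a side illustration (not part of the survey's own presentation), the minimum cycle mean referred to above can be computed in O(nm) time by Karp's characterization: mu* = min over nodes v of max over 0 <= k <= n-1 of (d_n(v) - d_k(v)) / (n - k), where d_k(v) is the minimum weight of a k-edge walk ending at v, initialized with d_0 = 0 at every node. A minimal Python sketch under these assumptions; the function name and edge-list format are ours:

```python
from fractions import Fraction

def min_mean_cycle(n, edges):
    """Karp's characterization of the minimum cycle mean.

    n      -- number of nodes, labeled 0..n-1
    edges  -- list of (u, v, w) arcs with integer weight w
    Returns the minimum cycle mean as an exact Fraction,
    or None if the graph is acyclic.  Runs in O(nm) time.
    """
    INF = float('inf')
    # d[k][v] = minimum weight of a k-edge walk ending at v (d[0] = 0 everywhere)
    d = [[0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] != INF and d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue  # no n-edge walk ends at v, so v witnesses no cycle
        worst = max(Fraction(d[n][v] - d[k][v], n - k)
                    for k in range(n) if d[k][v] != INF)
        if best is None or worst < best:
            best = worst
    return best
```

Returning an exact Fraction avoids the floating-point comparisons a ratio test would otherwise require.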

Barahona and Tardos [1987], analyzing an algorithm suggested by Weintraub [1974], showed that if the negative cycle algorithm augments flow along a cycle with maximum improvement in the objective function, then it performs O(m log mCU) iterations. Since identifying a cycle with maximum improvement is difficult (i.e., NP-hard), they describe a method (based upon solving an auxiliary assignment problem) to determine a disjoint set of augmenting cycles with the property that augmenting flows along these cycles improves the flow cost by at least as much as augmenting flow along any single cycle. Their algorithm runs in O(m^2 log (mCU) S(n, m, C)) time.

Edmonds and Karp [1972] proposed the first polynomial-time algorithm for the minimum cost flow problem, and also highlighted the desire to develop a strongly polynomial-time algorithm. This desire was motivated primarily by theoretical considerations. (Indeed, in practice, the terms log C and log U typically range from 1 to 20, and are sublinear in n.) Strongly polynomial-time algorithms are theoretically attractive for at least two reasons: (i) they might provide, in principle, network flow algorithms that can run on real valued data as well as integer valued data, and (ii) they might, at a more fundamental level, identify the source of the underlying complexity in solving a problem; i.e., are problems more difficult or equally difficult to solve as the values of the underlying data become increasingly larger?

The first strongly polynomial-time minimum cost flow algorithm is due to Tardos [1985]. Several researchers, including Orlin [1984], Fujishige [1986], Galil and Tardos [1986], and Orlin [1988], provided subsequent improvements in the running time. Goldberg and Tarjan [1988a] obtained another strongly polynomial-time algorithm by slightly modifying their pseudoflow push algorithm. Goldberg and Tarjan [1988b] also show that their algorithm that proceeds by cancelling minimum mean cycles is also strongly polynomial-time. Currently, the fastest strongly polynomial-time algorithm is due to Orlin [1988]. This algorithm solves the minimum cost flow problem as a sequence of O(min(m log U, m log n)) shortest path problems. For very sparse networks, the worst-case running time of this algorithm is nearly as low as that of the best weakly polynomial-time algorithm, even for problems that satisfy the similarity assumption.

Interior point linear programming algorithms are another source of polynomial-time algorithms for the minimum cost flow problem. Kapoor and Vaidya [1986] have shown that Karmarkar's [1984] algorithm, when applied to the minimum cost flow problem, performs O(n^2.5 mK) operations, where K = log n + log C + log U. Vaidya [1986] suggested another algorithm for linear programming that solves the minimum cost flow problem in O(n^2.5 sqrt(m) K) time. Asymptotically, these time bounds are worse than that of the double scaling algorithm.

At this time, the research community has yet to develop sufficient evidence to fully assess the computational worth of scaling and interior point linear programming algorithms for the minimum cost flow problem. According to the folklore, even though they might provide the best worst-case bounds on running times, the scaling algorithms are not as efficient as the non-scaling algorithms. Boyd and Orlin [1986] have obtained contradictory results. Testing the right-hand-side scaling algorithm for the minimum cost flow problem, they found the scaling algorithm to be competitive with the relaxation algorithm for some classes of problems. Bland and Jensen [1985] also reported encouraging results with their cost scaling algorithm. We believe that, when implemented with appropriate speed-up techniques, scaling algorithms have the potential to be competitive with the best other algorithms.

6.5 Assignment Problem

The assignment problem has been a popular research topic. The primary emphasis in the literature has been on the development of empirically efficient algorithms rather than on the development of algorithms with improved worst-case complexity. Although the research community has developed several different algorithms for the assignment problem, many of these algorithms share common features.

The successive shortest path algorithm, described in Section 5.4 for the minimum cost flow problem, appears to lie at the heart of many assignment algorithms. This algorithm is implicit in the first assignment algorithm, due to Kuhn [1955] and known as the Hungarian method, and is explicit in the papers by Tomizava [1971] and Edmonds and Karp [1972]. When applied to an assignment problem on the network G = (N1 u N2, A), the successive shortest path algorithm operates as follows. To use this solution approach, we first transform the assignment problem into a minimum cost flow problem by adding a source node s and a sink node t, and introducing arcs (s,i) for all i in N1 and (j,t) for all j in N2; these arcs have zero cost and unit capacity. The algorithm successively obtains a shortest path from s to t with respect to the linear programming reduced costs, updates the node potentials, and augments one unit of flow along the shortest path. The algorithm solves the assignment problem by n applications of the shortest path algorithm for nonnegative arc lengths and runs in O(nS(n,m,C)) time, where S(n,m,C) is the time needed to solve a shortest path problem. For a naive implementation of Dijkstra's algorithm, S(n,m,C) is O(n^2), and for a Fibonacci heap implementation it is O(m + n log n). For problems satisfying the similarity assumption, S(n,m,C) is min(m log log C, m + n sqrt(log C)).
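The successive shortest path scheme just described admits a compact dense-matrix form: a Dijkstra-like label computation with row and column potentials, one augmentation per person. The sketch below is the well-known O(n^3) implementation of this idea and is illustrative only; the source node s and sink node t of the transformation are implicit in the matrix view, and the function name is ours:

```python
def assign(cost):
    """Min-cost assignment on an n x n integer matrix, solved by n
    successive shortest path computations with node potentials."""
    n = len(cost)
    INF = float('inf')
    u = [0] * (n + 1)          # potentials of rows (persons), 1-indexed
    v = [0] * (n + 1)          # potentials of columns (objects)
    p = [0] * (n + 1)          # p[j]: row currently matched to column j (0 = none)
    way = [0] * (n + 1)        # predecessor column on the shortest path
    for i in range(1, n + 1):
        p[0] = i               # column 0 is a sentinel holding the new row
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:            # Dijkstra step over columns
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]  # reduced cost
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):       # update potentials and labels
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:               # reached an unmatched column
                break
        while j0:                        # augment along the path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1          # row p[j]-1 gets column j-1
    total = sum(cost[i][match[i]] for i in range(n))
    return total, match
```

Each of the n outer iterations finds a shortest augmenting path with respect to the reduced costs cost[i][j] - u[i] - v[j], which remain nonnegative throughout.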

The fact that the assignment problem can be solved as a sequence of n shortest path problems with arbitrary arc lengths follows from the works of Jewell [1958], Iri [1960] and Busaker and Gowen [1961] on the minimum cost flow problem. However, Tomizava [1971] and Edmonds and Karp [1972] independently pointed out that working with reduced costs leads to shortest path problems with nonnegative arc lengths. Weintraub and Barahona [1979] worked out the details of the Edmonds-Karp algorithm for the assignment problem. The more recent threshold assignment algorithm by Glover, Glover and Klingman [1986] is also a successive shortest path algorithm; it integrates their threshold shortest path algorithm (see Glover, Glover and Klingman [1984]) with the flow augmentation process. Carraresi and Sodini [1986] also suggested a similar threshold assignment algorithm.

Hoffman and Markowitz [1963] pointed out the transformation of a shortest path problem to an assignment problem. Kuhn's [1955] Hungarian method is the primal-dual version of the successive shortest path algorithm. After solving a shortest path problem and updating the node potentials, the Hungarian method solves a (particularly simple) maximum flow problem to send the maximum possible flow from the source node s to the sink node t using arcs with zero reduced cost. Whereas the successive shortest path algorithm augments flow along one path in an iteration, the Hungarian method augments flow along all the shortest paths from the source node to the sink node. If we use the labeling algorithm to solve the resulting maximum flow problems, then these applications take a total of O(nm) time overall, since there are n augmentations and each augmentation takes O(m) time. Consequently, the Hungarian method, too, runs in O(nm + nS(n,m,C)) = O(nS(n,m,C)) time. (For some time after the development of the Hungarian method as described by Kuhn, the research community considered it to be an O(n^3) method. Lawler [1976] described an O(n^3) implementation of the method. Subsequently, many researchers realized that the Hungarian method in fact runs in O(nS(n,m,C)) time.) Jonker and Volgenant [1986] suggested some practical improvements of the Hungarian method.

The relaxation approach for the minimum cost flow problem is due to Dinic and Kronrod [1969], Hung and Rom [1980] and Engquist [1982]. This approach is closely related to the successive shortest path algorithm. Both approaches start with an infeasible assignment and gradually make it feasible. The major difference is in the nature of the infeasibility. The successive shortest path algorithm maintains a solution with unassigned persons and objects, and with no person or object overassigned. Throughout the relaxation algorithm, every person is assigned, but objects may be overassigned or unassigned. Both algorithms maintain optimality of the intermediate solution and work toward feasibility by solving at most n shortest path problems with nonnegative arc lengths. The algorithms of Dinic and Kronrod [1969] and Engquist [1982] are essentially the same as the one we just described, but the shortest path computations are somewhat disguised in the paper of Dinic and Kronrod [1969]. The algorithm of Hung and Rom [1980] maintains a strongly feasible basis rooted at an overassigned node and, after each augmentation, reoptimizes over the previous basis to obtain another strongly feasible basis. All of these algorithms run in O(nS(n,m,C)) time.

Another algorithm worth mentioning is due to Balinski and Gomory [1964]. This algorithm is a primal algorithm that maintains a feasible assignment and gradually converts it into an optimum assignment by augmenting flows along negative cycles or by modifying node potentials. Derigs [1985] notes that shortest path computations underlie this method, and that it runs in O(nS(n,m,C)) time.

Researchers have also studied primal simplex algorithms for the assignment problem. The basis of the assignment problem is highly degenerate; of its 2n-1 variables, only n are nonzero. Probably because of this excessive degeneracy, the mathematical programming community did not conduct much research on the network simplex method for the assignment problem until Barr, Glover and Klingman [1977a] devised the strongly feasible basis technique. These authors developed the details of the network simplex algorithm when implemented to maintain a strongly feasible basis for the assignment problem; they also reported encouraging computational results. Subsequent research focused on developing polynomial-time simplex algorithms. Roohy-Laleh [1980] developed a simplex pivot rule requiring O(n^3) pivots. Hung [1983] describes a pivot rule that performs at most O(n^2) consecutive degenerate pivots and at most O(n log nC) nondegenerate pivots; hence, his algorithm performs O(n^3 log nC) pivots. Akgul [1985b] suggested another primal simplex algorithm performing O(n^2) pivots. This algorithm essentially amounts to solving n shortest path problems and runs in O(nS(n,m,C)) time.

Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the network simplex algorithm and showed that for the assignment problem this rule requires O(n^2 log nC) pivots. A naive implementation of the algorithm runs in O(n^2 m log nC) time. Ahuja and Orlin [1988] described a scaling version of Dantzig's pivot rule that performs O(n^2 log C) pivots and can be implemented to run in O(nm log C) time using simple data structures. The algorithm essentially consists of pivoting in any arc with sufficiently large reduced cost. The algorithm defines the term "sufficiently large" iteratively; initially, this threshold value equals C, and within O(n^2) pivots its value is halved.

Balinski [1985] developed the signature method, which is a dual simplex algorithm for the assignment problem. (Although his basic algorithm maintains a dual feasible basis, it is not a dual simplex algorithm in the traditional sense because it does not necessarily increase the dual objective at every iteration; some variants of this algorithm do have this property.) Balinski's algorithm performs O(n^2) pivots and runs in O(n^3) time. Goldfarb [1985] described some implementations of Balinski's algorithm that run in O(n^3) time using simple data structures and in O(nm + n^2 log n) time using Fibonacci heaps.

The auction algorithm is due to Bertsekas and uses basic ideas originally suggested in Bertsekas [1979]. Bertsekas and Eckstein [1988] described a more recent version of the auction algorithm. Our presentation of the auction algorithm and its analysis is somewhat different from the one given by Bertsekas and Eckstein [1988]. For example, the algorithm we have presented increases the prices of the objects by one unit at a time, whereas the algorithm by Bertsekas and Eckstein increases prices by the maximum amount that preserves ε-optimality of the solution.
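A runnable sketch of the auction idea may help here. It follows the maximum-increment bidding just mentioned (each bid raises a price by the best net value minus the second-best net value, plus ε) rather than the unit-increment presentation of Section 5.11, and it scales the integer benefits by n+1 so that running the auction with ε = 1 yields an exactly optimal assignment; all names below are ours:

```python
def auction_assignment(benefit):
    """Max-benefit assignment via a Gauss-Seidel auction with eps = 1.

    Scaling benefits by (n+1) makes the eps = 1 auction (whose result
    is within n*eps of optimal) exactly optimal for integer data.
    """
    n = len(benefit)
    a = [[(n + 1) * benefit[i][j] for j in range(n)] for i in range(n)]
    price = [0] * n
    owner = [None] * n             # owner[j]: person currently holding object j
    assigned = [None] * n          # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # best and second-best net values a[i][j] - price[j] for person i
        values = [a[i][j] - price[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        best = values[j_best]
        second = max((values[j] for j in range(n) if j != j_best),
                     default=best)
        price[j_best] += best - second + 1     # bid increment, eps = 1
        if owner[j_best] is not None:          # previous owner is outbid
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    total = sum(benefit[i][assigned[i]] for i in range(n))
    return total, assigned
```

Every bid raises some price by at least one unit, which is what guarantees termination for a complete benefit matrix.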

Bertsekas [1981] has presented another algorithm for the assignment problem, which is in fact a specialization of his relaxation algorithm for the minimum cost flow problem (see Bertsekas [1985]).

Currently, the best strongly polynomial-time bound to solve the assignment problem is O(nm + n^2 log n), which is achieved by many assignment algorithms. Scaling algorithms can do better for problems that satisfy the similarity assumption. Gabow [1985], using bit-scaling of costs, developed the first scaling algorithm for the assignment problem. His algorithm performs O(log C) scaling phases and solves each phase in O(n^3/4 m) time, thereby achieving an O(n^3/4 m log C) time bound.

Using the concept of ε-optimality, Gabow and Tarjan [1987] developed another scaling algorithm running in O(n^1/2 m log nC) time. Observe that the generic pseudoflow push algorithm for the minimum cost flow problem described in Section 5.8 solves the assignment problem in O(nm log nC) time, since every push is a saturating push. Bertsekas and Eckstein [1988] showed that the scaling version of the auction algorithm runs in O(nm log nC) time. Section 5.11 presented a modified version of this algorithm given in Orlin and Ahuja [1988]. They also improved the time bound of the auction algorithm to O(n^1/2 m log nC). This time bound is comparable to that of Gabow and Tarjan's algorithm, but the two algorithms would probably have different computational attributes. For problems satisfying the similarity assumption, these two algorithms achieve the best time bound to solve the assignment problem without using any sophisticated data structures.

As mentioned previously, most of the research effort devoted to assignment algorithms has stressed the development of empirically faster algorithms. Over the years, many computational studies have compared one algorithm with a few other algorithms. Some representative computational studies are those conducted by Barr, Glover and Klingman [1977a] on the network simplex method, by McGinnis [1983] and Carpento, Martello and Toth [1988] on the primal-dual method, by Engquist [1982] on the relaxation methods, and by Glover et al. [1986] and Jonker and Volgenant [1987] on the successive shortest path methods. Since no paper has compared all of these algorithms, it is difficult to assess their computational merits.

Nevertheless, results to date seem to justify the following observations about the algorithms' relative performance. The primal simplex algorithm is slower than the primal-dual, relaxation and successive shortest path algorithms. Among the latter three approaches, the successive shortest path algorithms due to Glover et al. [1986] and Jonker and Volgenant [1987] appear to be the fastest. Bertsekas and Eckstein [1988] found that the scaling version of the auction algorithm is competitive with Jonker and Volgenant's algorithm. Carpento, Martello and Toth [1988] present several FORTRAN implementations of assignment algorithms for dense and sparse cases.

6.6 Other Topics

Our discussion in this paper has featured single commodity network flow problems with linear costs. Several other generic topics in the broader problem domain of network optimization are of considerable theoretical and practical interest. In particular, four other topics deserve mention: (i) generalized network flows; (ii) convex cost flows; (iii) multicommodity flows; and (iv) network design. We shall now discuss these topics briefly.

Generalized Network Flows

The flow problems we have considered in this chapter assume that arcs conserve flow, i.e., the flow entering an arc equals the flow leaving the arc. In models of generalized network flows, arcs do not necessarily conserve flow. If xij units of flow enter an arc (i,j), then rij xij units "arrive" at node j; rij is a nonnegative flow multiplier associated with the arc. If 0 < rij < 1, then the arc is lossy, and