Network Coding: A New Direction in Combinatorial Optimization Nick Harvey



Collaborators

David Karger Robert Kleinberg April Rasala Lehman

Kazuo Murota

Kamal Jain

Micah Adler (UMass)

Transportation Problems

Max Flow, Min Cut

Communication Problems

“A problem of inherent interest in the planning of large-scale communication, distribution and transportation networks also arises with the current rate structure for Bell System leased-line services.”

- Robert Prim, 1957

Spanning Tree

Steiner Tree

Facility Location

Steiner Forest

Steiner Network

Multicommodity Buy-at-Bulk

Motivation for Network Design comes largely from communication networks.

s1 s2

Send items from s1→t1 and s2→t2

Problem: no disjoint paths

bottleneck edge

What is the capacity of a network?

t2 t1

b1⊕b2

An Information Network

b1 b2

s1 s2

t2 t1

If sending information, we can do better: send the XOR b1⊕b2 on the bottleneck edge.

Moral of Butterfly

Transportation Network Capacity ≠ Information Network Capacity
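The butterfly's gain can be checked mechanically. Below is a minimal sketch with single-bit messages; the function name and wiring are illustrative, not from the talk:

```python
def butterfly_sinks(b1, b2):
    """Simulate the butterfly with 1-bit messages b1 (from s1), b2 (from s2).

    The bottleneck edge carries the single bit b1 XOR b2 instead of
    forwarding one message and dropping the other.
    """
    bottleneck = b1 ^ b2
    # t1 hears b2 on a direct side edge, t2 hears b1; each XORs with the
    # bottleneck bit to recover the message it actually wants.
    t1_gets = b2 ^ bottleneck   # equals b1
    t2_gets = b1 ^ bottleneck   # equals b2
    return t1_gets, t2_gets

# Every sink decodes its demanded bit, so rate 1 is achieved with coding.
for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly_sinks(b1, b2) == (b1, b2)
```

Without the XOR, the unit-capacity bottleneck can serve only one commodity at a time, which is the flow bottleneck described above.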

Information Theory: deep analysis of simple channels (noise, interference, etc.), but little understanding of network structures.

Combinatorial Optimization: deep understanding of transportation problems on complex structures, but does not address information flow.

Network Coding: combines ideas from both fields.

Understanding Network Capacity

Definition: Instance

Graph G (directed or undirected)
Capacity ce on each edge e
k commodities, each with:
a source si
a set of sinks Ti
a demand di

Typically: all capacities ce = 1, all demands di = 1.

s1 s2

t2 t1

Technicality: Always assume G is directed. (Each undirected edge is replaced with an equivalent directed gadget.)

Definition: Solution

An alphabet Σ(e) for messages on each edge e, and a function fe for each edge, such that:

Causality: Edge (u,v) sends information previously received at u.
Correctness: Each sink in Ti can decode the data from source si.

[Figure: butterfly network; the sources emit b1 and b2, the bottleneck edge carries b1⊕b2, and each sink decodes using the message it receives directly together with b1⊕b2.]

Multicast

The graph is a DAG with 1 source and k sinks.
The source has r messages m1, m2, …, mr in alphabet Σ.
Each sink wants all r messages.

Thm [ACLY00]: A network coding solution exists iff the connectivity from the source to each sink is at least r.

Multicast Example

t1 t2

sm1 m2

Linear Network Codes

Treat the alphabet as a finite field; each node outputs linear combinations of its inputs.

Thm [LYC03]: Linear codes are sufficient for multicast.

[Figure: a node receiving A and B sends A+B on both outgoing edges.]

Multicast Code Construction

Thm [HKMK03]: Random linear codes work (over a large enough field).

Thm [JS…03]: Deterministic algorithm to construct codes.

Thm [HKM05]: Deterministic algorithm to construct codes (general algebraic approach).

Random Coding Solution

Randomly choose the coding coefficients. Each sink then receives linear combinations of the source messages. If the connectivity is at least r, these linear combinations have full rank, so the sink can decode!

Without coding, the problem is Steiner Tree Packing (hard!).
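A toy version of this argument, assuming the small prime field GF(101), r = 2 source messages, and a sink that receives two random combinations (the field size, names, and retry loop are all illustrative):

```python
import random

P = 101  # a prime; all arithmetic is over the finite field GF(P)

def decode(combos, received):
    """Solve combos * msgs = received (mod P) by Gaussian elimination.

    Raises StopIteration if the combination matrix is singular.
    """
    r = len(combos)
    A = [row[:] + [v] for row, v in zip(combos, received)]
    for c in range(r):
        piv = next(i for i in range(c, r) if A[i][c] % P != 0)
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], P - 2, P)          # inverse via Fermat's little theorem
        A[c] = [x * inv % P for x in A[c]]
        for i in range(r):
            if i != c and A[i][c]:
                A[i] = [(x - A[i][c] * y) % P for x, y in zip(A[i], A[c])]
    return [A[i][r] for i in range(r)]

random.seed(1)
msgs = [42, 7]                                # r = 2 source messages
while True:
    # each edge into the sink carries a random linear combination
    combos = [[random.randrange(P) for _ in msgs] for _ in msgs]
    received = [sum(c * m for c, m in zip(row, msgs)) % P for row in combos]
    try:
        assert decode(combos, received) == msgs
        break                                 # full rank: the sink decoded
    except StopIteration:
        continue                              # singular (rare): retry
```

Over a large field, a random matrix is full rank with high probability, which is exactly why random coding works.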

Our Algorithm

A derandomization of the [HKMK] algorithm.
Technique: max-rank completion of mixed matrices.
A mixed matrix contains both numbers and variables; a completion is a choice of values for the variables that maximizes the rank.

k-Pairs Problems (aka “Multiple Unicast Sessions”)

k-pairs problem: network coding when each commodity has one sink. Analogous to multicommodity flow.

Goal: compute the max concurrent rate. This is an open question.

s1 s2

t2 t1

Rate

Each edge e has its own alphabet Σ(e) of messages, and each source si has an alphabet Σ(si). With unit capacities and demands, Rate = mini log |Σ(si)| / maxe log |Σ(e)|.

NCR = sup { rate of coding solutions }

Observation: If there is a fractional flow with rational coefficients achieving rate r, there is a network coding solution achieving rate r.


Network coding rate can be much larger than flow rate!

Butterfly graph: Network coding rate (NCR) = 1, Flow rate = ½.

Thm [HKL’04, LL’04]: there exist graphs G(V,E) where NCR = Ω( flow rate ∙ |V| ).

Thm [HKL’05]: there exist graphs G(V,E) where NCR = Ω( flow rate ∙ |E| ).

Directed k-pairs

s1 s2

t2 t1

NCR / Flow Gap

s1 s2

t1 t2

G (1):

Equivalent to:

s1 s2

t1 t2

Edge capacity= 1

s1 s2

t1 t2

Edge capacity = ½

NCR = 1, Flow rate = ½

NCR / Flow Gap

s1 s2 s3 s4

t1 t2 t3 t4

G (2):

Start with two copies of G (1)

NCR / Flow Gap

s1 s2 s3 s4

t1 t2 t3 t4

G (2):

Replace middle edges with copy of G (1)

NCR / Flow Gap

s1 s2 s3 s4

G (1)

t1 t2 t3 t4

G (2):

NCR = 1, Flow rate = ¼

NCR / Flow Gap

G(n): built recursively from G(n-1)

# commodities = 2^n, |V| = O(2^n), |E| = O(2^n); NCR = 1, Flow rate = 2^-n

s1 s2

t1 t2

s3 s4

t3 t4

s2^n-1 s2^n

t2^n-1 t2^n

Optimality

The graph G(n) proves:
Thm [HKL’05]: there exist graphs G(V,E) where NCR = Ω( flow rate ∙ |E| ).

G(n) is optimal:
Thm [HKL’05]: for every graph G(V,E), NCR / flow rate = O(min {|V|, |E|, k}).

Network flow vs. information flow

Multicommodity Flow:
Efficient algorithms for computing maximum concurrent (fractional) flow.
Connected with metric embeddings via LP duality.
Approximate max-flow min-cut theorems.

Network Coding:
Computing the max concurrent network coding rate may be undecidable, or may be decidable in poly-time; we don't know.
No adequate duality theory.
No cut-based parameter is known to give a sublinear approximation in digraphs.
No known undirected instance where network coding rate ≠ max flow! (The undirected k-pairs conjecture.)

Why not obviously decidable? How large must the alphabet be?

Thm [LL05]: There exist networks where the max-rate solution requires alphabet size 2^n.

Moreover, rate does not increase monotonically with alphabet size! There is no such thing as a “large enough” alphabet.

Approximate max-flow / min-cut?

The value of the sparsest cut is:

an O(log n)-approximation to max-flow in undirected graphs [AR’98, LLR’95, LR’99];

an O(√n)-approximation to max-flow in directed graphs [CKR’01, G’03, HR’05];

not even a valid upper bound on the network coding rate in directed graphs!

s1 s2

t2 t1

e

{e} has capacity 1 and separates 2 commodities, i.e. sparsity is ½.

Yet network coding rate is 1.

Approximate max-flow / min-cut?

The value of the sparsest cut induced by a vertex partition is a valid upper bound, but can exceed the network coding rate by a factor of Ω(n).

We next present a cut parameter which may be a better approximation…

Definition: A ⇒ e if, for every network coding solution, the messages sent on the edges of A uniquely determine the message sent on e.

Given A and e, how hard is it to determine whether A ⇒ e? Is it even decidable?

Theorem [HKL’05]: There is a combinatorial characterization of informational dominance. Also, there is an algorithm to compute whether A ⇒ e in time O(k²m).

Informational Dominance


s1 s2

t2 t1

A does not dominate B

Informational Dominance

Def: A dominates B if information in A determines information in B in every network coding solution.


s1 s2

t2 t1

A dominates B

Sufficient Condition: If there is no path from any source to B, then A dominates B.

(This is not a necessary condition.)

Informational Dominance Example

s1 s2

t1

t2

“Obviously” flow rate = NCR = 1. How to prove it? Markovicity? No two edges disconnect t1 and t2 from both sources!

Informational Dominance Example

s1 s2

t1

t2

Our characterization implies that A dominates {t1, t2}, hence H(A) ≥ H(t1, t2).

Cut A

Informational Meagerness

Def: Edge set A informationally isolates commodity set P if A ∪ P ⇒ P.

iM(G) = minA,P (capacity of the edges in A) / (demand of the commodities in P), over all P informationally isolated by A.

Claim: network coding rate ≤ iM(G).

Approximate max-flow / min-cut?

Informational meagerness is no better than an Ω(log n)-approximation to the network coding rate, due to a family of instances called the iterated split butterfly.

On the other hand, we don’t even know if it is a o(n)-approximation in general.

And we don’t know if there is a polynomial-time algorithm to compute a o(n)-approximation to the network coding rate in directed graphs.

Sparsity Summary

Directed graphs: Flow Rate ≤ Sparsity < NCR ≤ iM(G) in some graphs.

Undirected graphs: Flow Rate ≤ NCR ≤ Sparsity. (The second inequality is an easy consequence of informational dominance; the Sparsity/NCR gap can be Ω(log n) when G is an expander.)

Undirected k-pairs conjecture: Flow Rate = NCR. Whether NCR < Sparsity can occur was unknown until this work.

The Okamura-Seymour Graph

s1
t1 s2
t2 s3
t3
s4 t4

Every edge cut has enough capacity to carry the combined demand of all commodities separated by the cut.

Cut

Okamura-Seymour Max-Flow

s1
t1 s2
t2 s3
t3
s4 t4

Flow Rate = 3/4

si is 2 hops from ti.

At flow rate r, each commodity consumes 2r units of bandwidth in a graph with only 6 units of capacity.

The trouble with information flow… If an edge combines messages from multiple sources, which commodities get charged for “consuming bandwidth”?

We present a way around this obstacle and bound NCR by 3/4.

s1
t1 s2
t2 s3
t3
s4 t4

At flow rate r, each commodity consumes at least 2r units of bandwidth in a graph with only 6 units of capacity.

Thm [AHJKL’05]: flow rate = NCR = 3/4.

We will prove:

Thm [HKL’05]: NCR ≤ 6/7 < Sparsity. The proof uses properties of entropy.

Monotonicity: A ⊆ B ⟹ H(A) ≤ H(B)
Submodularity: H(A) + H(B) ≥ H(A∪B) + H(A∩B)

Lemma (Cut Bound): For a cut A ⊆ E, H(A) = H(A, sources separated by A).
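These two entropy facts can be sanity-checked on the butterfly's own signals. A small empirical sketch (the coordinate subsets chosen here are illustrative):

```python
from collections import Counter
from itertools import product
from math import log2

def H(coords, samples):
    """Empirical joint entropy (in bits) of a subset of coordinates."""
    counts = Counter(tuple(s[c] for c in sorted(coords)) for s in samples)
    n = len(samples)
    return -sum(k / n * log2(k / n) for k in counts.values())

# Uniform bits b1, b2 and their XOR: the three signals of the butterfly.
samples = [(b1, b2, b1 ^ b2) for b1, b2 in product((0, 1), repeat=2)]

A, B = {0, 2}, {1, 2}
assert H(A, samples) <= H(A | B, samples)                    # monotonicity
assert H(A, samples) + H(B, samples) >= \
       H(A | B, samples) + H(A & B, samples)                 # submodularity
```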

Okamura-Seymour Proof

s1
t1 s2
t2 s3
t3
s4 t4

H(A) = H(A, s1, s2, s4) (Cut Bound)

Cut A

s1
t1 s2
t2 s3
t3
s4 t4

H(B) = H(B, s1, s2, s4) (Cut Bound)

Cut B

Adding: H(A) + H(B) = H(A, s1, s2, s4) + H(B, s1, s2, s4)

Apply submodularity: H(A) + H(B) ≥ H(A∪B, s1, s2, s4) + H(s1, s2, s4)

Note: A∪B separates s3, so by the Cut Bound:

H(A∪B, s1, s2, s4) ≥ H(s1, s2, s3, s4)

Conclude: H(A) + H(B) ≥ H(s1, s2, s3, s4) + H(s1, s2, s4). Six unit-capacity edges carry seven source terms, so rate ≤ 6/7.
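The steps above can be combined into one display (A and B are the two cuts; the chain is a consolidation of the slides' derivation):

```latex
H(A) + H(B) = H(A, s_1, s_2, s_4) + H(B, s_1, s_2, s_4)
            \ge H(A \cup B, s_1, s_2, s_4) + H(s_1, s_2, s_4)
            \ge H(s_1, s_2, s_3, s_4) + H(s_1, s_2, s_4).
```

The left side is bounded by the entropy of 6 edges; the right side contains 7 source terms, giving rate ≤ 6/7.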


Rate ¾ for Okamura-Seymour

[Figure sequence: terminals s1,t3 / s2,t1 / s3,t2 around the hexagon, with s4,t4; for each commodity i, entropy inequalities relating the sources, sinks, and edge messages are derived and then summed over all four commodities, yielding:]

3 H(source) + 6 H(undirected edge) ≥ 11 H(source)
⟹ 6 H(undirected edge) ≥ 8 H(source)
⟹ rate ≤ ¾
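Spelling out the final step (writing H(s) for the common source entropy and H(e) for the common edge entropy; these shorthands are introduced here for brevity):

```latex
6\,H(e) \;\ge\; 8\,H(s)
\quad\Longrightarrow\quad
\text{rate} \;=\; \frac{H(s)}{H(e)} \;\le\; \frac{6}{8} \;=\; \frac{3}{4}.
```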

Special Bipartite Graphs

s1 t3

s2 t1

s3 t2

s4 t4

This proof generalizes to show that max-flow = NCR for every instance which is:

Bipartite.
Every source is 2 hops away from its sink.
The dual of the flow LP is optimized by assigning length 1 to all edges.

The k-pairs conjecture and I/O complexity

In the I/O complexity model [AV’88], one has:

A large, slow external memory consisting of pages, each containing p records.
A fast internal memory that holds O(1) pages. (For concreteness, say 2.)
Basic I/O operation: read in two pages from external memory, write out one page.

I/O Complexity of Matrix Transposition

Matrix transposition: Given a p×p matrix of records in row-major order, write it out in column-major order.

Obvious algorithm requires O(p²) ops.

A better algorithm uses O(p log p) ops.
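The O(p log p) algorithm can be viewed as routing each record toward its destination page one address bit per round. A toy in-memory simulation (illustrative only; the real [AV’88] model reads two pages and writes one, and p must be a power of two here):

```python
import math

def transpose_io(p):
    """Transpose a p x p matrix, stored as p pages of p records, using
    (p/2) * log2(p) pairwise page-merge steps."""
    assert p > 1 and p & (p - 1) == 0, "p must be a power of two"
    pages = [[(i, j) for j in range(p)] for i in range(p)]   # page i = row i
    ops = 0
    for k in range(int(math.log2(p))):
        bit = 1 << k
        for a in range(p):
            if a & bit:
                continue                       # a is the bit-k = 0 partner
            b = a | bit
            merged = pages[a] + pages[b]       # read two pages into memory
            # route each record (i, j) toward page j, one bit at a time
            pages[a] = [r for r in merged if not (r[1] & bit)]
            pages[b] = [r for r in merged if r[1] & bit]
            ops += 1
    for x in range(p):
        pages[x].sort()                        # in-page ordering costs no I/O
    return pages, ops

pages, ops = transpose_io(8)
assert all(pages[j] == [(i, j) for i in range(8)] for j in range(8))
assert ops == (8 // 2) * 3                     # (p/2) * log2(p) = 12
```

After round k, every record sits on a page whose k-th address bit already matches its column index, so log2(p) rounds of p/2 merges suffice.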

[Figure, built up over several slides: the algorithm drawn as a network with sources s1…s4 (the input pages) at the top and sinks t1…t4 (the output pages) at the bottom.]

I/O Complexity of Matrix Transposition

Theorem: (Floyd ’72, AV’88) If a matrix transposition algorithm performs only read and write operations (no bitwise operations on records) then it must perform Ω(p log p) I/O operations.

s1 s2 s3 s4

t1 t2 t3 t4

I/O Complexity of Matrix Transposition

Proof: Let Nij denote the number of ops in which record (i,j) is written. The p records of column j start on p distinct pages and must be merged onto a single page; since each op combines at most two pages, for all j,

Σi Nij ≥ p log p.

Hence

Σij Nij ≥ p² log p.

Each I/O writes only p records, so Ω(p log p) ops are required. QED.


The k-pairs conjecture and I/O complexity

Definition: An oblivious algorithm is one whose pattern of read/write operations does not depend on the input.

Theorem: If there is an oblivious algorithm for matrix transposition using o(p log p) I/O ops, the undirected k-pairs conjecture is false.


Proof: Represent the algorithm with a diagram as before. Assume WLOG that each node has only two outgoing edges.


Make all edges undirected, with capacity p. Create a commodity for each matrix entry.


Proof (continued): The algorithm itself is a network code of rate 1. Assuming the k-pairs conjecture, there is a flow of rate 1. Each commodity's flow must traverse at least d(si,tj) edges, so

Σi,j d(si,tj) ≤ p |E(G)|.

Arguing as before, the LHS is Ω(p² log p). Hence |E(G)| = Ω(p log p).


Other consequences for complexity

The undirected k-pairs conjecture implies:

An Ω(p log p) lower bound for matrix transposition in the cell-probe model. [Same proof.]

An Ω(p² log p) lower bound for the running time of oblivious matrix transposition algorithms on a multi-tape Turing machine. [The I/O model can emulate multi-tape Turing machines with a factor-p speedup.]

Open Problems

Computing the network coding rate in DAGs: Is it recursively decidable? How do you compute an o(n)-factor approximation?

Undirected k-pairs conjecture: Does flow rate = NCR? At least prove an Ω(log n) gap between sparsest cut and network coding rate for some graphs.

Summary

Information ≠ Transportation.
For multicast, NCR = min cut, with algorithms to find the solution.
k-pairs: Directed: NCR >> flow rate. Undirected: flow rate = NCR in the Okamura-Seymour graph.
Informational dominance.