Transcript
Page 1: Distributed Message Passing for Large Scale Graphical Models

Distributed Message Passing for Large Scale Graphical Models

Alexander Schwing, Tamir Hazan

Marc Pollefeys, Raquel Urtasun

CVPR 2011

Page 2: Distributed Message Passing for Large Scale Graphical Models

Outline

• Introduction
• Related work
• Message passing algorithm
• Distributed convex belief propagation
• Experimental evaluation
• Conclusion

Page 3: Distributed Message Passing for Large Scale Graphical Models

Introduction

• Vision problems → discrete labeling problems in an undirected graphical model (e.g., an MRF; see the sketch after this list)
  – Belief propagation (BP)
  – Graph cuts
• The choice of method depends on the potentials and the structure of the graph.
• The main limitations when scaling to real-world problems are memory and computation.
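As a concrete instance of such a labeling problem, the standard pairwise MRF formulation (a generic sketch, since the slide does not spell it out) assigns each pixel i a discrete label x_i and minimizes an energy

    E(x) = \sum_{i \in V} \theta_i(x_i) + \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j)

where \theta_i are unary potentials (e.g., data terms) and \theta_{ij} are pairwise potentials over the edges E of the undirected graph.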

Page 4: Distributed Message Passing for Large Scale Graphical Models

Introduction

• A new algorithm that
  – distributes and parallelizes the computation and memory requirements
  – preserves the convergence and optimality guarantees
• Computation is done in parallel by partitioning the graph and imposing agreement between the beliefs on the boundaries (sketched below).
  – Graph-based optimization program → local optimization problems (one per machine)
  – Messages between machines: Lagrange multipliers
• Demonstrated on stereo reconstruction from high-resolution images
• Handles large problems (more than 200 labels on images larger than 10 MPixel)
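The agreement idea can be sketched in one line (generic notation, not the paper's): if machines s and t both hold copies of a boundary variable x_i, their local beliefs are constrained to match,

    b_s(x_i) = b_t(x_i) \quad \text{for every boundary variable } x_i \text{ shared by machines } s \text{ and } t,

and the Lagrange multipliers attached to these equality constraints are exactly what is exchanged as messages between machines.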

Page 5: Distributed Message Passing for Large Scale Graphical Models

Related work

• This work: provable convergence while remaining computationally tractable
  – parallelizes convex belief propagation
  – preserves its convergence and optimality guarantees
• Strandmark and Kahl [24]
  – split the model across multiple machines
• GraphLab [16]
  – assumes that all the data is stored in shared memory

Page 6: Distributed Message Passing for Large Scale Graphical Models

Related work

• Split the message passing task at hand into several local optimization problems that are solved in parallel.
• To ensure convergence, we force the local tasks to communicate occasionally.
• At the local level, we parallelize the message passing algorithm using a greedy vertex coloring (see the sketch below).
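A minimal sketch of how greedy vertex coloring yields a parallel schedule (an illustration; the function names and data layout are assumptions, not taken from the paper). Vertices of the same color form an independent set, so their message updates touch disjoint neighborhoods and can run simultaneously:

    from collections import defaultdict

    def greedy_coloring(adjacency):
        """Greedily give each vertex the smallest color unused by its neighbors."""
        color = {}
        for v in adjacency:  # any fixed vertex order works
            taken = {color[u] for u in adjacency[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color

    def parallel_schedule(adjacency):
        """Group vertices by color; each group can be updated in parallel."""
        groups = defaultdict(list)
        for v, c in greedy_coloring(adjacency).items():
            groups[c].append(v)
        return [groups[c] for c in sorted(groups)]

    # Example: a 4-cycle needs two colors, i.e., two parallel phases per sweep.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(parallel_schedule(adj))  # -> [[0, 2], [1, 3]]

One sweep of message passing then iterates over the color groups in order, updating all vertices within a group concurrently.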

Page 7: Distributed Message Passing for Large Scale Graphical Models

Message passing algorithm

• The joint distribution factors into a product of non-negative functions (a standard form is given below).
  – This defines a hypergraph whose nodes represent the n random variables and whose hyperedges correspond to the subsets of variables the functions depend on.
• The hypergraph can be represented as a bipartite graph: the factor graph [11]
  – one set of nodes corresponds to the original nodes of the hypergraph: variable nodes
  – the other set consists of its hyperedges: factor nodes
• N(i): the set of all factor nodes that are neighbors of variable node i
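The factorization mentioned above has the standard factor graph form (generic notation; the paper's symbols may differ). With one non-negative function \psi_\alpha per hyperedge \alpha, the joint distribution is

    p(x_1, \dots, x_n) = \frac{1}{Z} \prod_{\alpha} \psi_\alpha(x_\alpha),
    \qquad Z = \sum_{x} \prod_{\alpha} \psi_\alpha(x_\alpha),

where x_\alpha is the subset of variables that \psi_\alpha depends on, and N(i) = \{\alpha : i \in \alpha\} is then exactly the set of factor nodes adjacent to variable node i.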

Page 8: Distributed Message Passing for Large Scale Graphical Models

Message passing algorithm

• Maximum a posteriori (MAP) assignment
• Reformulate the MAP problem as an integer linear program (a standard form is sketched below).
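The reformulation is standard (a common form, not copied from the paper): introduce indicator "beliefs" b_i(x_i) and b_\alpha(x_\alpha), set \theta = \log \psi, and solve

    \max_{b} \;\; \sum_{i} \sum_{x_i} b_i(x_i)\, \theta_i(x_i) + \sum_{\alpha} \sum_{x_\alpha} b_\alpha(x_\alpha)\, \theta_\alpha(x_\alpha)
    \text{s.t.} \;\; b \in \{0,1\}, \quad \sum_{x_i} b_i(x_i) = 1, \quad \sum_{x_\alpha \setminus x_i} b_\alpha(x_\alpha) = b_i(x_i).

Relaxing the integrality constraints to b ≥ 0 yields the linear programming relaxation that convex belief propagation addresses.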

Page 9: Distributed Message Passing for Large Scale Graphical Models

Message passing algorithm

Page 10: Distributed Message Passing for Large Scale Graphical Models

Distributed convex belief propagation

• Partition the vertices of the graphical model into disjoint subgraphs
  – each computer independently solves a variational program restricted to its subgraph
• The distributed solutions are then integrated through message passing between the subgraphs
  – preserving the consistency of the graphical model
• Properties (a generic sketch of the decomposition follows below):
  – If (5) is strictly concave, then the algorithm converges for all ε ≥ 0, and converges to the global optimum when ε > 0.
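A hedged sketch of the decomposition ("(5)" is the paper's variational program, which this transcript does not reproduce; the form below is generic dual decomposition with entropy smoothing): each machine m keeps its own beliefs b^m over its subgraph and maximizes a local objective, with boundary beliefs tied across machines,

    \max_{\{b^m\}} \; \sum_{m} \big( \langle \theta^m, b^m \rangle + \epsilon\, H(b^m) \big)
    \quad \text{s.t.} \quad b^m_i(x_i) = b^{m'}_i(x_i) \;\; \text{for boundary variables shared by } m, m'.

For ε > 0 the entropy terms make each local problem strictly concave, which is what the convergence-to-global-optimum property above relies on.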

Page 11: Distributed Message Passing for Large Scale Graphical Models

Distributed convex belief propagation

Page 12: Distributed Message Passing for Large Scale Graphical Models

Distributed convex belief propagation

Page 13: Distributed Message Passing for Large Scale Graphical Models

Distributed convex belief propagation

• Lagrange multipliers enforce (see the sketch below):
  – the marginalization constraints within each computer
  – the consistency constraints between the different computers
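A minimal sketch of the exchange between machines (an illustrative subgradient-style update on the consistency multipliers; the paper derives its own closed-form updates, and all names below are assumptions):

    import numpy as np

    def consensus_round(local_beliefs, multipliers, step=0.1):
        """One illustrative dual update for a boundary variable shared by machines.

        local_beliefs: per-machine belief vectors over the labels of one shared
                       boundary variable (each sums to 1).
        multipliers:   per-machine Lagrange multipliers enforcing agreement.
        """
        avg = np.mean(local_beliefs, axis=0)  # consensus target
        return [lam + step * (b - avg)
                for b, lam in zip(local_beliefs, multipliers)]

    # Example: three machines disagree about a 3-label boundary variable.
    beliefs = [np.array([0.7, 0.2, 0.1]),
               np.array([0.5, 0.3, 0.2]),
               np.array([0.6, 0.3, 0.1])]
    lams = consensus_round(beliefs, [np.zeros(3)] * 3)  # push toward the mean

In the full algorithm, each machine would re-solve its local variational program with the updated multipliers before the next exchange, so communication is needed only at boundary variables.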

Page 14: Distributed Message Passing for Large Scale Graphical Models

Distributed convex belief propagation

Page 15: Distributed Message Passing for Large Scale Graphical Models

Experimental evaluation

• Stereo reconstruction
  – nine 2.4 GHz x64 quad-core computers with 24 GB of memory each, connected via a standard local area network
• Compared against libDAI 0.2.7 [17] and GraphLab [16]

Page 16: Distributed Message Passing for Large Scale Graphical Models

Experimental evaluation

Page 17: Distributed Message Passing for Large Scale Graphical Models

Experimental evaluation

Page 18: Distributed Message Passing for Large Scale Graphical Models

Experimental evaluation

• Relative duality gap (a common definition is given below)
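A common definition of this quantity (the paper may normalize slightly differently) is

    \text{relative duality gap} = \frac{\text{dual} - \text{primal}}{\text{primal}},

which is non-negative for this maximization since the dual upper-bounds the primal; a gap of zero certifies that the retrieved solution is optimal.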

Page 19: Distributed Message Passing for Large Scale Graphical Models

Conclusion

• Handles large scale graphical models by dividing the computation and memory requirements across multiple machines.
• Convergence and optimality guarantees are preserved.
• Main benefit: the use of multiple computers, which makes problems too large for a single machine tractable.

