Iterative Methods and Combinatorial Preconditioners

  • This talk is not about

  • Credits

    Solving Symmetric Diagonally-Dominant Systems by Preconditioning. Bruce Maggs, Gary Miller, Ojas Parekh, R. Ravi, mw

    Combinatorial Preconditioners for Large, Sparse, Symmetric, Diagonally-Dominant Linear Systems. Keith Gremban (CMU PhD 1996)

  • Linear Systems

    Ax = b, where A is an n-by-n matrix and x, b are n-by-1 vectors.

    A useful way to do matrix algebra in your head: matrix-vector multiplication = linear combination of matrix columns.

  • Matrix-Vector Multiplication

    Using B^T A^T = (AB)^T, x^T A should likewise be interpreted as a linear combination of the rows of A.
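
As a quick sanity check of the two interpretations above, here is a small NumPy sketch (my example, not from the slides) confirming that A @ x is a linear combination of the columns of A, and x @ A a combination of its rows.

```python
# Minimal sketch: matrix-vector products as combinations of columns/rows.
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
x = np.array([1.0, 2.0, 3.0])

# A @ x = sum_j x[j] * (j-th column of A)
col_combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
assert np.allclose(A @ x, col_combo)

# x @ A = sum_i x[i] * (i-th row of A), by B^T A^T = (AB)^T
row_combo = sum(x[i] * A[i, :] for i in range(A.shape[0]))
assert np.allclose(x @ A, row_combo)
```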

  • How to Solve?

    Find A^-1.
    Guess x repeatedly until we guess a solution.
    Gaussian Elimination.

  • Large Linear Systems in The Real World

  • Circuit Voltage Problem

    Given a resistive network and the net current flow at each terminal, find the voltage at each node.

    Conductance is the reciprocal of resistance. Its unit is siemens (S).

    A node with an external connection is a terminal; otherwise it is a junction.

    (Figure: an example network with conductances 3S, 2S, 1S, 1S and terminal currents 2A, 2A, 4A.)

  • Kirchhoff's Law of Current

    At each node, the net current flowing at that node = 0.

    Consider v1: writing down its current balance and regrouping yields one linear equation in the node voltages. (Figure: the example network, nodes v1 through v4; I = VC, i.e., current = voltage x conductance.)

  • Summing Up

    (Figure: the example network, nodes v1 through v4.)

  • Solving

    (Figure: the example network with the solved node voltages 2V, 2V, 3V, 0V.)

  • Did I say LARGE?

    Imagine this being the power grid of America.

    (Figure: the same small example network, nodes v1 through v4.)

  • Laplacians

    Given a weighted, undirected graph G = (V, E), we can represent it as a Laplacian matrix. (Figure: a 4-node example with edge weights 3, 2, 1, 1 and nodes v1 through v4.)

  • Laplacians

    Laplacians have many interesting properties, such as:
    Diagonals ≥ 0: they denote total incident weights.
    Off-diagonals ≤ 0: they denote (negated) individual edge weights.
    Row sums = 0.
    Symmetric.
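
To make these properties concrete, here is a short sketch that builds the Laplacian of a small weighted graph and checks them numerically; the 4-node edge list is hypothetical, since the exact edges of the slide's figure are not recoverable here.

```python
# Sketch: build a weighted-graph Laplacian and verify the listed properties.
import numpy as np

n = 4
edges = [(0, 1, 3.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0)]  # (u, v, weight), hypothetical

L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w                  # diagonal accumulates total incident weight
    L[v, v] += w
    L[u, v] -= w                  # off-diagonal holds the negated edge weight
    L[v, u] -= w

assert np.allclose(L, L.T)              # symmetric
assert np.allclose(L.sum(axis=1), 0.0)  # row sums are zero
assert np.all(np.diag(L) >= 0)          # diagonals are nonnegative
```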

  • Net Current Flow

    Lemma. Suppose an n-by-n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then Ay is the n-vector representing the net current flow at each node.

  • Power Dissipation

    Lemma. Suppose an n-by-n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then y^T A y is the total power dissipated by G.

  • Sparsity

    Laplacians arising in practice are usually sparse. The i-th row has (d+1) nonzeros if v_i has d neighbors.

  • Sparse Matrix

    An n-by-n matrix is sparse when it has O(n) nonzeros.

    A reasonably-sized power grid has far more junctions than this example, and each junction has only a couple of neighbors.

  • Had Gauss owned a supercomputer

    (Would he really have worked on Gaussian Elimination?)

  • A Model Problem

    Let G(x, y) and g(x, y) be continuous functions defined in R and S respectively, where R and S are respectively the region and the boundary of the unit square (as in the figure).

  • A Model Problem

    We seek a function u(x, y) that satisfies Poisson's equation in R,

        ∂²u/∂x² + ∂²u/∂y² = G(x, y),

    and the boundary condition u(x, y) = g(x, y) on S.

  • Discretization

    Imagine a uniform grid with a small spacing h.

  • Five-Point Difference

    Replace the partial derivatives by difference quotients:

        ∂²u/∂x² ≈ [u(x+h, y) − 2u(x, y) + u(x−h, y)] / h²
        ∂²u/∂y² ≈ [u(x, y+h) − 2u(x, y) + u(x, y−h)] / h²

    Poisson's equation now becomes

        u(x+h, y) + u(x−h, y) + u(x, y+h) + u(x, y−h) − 4u(x, y) = h² G(x, y).

    Exercise: derive the 5-point difference equation from first principles (as a limit).
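
The assembly of the resulting linear system can be sketched as follows; the helper assemble_poisson, its indexing scheme, and the sample G and g are my own choices, not anything prescribed by the slides.

```python
# Rough sketch: assemble the five-point difference equations on the unit square.
import numpy as np

def assemble_poisson(m, G, g):
    """m interior points per side; G(x, y) is the source, g(x, y) the boundary data."""
    h = 1.0 / (m + 1)
    N = m * m
    A = np.zeros((N, N))
    b = np.zeros(N)

    def idx(i, j):                    # interior points are numbered 1..m in each direction
        return (j - 1) * m + (i - 1)

    for j in range(1, m + 1):
        for i in range(1, m + 1):
            k = idx(i, j)
            A[k, k] = 4.0
            b[k] = -h * h * G(i * h, j * h)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 1 <= ni <= m and 1 <= nj <= m:
                    A[k, idx(ni, nj)] = -1.0      # interior neighbour stays on the left
                else:
                    b[k] += g(ni * h, nj * h)     # known boundary value moves to the right
    return A, b

# Example: 3x3 interior grid, zero source, hypothetical boundary data g(x, y) = x + y.
A, b = assemble_poisson(3, lambda x, y: 0.0, lambda x, y: x + y)
u = np.linalg.solve(A, b)
```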

  • For each point in R

    We get one such equation per interior grid point, so the total number of equations equals the number of points in R.

    Now write them in matrix form: we've got one BIG linear system to solve!

  • An Example

    Consider u_{3,1}. Its five-point equation involves its four neighbors u_{4,1}, u_{2,1}, u_{3,2}, and u_{3,0}, and can be rearranged so that 4·u_{3,1} minus the sum of those neighbors appears on the left-hand side.

  • An Example

    Each row and column can have a maximum of 5 nonzeros.

  • Sparse Matrix Again

    Really, it's rare to see large dense matrices arising from applications.

  • Laplacian???

    I showed you a system that is not quite a Laplacian. We've got way too many boundary points in a 3x3 example.

  • Making It Laplacian

    We add a dummy variable and force it to zero. (How to force it? Well, look at the rank of this matrix first.)

  • Sparse Matrix Representation

    A simple scheme: an array of columns, where each column A_j is a linked list of tuples (i, x).
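
A toy Python version of this scheme might look like the following, with ordinary lists standing in for the linked lists; the class name SparseColumns is made up for illustration.

```python
# Toy sketch of the column-list representation described above.
class SparseColumns:
    def __init__(self, n):
        self.n = n
        self.cols = [[] for _ in range(n)]   # cols[j] holds (i, x) meaning A[i, j] = x

    def insert(self, i, j, x):
        self.cols[j].append((i, x))

    def matvec(self, v):
        """Compute A @ v as a linear combination of the stored columns."""
        y = [0.0] * self.n
        for j, col in enumerate(self.cols):
            for i, x in col:
                y[i] += x * v[j]
        return y

A = SparseColumns(3)
A.insert(0, 0, 2.0); A.insert(1, 0, -1.0)
A.insert(0, 1, -1.0); A.insert(1, 1, 2.0)
A.insert(2, 2, 2.0)
print(A.matvec([1.0, 1.0, 1.0]))   # [1.0, 1.0, 2.0]
```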

  • Solving Sparse Systems

    Gaussian Elimination again? Let's look at one elimination step.

  • Solving Sparse Systems

    Gaussian Elimination introduces fill.

  • Fill

    Of course, it depends on the elimination order.
    Finding an elimination order with minimal fill is hopeless (NP-hard): Garey and Johnson GT46; Yannakakis, SIAM JADM 1981.
    O(log n) approximation: Sudipto Guha, FOCS 2000, "Nested Graph Dissection and Approximation Algorithms".
    Ω(n log n) lower bound on fill (Maverick still has not dug up the paper).
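
As a small illustration of fill (my own example, not the slides'), eliminating the hub of a star graph first makes the remaining matrix completely dense, even though the original matrix has only O(n) nonzeros.

```python
# Sketch: one elimination step on the hub of a star graph creates a clique of fill.
import numpy as np

d = 5                                   # number of leaves in the star (hypothetical size)
L = np.zeros((d + 1, d + 1))
L[0, 0] = d                             # hub of the star
for i in range(1, d + 1):
    L[i, i] = 1.0
    L[0, i] = L[i, 0] = -1.0

# Schur complement after eliminating row/column 0 (the hub): fully dense.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]
print(np.count_nonzero(L[1:, 1:]), "nonzeros before;",
      np.count_nonzero(S), "after eliminating the hub")
```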

  • When Fill Matters

    Find A^-1.
    Guess x repeatedly until we guess a solution.
    Gaussian Elimination.

  • Inverses of Sparse Matrices

    ... are not necessarily sparse either!

    (Figure: a sparse matrix B and its inverse B^-1, which is not sparse.)

  • And the winner is…

    Find A^-1.
    Guess x repeatedly until we guess a solution.
    Gaussian Elimination.
    Can we be so lucky?

  • Iterative Methods

    Checkpoint:
    How large linear systems actually arise in practice.
    Why Gaussian Elimination may not be the way to go.

  • The Basic Idea

    Start off with a guess x^(0). Use x^(i) to compute x^(i+1), until convergence.

    We hope that:
    the process converges in a small number of iterations;
    each iteration is efficient.

  • The RF Method [Richardson, 1910]

    (Figure: the test/correct loop between the domain and range of A.)

  • Why should it converge at all?

    (Figure: the same domain/range test/correct loop.)

  • It only converges when…

    Theorem. A first-order stationary iterative method

        x^(i+1) = G x^(i) + c

    converges iff ρ(G) < 1.

    ρ(A) is the maximum absolute eigenvalue of A (its spectral radius).
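
A minimal sketch of the RF iteration x^(i+1) = x^(i) + (b − A x^(i)) and this convergence test, on a hypothetical 2-by-2 system: the iteration matrix is I − A, so the method converges exactly when ρ(I − A) < 1.

```python
# Sketch: RF (Richardson) iteration plus the spectral-radius convergence check.
import numpy as np

def richardson(A, b, iters=200):
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)             # test the residual, then correct
    return x

A = np.array([[1.0, 0.3],
              [0.3, 1.0]])              # hypothetical system with rho(I - A) = 0.3 < 1
b = np.array([1.0, 2.0])

rho = max(abs(np.linalg.eigvals(np.eye(2) - A)))
print("spectral radius of I - A:", rho)           # below 1, so the iteration converges
print(richardson(A, b), np.linalg.solve(A, b))    # the two should agree closely
```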

  • Fate?

    Once we are given the system, we do not have any control over A and b.

    How do we guarantee even convergence?

  • Preconditioning

    Instead of dealing with A and b, we now deal with B^-1 A and B^-1 b.

    The word "preconditioning" originated with Turing in 1948, but his idea was slightly different.

  • Preconditioned RF

    Since we may precompute B^-1 b by solving By = b, each iteration is dominated by computing B^-1 A x^(i): a multiplication step A x^(i) followed by a direct-solve step Bz = A x^(i).

    Hence a preconditioned iterative method is in fact a hybrid.
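
A sketch of this hybrid might look as follows; the LU factorization of B simply stands in for whatever fast direct solver B admits, and the 3-by-3 system and the diagonal choice of B are made up for illustration.

```python
# Sketch: preconditioned RF iteration, one multiply with A and one solve with B per step.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def preconditioned_richardson(A, B, b, iters=100):
    B_factor = lu_factor(B)                 # precompute so each solve with B is cheap
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x                       # multiplication step
        x = x + lu_solve(B_factor, r)       # direct-solve step with the preconditioner
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
B = np.diag(np.diag(A))                     # a simple diagonal choice of B (hypothetical)
b = np.array([1.0, 2.0, 3.0])
print(preconditioned_richardson(A, B, b), np.linalg.solve(A, b))
```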

  • The Art of Preconditioning

    We have a lot of flexibility in choosing B:
    solving Bz = A x^(i) must be fast;
    B should approximate A well, for a low iteration count.

  • Classics

    Jacobi: let D be the diagonal sub-matrix of A; pick B = D.
    Gauss-Seidel: let L be the lower-triangular part of A with zero diagonal; pick B = L + D.
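
In code, the two classic choices are just pieces of A; a short sketch (the example A is hypothetical), whose B could be plugged into the preconditioned iteration sketched earlier:

```python
# Sketch: constructing the Jacobi and Gauss-Seidel preconditioners from A.
import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

D = np.diag(np.diag(A))        # diagonal sub-matrix of A
L = np.tril(A, k=-1)           # strictly lower-triangular part of A

B_jacobi = D                   # Jacobi: B = D
B_gauss_seidel = L + D         # Gauss-Seidel: B = L + D, solvable by forward substitution
```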

  • Combinatorial

    We choose to measure how well B approximates A by comparing combinatorial properties of (the graphs represented by) A and B.

    Hence the term Combinatorial Preconditioner.

  • Questions?

  • Graphs as Matrices

  • Edge-Vertex Incidence Matrix

    Given an undirected graph G = (V, E), let Γ be an |E| × |V| matrix over {-1, 0, 1}.

    For each edge e = (u, v), set Γ_{e,u} to -1 and Γ_{e,v} to 1. All other entries are zero.

  • Edge-Vertex Incidence Matrix

    (Figure: the incidence matrix of an example graph with edges labeled a through j.)

  • Weighted Graphs

    Let W be an |E| × |E| diagonal matrix where W_{e,e} is the weight of the edge e.

  • Laplacian

    The Laplacian of G is defined to be Γ^T W Γ.
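
A small numerical check of this definition, using the same hypothetical 4-node edge list as before: Γ^T W Γ agrees with the Laplacian built edge by edge.

```python
# Sketch: incidence matrix Gamma, weight matrix W, and the identity Gamma^T W Gamma = L.
import numpy as np

n = 4
edges = [(0, 1, 3.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0)]  # (u, v, weight), hypothetical

Gamma = np.zeros((len(edges), n))
W = np.zeros((len(edges), len(edges)))
L = np.zeros((n, n))
for e, (u, v, w) in enumerate(edges):
    Gamma[e, u], Gamma[e, v] = -1.0, 1.0
    W[e, e] = w
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

assert np.allclose(Gamma.T @ W @ Gamma, L)
```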

  • Properties of Laplacians

    Let L = Γ^T W Γ. Prove by example.

  • Properties of Laplacians

    A matrix A is positive semidefinite if x^T A x ≥ 0 for all x.

    Since L = Γ^T W Γ, it's easy to see that x^T L x = (Γx)^T W (Γx) ≥ 0 for all x.

  • Laplacian

    The Laplacian of G is defined to be Γ^T W Γ.

  • Graph Embedding: A Primer

  • Graph Embedding

    Vertex in G → vertex in H.
    Edge in G → path in H.

  • Dilation

    For each edge e in G, define dil(e) to be the number of edges in its corresponding path in H.

  • Congestion

    For each edge e in H, define cong(e) to be the number of embedding paths that use e.
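
Given an explicit embedding, both quantities are easy to compute; here is a sketch with a made-up embedding of a 4-cycle into a 4-node path (the edge and vertex names are hypothetical).

```python
# Sketch: dilation and congestion of an embedding given as edge -> path-of-edges.
from collections import Counter

# Hypothetical embedding of the 4-cycle v1-v2-v3-v4 into the path a-b-c-d.
embedding = {
    ("v1", "v2"): [("a", "b")],
    ("v2", "v3"): [("b", "c")],
    ("v3", "v4"): [("c", "d")],
    ("v4", "v1"): [("a", "b"), ("b", "c"), ("c", "d")],   # the long way around
}

dilation = {e: len(path) for e, path in embedding.items()}
congestion = Counter(h_edge for path in embedding.values() for h_edge in path)

print("max dilation:", max(dilation.values()))      # 3
print("max congestion:", max(congestion.values()))  # 2
```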

  • Support Theory

  • Disclaimer

    The presentation to follow is only essentially correct.

  • Support

    Definition. The support required by a matrix B for a matrix A, both n by n in size, is defined as

        σ(A/B) = min { τ : τ·x^T B x ≥ x^T A x for all x }.

    Think of B supporting A at the bottom.

  • Support with Laplacians

    Life is good when the matrices are Laplacians. Remember the resistive circuit analogy?

  • Power Dissipation

    Lemma. Suppose an n-by-n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then y^T A y is the total power dissipated by G.

  • Circuit-Speak

    Read this aloud in circuit-speak: the support for A by B is the minimum number τ such that, for all possible voltage settings, τ copies of B burn at least as much energy as one copy of A.

  • Congestion-Dilation Lemma

    Given an embedding from G to H,

        σ(L_G / L_H) ≤ (max_e cong(e)) · (max_e dil(e)),

    where L_G and L_H are the Laplacians of G and H.

  • Transitivity

    σ(A/C) ≤ σ(A/B) · σ(B/C).

    Pop quiz: for Laplacians, prove this in circuit-speak.

  • Generalized Condition Number

    Definition. The generalized condition number of a pair of PSD matrices A, B is

        κ(A, B) = σ(A/B) · σ(B/A).

    (A is positive semidefinite iff ∀x, x^T A x ≥ 0.)
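
Numerically, for a pair of positive definite matrices (strict definiteness sidesteps the shared null space of Laplacians), σ(A/B) is the largest generalized eigenvalue of the pencil (A, B); a sketch with hypothetical A and B:

```python
# Sketch: support numbers and the generalized condition number via generalized eigenvalues.
import numpy as np
from scipy.linalg import eigh

A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 2.0]])               # hypothetical preconditioner, positive definite

eig_AB = eigh(A, B, eigvals_only=True)   # generalized eigenvalues of A x = lambda B x, ascending
sigma_A_over_B = eig_AB[-1]              # smallest tau with tau*B - A positive semidefinite
sigma_B_over_A = 1.0 / eig_AB[0]         # likewise for the pencil (B, A)
kappa = sigma_A_over_B * sigma_B_over_A
print(sigma_A_over_B, sigma_B_over_A, kappa)   # 2.0, 1.0, 2.0
```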

  • Preconditioned Conjugate Gradient

    Solving the system Ax = b using PCG with preconditioner B requires at most O(√κ(A, B) · log(1/ε)) iterations to find an ε-approximate solution (measured in the A-norm).

    The convergence rate depends on the actual iterative method used.
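
For reference, a usage sketch of preconditioned CG via SciPy; the 1-D model matrix and the Jacobi-style preconditioner are placeholders for A and for a combinatorial B, and nothing here is prescribed by the slides.

```python
# Sketch: preconditioned CG with SciPy; M applies B^-1 to a residual vector.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")   # 1-D model problem (hypothetical)
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)  # applies B^-1 for B = diag(A)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else "not converged",
      np.linalg.norm(A @ x - b))
```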

  • Support Trees

  • Information Flow

    In many iterative methods, the only operation using A directly is to compute A x^(i) in each iteration.

    Imagine each node is an agent maintaining its value. The update formula specifies how each agent should update its value for round (i+1), given all the values in round i.

  • The Problem With Multiplication

    Only neighbors can communicate in a multi