
Page 1: CS 684

CS 484

Page 2: CS 684

Iterative Methods

Gaussian elimination is considered to be a direct method to solve a system. An indirect method produces a sequence of values that converges to the solution of the system.

Computation is halted in an indirect algorithm when a specified accuracy is reached.

Page 3: CS 684

Why Iterative Methods?

Sometimes we don't need to be exact.
– Input has inaccuracy, etc.
– Only requires a few iterations

If the system is sparse, the matrix can be stored in a different format (see the CSR sketch below).

Iterative methods are usually stable.
– Errors are dampened
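For example, one widely used sparse format is compressed sparse row (CSR); a minimal sketch of the layout (illustrative, not from the slides):

/* Compressed sparse row (CSR): store only the nonzeros of an n x n matrix.
 * Space is O(n + nnz) instead of O(n^2). */
typedef struct {
    int     n;        /* matrix dimension                                       */
    int    *row_ptr;  /* n+1 entries; row i occupies [row_ptr[i], row_ptr[i+1]) */
    int    *col;      /* nnz entries; column index of each nonzero              */
    double *val;      /* nnz entries; the nonzero values                        */
} csr_matrix;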

Page 4: CS 684

Iterative Methods

Consider the following system:

$$\begin{bmatrix} 7 & -6 \\ -8 & 9 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$$

Now write out the equations:
– solve for the ith unknown in the ith equation

$$x_1 = \tfrac{6}{7}x_2 + \tfrac{3}{7} \qquad\qquad x_2 = \tfrac{8}{9}x_1 - \tfrac{4}{9}$$

Page 5: CS 684

Iterative Methods

Come up with an initial guess for each x_i.

Hopefully, the equations will produce better values, which can be used to calculate still better values, and so on, converging to the answer.

Page 6: CS 684

Iterative Methods

Jacobi iteration
– Use all old values to compute new values

 k      x1        x2
 0    0.00000   0.00000
10    0.14865  -0.19820
20    0.18682  -0.24908
30    0.19662  -0.26215
40    0.19913  -0.26551
50    0.19977  -0.26637
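As a concrete check, here is a minimal self-contained C sketch of Jacobi on the 2×2 example (not from the slides); printing every tenth iterate should reproduce the table above.

#include <stdio.h>

/* Jacobi iteration for the example system:
 *    7*x1 - 6*x2 =  3
 *   -8*x1 + 9*x2 = -4
 * Each sweep uses only the previous iterate's values. */
int main(void)
{
    double x1 = 0.0, x2 = 0.0;                    /* initial guess */
    for (int k = 1; k <= 50; k++) {
        double x1_new = (6.0 * x2 + 3.0) / 7.0;   /* solve eq. 1 for x1 */
        double x2_new = (8.0 * x1 - 4.0) / 9.0;   /* solve eq. 2 for x2 */
        x1 = x1_new;
        x2 = x2_new;
        if (k % 10 == 0)
            printf("%2d  %8.5f  %8.5f\n", k, x1, x2);
    }
    return 0;
}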

Page 7: CS 684

Jacobi Iteration

The ith equation has the form:

$$\sum_{j=0}^{n-1} A[i,j]\,x[j] = b[i]$$

which can be rewritten as:

$$x[i] = \frac{1}{A[i,i]}\left(b[i] - \sum_{j \neq i} A[i,j]\,x[j]\right)$$

Page 8: CS 684

Jacobi Iteration

The vector (b − Ax) is zero if and only if we have the exact answer.

Define this vector to be the residual r. Now rewrite the solution equation:

$$x_k[i] = \frac{r[i]}{A[i,i]} + x_{k-1}[i]$$

Page 9: CS 684

void Jacobi(float A[][], float b[], float x[], float epsilon)
{
    float x1[];
    float r[];
    float r0norm;

    // Randomly select an initial x vector
    r = b - Ax;         // involves a matrix-vector multiply, etc.
    r0norm = ||r||2;    // the magnitude of the initial residual
    while (||r||2 > epsilon * r0norm) {
        for (j = 0; j < n; j++)
            x1[j] = r[j] / A[j,j] + x[j];
        x = x1;         // adopt the new iterate before recomputing r
        r = b - Ax;
    }
}

Page 10: CS 684

Parallelization of Jacobi

3 main computations per iteration:
– Inner product (2-norm)
– Loop calculating the x[j]s
– Matrix-vector multiply to calculate r

If A[j,j], b[j], & r[j] are on the same proc., the loop requires no communication.

Inner product and matrix-vector multiply require communication.

Page 11: CS 684

Inner Product (2 norm of residual)

Suppose the data is distributed row-wise. The inner product (2-norm) is then simply a dot product:
– IP = Sum(x[j] * x[j])

This only requires a global sum collapse:
– O(log p)

[Figure: A, x, and b distributed row-wise across processors P0–P3]
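A minimal MPI sketch of this global sum (assuming each process owns nlocal entries of the residual; names are illustrative, not from the slides):

#include <math.h>
#include <mpi.h>

/* 2-norm of a row-distributed vector: each process sums the squares of
 * its own entries, then a global sum collapse -- O(log p) -- combines
 * the partial sums on every process. */
double norm2_distributed(const double *r_local, int nlocal, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;
    for (int j = 0; j < nlocal; j++)
        local += r_local[j] * r_local[j];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return sqrt(global);
}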

Page 12: CS 684

Matrix-Vector Multiplication

Again, the data is distributed row-wise. Each proc. requires all of the elements of the vector to calculate its part of the resulting answer.

This results in an all-to-all gather:
– O(p log p)
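A sketch of this step in MPI (assuming n is divisible by p and A is stored densely; the MPI_Allgather call is the all-to-all gather):

#include <mpi.h>

/* Residual r = b - A*x with A distributed row-wise: each process owns
 * nlocal rows of A plus the matching pieces of b and x.  Every process
 * must first gather the full x vector before it can multiply its rows. */
void residual_rowwise(int n, int nlocal,
                      const double *A_local,  /* nlocal x n, row-major  */
                      const double *b_local,
                      const double *x_local,  /* nlocal entries         */
                      double *x_full,         /* scratch, n entries     */
                      double *r_local,        /* output, nlocal entries */
                      MPI_Comm comm)
{
    /* All-to-all gather: contribute nlocal entries, receive all n. */
    MPI_Allgather(x_local, nlocal, MPI_DOUBLE,
                  x_full,  nlocal, MPI_DOUBLE, comm);

    for (int i = 0; i < nlocal; i++) {
        double ax = 0.0;
        for (int j = 0; j < n; j++)
            ax += A_local[i * n + j] * x_full[j];
        r_local[i] = b_local[i] - ax;
    }
}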

Page 13: CS 684

Jacobi Iteration

Resulting cost for float (4 bytes):

$$T_{comm} = \#\text{iterations} \times (T_{IP} + T_{MVM})$$

$$T_{IP} = \log p \,(t_s + 4\,t_w)$$

$$T_{MVM} = p \log p \,(t_s + 4\,t_w \cdot nrows/p)$$
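For instance, with p = 4 and nrows = 1024, these give $T_{IP} = 2\,(t_s + 4\,t_w)$ and $T_{MVM} = 8\,(t_s + 1024\,t_w)$ per iteration, so the all-to-all gather dominates the communication cost.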

Page 14: CS 684

Iterative Methods

Gauss-Seidel
– Use the new values as soon as they are available

 k      x1        x2
 0    0.00000   0.00000
10    0.21977  -0.24909
20    0.20130  -0.26531
30    0.20008  -0.26659
40    0.20000  -0.26666
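The only change from the Jacobi sketch above is that the sweep updates in place, so the new x1 feeds straight into the x2 update; a minimal C version (again illustrative):

#include <stdio.h>

/* Gauss-Seidel on the same 2x2 system: new values are used as soon
 * as they are available within a sweep. */
int main(void)
{
    double x1 = 0.0, x2 = 0.0;
    for (int k = 1; k <= 40; k++) {
        x1 = (6.0 * x2 + 3.0) / 7.0;   /* uses the previous x2 */
        x2 = (8.0 * x1 - 4.0) / 9.0;   /* uses the x1 just computed */
        if (k % 10 == 0)
            printf("%2d  %8.5f  %8.5f\n", k, x1, x2);
    }
    return 0;
}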

Page 15: CS 684

Gauss-Seidel Iteration

The basic Gauss-Seidel iteration is:

$$x_k[i] = \frac{1}{A[i,i]}\left(b[i] - \sum_{j=0}^{i-1} A[i,j]\,x_k[j] - \sum_{j=i+1}^{n-1} A[i,j]\,x_{k-1}[j]\right)$$

Page 16: CS 684

Gauss-Seidel Iteration

Rule: Always use the newest values of the previously computed variables.

Problem: Sequential?

Gauss-Seidel is indeed sequential if the matrix is dense. Parallelism is a function of the sparsity and the ordering of the equation matrix.

Page 17: CS 684

Gauss-Seidel Iteration

We can increase the possible parallelism by changing the numbering of a system, e.g., a red-black (checkerboard) ordering.

Page 18: CS 684

Parallelizing Red-Black GSI

Partitioning?
– Block checkerboard.

Communication?
– 2 phases per iteration:
  1. compute red cells using values from black cells
  2. compute black cells using values from red cells
– Communication is required for each phase.

Page 19: CS 684

Partitioning

[Figure: the grid partitioned into 2×2 blocks owned by P0, P1, P2, and P3]

Page 20: CS 684

Communication

[Figure: each block exchanges its boundary values with its neighbors among P0–P3]

Page 21: CS 684

Procedure Gauss-SeidelRedBlack

while (error > limit)
    send black values to neighbors
    recv black values from neighbors
    compute red values

    send red values to neighbors
    recv red values from neighbors
    compute black values

    compute error    /* only do every so often */
endwhile
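One possible MPI rendering of the send/recv steps, as a sketch assuming a 1-D row decomposition with one halo row above and below (all names illustrative; at the domain boundary, north or south would be MPI_PROC_NULL). In red-black GSI this would be called once per phase, exchanging the color just computed.

#include <mpi.h>

/* Exchange boundary rows with the north and south neighbors.  The local
 * grid has nrows_local interior rows plus halo rows 0 and nrows_local+1. */
void exchange_halo_rows(double *grid, int ncols, int nrows_local,
                        int north, int south, MPI_Comm comm)
{
    /* Send top interior row up; receive bottom halo row from below. */
    MPI_Sendrecv(&grid[1 * ncols], ncols, MPI_DOUBLE, north, 0,
                 &grid[(nrows_local + 1) * ncols], ncols, MPI_DOUBLE, south, 0,
                 comm, MPI_STATUS_IGNORE);
    /* Send bottom interior row down; receive top halo row from above. */
    MPI_Sendrecv(&grid[nrows_local * ncols], ncols, MPI_DOUBLE, south, 0,
                 &grid[0 * ncols], ncols, MPI_DOUBLE, north, 0,
                 comm, MPI_STATUS_IGNORE);
}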

Page 22: CS 684

Extending Red-Black Coloring

Goal: Produce a graph coloring scheme such that no node has a neighbor of the same color.

Simple finite element and finite difference methods produce graphs with only 4 neighbors.
– Two colors suffice

What about more complex graphs?

Page 23: CS 684

More complex graphs

Use graph coloring heuristics (a greedy sketch follows below). Number the nodes one color at a time.
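A minimal sketch of one common heuristic, sequential greedy coloring over an adjacency matrix (illustrative, not from the slides); the nodes can then be numbered one color at a time:

/* Greedy coloring: visit nodes in order and give each node the smallest
 * color not already used by one of its neighbors.
 * adj[i*n + j] != 0 means nodes i and j share an edge. */
void greedy_color(int n, const int *adj, int *color)
{
    for (int i = 0; i < n; i++)
        color[i] = -1;                 /* mark every node uncolored */
    for (int i = 0; i < n; i++) {
        int c = 0, clash = 1;
        while (clash) {                /* smallest color free among neighbors */
            clash = 0;
            for (int j = 0; j < n; j++)
                if (adj[i * n + j] && color[j] == c) { clash = 1; c++; break; }
        }
        color[i] = c;
    }
}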

Page 24: CS 684

Successive Overrelaxation

Devised to speed up the convergence of Gauss-Seidel:
– apply extrapolation using a weighted average between current & previous iterates
– choose a weighting that will accelerate the rate of convergence

Page 25: CS 684

SOR

Gauss-Seidel iteration:

$$x_k[i] = \frac{1}{A[i,i]}\left(b[i] - \sum_{j=0}^{i-1} A[i,j]\,x_k[j] - \sum_{j=i+1}^{n-1} A[i,j]\,x_{k-1}[j]\right)$$

Don't compute directly into x; call the Gauss-Seidel value ∂[i]:

$$\partial[i] = \frac{1}{A[i,i]}\left(b[i] - \sum_{j=0}^{i-1} A[i,j]\,x_k[j] - \sum_{j=i+1}^{n-1} A[i,j]\,x_{k-1}[j]\right)$$

Compute the weighted average:

$$x_k[i] = x_{k-1}[i] + \omega\,(\partial[i] - x_{k-1}[i])$$

Page 26: CS 684

SOR

SOR Equation:

$$x_k[i] = x_{k-1}[i] + \omega\,(\partial[i] - x_{k-1}[i])$$

Choose ω in the range (0, 2):
– technically, ω < 1 is underrelaxation
– if you choose ω ≥ 2, it won't converge
– if you choose ω poorly, convergence may be slow or fail

Page 27: CS 684

SOR Algorithm

Choose an initial guess for x[i]
for k = 1, 2, ...
    for i = 1, 2, ..., n
        ∂ = 0
        for j = 1, 2, ..., i-1
            ∂ = ∂ + a[i,j] * xk[j]
        end
        for j = i+1, ..., n
            ∂ = ∂ + a[i,j] * xk-1[j]
        end
        ∂ = (b[i] - ∂) / a[i,i]
        xk[i] = xk-1[i] + ω * (∂ - xk-1[i])
    end
    check for convergence
end
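A compact C rendering of one sweep (a sketch assuming a dense row-major A; updating x in place means the newest values are picked up automatically, so the two inner loops collapse into one scan over j != i):

/* One SOR sweep: compute the Gauss-Seidel value for each unknown, then
 * take a weighted average with the previous iterate. */
void sor_sweep(int n, const double *A, const double *b,
               double *x, double omega)
{
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int j = 0; j < n; j++)
            if (j != i)
                s += A[i * n + j] * x[j];        /* newest values already in x */
        double gs = (b[i] - s) / A[i * n + i];   /* Gauss-Seidel value (∂ above) */
        x[i] += omega * (gs - x[i]);             /* SOR weighted average */
    }
}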

Page 28: CS 684

Parallelizing SOR

Just like Gauss-Seidel:

Create the dependency graph
– Color
– Number by color

Phases
– Communicate nodes of previous phase
– Compute nodes of this phase
– Move to next phase

Page 29: CS 684

Conclusion

Iterative methods are used when an exact answer is not computable or not needed.

Gauss-Seidel converges faster than Jacobi, but its parallelism is trickier.

Finite element codes are simply systems of equations.
– Solve with either Jacobi or Gauss-Seidel

Page 30: CS 684

Consider

Are systems of equations and finite element methods related?