This article was downloaded by: [HEAL-Link Consortium]
On: 19 May 2007
Access Details: [subscription number 758077518]
Publisher: Taylor & Francis. Informa Ltd, Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

International Journal of Computer Mathematics. Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713455451

Block preconditioned iterative methods
D. J. Evans (a); N. M. Missirlis (b)
(a) Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire, U.K.
(b) Department of Applied Mathematics, University of Athens, Panepistimiopolis, Athens, Greece

To cite this article: Evans, D. J. and Missirlis, N. M., 'Block preconditioned iterative methods', International Journal of Computer Mathematics, 15:1, 77-95.
DOI: 10.1080/00207168408803403
URL: http://dx.doi.org/10.1080/00207168408803403

Full terms and conditions of use: http://www.informaworld.com/terms-and-conditions-of-access.pdf

This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, re-distribution, re-selling, loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty, express or implied, or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.

© Taylor and Francis 2007

Block preconditioned iterative methods



Intern. J. Computer Math., 1984, Vol. 15, pp. 77-95. 0020-7160/84/1501-0077 $18.50/0. © Gordon and Breach Science Publishers Inc., 1984. Printed in Great Britain.

Block Preconditioned Iterative Methods

D. J. EVANS, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire, U.K.

and

N. M. MISSIRLIS, Department of Applied Mathematics, University of Athens, Panepistimiopolis 621, Athens, Greece

In this paper the application of preconditioning is extended to block iterative methods and shown to yield improved convergence rates for linear systems derived from the generalised Dirichlet problem.

(Received September, 1982)

KEYWORDS: Preconditioned simultaneous displacement, line iteration, semi-iterative method.

C.R. CATEGORIES: 5.14.

1. INTRODUCTION

In previous work [4, 5, 6] various preconditioned iterative schemes were considered where each component of u^{(n)} was determined explicitly, i.e., by using already computed approximate values of the other unknowns. These schemes are called point methods in order to distinguish them from group (implicit) iterative methods, in which we first assign the equations to groups and then solve each block of equations for the corresponding unknowns u_i, treating the other values u_j as known.

A special case of a grouping is a partitioning where for some


integers n_1, n_2, …, n_q such that 1 ≤ n_1 < n_2 < … < n_q = N, the equations for i = 1, 2, …, n_1 belong to the first group, those for i = n_1 + 1, n_1 + 2, …, n_2 belong to the second group, etc. The methods which are based on partitionings are known as block methods.

Arms, Gates and Zondek [1] first generalised SOR to a block method and analysed its convergence rate. In addition, Varga [10] showed that the rate of convergence of the two-line SOR method with optimum ω is approximately twice that of point SOR, whereas Parter [8] showed that the k-line SOR method with optimum ω converges approximately (2k)^{1/2} times as fast as point SOR. Finally, Ehrlich [3] considered the line SSOR method for the five-point discrete Dirichlet problem and was able to show that its convergence is faster than that of the point SSOR method.

In this paper we extend the preconditioning technique so that we can construct and develop the corresponding block methods of the previously considered iterative procedures [5]. Let us commence by presenting a brief review of some basic concepts concerned with the definition of the block methods.

DEFINITION 1.1 An ordered grouping π of W = {1, 2, …, N} is a subdivision of W into disjoint subsets R_1, R_2, …, R_q such that R_1 + R_2 + … + R_q = W.

We let π_0 denote the ordered grouping defined by R_k = {k}, k = 1, 2, …, N.

Given a matrix A and an ordered grouping π we define the submatrices A_{r,s}, for r, s = 1, 2, …, q, by deleting from A all rows except those corresponding to R_r and all columns except those corresponding to R_s. We can now generalise the concepts of Property A and consistently ordered matrices [12].
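The submatrix extraction just described is straightforward to state in code; a minimal illustrative sketch (the function name and the 0-based group indexing are ours, not the paper's):

```python
import numpy as np

def block_submatrix(A, groups, r, s):
    """A_{r,s}: keep only the rows of A indexed by R_r and the columns
    indexed by R_s (0-based group indices here, unlike the paper)."""
    return A[np.ix_(groups[r], groups[s])]

A = np.arange(16).reshape(4, 4)
groups = [[0, 1], [2, 3]]
print(block_submatrix(A, groups, 0, 1).tolist())  # → [[2, 3], [6, 7]]
```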

Given a matrix A and an ordered grouping π with q groups, we define the q×q matrix Z = (z_{r,s}) by z_{r,s} = 1 if A_{r,s} ≠ 0 or r = s, and z_{r,s} = 0 otherwise.

DEFINITION 1.2 The matrix A has Property A^{(π)} if Z has Property A.†

†We will use the notation B^{(π)} to denote the group form of the matrix B.


DEFINITION 1.3 The matrix A is a π-consistently ordered matrix (π-CO-matrix) if Z is consistently ordered.

DEFINITION 1.4 A matrix A is a generalised π-consistently ordered matrix (π-GCO-matrix) if

det(α C_L^{(π)} + α^{-1} C_U^{(π)} − k D^{(π)})

is independent of α for all α ≠ 0 and for all k.

Here D^{(π)} is the matrix formed from A by replacing with zeros all a_{i,j} unless i and j belong to the same group, whereas C_L^{(π)} and C_U^{(π)} are formed from A by replacing all elements of A by zero except those a_{i,j} such that i and j belong to different groups and such that the group containing i comes after and before, respectively, the group containing j.

From the above analysis we note that the definition of the block methods is based on the splitting (1.2). Thus, following the analysis of the preconditioning techniques, we can regard (1.2) as another splitting of A, and in an analogous way we can develop group forms of the preconditioned methods by using the iterative scheme

u^{(n+1)} = u^{(n)} + τ R^{-1}(b − A u^{(n)}),

where τ (≠ 0) is a real parameter and R is a non-singular conditioning matrix. Next, we let the conditioning matrix have the form

R = D^{(π)}

for any ordered grouping π; then we define the Block Simultaneous Displacement (BSD) method by

u^{(n+1)} = u^{(n)} + τ (D^{(π)})^{-1}(b − A u^{(n)}),

where D^{(π)} is a non-singular matrix. We therefore see that the rate of convergence of the BSD method depends upon the grouping π, since if D^{(π)} = A we solve our system immediately. On the other hand, the cost of inverting D^{(π)} by direct methods limits this observation.
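As an illustrative sketch of the BSD iteration (a hypothetical implementation on a small model matrix, with a hand-chosen grouping and τ; none of these names appear in the paper):

```python
import numpy as np

def bsd_iteration(A, b, blocks, tau, iters=500):
    """Block Simultaneous Displacement:
    u <- u + tau * D_pi^{-1} (b - A u), where D_pi keeps only the
    diagonal blocks of A induced by the ordered grouping."""
    D = np.zeros_like(A)
    for blk in blocks:
        D[np.ix_(blk, blk)] = A[np.ix_(blk, blk)]
    u = np.zeros(len(b))
    for _ in range(iters):
        u = u + tau * np.linalg.solve(D, b - A @ u)
    return u

# 1D model matrix split into two 3x3 blocks (tau = 1 is block Jacobi).
A = 2*np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
u = bsd_iteration(A, b, blocks=[[0, 1, 2], [3, 4, 5]], tau=1.0)
print(np.allclose(A @ u, b, atol=1e-8))  # → True
```

With D^{(π)} equal to the full matrix A the loop converges in one step, which is the trade-off the text points out.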


Further, by letting the conditioning matrix have the form

R = D^{(π)} − C_L^{(π)},

we define the Block Extrapolated Gauss-Seidel (BEGS) method by

u^{(n+1)} = u^{(n)} + τ (D^{(π)} − C_L^{(π)})^{-1}(b − A u^{(n)}).

Finally, we can also define the Block Extrapolated SOR (BESOR) method by letting the conditioning matrix have the form

R = D^{(π)} − ω C_L^{(π)},

where ω is a real parameter. A more compact form of the BESOR method is given by

u^{(n+1)} = (D^{(π)} − ωC_L^{(π)})^{-1}{[(1−τ)D^{(π)} + (τ−ω)C_L^{(π)} + τC_U^{(π)}]u^{(n)} + τb}.

For actual computation with the BESOR method we solve the system,


where the matrices A_{r,s} have been defined earlier and, if we define U_s and B_s similarly, then (1.12) is equivalent to solving,

successively for U_1^{(n+1)}, U_2^{(n+1)}, …, U_q^{(n+1)}.
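This block-by-block forward solution can be sketched as follows (an assumed implementation; with ω = τ = 1 it reduces to block Gauss-Seidel):

```python
import numpy as np

def besor_step(A, b, u, blocks, omega, tau):
    """One BESOR step, u <- u + tau*(D_pi - omega*C_L_pi)^{-1}(b - A u),
    carried out block by block: each diagonal block system is solved in
    turn, using corrections already computed for earlier blocks."""
    r = b - A @ u
    delta = np.zeros_like(u)
    for s, blk in enumerate(blocks):
        i = np.array(blk)
        rhs = tau * r[i]
        for t in range(s):                       # earlier blocks feed in
            j = np.array(blocks[t])
            rhs -= omega * A[np.ix_(i, j)] @ delta[j]
        delta[i] = np.linalg.solve(A[np.ix_(i, i)], rhs)
    return u + delta

# Model problem: with omega = tau = 1 this is block Gauss-Seidel.
A = 2*np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
u = np.zeros(6)
for _ in range(200):
    u = besor_step(A, b, u, [[0, 1, 2], [3, 4, 5]], omega=1.0, tau=1.0)
print(np.allclose(A @ u, b, atol=1e-10))  # → True
```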

The conditions under which the previous schemes converge are similar to those for their point versions, which have been thoroughly considered in [6]. However, the derivation of a relation between the eigenvalues of the preconditioned matrix (I − ωL^{(π)})^{-1}(D^{(π)})^{-1}A and of B^{(π)} = C_L^{(π)} + C_U^{(π)}, similar to that between (I − ωL)^{-1}D^{-1}A and B, is possible for π-GCO matrices. We can therefore generalise Theorem 3.1 of [6] as follows.

THEOREM 1.1 Let A be a π-GCO matrix such that D^{(π)} is non-singular. If B^{(π)} has real eigenvalues μ_i^{(π)}, i = 1, 2, …, N, with μ̲^{(π)} = min_i |μ_i^{(π)}| and μ̄^{(π)} = max_i |μ_i^{(π)}| < 1 such that 1 − (μ̄^{(π)})² < √(1 − (μ̲^{(π)})²), then S(𝒮^{(π)}_{τ,ω}) is minimised for,

and its corresponding value is given by the expression,

Moreover, if μ̲^{(π)} = 0 then in a similar manner it can be proved that ω_1^{(π)} = τ_1^{(π)} and the BESOR method degenerates into the classical BSOR method.

2. THE BLOCK PRECONDITIONED SIMULTANEOUS DISPLACEMENT (BPSD) METHOD

For any ordered grouping π, we now let the conditioning matrix


have the form,

R = (D^{(π)} − ωC_L^{(π)})(D^{(π)})^{-1}(D^{(π)} − ωC_U^{(π)}),    (2.1)

which yields the Block Preconditioned Simultaneous Displacement (BPSD) method defined by,

u^{(n+1)} = u^{(n)} + τ[(D^{(π)} − ωC_L^{(π)})(D^{(π)})^{-1}(D^{(π)} − ωC_U^{(π)})]^{-1}(b − Au^{(n)}),    (2.2)

where ω is a real parameter. For the convergence analysis of the method it is convenient to write (2.2) in the following form,

However, neither of the above forms of BPSD is suitable for actual computation. This difficulty is overcome by expressing (2.2) in a two-level form, by

which is equivalent to,


and,

The above scheme first successively computes U_1^{(n+1/2)}, U_2^{(n+1/2)}, …, U_q^{(n+1/2)} using the BESOR method. It then successively computes U_1^{(n+1)}, U_2^{(n+1)}, …, U_q^{(n+1)} using (2.7b).

By expressing one BPSD iteration as two half-iterations (see (2.6)) we can exploit the fact that it is not necessary to recompute the quantity C_U^{(π)}u^{(n)} in (2.6), thus reducing the computational work involved considerably.
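The two half-sweeps can be illustrated with a point-version sketch (assumed forms for D, C_L, C_U taken from the splitting A = D − C_L − C_U; a block version would substitute the block analogues):

```python
import numpy as np

def psd_step(A, b, u, omega, tau):
    """One PSD step with conditioning matrix
    R = (D - omega*C_L) D^{-1} (D - omega*C_U):
    a forward half-sweep followed by a backward half-sweep."""
    D = np.diag(np.diag(A))
    C_L = -np.tril(A, -1)          # A = D - C_L - C_U
    C_U = -np.triu(A, 1)
    r = b - A @ u
    y = np.linalg.solve(D - omega * C_L, r)           # forward sweep
    delta = np.linalg.solve(D - omega * C_U, D @ y)   # backward sweep
    return u + tau * delta

A = 2*np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
u = np.zeros(6)
for _ in range(300):
    u = psd_step(A, b, u, omega=1.0, tau=1.0)
print(np.allclose(A @ u, b, atol=1e-10))  # → True
```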

Evidently, by (2.4), (2.5) we see that the BPSD method is completely consistent if D^{(π)} is non-singular and τ ≠ 0. Moreover, an analysis similar to that developed for the point PSD method in [5], [4] can be applied in a straightforward manner to the BPSD method. Before we proceed with a more detailed analysis of the behaviour of S(𝒮^{(π)}_{τ,ω}) we determine the spectral radius of the point PSD method applied to a smaller system derived from Au = b.

THEOREM 2.1 If A is a symmetric and positive definite matrix and if A_0 is obtained from A by deleting certain rows and the corresponding columns of A, then

P(ℬ_{ω,0}) ≤ P(ℬ_ω),    (2.8)

where P(·) denotes the P-condition number.

Proof Let λ(ℬ_ω) and Λ(ℬ_ω) denote the smallest and largest eigenvalues of ℬ_ω, respectively.

Since ℬ_{ω,0} is similar to the symmetric and positive definite matrix,


ℬ̃_{ω,0} = D_0^{1/2}(D_0 − ωC_0^*)^{-1}A_0(D_0 − ωC_0)^{-1}D_0^{1/2}    (2.11)

and v_0 is an eigenvector associated with Λ(ℬ_{ω,0}), then

where

or

Next, we augment w_0 with zero components (at the positions which were deleted from A to form A_0) to form w, and define v such that

Evidently, from the definition of w and v we have

and the influence of the added rows and columns of A is annihilated by the zero components of w. Further, by the definition of w, the right hand side of (2.15) has the same components as the right hand side of (2.14), plus additional ones. Since D^{1/2} is diagonal, the components of v are the same as the components of v_0, plus additional ones. If λ(ℬ_ω) denotes the smallest eigenvalue of ℬ_ω, then we have


Similarly, we can prove

where Λ(ℬ_{ω,0}) and Λ(ℬ_ω) denote the largest eigenvalues of ℬ_{ω,0} and ℬ_ω, respectively.

Hence, if A is positive definite, then (2.8) follows from (2.17) and (2.18) and the proof of the theorem is complete.

Since S(𝒮_{τ,ω}) is an increasing function of P(ℬ_ω), it follows that S(𝒮_{τ,ω,0}) ≤ S(𝒮_{τ,ω}). Although Theorem 2.1 applies only to the point PSD method, the numerical results (see Table I) indicate that the theorem is probably true for at least certain other partitionings π.

The analysis for the determination of good estimates for τ, the preconditioning parameter ω, and the spectral radius of 𝒮^{(π)}_{τ,ω} is similar to that developed for the point PSD method [5, 4]. Consequently we can easily derive the conclusion that the BPSD method produces a gain of approximately a factor of 2 in the rate of convergence as compared with the BSSOR method.

3. COMPARISON OF LINE PSD AND POINT PSD METHODS

As an example of a block method we will choose the partitioning by lines of mesh points (x, y) with y constant, where the ordering is with increasing y. In this case the system is partitioned such that all the equations with y constant are grouped together and solved simultaneously. In the literature, this partitioning is frequently referred to as "line" iteration [11].
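Each line solve in such a method is a tridiagonal system; a standard sketch of its direct O(m) solution (the Thomas algorithm, which the paper does not spell out):

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: forward elimination then back substitution,
    O(m) work for one line of m unknowns."""
    m = len(diag)
    c = [0.0] * m   # modified upper-diagonal coefficients
    d = [0.0] * m   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, m):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < m - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = [0.0] * m
    x[-1] = d[-1]
    for i in range(m - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# One "line" of the model problem: tridiag(-1, 2, -1), solution (1, 1, 1).
x = solve_tridiagonal([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1])
print(all(abs(xi - 1.0) < 1e-12 for xi in x))  # → True
```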

Subsequently, this partitioning will be denoted by π_1, whereas π_0 will be used to denote the point form of the method.

By using the standard 5-point finite difference equation

to represent the discretised generalised Dirichlet problem we can


exhibit the line PSD method (LPSD) in componentwise form (see (2.7)): along each line y = jh, the first half-iteration couples the unknowns u^{(n+1/2)}_{i-1,j}, u^{(n+1/2)}_{i,j}, u^{(n+1/2)}_{i+1,j}, so each half-step requires the solution of a tridiagonal system per line.

Further, by (2.3) the LPSD method can be written in the matrix form,

where 𝒮^{(π_1)}_{τ,ω} and c^{(π_1)}_{τ,ω} are given by (2.4), (2.5), respectively, with π replaced by π_1. For the estimation of the rate of convergence we consider the application of Theorem 3.6 of [5]. For this case Young [12] has shown that,

implying that from Theorem 3.6 of [5] we obtain,

where the corresponding value of P(ℬ^{(π_1)}_ω) is given by


In particular, if we consider these results applied directly to systems of linear equations arising from the five-point difference equation of the model problem, i.e. the Laplace equation in the unit square, we have [11],

M^{(π_1)} = S(B^{(π_1)}) = cos πh/(2 − cos πh) ≈ 1 − π²h²

as compared with

for sufficiently small h. From (3.3) and (3.5) we note that generally in this case the spectral radius of 𝒮^{(π_1)}_{τ,ω} is given by

which by (3.6) yields

Therefore the rate of convergence of the LPSD method, for h sufficiently small, is approximately the same as that of the line SOR method (LSOR). But if we have the additional restriction μ̲^{(π_1)} ≤ M^{(π_1)}/4, we note that, as in the point version, LPSD has an improved rate of convergence over LSOR (see [5]). However, due to the additional work involved in one LPSD iteration, even if Niethammer's [7] reduction scheme is applied, this probably does not justify the gain in convergence, thus making the method less attractive than LSOR. Alternatively, LPSD should be used instead of LSSOR, as both methods involve approximately the same computational work but the former possesses twice the


convergence rate of the latter for large linear systems. Moreover, since it is known that

we obtain,

for sufficiently small h. In other words, for small h, the LPSD method in the unit square, or a subset thereof, yields an increase of approximately 40% in the rate of convergence over the point PSD method. Also, another conclusion reached here is that the improvement (3.11) in the ratios of the rates of convergence of the PSD methods is a fixed factor, independent of the mesh size h, in contrast to the ADI schemes [9].

Finally, the semi-iterative form of LPSD can be formulated as,

where,

and

with,

This is a linear non-stationary method of second degree with a known improvement in convergence at the expense of requiring extra storage for one additional vector.
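The second-degree acceleration follows the standard Chebyshev semi-iteration pattern; a generic sketch (assuming a basic scheme u ← u + M^{-1}(b − Au) whose iteration matrix has real eigenvalues in [−σ, σ]):

```python
import numpy as np

def chebyshev_semi_iteration(A, b, M_inv, sigma, iters=80):
    """Chebyshev (semi-iterative) acceleration of the basic scheme
    u <- u + M_inv @ (b - A u), whose iteration matrix is assumed to
    have real eigenvalues in [-sigma, sigma].  Second degree: only one
    extra vector (the previous iterate) is stored."""
    u_prev = np.zeros_like(b)
    u = u_prev + M_inv @ (b - A @ u_prev)      # first step, rho_1 = 1
    rho = 2.0 / (2.0 - sigma**2)               # rho_2
    for _ in range(iters):
        u_next = rho * (u + M_inv @ (b - A @ u)) + (1.0 - rho) * u_prev
        u_prev, u = u, u_next
        rho = 1.0 / (1.0 - 0.25 * sigma**2 * rho)   # rho_{n+1}
    return u

# Jacobi (M = D) on the 1D model matrix; sigma = rho(Jacobi) = cos(pi/7).
A = 2*np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
u = chebyshev_semi_iteration(A, b, np.eye(6)/2.0, np.cos(np.pi/7))
print(np.allclose(A @ u, b, atol=1e-6))  # → True
```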

Clearly, the rate of convergence of the LPSD-SI method is given by,


hence

for sufficiently small h. This result implies that for the unit square, or a subset thereof, there is a gain of approximately a factor of 1.2 in using the LPSD method with semi-iteration as compared with the point form. However, in order to achieve this relatively small improvement for the former scheme in terms of overall computational effort one should carry out the method using a normalised block iteration scheme (see [2]).

4. COMPUTATIONAL RESULTS

In order to test our theoretical results, the Laplace equation was solved in three different regions as shown in Figure 1. In each case, the unique solution was the vector ū with all its components equal to zero, while the initial guess was the vector u^{(0)} with all its components equal to unity in the interior of the regions and zero on the boundary. The criterion used for convergence was ‖u^{(n)}‖_∞ ≤ 10^{-6}.
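The experimental setup can be reproduced in outline; the following stand-in sketch uses point Jacobi on Region I (not the paper's LPSD/LSSOR) with the same zero solution, unit interior initial guess, and stopping test:

```python
import numpy as np

def count_iterations(m, tol=1e-6):
    """Laplace on the unit square with zero boundary values: the exact
    solution is zero, so the iterate itself is the error.  Start from
    all-ones interior values and iterate until ||u||_inf <= tol.
    Point Jacobi is used purely as the simplest stand-in scheme."""
    u = np.ones((m, m))          # m x m interior unknowns, h = 1/(m+1)
    n = 0
    while np.max(np.abs(u)) > tol:
        padded = np.pad(u, 1)    # zero Dirichlet boundary
        u = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                    + padded[1:-1, :-2] + padded[1:-1, 2:])
        n += 1
    return n

n = count_iterations(9)          # h = 0.1
print(n)
```

For h = 0.1 this takes a few hundred sweeps, which is the kind of count the iteration tables record.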

Region I: Unit square.

Region II: Unit square with a square removed from one corner.

Region III: Unit square with a square removed from the center.

FIGURE 1


Although the problems considered here are relatively simple because of the boundary conditions, the general behaviour of the iterative procedures is expected to be typical of more complicated problems. The only change needed would be non-trivial boundary conditions.

The ordering considered in our experiments was the natural one for both the line and point PSD methods.

Table I contains the optimum values of the preconditioning parameter ω^{(π_1)}, the acceleration parameter τ^{(π_1)}, the maximum Λ(ℬ^{(π_1)}_ω) and minimum λ(ℬ^{(π_1)}_ω) eigenvalues, and the P-condition number of the preconditioned matrix ℬ^{(π_1)}_ω, as well as the spectral radius of the LSSOR iteration matrix. It also contains the number of iterations of LPSD and LSSOR, which were applied under the same conditions to solve the previously described problems for different values of the mesh size. For comparison we also present the number of iterations using the LPSD-SI method.

A study of Table I seems to imply that a monotonicity theorem for π_1 may be valid (and probably for any partitioning π). However, this remains to be proved. One may also notice immediate confirmation of the fact that the LPSD method is asymptotically 2 times as fast as the LSSOR method in all the cases examined. Although we predicted theoretically an improvement of about 40% in the rate of convergence of LPSD over the point PSD method (see (3.11)), the numerical results show that this gain is slightly greater for Problem I in the unit square. However, in order to achieve this improvement in terms of overall computational effort one should carry out the method using a normalised block iteration scheme as described in [2].

From (3.9) we have that for LPSD, and for any region, we can find an ω^{(π_1)} such that the rate of convergence is O(h); hence one expects the graph of log N versus log h^{-1}, where N is the number of iterations, to be a straight line with slope approximately unity. In general, a slope α indicates an O(h^α) convergence rate. From Figure 2 we see that for all regions the rate of convergence is approximately O(h).
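Reading a slope off such a log-log plot amounts to a least-squares fit; a small sketch (the sample counts below are hypothetical, chosen to grow linearly in h^{-1}):

```python
import numpy as np

def fitted_slope(h_inverse, iteration_counts):
    """Least-squares slope of log(N) against log(1/h); a slope alpha
    indicates an O(h^alpha) convergence rate."""
    x = np.log(np.asarray(h_inverse, dtype=float))
    y = np.log(np.asarray(iteration_counts, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# Hypothetical iteration counts growing linearly in 1/h.
print(round(fitted_slope([10, 20, 40, 80], [35, 70, 140, 280]), 2))  # → 1.0
```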


TABLE I


FIGURE 2 Determination of rate of convergence attained for Region I using LPSD (×) and LSSOR (○) with optimum parameters; horizontal axis h^{-1}, h the mesh size.


FIGURE 2 Determination of rate of convergence attained for Region II using LPSD and LSSOR with optimum parameters (fitted slopes 0.986 and 1.1030); horizontal axis h^{-1}, h the mesh size.


FIGURE 2 Determination of rate of convergence attained for Region III using LPSD and LSSOR with optimum parameters (fitted slopes 1.014 and 0.915); horizontal axis h^{-1}, h the mesh size.


References

[1] R. J. Arms, L. D. Gates and B. Zondek, A method of block iteration, J. SIAM 4 (1956), 220-229.

[2] E. H. Cuthill and R. S. Varga, A method of normalised block iteration, J.A.C.M. 6 (1959), 236-244.

[3] L. W. Ehrlich, The block symmetric successive overrelaxation method, J. SIAM 12 (1964), 807-826.

[4] D. J. Evans and N. M. Missirlis, The Preconditioned Simultaneous Displacement method (PSD method) for elliptic difference equations, M.A.C.S. 22 (1980), 256-263.

[5] D. J. Evans and N. M. Missirlis, Preconditioned iterative methods for the numerical solution of elliptic partial differential equations, pp. 115-178 in Preconditioning Methods, Analysis and Applications, ed. D. J. Evans, Gordon and Breach, 1983.

[6] N. M. Missirlis and D. J. Evans, The Extrapolated Successive Overrelaxation (ESOR) method for consistently ordered matrices, to be published (1984).

[7] W. Niethammer, Relaxation bei komplexen Matrizen, Math. Zeitschr. 86 (1964), 34-40.

[8] S. Parter, Multi-line iterative methods for elliptic difference equations and fundamental frequencies, Num. Math. 3 (1961), 305-319.

[9] D. W. Peaceman and H. Rachford, The numerical solution of parabolic and elliptic differential equations, J. SIAM 3 (1955), 28-41.

[10] R. S. Varga, Factorisation and normalised iterative methods, pp. 121-142 in Boundary Problems in Differential Equations, ed. R. E. Langer, Univ. of Wisconsin, 1960.

[11] R. S. Varga, Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, New Jersey, 1962.

[12] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.