

Applied Mathematics and Computation 242 (2014) 694–706


Max norm estimation for the inverse of block matrices

Ljiljana Cvetković a,⇑, Ksenija Doroslovački b

a Department of Mathematics and Informatics, Faculty of Science, University of Novi Sad, Serbia
b Faculty of Technical Sciences, University of Novi Sad, Serbia

Keywords: Maximum norm; Inverse matrix; H-matrices; Block H-matrices

Abstract

The maximum norm bound of the inverse of a given matrix is an important issue in a wide range of applications. Motivated by this fact, we will extend the list of matrix classes for which upper bounds for max norms can be obtained. These classes are subclasses of block H-matrices, and they stand in a general position with the corresponding point-wise classes. The efficiency of the new results will be illustrated by numerical examples.

© 2014 Elsevier Inc. All rights reserved.

http://dx.doi.org/10.1016/j.amc.2014.06.035
0096-3003/© 2014 Elsevier Inc. All rights reserved.

⇑ Corresponding author. E-mail addresses: [email protected], [email protected] (L. Cvetković).

1. Motivation

It is well known that systems generated by discretizing partial differential equations with the finite element method or finite difference methods usually have a block structure. This was the main motivation for constructing and investigating several block matrix splitting iterative methods, such as the parallel decomposition-type relaxation methods (see [8,4]), the parallel hybrid iteration methods (see [3,6]), and the parallel blockwise matrix multisplitting and two-stage multisplitting iteration methods (see [14,5,7,2]). For the convergence analysis of these methods it is very useful to know a good estimation of the norm of the matrix inverse.

On the other hand, for the error analysis of any linear system of the form $Ax = b$, an estimation of the norm of the inverse of the matrix $A$ plays a crucial role.

Up to now, the only estimation for $\|A^{-1}\|_\infty$, where $A$ has a block structure, is the famous Varah bound [21], which is applicable only to block SDD matrices. Here, we will prove several new estimations for $\|A^{-1}\|_\infty$, assuming that $A$ belongs to some wider classes of block matrices.

2. Introduction

We start with some preliminaries. Throughout this paper, we denote by $N := \{1, 2, \ldots, n\}$ the set of indices. Given a matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$, we define

$$r_i(A) := \sum_{j \in N \setminus \{i\}} |a_{ij}|, \quad i \in N, \qquad r_i^S(A) := \sum_{j \in S \setminus \{i\}} |a_{ij}|, \quad i \in N,$$


$$h_1(A) := r_1(A), \qquad h_i(A) := \sum_{j=1}^{i-1} \frac{|a_{ij}|\, h_j(A)}{|a_{jj}|} + \sum_{j=i+1}^{n} |a_{ij}|, \quad i \in N \setminus \{1\}, \tag{1}$$

$$h_1^S(A) := r_1^S(A), \qquad h_i^S(A) := \sum_{j=1}^{i-1} \frac{|a_{ij}|\, h_j^S(A)}{|a_{jj}|} + \sum_{j=i+1,\ j \in S}^{n} |a_{ij}|, \quad i \in N \setminus \{1\}. \tag{2}$$

Obviously, $r_i(A) = r_i^S(A) + r_i^{\overline{S}}(A)$ and $h_i(A) = h_i^S(A) + h_i^{\overline{S}}(A)$, where $\overline{S} := N \setminus S$. Also, we define the values $z_i(A)$, $i \in N$, recursively:

$$z_1(A) := 1, \qquad z_i(A) := \sum_{j=1}^{i-1} \frac{|a_{ij}|}{|a_{jj}|}\, z_j(A) + 1, \quad i \in N \setminus \{1\}. \tag{3}$$
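The quantities $r_i(A)$, $h_i(A)$ and $z_i(A)$ are cheap to evaluate directly from the recursions above. The following minimal NumPy sketch (ours, not part of the paper; hypothetical function names, 0-based indexing) illustrates the computation:

```python
import numpy as np

def row_sums_r(A):
    """r_i(A): off-diagonal absolute row sums."""
    B = np.abs(A)
    return B.sum(axis=1) - np.diag(B)

def nekrasov_h(A):
    """h_i(A) from recursion (1)."""
    B = np.abs(A)
    d = np.diag(B)
    n = B.shape[0]
    h = np.zeros(n)
    h[0] = B[0].sum() - B[0, 0]                      # h_1(A) = r_1(A)
    for i in range(1, n):
        h[i] = np.dot(B[i, :i], h[:i] / d[:i]) + B[i, i + 1:].sum()
    return h

def z_values(A):
    """z_i(A) from recursion (3)."""
    B = np.abs(A)
    d = np.diag(B)
    n = B.shape[0]
    z = np.ones(n)
    for i in range(1, n):
        z[i] = np.dot(B[i, :i] / d[:i], z[:i]) + 1.0
    return z
```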

By $\pi = \{p_j\}_{j=0}^{\ell}$ we denote a partition of the index set $N$ if the nonnegative integers $p_j$, $j = 0, 1, \ldots, \ell$, satisfy the condition

$$p_0 := 0 < p_1 < p_2 < \cdots < p_\ell := n.$$

Then, by this partition, an $n \times n$ matrix $A$ is partitioned into $\ell \times \ell$ blocks,

$$A = [A_{ij}]_{\ell \times \ell}, \qquad A_{ij} \in \mathbb{C}^{(p_i - p_{i-1}) \times (p_j - p_{j-1})}, \quad i, j = 1, 2, \ldots, \ell. \tag{4}$$

In this paper, we will present several possibilities for estimating the maximum norm of the inverse of partitioned matrices of type (4). As a starting point, we will use some known results related to the point-wise case. Then, using two different types of block generalizations, we will prove new estimations for several different classes of partitioned matrices of type (4). The usefulness and efficiency of the new estimations will be illustrated by numerical examples.

As a start, let us recall some well-known (point-wise) classes of matrices. Among them, the widest one is the class of nonsingular H-matrices, defined in the following way.

Definition 1. A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called a nonsingular H-matrix if its comparison matrix $\mathcal{M}(A) = [\mu_{ij}] \in \mathbb{C}^{n,n}$, defined by

$$\mu_{ij} = \begin{cases} |a_{ii}|, & i = j, \\ -|a_{ij}|, & i \neq j, \end{cases}$$

is an M-matrix, i.e., $\mathcal{M}(A)^{-1} \geq 0$.

A very useful property of nonsingular H-matrices is given by the following theorem (see [9]).

Theorem 1. If $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is a nonsingular H-matrix, then

$$|A^{-1}| \leq \mathcal{M}(A)^{-1}.$$

The most important subclass of nonsingular H-matrices is the class of strictly diagonally dominant (SDD) matrices, defined as follows.

Definition 2. A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called an SDD matrix if

$$|a_{ii}| > r_i(A) \quad \text{for all } i \in N.$$

Besides this class, three more subclasses of nonsingular H-matrices will be important for the considerations that follow: the S-SDD class considered in [13], the Nekrasov class defined in [17], and the S-Nekrasov class introduced in [12].

Definition 3. A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called an S-SDD matrix if

$$|a_{ii}| > r_i^S(A) \ \text{ for all } i \in S, \quad \text{and} \quad \bigl(|a_{ii}| - r_i^S(A)\bigr)\bigl(|a_{jj}| - r_j^{\overline{S}}(A)\bigr) > r_i^{\overline{S}}(A)\, r_j^S(A) \ \text{ for all } i \in S,\ j \in \overline{S},$$

where $S$ is an arbitrary nonempty proper subset of $N$.


Definition 4. A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called a Nekrasov matrix if

$$|a_{ii}| > h_i(A) \quad \text{for all } i \in N.$$

Definition 5. A matrix $A = [a_{ij}] \in \mathbb{C}^{n,n}$ is called an S-Nekrasov matrix if

$$|a_{ii}| > h_i^S(A) \ \text{ for all } i \in S, \quad \text{and} \quad \bigl(|a_{ii}| - h_i^S(A)\bigr)\bigl(|a_{jj}| - h_j^{\overline{S}}(A)\bigr) > h_i^{\overline{S}}(A)\, h_j^S(A) \ \text{ for all } i \in S,\ j \in \overline{S}.$$

Here $S$ is, again, an arbitrary nonempty proper subset of $N$.
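Definitions 2–5 are straightforward to check numerically. As an illustration, the sketch below (ours; hypothetical helper names, with $S$ given as a set of 0-based indices) tests the two S-SDD conditions of Definition 3; the Nekrasov and S-Nekrasov tests are analogous, with $h_i$ and $h_i^S$ in place of $r_i$ and $r_i^S$.

```python
import numpy as np

def r_subset(A, S):
    """r_i^S(A): absolute row sums over the columns in S, excluding the diagonal."""
    B = np.abs(A)
    return np.array([sum(B[i, j] for j in S if j != i) for i in range(B.shape[0])])

def is_S_SDD(A, S):
    """Check the two S-SDD conditions of Definition 3 for the index set S."""
    n = A.shape[0]
    Sbar = [j for j in range(n) if j not in S]
    d = np.abs(np.diag(A))
    rS, rSbar = r_subset(A, S), r_subset(A, Sbar)
    if not all(d[i] > rS[i] for i in S):
        return False
    return all((d[i] - rS[i]) * (d[j] - rSbar[j]) > rSbar[i] * rS[j]
               for i in S for j in Sbar)
```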

3. Two possibilities for block generalizations

For any block matrix $A = [A_{ij}]_{\ell \times \ell}$ of the form (4), we will construct two, generally different, comparison $\ell \times \ell$ real matrices. We will denote them by $\langle A \rangle^{\pi}$ and $\rangle A \langle^{\pi}$, in order to emphasize that they depend on the partition $\pi$. The first comparison matrix, $\langle A \rangle^{\pi}$, is constructed in the same manner as in [22], while the second one, $\rangle A \langle^{\pi}$, is constructed as in [19].

The first comparison matrix $\langle A \rangle^{\pi} = [l_{ij}]$ is defined in the following way:

$$l_{ij} := \begin{cases} \bigl(\|A_{ii}^{-1}\|_\infty\bigr)^{-1}, & i = j \text{ and } A_{ii} \text{ is nonsingular}, \\ 0, & i = j \text{ and } A_{ii} \text{ is singular}, \\ -\|A_{ij}\|_\infty, & i \neq j. \end{cases}$$

The second comparison matrix $\rangle A \langle^{\pi} = [m_{ij}]$ is defined by

$$m_{ij} := \begin{cases} 1, & i = j \text{ and } A_{ii} \text{ is nonsingular}, \\ -\|A_{ii}^{-1} A_{ij}\|_\infty, & i \neq j \text{ and } A_{ii} \text{ is nonsingular}, \\ 0, & \text{otherwise}. \end{cases}$$

It is important to say that all classes of partitioned matrices that we will consider here have all their diagonal blocks nonsingular, which means that our comparison matrices will always look like:

$$\langle A \rangle^{\pi} = \begin{bmatrix} (\|A_{11}^{-1}\|_\infty)^{-1} & -\|A_{12}\|_\infty & \cdots & -\|A_{1\ell}\|_\infty \\ -\|A_{21}\|_\infty & (\|A_{22}^{-1}\|_\infty)^{-1} & \cdots & -\|A_{2\ell}\|_\infty \\ \vdots & \vdots & & \vdots \\ -\|A_{\ell 1}\|_\infty & -\|A_{\ell 2}\|_\infty & \cdots & (\|A_{\ell\ell}^{-1}\|_\infty)^{-1} \end{bmatrix},$$

$$\rangle A \langle^{\pi} = \begin{bmatrix} 1 & -\|A_{11}^{-1}A_{12}\|_\infty & \cdots & -\|A_{11}^{-1}A_{1\ell}\|_\infty \\ -\|A_{22}^{-1}A_{21}\|_\infty & 1 & \cdots & -\|A_{22}^{-1}A_{2\ell}\|_\infty \\ \vdots & \vdots & & \vdots \\ -\|A_{\ell\ell}^{-1}A_{\ell 1}\|_\infty & -\|A_{\ell\ell}^{-1}A_{\ell 2}\|_\infty & \cdots & 1 \end{bmatrix}.$$

Obviously,

$$\operatorname{diag}\bigl(\|A_{11}^{-1}\|_\infty, \ldots, \|A_{\ell\ell}^{-1}\|_\infty\bigr) \cdot \langle A \rangle^{\pi} \leq\ \rangle A \langle^{\pi}. \tag{5}$$
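In computational terms, both comparison matrices are built from the block norms $\|A_{ij}\|_\infty$ and $\|A_{ii}^{-1}A_{ij}\|_\infty$. A minimal NumPy sketch (ours; the partition is passed as the list of cut points $p_0, p_1, \ldots, p_\ell$, and the diagonal blocks are assumed nonsingular, as for all classes considered here):

```python
import numpy as np

def block_comparison_matrices(A, p):
    """Build <A>^pi and >A<^pi for the partition p = [0, p_1, ..., p_ell = n]."""
    ell = len(p) - 1
    M1 = np.zeros((ell, ell))       # <A>^pi
    M2 = np.zeros((ell, ell))       # >A<^pi
    for i in range(ell):
        Aii = A[p[i]:p[i + 1], p[i]:p[i + 1]]
        Aii_inv = np.linalg.inv(Aii)            # diagonal blocks assumed nonsingular
        M1[i, i] = 1.0 / np.linalg.norm(Aii_inv, np.inf)
        M2[i, i] = 1.0
        for j in range(ell):
            if j != i:
                Aij = A[p[i]:p[i + 1], p[j]:p[j + 1]]
                M1[i, j] = -np.linalg.norm(Aij, np.inf)
                M2[i, j] = -np.linalg.norm(Aii_inv @ Aij, np.inf)
    return M1, M2
```

For any concrete example, relation (5) can then be checked entrywise from the two returned matrices.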

Now, we can define two block variants of the point-wise properties listed in the previous section.

Definition 6. For a given partition $\pi$, a block matrix $A = [A_{ij}]_{\ell \times \ell}$ is called

• a $B^{\pi}_{I}$-SDD matrix if $\langle A \rangle^{\pi}$ is an SDD matrix,
• a $B^{\pi}_{II}$-SDD matrix if $\rangle A \langle^{\pi}$ is an SDD matrix.

In other words, for a given partition $\pi$, a matrix $A$ is a $B^{\pi}_{I}$-SDD matrix if all its diagonal blocks are nonsingular and

$$\bigl(\|A_{ii}^{-1}\|_\infty\bigr)^{-1} > \sum_{j \in L \setminus \{i\}} \|A_{ij}\|_\infty \quad \text{for each } i \in L, \tag{6}$$

where $L := \{1, 2, \ldots, \ell\}$, while it is a $B^{\pi}_{II}$-SDD matrix if all its diagonal blocks are nonsingular and

$$1 > \sum_{j \in L \setminus \{i\}} \|A_{ii}^{-1} A_{ij}\|_\infty \quad \text{for each } i \in L. \tag{7}$$

Let us comment that the first ideas of generalizing the strict diagonal dominance property to block matrices (in the way we denote here by $B^{\pi}_{I}$-SDD) appeared in the papers of Ostrowski [20] in 1961, Fiedler and Pták [16] in 1962, and Feingold and Varga [15] in 1962. The second variant of generalization of the same property, which we denote here by $B^{\pi}_{II}$-SDD, appeared in the paper of Robert [19].

Now, we can generalize the nonsingular H-matrix property to the block case, also in two different ways.

Definition 7. For a given partition $\pi$, a block matrix $A = [A_{ij}]_{\ell \times \ell}$ is called

• a $B^{\pi}_{I}$-H matrix if $\langle A \rangle^{\pi}$ is a nonsingular M-matrix,
• a $B^{\pi}_{II}$-H matrix if $\rangle A \langle^{\pi}$ is a nonsingular M-matrix.

From now on, we will keep in mind the following property, which follows from the above definition: if a partitioned matrix of the form (4) is a $B^{\pi}_{I}$-H or a $B^{\pi}_{II}$-H matrix, then all its diagonal blocks are nonsingular. Otherwise, some of the diagonal entries of $\langle A \rangle^{\pi}$, i.e., of $\rangle A \langle^{\pi}$, would be equal to 0, which contradicts the fact that $\langle A \rangle^{\pi}$, i.e., $\rangle A \langle^{\pi}$, is a nonsingular M-matrix. From (5), it follows that the $B^{\pi}_{I}$-H class is a subclass of the $B^{\pi}_{II}$-H class. Hence, in order to prove that both the $B^{\pi}_{I}$-H and $B^{\pi}_{II}$-H classes are nonsingular classes, it is sufficient to prove the following theorem.

Theorem 2. Every $B^{\pi}_{II}$-H matrix is nonsingular.

Proof. If the matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$-H matrix, i.e., if $\rangle A \langle^{\pi}$ is a nonsingular M-matrix, then there exists a positive diagonal matrix

$$X = \operatorname{diag}(x_1, x_2, \ldots, x_\ell) \tag{8}$$

such that $\rangle A \langle^{\pi} X$ is an SDD matrix, i.e., for every $i \in L$ it holds that

$$x_i > \sum_{j \in L \setminus \{i\}} \|A_{ii}^{-1} A_{ij}\|_\infty\, x_j.$$

If we define the matrix $W \in \mathbb{R}^{n \times n}$ as

$$W := \operatorname{diag}(x_1 I_{m_1}, x_2 I_{m_2}, \ldots, x_\ell I_{m_\ell}), \tag{9}$$

where $I_{m_k}$ is the identity matrix of order $m_k$, and $m_k$ is the size of the block $A_{kk}$, $k \in L$, then

$$AW = \begin{bmatrix} x_1 A_{11} & x_2 A_{12} & \cdots & x_\ell A_{1\ell} \\ x_1 A_{21} & x_2 A_{22} & \cdots & x_\ell A_{2\ell} \\ \vdots & \vdots & & \vdots \\ x_1 A_{\ell 1} & x_2 A_{\ell 2} & \cdots & x_\ell A_{\ell\ell} \end{bmatrix}$$

is a $B^{\pi}_{II}$-SDD matrix, because its associated matrix $\rangle AW \langle^{\pi}$ is an SDD matrix. Indeed, for every $i \in L$ we have

$$1 > \sum_{j \in L \setminus \{i\}} \frac{x_j}{x_i}\, \|A_{ii}^{-1} A_{ij}\|_\infty = \sum_{j \in L \setminus \{i\}} \|(x_i A_{ii})^{-1}(x_j A_{ij})\|_\infty = \sum_{j \in L \setminus \{i\}} \bigl| \bigl(\rangle AW \langle^{\pi}\bigr)_{ij} \bigr|,$$

which is exactly the strict diagonal dominance of the $i$-th row of $\rangle AW \langle^{\pi}$. Hence, $AW$ is nonsingular, so $A$ is nonsingular, too. □

As a consequence of the previous theorem, we conclude that both the $B^{\pi}_{I}$-SDD and $B^{\pi}_{II}$-SDD classes are nonsingular classes. For the class $B^{\pi}_{I}$-SDD, this property was proven as Theorem 6.2 in [22].

We end this section by listing some subclasses of $B^{\pi}_{I}$-H and $B^{\pi}_{II}$-H matrices, respectively.

Definition 8. For a given partition $\pi$, a block matrix $A = [A_{ij}]_{\ell \times \ell}$ is said to be:

• a $B^{\pi}_{I}$ S-SDD matrix, if $\langle A \rangle^{\pi}$ is an S-SDD matrix,
• a $B^{\pi}_{I}$ Nekrasov matrix, if $\langle A \rangle^{\pi}$ is a Nekrasov matrix,
• a $B^{\pi}_{I}$ S-Nekrasov matrix, if $\langle A \rangle^{\pi}$ is an S-Nekrasov matrix.

Definition 9. For a given partition $\pi$, a block matrix $A = [A_{ij}]_{\ell \times \ell}$ is said to be:

• a $B^{\pi}_{II}$ S-SDD matrix, if $\rangle A \langle^{\pi}$ is an S-SDD matrix,
• a $B^{\pi}_{II}$ Nekrasov matrix, if $\rangle A \langle^{\pi}$ is a Nekrasov matrix,
• a $B^{\pi}_{II}$ S-Nekrasov matrix, if $\rangle A \langle^{\pi}$ is an S-Nekrasov matrix.


4. Link between point-wise and block case(s)

In his work [19], Robert proved a result that can be considered as a generalization of Theorem 1. Namely, he proved that for each block H-matrix $A$ it holds that

$$M(A^{-1}) \leq \bigl(N(A)\bigr)^{-1}, \tag{10}$$

where

$$M(A) = \begin{bmatrix} \|A_{11}\|_\infty & \|A_{12}\|_\infty & \cdots & \|A_{1\ell}\|_\infty \\ \|A_{21}\|_\infty & \|A_{22}\|_\infty & \cdots & \|A_{2\ell}\|_\infty \\ \vdots & \vdots & & \vdots \\ \|A_{\ell 1}\|_\infty & \|A_{\ell 2}\|_\infty & \cdots & \|A_{\ell\ell}\|_\infty \end{bmatrix}, \qquad N(A) = N_1(A) \cdot N_2(A),$$

$$N_1(A) = \operatorname{diag}\Bigl( \|A_{11}^{-1}\|_\infty^{-1},\ \|A_{22}^{-1}\|_\infty^{-1},\ \ldots,\ \|A_{\ell\ell}^{-1}\|_\infty^{-1} \Bigr),$$

$$N_2(A) = \begin{bmatrix} 1 & -\|A_{11}^{-1}A_{12}\|_\infty & \cdots & -\|A_{11}^{-1}A_{1\ell}\|_\infty \\ -\|A_{22}^{-1}A_{21}\|_\infty & 1 & \cdots & -\|A_{22}^{-1}A_{2\ell}\|_\infty \\ \vdots & \vdots & & \vdots \\ -\|A_{\ell\ell}^{-1}A_{\ell 1}\|_\infty & -\|A_{\ell\ell}^{-1}A_{\ell 2}\|_\infty & \cdots & 1 \end{bmatrix}.$$

Since Robert defined a matrix $A$ to be a block H-matrix if $N(A)$ is a nonsingular H-matrix (i.e., a nonsingular M-matrix), we have to clarify the relationship between block H-matrices in Robert's sense and our $B^{\pi}_{I}$-H and $B^{\pi}_{II}$-H matrices. Since the $B^{\pi}_{I}$-H class is a subclass of the $B^{\pi}_{II}$-H class, in order to use Robert's result (10), it is sufficient to prove the following lemma.

Lemma 1. If, for a given partition $\pi$, the matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$-H matrix, then it is a block H-matrix in Robert's sense, too.

Proof. The statement follows directly from the observation that $N_2(A) = \rangle A \langle^{\pi}$, i.e.,

$$N(A) = N_1(A) \cdot \rangle A \langle^{\pi},$$

which means that if $\rangle A \langle^{\pi}$ is a nonsingular M-matrix, then $N(A)$ is also a nonsingular M-matrix. □

To make a link between the point-wise and block case(s), we need the following well-known result from [9].

Theorem 3. If $A$ and $B$ are two nonsingular M-matrices such that

$$A \geq B,$$

then

$$A^{-1} \leq B^{-1}.$$

Finally, here is the main result of this section.

Theorem 4. If, for a given partition $\pi$, the matrix $A = [A_{ij}]_{\ell \times \ell}$ is

(i) a $B^{\pi}_{I}$-H matrix, then

$$\|A^{-1}\|_\infty \leq \bigl\| \bigl(\langle A \rangle^{\pi}\bigr)^{-1} \bigr\|_\infty; \tag{11}$$

(ii) a $B^{\pi}_{II}$-H matrix, then

$$\|A^{-1}\|_\infty \leq \max_{i \in L} \|A_{ii}^{-1}\|_\infty \, \bigl\| \bigl(\rangle A \langle^{\pi}\bigr)^{-1} \bigr\|_\infty. \tag{12}$$


Proof. From the definition of the maximum norm, it obviously holds that

$$\|A^{-1}\|_\infty \leq \|M(A^{-1})\|_\infty,$$

while, if $N(A)$ is a nonsingular M-matrix, from (10) we conclude that

$$\|A^{-1}\|_\infty \leq \|M(A^{-1})\|_\infty \leq \bigl\| \bigl(N(A)\bigr)^{-1} \bigr\|_\infty. \tag{13}$$

(i) If $A$ is a $B^{\pi}_{I}$-H matrix, then it is a $B^{\pi}_{II}$-H matrix, too. Relation (5) can be rewritten as

$$\langle A \rangle^{\pi} \leq \operatorname{diag}\bigl(\|A_{11}^{-1}\|_\infty^{-1}, \ldots, \|A_{\ell\ell}^{-1}\|_\infty^{-1}\bigr) \cdot \rangle A \langle^{\pi}$$

or, in Robert's notation,

$$\langle A \rangle^{\pi} \leq N_1(A) \cdot N_2(A) = N(A).$$

Since $N(A)$ and $\langle A \rangle^{\pi}$ are both nonsingular M-matrices, from Theorem 3 it holds that

$$\bigl(N(A)\bigr)^{-1} \leq \bigl(\langle A \rangle^{\pi}\bigr)^{-1}.$$

Together with (13), this completes the proof of (i).

(ii) If $A$ is a $B^{\pi}_{II}$-H matrix, then $N(A) = N_1(A) \cdot \rangle A \langle^{\pi}$ is a nonsingular M-matrix, so (ii) follows directly from (13) and

$$\bigl(N(A)\bigr)^{-1} = \bigl(\rangle A \langle^{\pi}\bigr)^{-1} \cdot \bigl(N_1(A)\bigr)^{-1},$$

from which we have

$$\bigl\| \bigl(N(A)\bigr)^{-1} \bigr\|_\infty \leq \bigl\| \bigl(\rangle A \langle^{\pi}\bigr)^{-1} \bigr\|_\infty \max_{i \in L} \|A_{ii}^{-1}\|_\infty. \quad \square$$
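Numerically, Theorem 4 turns the problem of bounding $\|A^{-1}\|_\infty$ into inverting a small $\ell \times \ell$ comparison matrix. A minimal NumPy sketch of both bounds (ours; it reuses the hypothetical block_comparison_matrices helper from Section 3 and assumes $A$ belongs to the corresponding class, so the comparison matrix is a nonsingular M-matrix):

```python
import numpy as np

def theorem4_bounds(A, p):
    """Bounds (11) and (12); valid only if A is a B_I^pi-H resp. B_II^pi-H matrix."""
    M1, M2 = block_comparison_matrices(A, p)          # sketch from Section 3
    ell = len(p) - 1
    max_inv_diag = max(
        np.linalg.norm(np.linalg.inv(A[p[i]:p[i + 1], p[i]:p[i + 1]]), np.inf)
        for i in range(ell))
    bound_I = np.linalg.norm(np.linalg.inv(M1), np.inf)                   # (11)
    bound_II = max_inv_diag * np.linalg.norm(np.linalg.inv(M2), np.inf)   # (12)
    return bound_I, bound_II
```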

5. Point-wise case

This section presents the known upper bounds (for the maximum norm of the inverse) in the point-wise case for SDD, S-SDD, Nekrasov and S-Nekrasov matrices.

Bound (Varah) for SDD matrices, [1]:

$$\|A^{-1}\|_\infty \leq \frac{1}{\min_{i \in N} \bigl(|a_{ii}| - r_i(A)\bigr)}. \tag{Var}$$

Bound (Kolotilina) for S-SDD matrices, [18]:

$$\|A^{-1}\|_\infty \leq \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ q_{ij}^{S}(A),\ q_{ji}^{\overline{S}}(A) \Bigr\}, \tag{Kol}$$

where

$$q_{ij}^{S}(A) := \frac{|a_{ii}| - r_i^{S}(A) + r_j^{S}(A)}{\bigl(|a_{ii}| - r_i^{S}(A)\bigr)\bigl(|a_{jj}| - r_j^{\overline{S}}(A)\bigr) - r_i^{\overline{S}}(A)\, r_j^{S}(A)}. \tag{14}$$

The first bound (Cvetković, Dai, Doroslovački, Li) for Nekrasov matrices, [10]:

$$\|A^{-1}\|_\infty \leq \frac{\max_{i \in N} z_i(A)}{\min_{i \in N} \bigl(|a_{ii}| - h_i(A)\bigr)}, \tag{CDDL1}$$

where $z_i(A)$, $i \in N$, is defined by (3).

The second bound (Cvetković, Dai, Doroslovački, Li) for Nekrasov matrices, [10]:

$$\|A^{-1}\|_\infty \leq \frac{\max_{i \in N} \dfrac{z_i(A)}{|a_{ii}|}}{1 - \max_{i \in N} \dfrac{h_i(A)}{|a_{ii}|}}, \tag{CDDL2}$$

where $z_i(A)$, $i \in N$, is defined by (3).

The first bound (Cvetković, Kostić, Doroslovački) for S-Nekrasov matrices, [11]:

$$\|A^{-1}\|_\infty \leq \max_{i \in N} z_i(A) \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ v_{ij}^{S}(A),\ v_{ji}^{\overline{S}}(A) \Bigr\}, \tag{CKD1}$$


where $z_i(A)$, $i \in N$, is defined by (3) and

$$v_{ij}^{S}(A) := \frac{|a_{ii}| - h_i^{S}(A) + h_j^{S}(A)}{\bigl(|a_{ii}| - h_i^{S}(A)\bigr)\bigl(|a_{jj}| - h_j^{\overline{S}}(A)\bigr) - h_i^{\overline{S}}(A)\, h_j^{S}(A)}. \tag{15}$$

The second bound (Cvetković, Kostić, Doroslovački) for S-Nekrasov matrices, [11]:

$$\|A^{-1}\|_\infty \leq \max_{i \in N} \frac{z_i(A)}{|a_{ii}|} \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ \tilde v_{ij}^{S}(A),\ \tilde v_{ji}^{\overline{S}}(A) \Bigr\}, \tag{CKD2}$$

where $z_i(A)$, $i \in N$, is defined by (3) and

$$\tilde v_{ij}^{S}(A) := \frac{|a_{ii}|\,|a_{jj}| - |a_{jj}|\, h_i^{S}(A) + |a_{ii}|\, h_j^{S}(A)}{\bigl(|a_{ii}| - h_i^{S}(A)\bigr)\bigl(|a_{jj}| - h_j^{\overline{S}}(A)\bigr) - h_i^{\overline{S}}(A)\, h_j^{S}(A)}. \tag{16}$$

Since the mentioned classes stand in the position shown in Fig. 1, it is clear that an upper bound related to a wider class of matrices can be applied to all its subclasses.
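For reference, here is how the two S-independent point-wise bounds translate into code. The sketch (ours) reuses the hypothetical nekrasov_h and z_values helpers from Section 2; the S-dependent bounds (Kol), (CKD1) and (CKD2) follow the same pattern with $r_i^S$ and $h_i^S$ in place of $r_i$ and $h_i$.

```python
import numpy as np

def varah_bound(A):
    """(Var): 1 / min_i (|a_ii| - r_i(A)), valid for an SDD matrix."""
    B = np.abs(A)
    d = np.diag(B)
    r = B.sum(axis=1) - d
    return 1.0 / np.min(d - r)

def cddl_bounds(A):
    """(CDDL1) and (CDDL2), valid for a Nekrasov matrix."""
    d = np.abs(np.diag(A))
    h, z = nekrasov_h(A), z_values(A)      # sketches from Section 2
    bound1 = np.max(z) / np.min(d - h)                 # (CDDL1)
    bound2 = np.max(z / d) / (1.0 - np.max(h / d))     # (CDDL2)
    return bound1, bound2
```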

6. Block cases

Based on the bounds (Var), (Kol), (CDDL1), (CDDL2), (CKD1) and (CKD2) for the point-wise case, and on Theorem 4, the following estimations for the block cases follow directly.

Theorem 5 (Varah's bound, [21]). If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{I}$-SDD matrix, then

$$\|A^{-1}\|_\infty \leq \frac{1}{\min_{1 \leq k \leq \ell} \Bigl( \|A_{kk}^{-1}\|_\infty^{-1} - \sum_{j \in L \setminus \{k\}} \|A_{kj}\|_\infty \Bigr)}. \tag{B_I Var}$$

Theorem 6. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$-SDD matrix, then

$$\|A^{-1}\|_\infty \leq \frac{\max_{i \in L} \|A_{ii}^{-1}\|_\infty}{\min_{1 \leq k \leq \ell} \Bigl( 1 - \sum_{j \in L \setminus \{k\}} \|A_{kk}^{-1} A_{kj}\|_\infty \Bigr)}. \tag{B_II Var}$$
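Both block Varah bounds are a one-line computation once the comparison matrices are available, since their denominators are exactly the minimal row sums of $\langle A \rangle^{\pi}$ and $\rangle A \langle^{\pi}$ (the off-diagonal entries already carry a minus sign). A minimal sketch (ours, reusing the hypothetical block_comparison_matrices helper from Section 3):

```python
import numpy as np

def block_varah_bounds(A, p):
    """(B_I Var) and (B_II Var); A is assumed B_I^pi-SDD resp. B_II^pi-SDD."""
    M1, M2 = block_comparison_matrices(A, p)           # sketch from Section 3
    ell = len(p) - 1
    # Row sums of the comparison matrices give the denominators of Theorems 5 and 6.
    bound_I = 1.0 / np.min(M1.sum(axis=1))
    max_inv_diag = max(
        np.linalg.norm(np.linalg.inv(A[p[i]:p[i + 1], p[i]:p[i + 1]]), np.inf)
        for i in range(ell))
    bound_II = max_inv_diag / np.min(M2.sum(axis=1))
    return bound_I, bound_II
```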

Theorem 7. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{I}$ S-SDD matrix for some nonempty proper subset $S \subset L$, then

$$\|A^{-1}\|_\infty \leq \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ q_{ij}^{S}\bigl(\langle A \rangle^{\pi}\bigr),\ q_{ji}^{\overline{S}}\bigl(\langle A \rangle^{\pi}\bigr) \Bigr\}, \tag{B_I Kol}$$

where $q_{ij}^{S}$ is defined by (14).

Theorem 8. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$ S-SDD matrix for some nonempty proper subset $S \subset L$, then

$$\|A^{-1}\|_\infty \leq \max_{i \in L} \|A_{ii}^{-1}\|_\infty \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ q_{ij}^{S}\bigl(\rangle A \langle^{\pi}\bigr),\ q_{ji}^{\overline{S}}\bigl(\rangle A \langle^{\pi}\bigr) \Bigr\}, \tag{B_II Kol}$$

where $q_{ij}^{S}$ is defined by (14).

Theorem 9. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{I}$ Nekrasov matrix, then

$$\|A^{-1}\|_\infty \leq \frac{\max_{i \in L} z_i\bigl(\langle A \rangle^{\pi}\bigr)}{\min_{i \in L} \Bigl( \|A_{ii}^{-1}\|_\infty^{-1} - h_i\bigl(\langle A \rangle^{\pi}\bigr) \Bigr)} \tag{B_I CDDL1}$$

and

$$\|A^{-1}\|_\infty \leq \frac{\max_{i \in L} \|A_{ii}^{-1}\|_\infty\, z_i\bigl(\langle A \rangle^{\pi}\bigr)}{1 - \max_{i \in L} \|A_{ii}^{-1}\|_\infty\, h_i\bigl(\langle A \rangle^{\pi}\bigr)}, \tag{B_I CDDL2}$$

where $z_i$ and $h_i$ are defined by (3) and (1).
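Combining the helpers sketched earlier gives the bounds of Theorem 9 in a few lines (ours; note that the diagonal of $\langle A \rangle^{\pi}$ already holds the quantities $(\|A_{ii}^{-1}\|_\infty)^{-1}$, and that the recursions (1) and (3) can be applied directly to the comparison matrix):

```python
import numpy as np

def block_nekrasov_bounds_I(A, p):
    """(B_I CDDL1) and (B_I CDDL2); A is assumed to be a B_I^pi-Nekrasov matrix."""
    M1, _ = block_comparison_matrices(A, p)    # sketch from Section 3
    h = nekrasov_h(M1)                         # h_i(<A>^pi), recursion (1)
    z = z_values(M1)                           # z_i(<A>^pi), recursion (3)
    d = np.diag(M1)                            # entries (||A_ii^{-1}||_inf)^{-1}
    bound1 = np.max(z) / np.min(d - h)                  # (B_I CDDL1)
    bound2 = np.max(z / d) / (1.0 - np.max(h / d))      # (B_I CDDL2)
    return bound1, bound2
```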


Theorem 10. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$ Nekrasov matrix, then

$$\|A^{-1}\|_\infty \leq \max_{i \in L} \|A_{ii}^{-1}\|_\infty \cdot \frac{\max_{i \in L} z_i\bigl(\rangle A \langle^{\pi}\bigr)}{\min_{i \in L} \Bigl( 1 - h_i\bigl(\rangle A \langle^{\pi}\bigr) \Bigr)}, \tag{B_II CDDL}$$

where $z_i$ and $h_i$ are defined by (3) and (1).

Theorem 11. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{I}$ S-Nekrasov matrix, then

$$\|A^{-1}\|_\infty \leq \max_{i \in L} z_i\bigl(\langle A \rangle^{\pi}\bigr) \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ v_{ij}^{S}\bigl(\langle A \rangle^{\pi}\bigr),\ v_{ji}^{\overline{S}}\bigl(\langle A \rangle^{\pi}\bigr) \Bigr\} \tag{B_I CKD1}$$

and

$$\|A^{-1}\|_\infty \leq \max_{i \in L} \|A_{ii}^{-1}\|_\infty\, z_i\bigl(\langle A \rangle^{\pi}\bigr) \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ \tilde v_{ij}^{S}\bigl(\langle A \rangle^{\pi}\bigr),\ \tilde v_{ji}^{\overline{S}}\bigl(\langle A \rangle^{\pi}\bigr) \Bigr\}, \tag{B_I CKD2}$$

where $z_i$ and $h_i$ are defined by (3) and (1), while $v_{ij}^{S}$ and $\tilde v_{ij}^{S}$ are defined by (15) and (16).

Theorem 12. If a matrix $A = [A_{ij}]_{\ell \times \ell}$ is a $B^{\pi}_{II}$ S-Nekrasov matrix, then

$$\|A^{-1}\|_\infty \leq \max_{i \in L} \|A_{ii}^{-1}\|_\infty \cdot \max_{i \in L} z_i\bigl(\rangle A \langle^{\pi}\bigr) \cdot \max_{i \in S,\ j \in \overline{S}} \max\Bigl\{ v_{ij}^{S}\bigl(\rangle A \langle^{\pi}\bigr),\ v_{ji}^{\overline{S}}\bigl(\rangle A \langle^{\pi}\bigr) \Bigr\}, \tag{B_II CKD}$$

where $z_i$ and $h_i$ are defined by (3) and (1), while $v_{ij}^{S}$ and $\tilde v_{ij}^{S}$ are defined by (15) and (16).

7. Examples

It is important to emphasize that some relations between the various estimations are clear without any numerical examples. For example, if a given matrix has some of its diagonal entries equal to 0, then it is not a nonsingular H-matrix, so none of the point-wise estimations from our list can be applied. At the same time, such a matrix can belong to some of the block H-matrix subclasses, so some of our block estimations can be applied (see Example 1). Of course, there are examples where point-wise estimations work while block ones do not, but here we will omit them.

It is clear that, if a given matrix belongs to only one of the mentioned classes, then the corresponding estimation is the only one that can be applied. Although this is, by itself, a justification for such an estimation, we will omit such examples, too.

We will focus on situations when more than one estimation is applicable, and we will show that each one can work better than the others. This fact confirms the importance of every particular estimation.

All examples will be followed by a chart, in which the exact value of the maximum norm of the inverse matrix is presented as a horizontal line, the point-wise estimations are non-shaded, the block I type estimations are shaded light (yellow), and the block II type estimations are shaded dark (blue) (see Figs. 1–9). Estimations which are not applicable are missing from the chart. For matrix classes depending on the subset S, a choice which gives the best estimation is taken.

Example 1. Matrix

is not a nonsingular H-matrix, so there are no point-wise estimations, while some of the block estimations exist.

Fig. 1. The relationship between matrix classes.

Fig. 2. Upper bounds for $\|A_1^{-1}\|_\infty$.

Fig. 3. Upper bounds for $\|A_2^{-1}\|_\infty$.


Example 2. Matrix

Fig. 4. Upper bounds for $\|A_3^{-1}\|_\infty$.

Fig. 5. Upper bounds for $\|A_4^{-1}\|_\infty$.

Fig. 6. Upper bounds for $\|A_5^{-1}\|_\infty$.

Fig. 7. Upper bounds for $\|A_6^{-1}\|_\infty$.


belongs to the (point-wise) nonsingular H-matrix class, but, still, none of the point-wise estimations from our list can be applied.

Example 3. For matrix

every block estimation works better than the corresponding point-wise one.

Example 4. Matrix

shows that, although Varah's (block) estimation can be applied, it is worth investing in more calculations in order to obtain much better estimations (see the block S-SDD case).

Example 5

This example shows that block I type estimations can (generally) work better than block II type.

Fig. 8. Upper bounds for $\|A_7^{-1}\|_\infty$.

Fig. 9. Upper bounds for $\|A_8^{-1}\|_\infty$.


Example 6

With this example, the importance and efficiency of block Nekrasov-type and S-Nekrasov-type estimations are illustrated.

Example 7


The importance of the block II type estimations is clearly visible in this example. Some of the estimations are very close to the exact value.

Example 8

This example shows that block II type estimations can (generally) work much better than block I type.

Acknowledgments

The authors are very grateful to the anonymous referees for their valuable comments. This work is partly supported by the Ministry of Science and Environmental Protection of Serbia, Grant 174019, and by the Provincial Secretariat of Science and Technological Development of Vojvodina, Serbia, Grants 3606 and 3626.

References

[1] J.H. Ahlberg, E.N. Nilson, Convergence properties of the spline fit, J. SIAM (1963) 95–104.
[2] Z.-Z. Bai, Parallel matrix multisplitting block relaxation iteration methods, Math. Numer. Sin. 17 (1995) 238–252 (in Chinese).
[3] Z.-Z. Bai, Parallel hybrid iteration methods for block bordered linear systems, Appl. Math. Comput. 86 (1997) 37–60.
[4] Z.-Z. Bai, A class of parallel decomposition-type relaxation methods for large sparse systems of linear equations, Linear Algebra Appl. 282 (1998) 1–24.
[5] Z.-Z. Bai, A class of asynchronous parallel multisplitting blockwise relaxation methods, Parallel Comput. 25 (1999) 681–701.
[6] Z.-Z. Bai, A class of parallel hybrid two-stage iteration methods for the block bordered linear systems, Appl. Math. Comput. 101 (1999) 245–267.
[7] Z.-Z. Bai, V. Migallón, J. Penadés, D.B. Szyld, Block and asynchronous two-stage methods for mildly nonlinear systems, Numer. Math. 82 (1999) 1–20.
[8] Z.-Z. Bai, Y.-F. Su, On the convergence of a class of parallel decomposition-type relaxation methods, Appl. Math. Comput. 81 (1997) 1–21.
[9] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, vol. 9, SIAM, Philadelphia, 1994.
[10] Lj. Cvetković, P.-F. Dai, K. Doroslovački, Y.-T. Li, Infinity norm bounds for the inverse of Nekrasov matrices, Appl. Math. Comput. 219 (2013) 5020–5024.
[11] Lj. Cvetković, V. Kostić, K. Doroslovački, Max-norm bounds for the inverse of S-Nekrasov matrices, Appl. Math. Comput. 218 (2012) 9498–9503.
[12] Lj. Cvetković, V. Kostić, S. Rauški, A new subclass of H-matrices, Appl. Math. Comput. 208 (2009) 206–210.
[13] Lj. Cvetković, V. Kostić, R.S. Varga, A new Geršgorin-type eigenvalue inclusion area, ETNA 18 (2004) 73–80.
[14] D.J. Evans, Z.-Z. Bai, Blockwise matrix multi-splitting multi-parameter block relaxation methods, Int. J. Comput. Math. 64 (1997) 103–118.
[15] D.G. Feingold, R.S. Varga, Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem, Pac. J. Math. 12 (1962) 1241–1250.
[16] M. Fiedler, V. Pták, Generalized norms of matrices and the location of the spectrum, Czech. Math. J. 12 (87) (1962) 558–571.
[17] V.V. Gudkov, On a certain test for nonsingularity of matrices, in: Latv. Mat. Ezhegodnik 1965, Zinatne, Riga, 1966, pp. 385–390.
[18] L.Y. Kolotilina, Bounds for the infinity norm of the inverse for certain M- and H-matrices, Linear Algebra Appl. 430 (2009) 692–702.
[19] F. Robert, Blocs-H-matrices et convergence des méthodes itératives classiques par blocs, Linear Algebra Appl. 2 (1969) 223–265.
[20] A.M. Ostrowski, On some metrical properties of operator matrices and matrices partitioned into blocks, J. Math. Anal. Appl. 2 (1961) 161–209.
[21] J.M. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl. 11 (1975) 3–5.
[22] R.S. Varga, Geršgorin and His Circles, Springer Series in Computational Mathematics, vol. 36, Springer-Verlag, Berlin, 2004.