

A Class of Nonstationary Upper and Lower Triangular Splitting Iteration Methods for Ill-posed Inverse Problems

Jingjing Cui, Guohua Peng, Quan Lu, Zhengge Huang

Abstract—In this study, by applying the minimize residual technique to a class of upper and lower triangular (ULT) methods, two types of nonstationary ULT iteration methods, called the minimize residual ULT (MRULT) methods, are established for solving ill-posed inverse problems. We provide a convergence analysis of the MRULT-type iteration methods, which shows that the proposed methods are convergent if the related parameter satisfies suitable restrictions. Moreover, the new methods further improve the convergence rates of the ULT ones. Finally, numerical experiments arising from a Fredholm integral equation of the first kind and from image restoration are reported to further examine the feasibility and effectiveness of the proposed methods.

Index Terms—upper and lower triangular splitting, minimize residual, Tikhonov regularization, ill-posed problems, iteration method.

I. INTRODUCTION

CONSIDER the problem of computing an approximate solution of large-scale least-squares problems of the form

min_{f ∈ R^n} ∥Af − g∥_2,  A ∈ R^{n×n},  f, g ∈ R^n,  (1)

where and throughout this paper ∥·∥_2 denotes the Euclidean vector norm or the associated induced matrix norm. The singular values of the matrix A gradually decay to zero without a significant gap. In particular, the Moore-Penrose pseudoinverse of A, denoted by A^+, has very large norm. Hence, A is severely ill-conditioned and may be singular. Moreover, the data vector g represents available data and generally is contaminated by an error e ∈ R^n that may stem from measurement inaccuracies, discretization error, and electronic noise in the device used, i.e.,

g = ĝ + e,  (2)

where ĝ ∈ R^n stands for the (unknown) error-free vector associated with g. Let the unavailable linear system of equations with the error-free right-hand side

Af = ĝ  (3)

Manuscript received August 5, 2019; revised October 26, 2019 and December 26, 2019. This work was supported by the National Natural Science Foundation of China (No. 10802068), the National Science Foundation for Young Scientists of China (No. 11901123) and the Guangxi Natural Science Foundation (No. 2018JJB110062).

Jingjing Cui is with the Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, PR China. E-mail: [email protected].

Guohua Peng (e-mail: [email protected]) and Quan Lu (e-mail: [email protected]) are with the Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, PR China. Zhengge Huang (e-mail: [email protected]) is with the Faculty of Science, Guangxi University for Nationalities, Nanning, Guangxi, 530006, PR China.

be consistent, and denote its solution of minimal Euclidean norm by f̂. It is our aim to determine an accurate approximation of f̂ by computing an approximate solution of the available linear system of Equation (1).

Minimization problems (1) with a matrix of this kind are commonly referred to as linear discrete ill-posed problems. They arise from the suitable discretization of ill-posed problems in various scientific and engineering applications. For example, a large number of practical problems in image processing are inverse problems of obtaining true data from observed data, and one of the basic problems is the linear inverse problem (1), arising in image restoration [16], [22], [23], [25], image decomposition [7], image reconstruction [28] and so on. In addition, ill-posed problems arise from the discretization of a Fredholm integral equation of the first kind on a cube in two or more space dimensions [14].

The solution of minimal Euclidean norm of (1), given by

A^+ g = A^+ ĝ + A^+ e = f̂ + A^+ e,

typically is not a meaningful approximate solution of the system (1), because typically ∥A^+ e∥_2 ≫ ∥f̂∥_2. To compute a useful approximation of f̂, the first step in our solution process is to replace (1) by a nearby problem whose solution is less sensitive to the error e in g. This replacement is commonly referred to as regularization. One of the most popular regularization methods is due to Tikhonov [21], [24], [10]. Tikhonov regularization replaces the linear system (1) by the minimization problem

min_f ∥Af − g∥_2^2 + µ^2 ∥Lf∥_2^2,  (4)

where µ > 0 is referred to as a regularization parameter (generally small, i.e., 0 < µ < 1) and the matrix L as a regularization matrix. The parameter µ balances the influence of the first term (the fidelity term) and the second term (the regularization term), which is determined by the regularization matrix. It is the purpose of the regularization term µ^2 ∥Lf∥_2^2 to damp the propagated error in the computed approximation of f̂. The solution of (4), being less sensitive to the error e in g, is considered as an approximation of the solution of the error-free linear system (3). When L is the identity matrix, the Tikhonov minimization problem (4) is said to be in standard form; otherwise it is said to be in general form. In this work, we limit our discussion to L being the identity matrix; the other cases can be treated by a similar technique. It is easy to see that the Tikhonov minimization problem is mathematically equivalent to solving the following normal equations

(A^T A + µ^2 I) f = A^T g.  (5)

IAENG International Journal of Computer Science, 47:1, IJCS_47_1_15

Volume 47, Issue 1: March 2020



Similar to [19], the problem (5) can be translated into an equivalent 2n × 2n augmented system

K x = b,  i.e.,  [ I, A; −A^T, µ^2 I ] [ e; f ] = [ g; 0 ],  (6)
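As a concrete check of this equivalence, the following sketch (with an illustrative random A, g and µ, not taken from the paper's experiments) verifies numerically that eliminating e from the augmented system (6) recovers the Tikhonov solution of the normal equations (5):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
g = rng.standard_normal(n)
mu = 0.1

# Normal equations (5): (A^T A + mu^2 I) f = A^T g
f_normal = np.linalg.solve(A.T @ A + mu**2 * np.eye(n), A.T @ g)

# Augmented system (6): K x = b with x = (e; f), b = (g; 0)
K = np.block([[np.eye(n), A], [-A.T, mu**2 * np.eye(n)]])
b = np.concatenate([g, np.zeros(n)])
x = np.linalg.solve(K, b)
e, f_aug = x[:n], x[n:]

assert np.allclose(f_aug, f_normal)
assert np.allclose(e, g - A @ f_aug)   # e is the additive noise term g - Af
```

Eliminating e = g − Af from the first block row of (6) and substituting into the second reproduces (5) exactly, which is what the two assertions confirm.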

where I and the variable e denote the identity matrix of proper dimension and the additive noise e = g − Af, respectively. Note that the coefficient matrix K of the system (6) is non-Hermitian positive definite. For non-Hermitian positive definite systems, Bai et al. [3] studied efficient iterative methods based on the Hermitian and skew-Hermitian splitting (HSS) of the coefficient matrix K. Owing to the high efficiency and robustness of the HSS iteration method, it has attracted much attention from researchers, and many HSS variants have been proposed in recent years to solve different kinds of systems of linear equations [18], [11], [4]. Recently, for solving the augmented system (6), Lv et al. [19] established a special HSS (SHSS) iterative method by substituting α = 1 into the second step of the HSS one, which makes the eigenvalues of the corresponding iteration matrix easier to determine. Numerical results in [19] validate that the SHSS iteration method is superior to the HSS one. Then Cheng et al. [6] used the idea of [19] and proposed a new special HSS (NSHSS) iterative method by setting α = µ^2 in the second step of the HSS one. In [5], Benzi established a generalization of the HSS (GHSS) iteration method for solving positive definite, non-Hermitian linear systems. It is shown that the new scheme can outperform the standard HSS method in some situations. Subsequently, based on the GHSS iteration method proposed in [5], Aghazadeh et al. [1] split the Hermitian part of K as the sum of a Hermitian positive definite matrix and a Hermitian positive semidefinite matrix, and introduced a restricted version of the GHSS (RGHSS) iterative method for image restoration. Experimental results demonstrated that the RGHSS method is more effective and accurate than the SHSS method. After that, Aminikhah and Yousefi [2] constructed a new splitting of the Hermitian part of the coefficient matrix K for the GHSS method, and further presented a new special GHSS (SGHSS) method for solving ill-posed inverse problems. Lately, Fan et al. [8] proposed a class of upper and lower triangular (ULT) splitting iteration methods, the first type of the ULT (ULT-I) splitting iteration method and the second type of the ULT (ULT-II) splitting iteration method, for solving the augmented systems. The ULT-I and ULT-II iteration methods take the forms

[ I, 0; −A^T, µ^2 I + Q ] x^(k+1/2) = [ 0, −A; 0, Q ] x^(k) + b,
[ I, A; 0, µ^2 I + Q ] x^(k+1) = [ 0, 0; A^T, Q ] x^(k+1/2) + b  (7)

and

[ I, 0; −A^T, Q ] x^(k+1/2) = [ 0, −A; 0, Q − µ^2 I ] x^(k) + b,
[ I, A; 0, µ^2 I + Q ] x^(k+1) = [ 0, 0; A^T, Q ] x^(k+1/2) + b,  (8)

respectively. The convergence rates and optimal iteration parameters of the ULT iteration methods were derived, and numerical experiments illustrated that the two versions of the ULT methods considerably outperform newly developed methods such as the SHSS and RGHSS methods in terms of numerical performance and image recovering quality.

In this paper, we continue the research on solving the 2n-by-2n linear system (6). In order to further improve the convergence rates of the above ULT methods, inspired by [27], [26], we introduce control parameters into the ULT-I and ULT-II splitting iteration methods to construct a class of nonstationary ULT iteration methods. The control parameters involved in our methods are determined by minimizing the corresponding residual norms; thus the nonstationary ULT iteration methods are referred to as the minimize residual ULT (MRULT) iteration methods. It is expected that the MRULT-type methods converge faster than the ULT ones. In addition, the convergence properties of the MRULT-type methods are analyzed.

The arrangement of this paper is as follows. In Section II, we define the first type of the minimize residual ULT (MRULT-I) iteration method; the properties of the parameters involved in the MRULT-I method are discussed and the corresponding convergence theory is established there. The second type of the minimize residual ULT (MRULT-II) iteration method is presented to solve the augmented system (6) in Section III. Similarly, we investigate its convergence property for ill-posed problems. Section IV is devoted to presenting numerical examples that examine the feasibility and effectiveness of the MRULT-I and MRULT-II methods. Finally, in Section V, we give a brief conclusion.

II. THE FIRST TYPE OF THE MRULT METHOD AND ITS CONVERGENCE ANALYSIS

In [27], the authors proposed the minimize residual HSS (MRHSS) iteration method to solve non-Hermitian positive definite linear systems by applying the minimize residual technique to the HSS one. The two parameters involved in the MRHSS method are chosen by minimizing the residual norms at each step of the HSS iteration scheme. Numerical experiments in [27] showed that the MRHSS method has advantages over the HSS one. Inspired by [27], [26], in this section we introduce control parameters into the ULT-I iteration scheme (7) and establish the MRULT-I iteration method to further accelerate the convergence of the ULT-I iteration method.

Denote r^(k) = b − K x^(k), r^(k+1/2) = b − K x^(k+1/2), and

M_1 = [ I, 0; −A^T, µ^2 I + Q ],  N_1 = [ 0, −A; 0, Q ],
M_2 = [ I, A; 0, µ^2 I + Q ],  N_2 = [ 0, 0; A^T, Q ];  (9)

then K = M_1 − N_1 = M_2 − N_2, and the ULT-I iteration scheme (7) can be equivalently written as

x^(k+1/2) = x^(k) + M_1^{−1} r^(k),
x^(k+1) = x^(k+1/2) + M_2^{−1} r^(k+1/2).  (10)

Enlightened by the idea of [26], [27], we introduce two arbitrary positive parameters β_k and γ_k into (10) to modify the ULT-I method, which leads to a new iteration scheme of the form

x^(k+1/2) = x^(k) + β_k M_1^{−1} r^(k),
x^(k+1) = x^(k+1/2) + γ_k M_2^{−1} r^(k+1/2).  (11)


As mentioned in [27], the parameters β_k and γ_k involved in the iteration scheme (11) control the step sizes. If they are determined by minimizing the corresponding residual norms, the convergence rate of (11) may be faster than that of (10). Thus, we choose the parameters β_k and γ_k to minimize the residual norm at each step of the iteration scheme (11), and obtain the minimize residual ULT-I (MRULT-I) iteration method. Note that (1) is a real linear system, so it is appropriate to take β_k, γ_k > 0 here. The residual form of the iteration scheme (11) can be expressed as

r^(k+1/2) = r^(k) − β_k K M_1^{−1} r^(k),
r^(k+1) = r^(k+1/2) − γ_k K M_2^{−1} r^(k+1/2).  (12)

By simple calculation, the above two residual norms satisfy

∥r^(k+1/2)∥_2^2 = (r^(k+1/2), r^(k+1/2))
= (r^(k) − β_k K M_1^{−1} r^(k), r^(k) − β_k K M_1^{−1} r^(k))
= (r^(k), r^(k)) − (r^(k), β_k K M_1^{−1} r^(k)) − (β_k K M_1^{−1} r^(k), r^(k)) + β_k^2 ∥K M_1^{−1} r^(k)∥_2^2
= ∥r^(k)∥_2^2 − 2β_k (r^(k), K M_1^{−1} r^(k)) + β_k^2 ∥K M_1^{−1} r^(k)∥_2^2  (13)

and

∥r^(k+1)∥_2^2 = (r^(k+1), r^(k+1))
= (r^(k+1/2) − γ_k K M_2^{−1} r^(k+1/2), r^(k+1/2) − γ_k K M_2^{−1} r^(k+1/2))
= (r^(k+1/2), r^(k+1/2)) − (r^(k+1/2), γ_k K M_2^{−1} r^(k+1/2)) − (γ_k K M_2^{−1} r^(k+1/2), r^(k+1/2)) + γ_k^2 ∥K M_2^{−1} r^(k+1/2)∥_2^2
= ∥r^(k+1/2)∥_2^2 − 2γ_k (r^(k+1/2), K M_2^{−1} r^(k+1/2)) + γ_k^2 ∥K M_2^{−1} r^(k+1/2)∥_2^2.  (14)

Here, (r^(k), K M_1^{−1} r^(k)) = (K M_1^{−1} r^(k), r^(k)) and (r^(k+1/2), K M_2^{−1} r^(k+1/2)) = (K M_2^{−1} r^(k+1/2), r^(k+1/2)) because K M_1^{−1}, K M_2^{−1} and r^(k), r^(k+1/2) are real. By solving the equations ∂∥r^(k+1/2)∥_2^2 / ∂β_k = 0 and ∂∥r^(k+1)∥_2^2 / ∂γ_k = 0, one then obtains

β_k = (r^(k), K M_1^{−1} r^(k)) / ∥K M_1^{−1} r^(k)∥_2^2,
γ_k = (r^(k+1/2), K M_2^{−1} r^(k+1/2)) / ∥K M_2^{−1} r^(k+1/2)∥_2^2.  (15)
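The optimality of the step size β_k in (15) is easy to check numerically. The sketch below (illustrative random data, with Q = sI, one of the choices studied later) confirms that β_k computed from (15) yields a half-step residual norm no larger than that of any sampled step size:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, s = 30, 0.1, 1.0
A = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))
Q = s * I                                    # Q = sI, symmetric positive definite
K  = np.block([[I, A], [-A.T, mu**2 * I]])   # coefficient matrix of (6)
M1 = np.block([[I, Z], [-A.T, mu**2 * I + Q]])

r = rng.standard_normal(2 * n)               # an arbitrary residual vector
v = K @ np.linalg.solve(M1, r)               # v = K M1^{-1} r
beta = (r @ v) / (v @ v)                     # the step size of (15)

res = lambda t: np.linalg.norm(r - t * v)    # ||r - t v||_2 as a function of the step
assert all(res(beta) <= res(t) + 1e-12 for t in np.linspace(0.0, 2.0, 401))
```

Since ∥r − t v∥_2^2 is a quadratic in t, the minimizer (r, v)/∥v∥_2^2 matches (15) term by term.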

It is not difficult to check that ∂^2∥r^(k+1/2)∥_2^2 / ∂β_k^2 > 0 and ∂^2∥r^(k+1)∥_2^2 / ∂γ_k^2 > 0, which reveals that β_k and γ_k in (15) are the minimum points of the functions ∥r^(k+1/2)∥_2^2 and ∥r^(k+1)∥_2^2, respectively.

In summary, the detailed implementation of the MRULT-I iteration method can be listed as follows.

Algorithm 2.1:

1. Let β, γ > 0, and give an initial value f^(0) and e^(0) = g − A f^(0), with g being the available vector. Give τ > 0 and let M be the maximum prescribed number of outer iterations;
2. r^(0) = b − K x^(0), and divide r^(0) into (r_1^(0); r_2^(0)) with r_1^(0), r_2^(0) ∈ R^n;
3. For k = 0, 1, 2, ..., while ∥r^(k)∥_2 / ∥r^(0)∥_2 > τ and k < M:
4. compute t_1 = A^T r_1^(k) + r_2^(k);
5. solve (µ^2 I + Q) t_2 = t_1;
6. compute t_3 = r_1^(k) + A t_2 and t_4 = −A^T r_1^(k) + µ^2 t_2;
7. compute the value of β_k: β_k = [(r_1^(k))^T t_3 + (r_2^(k))^T t_4] / (∥t_3∥_2^2 + ∥t_4∥_2^2);
8. compute e^(k+1/2) = e^(k) + β_k r_1^(k) and f^(k+1/2) = f^(k) + β_k t_2;
9. compute r_1^(k+1/2) = r_1^(k) − β_k t_3 and r_2^(k+1/2) = r_2^(k) − β_k t_4;
10. solve (µ^2 I + Q) t_2 = r_2^(k+1/2);
11. compute t_3 = r_1^(k+1/2) and t_4 = −A^T r_1^(k+1/2) + (µ^2 I + A^T A) t_2;
12. compute the value of γ_k: γ_k = [(r_1^(k+1/2))^T t_3 + (r_2^(k+1/2))^T t_4] / (∥t_3∥_2^2 + ∥t_4∥_2^2);
13. compute e^(k+1) = e^(k+1/2) + γ_k (r_1^(k+1/2) − A t_2) and f^(k+1) = f^(k+1/2) + γ_k t_2;
14. compute r_1^(k+1) = r_1^(k+1/2) − γ_k t_3 and r_2^(k+1) = r_2^(k+1/2) − γ_k t_4;
15. end for
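A direct transcription of Algorithm 2.1 might look as follows. This is a sketch, not the authors' code: the initial guess f^(0) = 0, the dense solves with µ^2 I + Q (which in practice would be factorized once), and the test problem at the end are all illustrative choices.

```python
import numpy as np

def mrult1(A, g, mu, Q, tau=1e-8, max_it=500):
    """Sketch of the MRULT-I method (Algorithm 2.1) for the augmented system (6).

    Q is a symmetric positive definite parameter matrix; returns (e, f)."""
    n = A.shape[0]
    S = mu**2 * np.eye(n) + Q            # mu^2 I + Q
    f = np.zeros(n)                      # f^(0) = 0 (illustrative initial guess)
    e = g - A @ f                        # e^(0) = g - A f^(0), as in step 1
    r1 = g - e - A @ f                   # r^(0) = b - K x^(0), split into (r1; r2)
    r2 = A.T @ e - mu**2 * f
    r0 = np.hypot(np.linalg.norm(r1), np.linalg.norm(r2))
    for _ in range(max_it):
        if np.hypot(np.linalg.norm(r1), np.linalg.norm(r2)) <= tau * r0:
            break
        t1 = A.T @ r1 + r2                                    # step 4
        t2 = np.linalg.solve(S, t1)                           # step 5
        t3 = r1 + A @ t2                                      # step 6
        t4 = -A.T @ r1 + mu**2 * t2
        beta = (r1 @ t3 + r2 @ t4) / (t3 @ t3 + t4 @ t4)      # step 7
        e, f = e + beta * r1, f + beta * t2                   # step 8
        r1, r2 = r1 - beta * t3, r2 - beta * t4               # step 9
        t2 = np.linalg.solve(S, r2)                           # step 10
        t3 = r1                                               # step 11
        t4 = -A.T @ r1 + mu**2 * t2 + A.T @ (A @ t2)
        gamma = (r1 @ t3 + r2 @ t4) / (t3 @ t3 + t4 @ t4)     # step 12
        e, f = e + gamma * (r1 - A @ t2), f + gamma * t2      # step 13
        r1, r2 = r1 - gamma * t3, r2 - gamma * t4             # step 14
    return e, f

# Illustrative use: modest singular values so that the later convergence
# conditions hold for Q = I; the limit is the Tikhonov solution of (5).
rng = np.random.default_rng(2)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.linspace(0.1, 0.9, n)) @ V.T
g = rng.standard_normal(n)
mu = 0.5
e, f = mrult1(A, g, mu, np.eye(n))       # Q = I, i.e. Q = sI with s = 1
f_tik = np.linalg.solve(A.T @ A + mu**2 * np.eye(n), A.T @ g)
assert np.allclose(f, f_tik, atol=1e-6)
```

Note that with e^(0) = g − A f^(0) the first residual block r_1^(0) vanishes, which is harmless: the half-step then works entirely from r_2^(0) = A^T e^(0) − µ^2 f^(0).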

Remark 2.1: When β_k = γ_k = 1, the MRULT-I iteration method reduces exactly to the ULT-I one presented in [8], which implies that the MRULT-I iteration method with properly chosen parameters may be more efficient than the ULT-I one.

In the sequel, we first discuss the properties of the parameters β_k and γ_k, and then analyze the convergence of the MRULT-I iteration method. It follows from the derivations of β_k and γ_k that the parameters in (15) are the minimum points of the residual norms ∥r^(k+1/2)∥ and ∥r^(k+1)∥, respectively. However, the residual norm ∥r^(k+1)∥ can also be viewed as a real function of the real variable pair (β_k, γ_k). We will demonstrate that (β_k, γ_k) defined by (15) is the minimum point of ∥r^(k+1)∥. To this end, we start with a lemma which is useful in the following proof.

Lemma 2.1: Let K and M_2 be defined as in (6) and (9), respectively, and let the symmetric positive definite Q ∈ R^{n×n} satisfy (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A). For any vector z ∈ R^{2n}, we have

d(z, K M_2^{−1} z) / d(∥z∥_2^2) = (1 / ∥z∥_2^2) (z, K M_2^{−1} z).

Proof. It follows from (9) that

K M_2^{−1} = I − N_2 M_2^{−1}
= I − [ 0, 0; A^T, Q ] [ I, A; 0, µ^2 I + Q ]^{−1}
= I − [ 0, 0; A^T, Q ] [ I, −A(µ^2 I + Q)^{−1}; 0, (µ^2 I + Q)^{−1} ]
= [ I, 0; −A^T, I − (Q − A^T A)(µ^2 I + Q)^{−1} ].

We now divide the vector z into two parts (z_1; z_2) with z_1, z_2 ∈ R^n; then the inner product (z, K M_2^{−1} z) has the form

(z, K M_2^{−1} z) = z_1^T z_1 − z_2^T A^T z_1 + z_2^T z_2 − z_2^T (Q − A^T A)(µ^2 I + Q)^{−1} z_2.

Inasmuch as (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A), the first-order derivative is

d(z, K M_2^{−1} z) / dz = [ 2z_1 − A z_2; −A^T z_1 + 2z_2 − 2(Q − A^T A)(µ^2 I + Q)^{−1} z_2 ].  (16)

From the proof of Lemma 1 in [27], one has

dz / d(∥z∥_2^2) = (1 / (2∥z∥_2^2)) z,


which together with (16) leads to

d(z, K M_2^{−1} z) / d(∥z∥_2^2) = ( d(z, K M_2^{−1} z) / dz, dz / d(∥z∥_2^2) ) = (1 / ∥z∥_2^2) (z, K M_2^{−1} z).

This completes the proof of the lemma.

By Lemma 2.1 and in a manner similar to the proof of Theorem 1 in [27], the fact that (β_k, γ_k) defined by (15) is the minimum point of ∥r^(k+1)∥ is demonstrated in the following theorem.

Theorem 2.1: Let the symmetric positive definite matrix Q ∈ R^{n×n} satisfy (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A). Then the real variable pair (β_k, γ_k) defined by (15) is the minimum point of ∥r^(k+1)∥ of the MRULT-I iteration scheme (11), which means the values of (β_k, γ_k) defined by (15) are optimal in the real field R.

Proof. Let r(τ) = r^(k) − τ K M_1^{−1} r^(k) and

ϕ(r(τ), ω) = ∥r∥_2^2 − 2ω (r, K M_2^{−1} r) + ω^2 ∥K M_2^{−1} r∥_2^2.

Thus, we have

r(β_k) = r^(k+1/2),  ϕ(r(β_k), γ_k) = ∥r^(k+1)∥_2^2,

and

∥r∥_2^2 = ∥r^(k)∥_2^2 − 2τ (r^(k), K M_1^{−1} r^(k)) + τ^2 ∥K M_1^{−1} r^(k)∥_2^2.

By Lemma 2.1 and Lemma 1 in [27], one may deduce that

∂ϕ / ∂(∥r∥_2^2) = ∥r∥_2^2 / ∥r∥_2^2 − (2ω / ∥r∥_2^2)(r, K M_2^{−1} r) + (ω^2 / ∥r∥_2^2)∥K M_2^{−1} r∥_2^2 = ϕ / ∥r∥_2^2.

Denote ϕ(τ, ω) = ϕ(r(τ), ω); then the first-order partial derivatives of ϕ(τ, ω) are

∂ϕ/∂τ = [∂ϕ / ∂(∥r∥_2^2)] · [d(∥r∥_2^2) / dτ] = (2ϕ / ∥r∥_2^2)(τ ∥K M_1^{−1} r^(k)∥_2^2 − (r^(k), K M_1^{−1} r^(k))),
∂ϕ/∂ω = 2(ω ∥K M_2^{−1} r∥_2^2 − (r, K M_2^{−1} r)).

It can be seen that the real number pair (β_k, γ_k) defined by (15) is the unique stationary point of the function ϕ. Moreover, (β_k, γ_k) is the minimum point of the function ϕ. Denote

Φ_1(τ) = τ ∥K M_1^{−1} r^(k)∥_2^2 − (r^(k), K M_1^{−1} r^(k))

and

Φ_2(τ, ω) = ω ∥K M_2^{−1} r∥_2^2 − (r, K M_2^{−1} r).

Then, the second-order partial derivatives of ϕ(τ, ω) are

∂^2ϕ/∂τ^2 = Φ_1(τ) · ∂/∂τ (2ϕ / ∥r∥_2^2) + (2ϕ / ∥r∥_2^2) ∥K M_1^{−1} r^(k)∥_2^2,
∂^2ϕ/∂τ∂ω = Φ_1(τ) · ∂/∂ω (2ϕ / ∥r∥_2^2),
∂^2ϕ/∂ω∂τ = 2 [∂Φ_2(τ, ω) / ∂(∥r∥_2^2)] · d(∥r∥_2^2)/dτ = 2 [Φ_2(τ, ω) / ∥r∥_2^2] · d(∥r∥_2^2)/dτ,
∂^2ϕ/∂ω^2 = 2 ∥K M_2^{−1} r∥_2^2.

Keeping in mind that Φ_1(β_k) = 0 and Φ_2(β_k, γ_k) = 0, the Hessian matrix of ϕ at the stationary point (β_k, γ_k) has the form

[ 2∥r^(k+1)∥_2^2 ∥K M_1^{−1} r^(k)∥_2^2 / ∥r^(k+1/2)∥_2^2, 0; 0, 2∥K M_2^{−1} r^(k+1/2)∥_2^2 ].

It is easy to see that the Hessian matrix is Hermitian positive definite, which implies that the stationary point (β_k, γ_k) defined by (15) is the unique minimum point of the function ϕ. This completes the proof.

Based on Theorem 2.1, we give the following theorem on the relation between ∥r^(k+1)∥ and ∥r^(k)∥.

Theorem 2.2: For the augmented system (6), if the parameter matrix Q ∈ R^{n×n} is symmetric positive definite and satisfies (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A), then the residual norm of the MRULT-I iteration method satisfies the relation

∥r^(k+1)∥_2 ≤ ∥𝒜∥_2 ∥r^(k)∥_2,  (17)

where

𝒜 = [ 0, 0; G A^T, G ]  with  G = −A^T A (µ^2 I + Q)^{−1} + (Q − A^T A)(µ^2 I + Q)^{−1} Q (µ^2 I + Q)^{−1}.

Proof. From (12), the residual r^(k+1) of the MRULT-I method satisfies

r^(k+1) = r^(k) − β_k K M_1^{−1} r^(k) − γ_k K M_2^{−1} (r^(k) − β_k K M_1^{−1} r^(k))
= (I − β_k K M_1^{−1} − γ_k K M_2^{−1} + β_k γ_k K M_2^{−1} K M_1^{−1}) r^(k)
= (I − γ_k K M_2^{−1})(I − β_k K M_1^{−1}) r^(k),

which together with (β_k, γ_k) being the minimum point of the function ∥r^(k+1)∥_2 leads to

∥r^(k+1)∥_2 = ∥(I − γ_k K M_2^{−1})(I − β_k K M_1^{−1}) r^(k)∥_2
≤ ∥(I − K M_2^{−1})(I − K M_1^{−1}) r^(k)∥_2
≤ ∥(I − K M_2^{−1})(I − K M_1^{−1})∥_2 ∥r^(k)∥_2
= ∥N_2 M_2^{−1} N_1 M_1^{−1}∥_2 ∥r^(k)∥_2.  (18)

The last equality is due to I − K M_2^{−1} = N_2 M_2^{−1} and I − K M_1^{−1} = N_1 M_1^{−1}. Next, we analyze the structure of the matrix N_2 M_2^{−1} N_1 M_1^{−1}. By algebraic manipulations, we obtain

N_2 M_2^{−1} N_1 M_1^{−1}
= [ 0, 0; A^T, Q ] [ I, A; 0, µ^2 I + Q ]^{−1} [ 0, −A; 0, Q ] [ I, 0; −A^T, µ^2 I + Q ]^{−1}
= [ 0, 0; A^T, Q ] [ I, −A(µ^2 I + Q)^{−1}; 0, (µ^2 I + Q)^{−1} ] [ 0, −A; 0, Q ] [ I, 0; (µ^2 I + Q)^{−1} A^T, (µ^2 I + Q)^{−1} ]
= [ 0, 0; 0, −A^T A + (Q − A^T A)(µ^2 I + Q)^{−1} Q ] [ I, 0; (µ^2 I + Q)^{−1} A^T, (µ^2 I + Q)^{−1} ]
= [ 0, 0; G A^T, G ],


where G = −A^T A (µ^2 I + Q)^{−1} + (Q − A^T A)(µ^2 I + Q)^{−1} Q (µ^2 I + Q)^{−1}. It follows from the above proof that the conclusion (17) holds.

In particular, we study the convergence of the MRULT-I iteration method with Q = sI and Q = sI + A^T A (s > 0) in detail. To this end, the following lemma is first presented.

Lemma 2.2: Let X = diag(x_1, x_2, ..., x_n) and Y = diag(y_1, y_2, ..., y_n) be two real diagonal matrices; then

∥ [ 0, 0; XY, X ] ∥_2 = max_{1≤i≤n} |x_i| √(1 + y_i^2).

Proof. The proof of this lemma is similar to that of Lemma4.1 in [17], so we omit it here.
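Lemma 2.2 is also easy to verify numerically. The following sketch (with random illustrative diagonals) compares the spectral norm of the block matrix with the closed-form maximum:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
x = rng.standard_normal(n)
y = rng.standard_normal(n)
X, Y = np.diag(x), np.diag(y)

# Block matrix [0, 0; XY, X] of Lemma 2.2
M = np.block([[np.zeros((n, n)), np.zeros((n, n))], [X @ Y, X]])
lhs = np.linalg.norm(M, 2)                     # spectral (induced 2-) norm
rhs = np.max(np.abs(x) * np.sqrt(1 + y**2))    # max_i |x_i| sqrt(1 + y_i^2)
assert np.isclose(lhs, rhs)
```

The identity holds because M M^T is block diagonal with diagonal entries x_i^2 (1 + y_i^2), so the largest singular value of M is max_i |x_i| √(1 + y_i^2).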

Theorem 2.3: For the augmented system (6), the MRULT-I iteration method with Q = sI (s > 0) is convergent if the conditions

f_1(s) = (1 + cos θ) s^2 + 2s(µ^2 cos θ − σ_1^2) + µ^2(µ^2 cos θ − σ_1^2) > 0,
f_2(s) = (1 − cos θ) s^2 − 2s(µ^2 cos θ + σ_n^2) − µ^2(µ^2 cos θ + σ_n^2) < 0  (19)

hold, where tan θ = σ_1.

Proof. Since the matrix Q = sI satisfies (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A), it follows from Theorem 2.2 that

∥r^(k+1)∥_2 ≤ ∥ [ 0, 0; H A^T, H ] ∥_2 ∥r^(k)∥_2

with H = −A^T A / (µ^2 + s) + s(sI − A^T A) / (µ^2 + s)^2. The singular value decomposition (SVD) of A is defined as

A = U Σ V^T,  (20)

where U = [u_1, u_2, ..., u_n], V = [v_1, v_2, ..., v_n] ∈ R^{n×n} are orthogonal matrices and Σ = diag[σ_1, σ_2, ..., σ_n] ∈ R^{n×n} has non-negative diagonal elements appearing in non-increasing order, σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0. The numbers σ_i are the singular values of A. By (20), straightforward computation reveals that

∥r^(k+1)∥_2 ≤ ∥ [ 0, 0; H A^T, H ] ∥_2 ∥r^(k)∥_2
= ∥ [ U, 0; 0, V ] [ 0, 0; Λ Σ^T, Λ ] [ U^T, 0; 0, V^T ] ∥_2 ∥r^(k)∥_2
= ∥ [ 0, 0; Λ Σ^T, Λ ] ∥_2 ∥r^(k)∥_2,

where Λ = −Σ^T Σ / (µ^2 + s) + s(sI − Σ^T Σ) / (µ^2 + s)^2. Therefore, by making use of Lemma 2.2, we obtain the estimate

∥r^(k+1)∥_2 ≤ ∥ [ 0, 0; Λ Σ^T, Λ ] ∥_2 ∥r^(k)∥_2
≤ max_{1≤i≤n} { |(−σ_i^2 (µ^2 + s) + s(s − σ_i^2)) / (µ^2 + s)^2| √(1 + σ_i^2) } ∥r^(k)∥_2.  (21)

To make the MRULT-I iteration method with Q = sI convergent, it is enough to have

|(−σ_i^2 (µ^2 + s) + s(s − σ_i^2)) / (µ^2 + s)^2|^2 (1 + σ_i^2) < 1  (22)

for all σ_i (1 ≤ i ≤ n). For notational simplicity we denote

c_i = |(−σ_i^2 (µ^2 + s) + s(s − σ_i^2)) / (µ^2 + s)^2|.

If c_i < cos θ with σ_1 = tan θ holds for all 1 ≤ i ≤ n, then c_i^2 (1 + σ_i^2) < cos^2 θ (1 + tan^2 θ) = 1, which makes Inequality (22) valid. Thus, we only need to solve the inequality c_i < cos θ, i.e.,

−cos θ < (−σ_i^2 (µ^2 + s) + s(s − σ_i^2)) / (µ^2 + s)^2 < cos θ.  (23)

Due to the fact that (−σ_i^2 (µ^2 + s) + s(s − σ_i^2)) / (µ^2 + s)^2 = (−σ_i^2 (µ^2 + 2s) + s^2) / (µ^2 + s)^2 is a strictly monotone decreasing function of σ_i^2, Inequality (23) is equivalent to

(−σ_1^2 (µ^2 + s) + s(s − σ_1^2)) / (µ^2 + s)^2 > −cos θ

and

(−σ_n^2 (µ^2 + s) + s(s − σ_n^2)) / (µ^2 + s)^2 < cos θ.

Directly solving the above inequalities yields the sufficient conditions (19), which guarantee the convergence of the MRULT-I iteration method with Q = sI (s > 0).
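The sufficient conditions (19) can be checked numerically. In the following sketch, the values µ = 0.5, s = 1 and singular values in [0.1, 0.9] are illustrative choices (not from the paper's experiments) for which both conditions hold and the contraction factor in (21) is indeed below one:

```python
import numpy as np

mu, s = 0.5, 1.0
sigma = np.linspace(0.1, 0.9, 50)       # illustrative singular values of A
s1, sn = sigma.max(), sigma.min()       # sigma_1 and sigma_n
cos_t = 1.0 / np.sqrt(1.0 + s1**2)      # tan(theta) = sigma_1

# Conditions (19)
f1 = (1 + cos_t)*s**2 + 2*s*(mu**2*cos_t - s1**2) + mu**2*(mu**2*cos_t - s1**2)
f2 = (1 - cos_t)*s**2 - 2*s*(mu**2*cos_t + sn**2) - mu**2*(mu**2*cos_t + sn**2)
assert f1 > 0 and f2 < 0

# Contraction factor from (21): max_i |lambda_i| sqrt(1 + sigma_i^2) < 1
factor = np.max(np.abs(-sigma**2*(mu**2 + 2*s) + s**2) / (mu**2 + s)**2
                * np.sqrt(1 + sigma**2))
assert factor < 1
```

For these values the factor is roughly 0.7, so by (21) the residual norm shrinks by at least that amount per iteration.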

Theorem 2.4: For the augmented system (6), the MRULT-I iteration method with Q = sI + A^T A (s > 0) is convergent if the conditions

f_1(s) = (1 + cos θ) s^2 + 2s(µ^2 + σ_1^2) cos θ + (µ^2 + σ_1^2)[cos θ (µ^2 + σ_1^2) − σ_1^2] > 0,
f_2(s) = (1 − cos θ) s^2 − 2s(µ^2 + σ_n^2) cos θ − (µ^2 + σ_n^2)[cos θ (µ^2 + σ_n^2) + σ_n^2] < 0  (24)

hold, where tan θ = σ_1.

Proof. The matrix Q = sI + A^T A satisfies (Q − A^T A)(µ^2 I + Q)^{−1} = (µ^2 I + Q)^{−1}(Q − A^T A); thus, from Theorem 2.2, we have

∥r^(k+1)∥_2 ≤ ∥ [ 0, 0; W A^T, W ] ∥_2 ∥r^(k)∥_2,  (25)

where W = −A^T A [(µ^2 + s)I + A^T A]^{−1} + s[(µ^2 + s)I + A^T A]^{−1}(sI + A^T A)[(µ^2 + s)I + A^T A]^{−1}. By means of (20), concrete computation gives

W = V Λ V^T

and

W A^T = V Λ Σ^T U^T,

where Λ = −Σ^T Σ [(µ^2 + s)I + Σ^T Σ]^{−1} + s[(µ^2 + s)I + Σ^T Σ]^{−1}(sI + Σ^T Σ)[(µ^2 + s)I + Σ^T Σ]^{−1} is a diagonal matrix. Substituting the above equations into (25) yields

∥r^(k+1)∥_2 ≤ ∥ [ 0, 0; V Λ Σ^T U^T, V Λ V^T ] ∥_2 ∥r^(k)∥_2
= ∥ [ U, 0; 0, V ] [ 0, 0; Λ Σ^T, Λ ] [ U^T, 0; 0, V^T ] ∥_2 ∥r^(k)∥_2
= ∥ [ 0, 0; Λ Σ^T, Λ ] ∥_2 ∥r^(k)∥_2.


Taking into account that Λ Σ^T and Λ are both diagonal matrices and using Lemma 2.2, we infer that

∥r^(k+1)∥_2 ≤ max_{1≤i≤n} { |((s + σ_i^2)(s − σ_i^2) − µ^2 σ_i^2) / (µ^2 + s + σ_i^2)^2| √(1 + σ_i^2) } ∥r^(k)∥_2.

The MRULT-I iteration method with Q = sI + A^T A is convergent if

|((s + σ_i^2)(s − σ_i^2) − µ^2 σ_i^2) / (µ^2 + s + σ_i^2)^2|^2 (1 + σ_i^2) < 1  (26)

for all σ_i (1 ≤ i ≤ n). Let us denote

c_i = |((s + σ_i^2)(s − σ_i^2) − µ^2 σ_i^2) / (µ^2 + s + σ_i^2)^2|.

Similar to the proof of Theorem 2.3, if c_i < cos θ with σ_1 = tan θ holds for all 1 ≤ i ≤ n, then c_i^2 (1 + σ_i^2) < cos^2 θ (1 + tan^2 θ) = 1, which is exactly (26). Moreover, c_i < cos θ is equivalent to

i ) < cos2 θ(1 +tan2 θ) = 1, which is exactly (26). Moreover, ci < cos θ isequivalent to

− cos θ <(s+ σ2

i )(s− σ2i )− µ2σ2

i

(µ2 + s+ σ2i )

2< cos θ. (27)

It is not difficult to verify that (s+σ2i )(s−σ2

i )−µ2σ2i

(µ2+s+σ2i)2

=s2−σ2

i (µ2+σ2

i )

(µ2+s+σ2i)2

is a strictly monotone decreasing function withregard to σ2

i , so Inequality (27) holds if and only if

(s+ σ21)(s− σ2

1)− µ2σ21

(µ2 + s+ σ21)

2> − cos θ

and

(s+ σ2n)(s− σ2

n)− µ2σ2n

(µ2 + s+ σ2n)

2< cos θ.

The sufficient conditions (24) for the convergence of theMRULT-I iteration method with Q = sI + ATA follow bysolving the above inequalities.
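As a numerical sanity check of the proof above (our illustration, not part of the paper's code; the matrix, $s$ and $\mu$ below are synthetic), one can verify that the block $W$ is diagonalized by the right singular vectors of $A$ and that its eigenvalues match the closed form $(s^2 - \sigma_i^2(\mu^2+\sigma_i^2))/(\mu^2+s+\sigma_i^2)^2$, the quantity whose absolute value appears in (26):

```python
import numpy as np

# Sanity check: for Q = s*I + A^T A the block W in the proof of Theorem 2.4
# equals V diag(lam_i) V^T with
#   lam_i = (s^2 - sig_i^2 (mu^2 + sig_i^2)) / (mu^2 + s + sig_i^2)^2.
rng = np.random.default_rng(0)
n, s, mu = 6, 0.5, 0.1
A = rng.standard_normal((n, n))
U, sig, Vt = np.linalg.svd(A)
AtA = A.T @ A
B = np.linalg.inv((mu**2 + s) * np.eye(n) + AtA)   # [(mu^2+s)I + A^T A]^{-1}
W = -AtA @ B + s * B @ (s * np.eye(n) + AtA) @ B
lam = (s**2 - sig**2 * (mu**2 + sig**2)) / (mu**2 + s + sig**2)**2
err = np.abs(Vt @ W @ Vt.T - np.diag(lam)).max()   # V^T W V should be diag(lam)
```

All factors of $W$ are rational functions of $A^TA$, so they share its eigenvectors; the check confirms the diagonal entries used in the bound.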

III. THE SECOND TYPE OF MRULT METHOD AND ITS CONVERGENCE ANALYSIS

In this section, we discuss the second type of the MRULT (MRULT-II) iteration method for solving the augmented systems (6) by modifying the ULT-II iteration scheme (8). Denote
\[
K_1 = \begin{pmatrix} I & 0 \\ -A^T & Q \end{pmatrix}, \qquad
L_1 = \begin{pmatrix} 0 & -A \\ 0 & Q - \mu^2 I \end{pmatrix};
\]
then an equivalent form of the ULT-II iteration scheme (8) is as follows:
\[
\left\{
\begin{aligned}
x^{(k+\frac12)} &= x^{(k)} + K_1^{-1} r^{(k)}, \\
x^{(k+1)} &= x^{(k+\frac12)} + M_2^{-1} r^{(k+\frac12)},
\end{aligned}
\right. \tag{28}
\]

where $r^{(k)}$ and $r^{(k+\frac12)}$ are defined as in Section II. Using acceleration techniques similar to those of [27], [26], we propose the MRULT-II method incorporating two positive parameters $\beta_k$ and $\gamma_k$ to solve the augmented systems (6), which yields the following iteration scheme:
\[
\left\{
\begin{aligned}
x^{(k+\frac12)} &= x^{(k)} + \beta_k K_1^{-1} r^{(k)}, \\
x^{(k+1)} &= x^{(k+\frac12)} + \gamma_k M_2^{-1} r^{(k+\frac12)}.
\end{aligned}
\right. \tag{29}
\]

The residual form of the iteration scheme (29) can be written as
\[
\left\{
\begin{aligned}
r^{(k+\frac12)} &= r^{(k)} - \beta_k K K_1^{-1} r^{(k)}, \\
r^{(k+1)} &= r^{(k+\frac12)} - \gamma_k K M_2^{-1} r^{(k+\frac12)}.
\end{aligned}
\right. \tag{30}
\]
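The residual form (30) is an algebraic identity: updating $x$ by $\beta K_1^{-1} r$ changes $r = b - Kx$ by exactly $-\beta K K_1^{-1} r$. A quick NumPy check (our illustration with synthetic data; the block form of $K$ and the choice $Q = sI$ are assumptions consistent with the algorithm below):

```python
import numpy as np

# Check of the residual form (30): updating x by beta*K1^{-1} r changes the
# residual r = b - K x by -beta*K*K1^{-1} r (assumed block forms; synthetic data).
rng = np.random.default_rng(1)
n, mu, s, beta = 5, 0.1, 0.7, 0.8
A = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))
K  = np.block([[I, A], [-A.T, mu**2 * I]])   # assumed augmented matrix of (6)
K1 = np.block([[I, Z], [-A.T, s * I]])       # K1 with Q = s*I
b = rng.standard_normal(2 * n)
x = rng.standard_normal(2 * n)
r = b - K @ x
x_half = x + beta * np.linalg.solve(K1, r)
gap = np.abs((b - K @ x_half) - (r - beta * K @ np.linalg.solve(K1, r))).max()
```

The same computation with $M_2$ in place of $K_1$ justifies the second line of (30).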

By virtue of the same technique as in the MRULT-I iteration method, the parameters $\beta_k$ and $\gamma_k$ in the MRULT-II iteration method are determined by minimizing the residual norms $\|r^{(k+\frac12)}\|_2$ and $\|r^{(k+1)}\|_2$, respectively. As a matter of fact, direct computation gives
\[
\begin{aligned}
\|r^{(k+\frac12)}\|_2^2
&= (r^{(k+\frac12)}, r^{(k+\frac12)}) \\
&= (r^{(k)} - \beta_k K K_1^{-1} r^{(k)},\ r^{(k)} - \beta_k K K_1^{-1} r^{(k)}) \\
&= (r^{(k)}, r^{(k)}) - (r^{(k)}, \beta_k K K_1^{-1} r^{(k)}) - (\beta_k K K_1^{-1} r^{(k)}, r^{(k)}) + \beta_k^2 \|K K_1^{-1} r^{(k)}\|_2^2 \\
&= \|r^{(k)}\|_2^2 - 2\beta_k (r^{(k)}, K K_1^{-1} r^{(k)}) + \beta_k^2 \|K K_1^{-1} r^{(k)}\|_2^2
\end{aligned}
\]
and
\[
\begin{aligned}
\|r^{(k+1)}\|_2^2
&= (r^{(k+1)}, r^{(k+1)}) \\
&= (r^{(k+\frac12)} - \gamma_k K M_2^{-1} r^{(k+\frac12)},\ r^{(k+\frac12)} - \gamma_k K M_2^{-1} r^{(k+\frac12)}) \\
&= \|r^{(k+\frac12)}\|_2^2 - 2\gamma_k (r^{(k+\frac12)}, K M_2^{-1} r^{(k+\frac12)}) + \gamma_k^2 \|K M_2^{-1} r^{(k+\frac12)}\|_2^2.
\end{aligned}
\]

As discussed in Section II, one then obtains
\[
\beta_k = \frac{(r^{(k)}, K K_1^{-1} r^{(k)})}{\|K K_1^{-1} r^{(k)}\|_2^2}, \qquad
\gamma_k = \frac{(r^{(k+\frac12)}, K M_2^{-1} r^{(k+\frac12)})}{\|K M_2^{-1} r^{(k+\frac12)}\|_2^2}. \tag{31}
\]
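Each parameter in (31) is the one-dimensional least-squares minimizer of $\beta \mapsto \|r - \beta q\|_2$ with $q = K K_1^{-1} r^{(k)}$ (respectively $q = K M_2^{-1} r^{(k+\frac12)}$), namely $\beta = (r, q)/\|q\|_2^2$. The sketch below (synthetic vectors, our illustration) checks this against a brute-force grid search:

```python
import numpy as np

# beta_k in (31) is the 1-D least-squares minimizer of beta -> ||r - beta*q||_2,
# i.e. beta = (r, q) / ||q||_2^2; a grid search recovers the same minimizer.
rng = np.random.default_rng(2)
r = rng.standard_normal(8)
q = rng.standard_normal(8)
beta = np.dot(r, q) / np.dot(q, q)
grid = np.linspace(beta - 1.0, beta + 1.0, 2001)
best = grid[int(np.argmin([np.linalg.norm(r - t * q) for t in grid]))]
```

Since $\beta = 0$ is admissible, the minimized residual norm never exceeds $\|r\|_2$, which is why the MRULT residuals are non-increasing.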

Therefore, we have the following specific description of the MRULT-II iteration method.

Algorithm 3.1:
1. Let $\beta, \gamma > 0$, and give an initial value $f^{(0)}$ and $e^{(0)} = g - Af^{(0)}$, with $g$ being the available vector. Given $\tau > 0$ and the maximum prescribed number $M$ of outer iterations,
2. compute $r^{(0)} = b - Kx^{(0)}$, and divide $r^{(0)}$ into $(r_1^{(0)}; r_2^{(0)})$ with $r_1^{(0)}, r_2^{(0)} \in \mathbb{R}^n$;
3. for $k = 0, 1, 2, \cdots$, while $\|r^{(k)}\|_2 / \|r^{(0)}\|_2 > \tau$ and $k < M$,
4. compute $t_1 = A^T r_1^{(k)} + r_2^{(k)}$;
5. solve $Q t_2 = t_1$;
6. compute $t_3 = r_1^{(k)} + A t_2$ and $t_4 = -A^T r_1^{(k)} + \mu^2 t_2$;
7. compute the value of $\beta_k$: $\beta_k = \frac{(r_1^{(k)})^T t_3 + (r_2^{(k)})^T t_4}{\|t_3\|_2^2 + \|t_4\|_2^2}$;
8. compute $e^{(k+\frac12)} = e^{(k)} + \beta_k r_1^{(k)}$ and $f^{(k+\frac12)} = f^{(k)} + \beta_k t_2$;
9. compute $r_1^{(k+\frac12)} = r_1^{(k)} - \beta_k t_3$ and $r_2^{(k+\frac12)} = r_2^{(k)} - \beta_k t_4$;
10. solve $(\mu^2 I + Q) t_2 = r_2^{(k+\frac12)}$;
11. compute $t_3 = r_1^{(k+\frac12)}$ and $t_4 = -A^T r_1^{(k+\frac12)} + (\mu^2 I + A^TA) t_2$;
12. compute the value of $\gamma_k$: $\gamma_k = \frac{(r_1^{(k+\frac12)})^T t_3 + (r_2^{(k+\frac12)})^T t_4}{\|t_3\|_2^2 + \|t_4\|_2^2}$;
13. compute $e^{(k+1)} = e^{(k+\frac12)} + \gamma_k (r_1^{(k+\frac12)} - A t_2)$ and $f^{(k+1)} = f^{(k+\frac12)} + \gamma_k t_2$;
14. compute $r_1^{(k+1)} = r_1^{(k+\frac12)} - \gamma_k t_3$ and $r_2^{(k+1)} = r_2^{(k+\frac12)} - \gamma_k t_4$;
15. end for
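The paper's experiments are in MATLAB; purely for illustration, the following NumPy transcription of Algorithm 3.1 with $Q = sI$ (the test matrix, $\mu$ and $s$ are our assumptions, chosen so that the convergence condition of Theorem 3.3 below is satisfied) solves a small synthetic augmented system and is compared against the directly computed Tikhonov solution of $(A^TA + \mu^2 I)f = A^T g$:

```python
import numpy as np

# Illustrative NumPy sketch of Algorithm 3.1 with Q = s*I (synthetic problem;
# mrult2 is our name, not the paper's).
def mrult2(A, g, mu, s, tau=1e-10, M=500):
    n = A.shape[1]
    Q = s * np.eye(n)
    f = np.zeros(n)                                   # step 1: f^(0) = 0
    e = g - A @ f
    r1 = g - e - A @ f                                # step 2: r^(0) = b - K x^(0)
    r2 = A.T @ e - mu**2 * f
    r0 = np.hypot(np.linalg.norm(r1), np.linalg.norm(r2))
    for k in range(M):                                # step 3
        if np.hypot(np.linalg.norm(r1), np.linalg.norm(r2)) <= tau * r0:
            break
        t2 = np.linalg.solve(Q, A.T @ r1 + r2)        # steps 4-5
        t3 = r1 + A @ t2                              # step 6
        t4 = -A.T @ r1 + mu**2 * t2
        bk = (r1 @ t3 + r2 @ t4) / (t3 @ t3 + t4 @ t4)    # step 7
        e, f = e + bk * r1, f + bk * t2               # step 8
        r1, r2 = r1 - bk * t3, r2 - bk * t4           # step 9
        t2 = np.linalg.solve(mu**2 * np.eye(n) + Q, r2)   # step 10
        t3 = r1.copy()                                # step 11
        t4 = -A.T @ r1 + mu**2 * t2 + A.T @ (A @ t2)
        gk = (r1 @ t3 + r2 @ t4) / (t3 @ t3 + t4 @ t4)    # step 12
        e, f = e + gk * (r1 - A @ t2), f + gk * t2    # step 13
        r1, r2 = r1 - gk * t3, r2 - gk * t4           # step 14
    return f

rng = np.random.default_rng(3)
n = 16
Uq, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vq, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Uq @ np.diag(np.linspace(0.3, 0.01, n)) @ Vq.T    # mildly ill-conditioned
g = A @ rng.standard_normal(n)
mu, s = 0.3, 0.18
f = mrult2(A, g, mu, s)
f_tik = np.linalg.solve(A.T @ A + mu**2 * np.eye(n), A.T @ g)  # what (6) encodes
err = np.linalg.norm(f - f_tik) / np.linalg.norm(f_tik)
```

At the augmented solution $e = g - Af$ and $A^T e = \mu^2 f$, so the limit coincides with the Tikhonov-regularized solution, which the comparison confirms.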


Remark 3.1: If $\beta_k = \gamma_k = 1$, the MRULT-II iteration method automatically reduces to the ULT-II one, which means that the convergence rate of the MRULT-II iteration method is at least not slower than that of the ULT-II one.

By making use of Lemma 2.1 and a proof similar to that of Theorem 2.1 with only technical modifications, we can demonstrate that $\|r^{(k+1)}\|_2$ of the MRULT-II iteration scheme (29) attains its minimum at the point $(\beta_k, \gamma_k)$ defined by (31).

Theorem 3.1: Let symmetric positive definite $Q \in \mathbb{R}^{n\times n}$ satisfy $(Q - A^TA)(\mu^2 I + Q)^{-1} = (\mu^2 I + Q)^{-1}(Q - A^TA)$. Then the point $(\beta_k, \gamma_k)$ defined by (31) is the minimum point of $\|r^{(k+1)}\|_2$ of the MRULT-II iteration scheme (29), which reveals that the values of $(\beta_k, \gamma_k)$ defined by (31) are optimal in the real field $\mathbb{R}$.

What follows investigates the convergence properties of the proposed method. In particular, based on Theorem 3.1, we derive the following convergence properties of the MRULT-II iteration method with $Q = sI$ and $Q = sI + A^TA$ $(s > 0)$, respectively.

Theorem 3.2: For the augmented system (6), if the parameter matrix $Q \in \mathbb{R}^{n\times n}$ is symmetric positive definite and satisfies $(Q - A^TA)(\mu^2 I + Q)^{-1} = (\mu^2 I + Q)^{-1}(Q - A^TA)$, then the residual norm of the MRULT-II iteration method satisfies the following relation
\[
\|r^{(k+1)}\|_2 \le \|\mathcal{A}\|_2 \|r^{(k)}\|_2, \tag{32}
\]
where
\[
\mathcal{A} = \begin{pmatrix} 0 & 0 \\ GA^T & G \end{pmatrix}
\quad\text{with}\quad
G = -A^TA Q^{-1} + (Q - A^TA)(\mu^2 I + Q)^{-1}(Q - \mu^2 I) Q^{-1}.
\]
Proof. Since $(\beta_k, \gamma_k)$ is the minimum point of the function $\|r^{(k+1)}\|_2$ and the equations $I - K M_2^{-1} = N_2 M_2^{-1}$ and $I - K K_1^{-1} = L_1 K_1^{-1}$ hold true, it immediately leads to the following result:
\[
\begin{aligned}
\|r^{(k+1)}\|_2
&= \|(I - \gamma_k K M_2^{-1})(I - \beta_k K K_1^{-1}) r^{(k)}\|_2 \\
&\le \|(I - K M_2^{-1})(I - K K_1^{-1}) r^{(k)}\|_2 \\
&\le \|(I - K M_2^{-1})(I - K K_1^{-1})\|_2 \|r^{(k)}\|_2 \\
&= \|N_2 M_2^{-1} L_1 K_1^{-1}\|_2 \|r^{(k)}\|_2.
\end{aligned} \tag{33}
\]

Simple matrix calculations result in
\[
\begin{aligned}
N_2 M_2^{-1} L_1 K_1^{-1}
&= \begin{pmatrix} 0 & 0 \\ A^T & Q \end{pmatrix}
\begin{pmatrix} I & A \\ 0 & \mu^2 I + Q \end{pmatrix}^{-1}
\begin{pmatrix} 0 & -A \\ 0 & Q - \mu^2 I \end{pmatrix}
\begin{pmatrix} I & 0 \\ -A^T & Q \end{pmatrix}^{-1} \\
&= \begin{pmatrix} 0 & 0 \\ A^T & Q \end{pmatrix}
\begin{pmatrix} I & -A(\mu^2 I + Q)^{-1} \\ 0 & (\mu^2 I + Q)^{-1} \end{pmatrix}
\begin{pmatrix} 0 & -A \\ 0 & Q - \mu^2 I \end{pmatrix}
\begin{pmatrix} I & 0 \\ Q^{-1} A^T & Q^{-1} \end{pmatrix} \\
&= \begin{pmatrix} 0 & 0 \\ 0 & -A^TA + (Q - A^TA)(\mu^2 I + Q)^{-1}(Q - \mu^2 I) \end{pmatrix}
\begin{pmatrix} I & 0 \\ Q^{-1} A^T & Q^{-1} \end{pmatrix} \\
&= \begin{pmatrix} 0 & 0 \\ GA^T & G \end{pmatrix},
\end{aligned}
\]
where $G = -A^TA Q^{-1} + (Q - A^TA)(\mu^2 I + Q)^{-1}(Q - \mu^2 I) Q^{-1}$. We then immediately get the conclusion (32).
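The block identity closing this proof can also be confirmed numerically (our check with a synthetic $A$; we take $Q = sI + A^TA$ as in the theorems below, although the identity itself holds for any invertible $Q$):

```python
import numpy as np

# Numerical check of N2 M2^{-1} L1 K1^{-1} = [[0, 0], [G A^T, G]] with
# G = -A^T A Q^{-1} + (Q - A^T A)(mu^2 I + Q)^{-1}(Q - mu^2 I) Q^{-1}.
rng = np.random.default_rng(4)
n, mu, s = 5, 0.3, 0.9
A = rng.standard_normal((n, n))
AtA = A.T @ A
Q = s * np.eye(n) + AtA
I, Z = np.eye(n), np.zeros((n, n))
K1 = np.block([[I, Z], [-A.T, Q]])
L1 = np.block([[Z, -A], [Z, Q - mu**2 * I]])
M2 = np.block([[I, A], [Z, mu**2 * I + Q]])
N2 = np.block([[Z, Z], [A.T, Q]])
lhs = N2 @ np.linalg.inv(M2) @ L1 @ np.linalg.inv(K1)
Qi = np.linalg.inv(Q)
G = -AtA @ Qi + (Q - AtA) @ np.linalg.inv(mu**2 * I + Q) @ (Q - mu**2 * I) @ Qi
rhs = np.block([[Z, Z], [G @ A.T, G]])
gap = np.abs(lhs - rhs).max()
```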

Theorem 3.3: For the augmented system (6), the MRULT-II iteration method with $Q = sI$ $(s > 0)$ is convergent if the parameter $s$ satisfies
\[
\frac{\mu^2(1 - \cos\theta) + 2\sigma_1^2}{1 + \cos\theta} < s < \frac{\mu^2(1 + \cos\theta) + 2\sigma_n^2}{1 - \cos\theta} \tag{34}
\]
as $\mu^2 > \frac{\sigma_1^2 - \sigma_n^2 - \cos\theta(\sigma_1^2 + \sigma_n^2)}{2\cos\theta}$, where $\tan\theta = \sigma_1$.

Proof. By Theorem 3.2, we have
\[
\|r^{(k+1)}\|_2 \le \left\| \begin{pmatrix} 0 & 0 \\ WA^T & W \end{pmatrix} \right\|_2 \|r^{(k)}\|_2 \tag{35}
\]
with $W = -\frac{A^TA}{s} + \frac{(s - \mu^2)(sI - A^TA)}{s(\mu^2 + s)}$. According to (20), we have
\[
W = V\Lambda V^T \quad\text{and}\quad WA^T = V\Lambda\Sigma^T U^T,
\]
where $\Lambda = \frac{-(\mu^2 + s)\Sigma^T\Sigma + (s - \mu^2)(sI - \Sigma^T\Sigma)}{s(\mu^2 + s)}$ is a diagonal matrix. Taking the above equations into (35) leads to

\[
\begin{aligned}
\|r^{(k+1)}\|_2
&\le \left\| \begin{pmatrix} 0 & 0 \\ V\Lambda\Sigma^T U^T & V\Lambda V^T \end{pmatrix} \right\|_2 \|r^{(k)}\|_2 \\
&= \left\| \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} 0 & 0 \\ \Lambda\Sigma^T & \Lambda \end{pmatrix} \begin{pmatrix} U^T & 0 \\ 0 & V^T \end{pmatrix} \right\|_2 \|r^{(k)}\|_2 \\
&= \left\| \begin{pmatrix} 0 & 0 \\ \Lambda\Sigma^T & \Lambda \end{pmatrix} \right\|_2 \|r^{(k)}\|_2,
\end{aligned}
\]

which gives
\[
\|r^{(k+1)}\|_2 \le \max_{1\le i\le n}\left\{ \left| \frac{-2\sigma_i^2 + s - \mu^2}{\mu^2 + s} \right| \sqrt{1 + \sigma_i^2} \right\} \|r^{(k)}\|_2
\]
in view of Lemma 2.2. The MRULT-II iteration method with $Q = sI$ is convergent if and only if
\[
\left| \frac{-2\sigma_i^2 + s - \mu^2}{\mu^2 + s} \right|^2 (1 + \sigma_i^2) < 1 \tag{36}
\]
for all $\sigma_i$ $(1\le i\le n)$. Denote
\[
c_i = \left| \frac{-2\sigma_i^2 + s - \mu^2}{\mu^2 + s} \right|.
\]
With a strategy quite similar to that utilized in Theorem 2.3, if $c_i < \cos\theta$ and $\sigma_1 = \tan\theta$ hold for all $1\le i\le n$, then $c_i^2(1 + \sigma_i^2) < \cos^2\theta(1 + \tan^2\theta) = 1$, i.e., Inequality (36) holds true.

Solving the inequality $c_i < \cos\theta$ amounts to working out
\[
-\cos\theta < \frac{-2\sigma_i^2 + s - \mu^2}{\mu^2 + s} < \cos\theta. \tag{37}
\]
It then follows from the decreasing property of $\frac{-2\sigma_i^2 + s - \mu^2}{\mu^2 + s}$ with regard to $\sigma_i^2$ that Inequality (37) is equivalent to
\[
\frac{-2\sigma_1^2 + s - \mu^2}{\mu^2 + s} > -\cos\theta
\quad\text{and}\quad
\frac{-2\sigma_n^2 + s - \mu^2}{\mu^2 + s} < \cos\theta,
\]
from which one may deduce the sufficient condition (34) for the convergence of the MRULT-II iteration method with $Q = sI$.
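A grid check of (34) (our illustration; the singular values and $\mu$ are synthetic and chosen so that the hypothesis on $\mu^2$ holds): for every $s$ strictly inside the interval, the factor bounding $\|r^{(k+1)}\|_2 / \|r^{(k)}\|_2$ stays below one.

```python
import numpy as np

# Spot-check of Theorem 3.3: inside the interval (34) the contraction factor
# max_i |(-2 sig_i^2 + s - mu^2)/(mu^2 + s)| * sqrt(1 + sig_i^2) is < 1.
sig = np.linspace(0.3, 0.05, 10)          # sig_1 >= ... >= sig_n (synthetic)
mu = 0.2
c = np.cos(np.arctan(sig[0]))             # tan(theta) = sig_1
assert mu**2 > (sig[0]**2 - sig[-1]**2 - c * (sig[0]**2 + sig[-1]**2)) / (2 * c)
lo = (mu**2 * (1 - c) + 2 * sig[0]**2) / (1 + c)
hi = (mu**2 * (1 + c) + 2 * sig[-1]**2) / (1 - c)
rhos = [max(np.abs(-2 * sig**2 + s - mu**2) / (mu**2 + s) * np.sqrt(1 + sig**2))
        for s in np.linspace(lo, hi, 50)[1:-1]]   # open interval only
rho_max = max(rhos)
```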


Theorem 3.4: For the augmented system (6), the MRULT-II iteration method with $Q = sI + A^TA$ $(s > 0)$ is convergent if the following conditions
\[
\left\{
\begin{aligned}
f_1(s) &= (1 + \cos\theta)s^2 + [(\cos\theta - 1)\mu^2 + 2\sigma_1^2\cos\theta]s + \sigma_1^2(\mu^2 + \sigma_1^2)(\cos\theta - 1) > 0, \\
f_2(s) &= (1 - \cos\theta)s^2 - [(\cos\theta + 1)\mu^2 + 2\sigma_n^2\cos\theta]s - \sigma_n^2(\mu^2 + \sigma_n^2)(\cos\theta + 1) < 0
\end{aligned}
\right. \tag{38}
\]

hold true, where $\tan\theta = \sigma_1$.

Proof. By Theorem 3.2 and the SVD of $A$ in (20), we get
\[
\begin{aligned}
\|r^{(k+1)}\|_2
&\le \left\| \begin{pmatrix} 0 & 0 \\ V\Lambda\Sigma^T U^T & V\Lambda V^T \end{pmatrix} \right\|_2 \|r^{(k)}\|_2 \\
&= \left\| \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} 0 & 0 \\ \Lambda\Sigma^T & \Lambda \end{pmatrix} \begin{pmatrix} U^T & 0 \\ 0 & V^T \end{pmatrix} \right\|_2 \|r^{(k)}\|_2 \\
&= \left\| \begin{pmatrix} 0 & 0 \\ \Lambda\Sigma^T & \Lambda \end{pmatrix} \right\|_2 \|r^{(k)}\|_2,
\end{aligned}
\]

where $\Lambda = -\Sigma^T\Sigma(sI + \Sigma^T\Sigma)^{-1} + s[(\mu^2 + s)I + \Sigma^T\Sigma]^{-1}[(s - \mu^2)I + \Sigma^T\Sigma](sI + \Sigma^T\Sigma)^{-1}$. Based on Lemma 2.2, one has
\[
\|r^{(k+1)}\|_2 \le \max_{1\le i\le n}\left\{ \left| \frac{-(\mu^2 + \sigma_i^2)\sigma_i^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_i^2)(s + \sigma_i^2)} \right| \sqrt{1 + \sigma_i^2} \right\} \|r^{(k)}\|_2.
\]
To guarantee the convergence of the MRULT-II iteration method with $Q = sI + A^TA$, it is required that
\[
\left| \frac{-(\mu^2 + \sigma_i^2)\sigma_i^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_i^2)(s + \sigma_i^2)} \right|^2 (1 + \sigma_i^2) < 1 \tag{39}
\]

for all $\sigma_i$ $(1\le i\le n)$. Denote
\[
c_i = \left| \frac{-(\mu^2 + \sigma_i^2)\sigma_i^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_i^2)(s + \sigma_i^2)} \right|.
\]
By making use of the technique applied in Theorem 2.3, if $c_i < \cos\theta$ and $\sigma_1 = \tan\theta$ hold for all $1\le i\le n$, then $c_i^2(1 + \sigma_i^2) < \cos^2\theta(1 + \tan^2\theta) = 1$. The inequality $c_i < \cos\theta$ can be equivalently transformed into the following inequality
\[
-\cos\theta < \frac{-(\mu^2 + \sigma_i^2)\sigma_i^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_i^2)(s + \sigma_i^2)} < \cos\theta. \tag{40}
\]

Taking a derivative of $\frac{-(\mu^2 + \sigma_i^2)\sigma_i^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_i^2)(s + \sigma_i^2)}$ with respect to $\sigma_i^2$ reveals that it is strictly monotone decreasing in $\sigma_i^2$; thus we can rewrite Inequality (40) in the following equivalent form:
\[
\frac{-(\mu^2 + \sigma_1^2)\sigma_1^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_1^2)(s + \sigma_1^2)} > -\cos\theta
\quad\text{and}\quad
\frac{-(\mu^2 + \sigma_n^2)\sigma_n^2 + s(s - \mu^2)}{(\mu^2 + s + \sigma_n^2)(s + \sigma_n^2)} < \cos\theta.
\]
The conclusions (38) follow by directly solving the above inequalities.
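Similarly, conditions (38) can be spot-checked on synthetic data (our illustration; $\sigma$ and $\mu$ are assumptions): every $s$ with $f_1(s) > 0$ and $f_2(s) < 0$ yields a contraction factor below one.

```python
import numpy as np

# Spot-check of Theorem 3.4: whenever f1(s) > 0 and f2(s) < 0 in (38), the
# factor max_i |(-(mu^2+sig_i^2)sig_i^2 + s(s-mu^2)) /
#              ((mu^2+s+sig_i^2)(s+sig_i^2))| * sqrt(1+sig_i^2) stays below one.
sig = np.linspace(0.4, 0.05, 12)          # sig_1 >= ... >= sig_n (synthetic)
mu = 0.25
c = np.cos(np.arctan(sig[0]))             # tan(theta) = sig_1
t1, tn = sig[0]**2, sig[-1]**2
f1 = lambda s: (1 + c)*s**2 + ((c - 1)*mu**2 + 2*t1*c)*s + t1*(mu**2 + t1)*(c - 1)
f2 = lambda s: (1 - c)*s**2 - ((c + 1)*mu**2 + 2*tn*c)*s - tn*(mu**2 + tn)*(c + 1)
rhos = []
for s in np.linspace(0.01, 2.0, 400):
    if f1(s) > 0 and f2(s) < 0:
        num = -(mu**2 + sig**2) * sig**2 + s * (s - mu**2)
        den = (mu**2 + s + sig**2) * (s + sig**2)
        rhos.append((np.abs(num / den) * np.sqrt(1 + sig**2)).max())
checked = len(rhos)
rho_max = max(rhos)
```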

IV. NUMERICAL EXAMPLES

In this section, two examples arising from a Fredholm integral equation of the first kind and from image restoration are presented to examine the effectiveness of the proposed methods. The numerical results of the MRULT-type methods, including the number of iteration steps (denoted by 'IT') and the total computing time in seconds (denoted by 'CPU'), are compared with those of the special HSS (SHSS), new special HSS (NSHSS), restricted version of the generalized HSS (RGHSS), special GHSS (SGHSS) and the ULT-type methods mentioned in Section I. The versions of the MRULT-I iteration method with the parameter matrices $Q = sI$ and $Q = sI + A^TA$ are denoted by MRULT-I_{Q1} and MRULT-I_{Q2}, respectively, and the MRULT-II iteration method with these parameter matrices is abbreviated as MRULT-II_{Q1} and MRULT-II_{Q2}, respectively. All experiments are implemented in MATLAB (version R2016b) on a personal computer with an Intel (R) Pentium (R) CPU G3240T 2.87 GHz, 16.0 GB memory and the Windows 10 system.

The error vector $e$ in $g$ has normally distributed entries with zero mean and unit variance and is scaled so that the contaminated $g$, defined by (2), has a specified noise level
\[
\epsilon = \|e\|_2 / \|\hat{g}\|_2,
\]
where the error-free right-hand side $\hat{g}$ is defined as in (3). The initial approximate solution $f^{(0)} = 0$ is used for all the iterative methods in Example 4.1, while for the image restoration problem in Example 4.2 the initial approximate solution $f^{(0)} = g$ is adopted. The parameter $\epsilon$ is set to 0.001 in both examples.
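Concretely, the scaling reads as follows (a NumPy sketch of ours; the stand-in for the error-free data is synthetic):

```python
import numpy as np

# Generate the contaminated data: rescale a Gaussian vector e so that the
# noise level eps = ||e||_2 / ||g_hat||_2 is met exactly.
rng = np.random.default_rng(5)
g_hat = rng.standard_normal(500)          # stand-in for the error-free data
eps = 0.001
e = rng.standard_normal(500)
e *= eps * np.linalg.norm(g_hat) / np.linalg.norm(e)
g = g_hat + e                             # the available data, as in (2)
level = np.linalg.norm(e) / np.linalg.norm(g_hat)
```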

In the examples, the optimal values of the unknown parameters for the SHSS, NSHSS, RGHSS, SGHSS and ULT-type methods are those presented in [19], [6], [1], [2], [8], and the optimal parameters of the MRULT-I and MRULT-II methods are chosen as the experimentally found optimal ones, which lead to the least number of iteration steps. The choice of the regularization parameter has been investigated in [12], [20], [9]. In our computations, the GCV scheme is adopted to determine a suitable value of the regularization parameter $\mu$ [9]. The best regularization parameter value $\mu$ minimizes the following GCV functional:
\[
G(\mu) = \frac{\|A(A^TA + \mu^2 I)^{-1}A^T g - g\|_2^2}{\left(\mathrm{trace}\left(I - A(A^TA + \mu^2 I)^{-1}A^T\right)\right)^2}.
\]
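Through the SVD $A = U\Sigma V^T$ the functional reduces to sums over singular values, since $A(A^TA + \mu^2 I)^{-1}A^T$ has eigenvalues $\sigma_i^2/(\sigma_i^2 + \mu^2)$; this makes a grid search for the minimizing $\mu$ cheap. A NumPy sketch (synthetic square $A$ and data, not the paper's code):

```python
import numpy as np

# Evaluate G(mu) via the SVD: filter factors phi_i = sig_i^2/(sig_i^2 + mu^2)
# give both the residual norm and the trace term in closed form.
def gcv(mu, sig, Utg, n):
    phi = sig**2 / (sig**2 + mu**2)
    resid2 = np.sum(((phi - 1.0) * Utg)**2)   # ||A(A^TA+mu^2 I)^{-1}A^T g - g||^2
    return resid2 / (n - np.sum(phi))**2      # divided by (trace(I - ...))^2

rng = np.random.default_rng(6)
n = 40
A = rng.standard_normal((n, n)) / n
U, sig, Vt = np.linalg.svd(A)
g = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(n)
Utg = U.T @ g
mus = np.logspace(-6, 0, 200)
vals = np.array([gcv(m, sig, Utg, n) for m in mus])
mu_opt = mus[int(np.argmin(vals))]
```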

The relative error (RES) is commonly applied to measure the accuracy of these methods for ill-posed problems and image restoration. It is defined as follows:
\[
\mathrm{RES} = \frac{\|f_{\mathrm{numerical}} - f_{\mathrm{exact}}\|_2}{\|f_{\mathrm{exact}}\|_2},
\]
where $f_{\mathrm{numerical}}$ and $f_{\mathrm{exact}}$ are the numerical solutions (or restored images) and the exact solutions (or the original images), respectively. In image restoration, a smaller RES value usually implies that the restoration is of higher quality, although this may not always be consistent with visual judgment. Therefore, the restored images are also displayed in the following examples.

Example 4.1: (Example from Hansen's Regularization Tools [13]) Consider the test example gravity(500,1) from Regularization Tools


TABLE I: Numerical results of the methods of Example 4.1 with µ = 0.0068.

Method         Parameters                   IT    CPU     RES
SHSS           α = 0.9543                   500   0.2322  0.8130
NSHSS          α = 4.5749e-6                500   0.2347  0.9554
RGHSS          (α, β) = (0.001, 0.001)      19    0.0108  0.0010
SGHSS          (ω, τ) = (0.6, 1.0274e-3)    217   0.1078  0.0010
ULT-I_{Q1}     s = 0.4172                   500   1.3257  4.3841
ULT-I_{Q2}     s = 0.8                      500   0.0972  0.0221
ULT-II_{Q1}    s = 0.4172                   500   0.4823  4.7534
ULT-II_{Q2}    s = 0.0437                   3     0.0159  0.0187
MRULT-I_{Q1}   s = 1.9                      121   0.2770  0.0233
MRULT-I_{Q2}   s = 0.01                     2     0.0065  0.0158
MRULT-II_{Q1}  s = 1.6                      121   0.2901  0.0240
MRULT-II_{Q2}  s = 0.01                     2     0.0071  0.0147

[13]. The associated perturbed data vector $g$ is obtained by using the MATLAB code g = g + 0.001 × rand(size(g)) in the test.

We apply Algorithms 2.1 and 3.1 with $Q = sI$ and $Q = sI + A^TA$ to Example 4.1 and compare the IT, CPU times and RES of the MRULT-type methods with those of the SHSS, NSHSS, RGHSS, SGHSS and ULT-type ones. For the test problem, the regularization parameter $\mu$ determined by GCV is 0.0068. All iteration processes are terminated once the current residual satisfies $\|r^{(k)}\|_2/\|r^{(0)}\|_2 < 10^{-5}$ or the number of iterations exceeds the largest prescribed iteration step $M = 500$, where $r^{(k)} = b - Kx^{(k)}$ is the residual at the $k$th iteration. In our implementations, the linear sub-systems are solved by the sparse Cholesky factorization when the coefficient matrix is symmetric positive definite.

The parameters, IT, CPU times and RES of the tested iteration methods are listed in Table I, and the exact and numerical solutions are drawn in Figure 1 for n = 500. From Table I, it can be seen that the MRULT-type iteration methods successfully compute approximate solutions of high quality. Moreover, the MRULT-type iteration methods achieve smaller relative errors with far fewer iteration steps compared with the SHSS and NSHSS ones. Although the relative errors of the MRULT-type iteration methods are bigger than those of the RGHSS and SGHSS ones, the MRULT-I_{Q2} and MRULT-II_{Q2} iteration methods need fewer iteration steps. Besides, from these numerical results, we can observe that the MRULT-I iteration methods with $Q = sI$ and $Q = sI + A^TA$ are superior to the ULT-I ones with $Q = sI$ and $Q = sI + A^TA$, respectively, and the conclusion also holds true for the MRULT-II iteration methods. In order to show more clearly that the convergence rates of the MRULT-type methods are faster than those of the ULT-type ones, we draw the plots of their relative errors with respect to the iteration number k in Figure 2. From Figure 2, it can be seen that the MRULT-type methods further improve the convergence rates of the ULT-type ones. Moreover, the plots of relative errors with respect to k are also drawn in Figure 3 to compare the convergence behaviour of the MRULT-I and MRULT-II methods with different parameter matrices Q, and Figure 3 indicates that the MRULT-I and MRULT-II methods are comparable. Based on these results, our proposed algorithms surpass some other ones and are more effective for solving ill-posed problems.

TABLE II: Numerical results of the methods of Example 4.2 with µ = 0.0042.

Method         Parameters                   IT    CPU     RES
SHSS           α = 0.3491                   500   1.6402  0.0717
NSHSS          α = 1.9858e-3                500   2.9766  0.1965
RGHSS          (α, β) = (0.0361, 0.001)     8     0.0267  0.0548
SGHSS          (ω, τ) = (0.6, 2.0622e-3)    9     0.0302  0.0473
ULT-I_{Q1}     s = 1.02                     500   1.7532  0.0851
ULT-I_{Q2}     s = 12                       500   2.0577  0.1953
ULT-II_{Q1}    s = 1.0179                   500   1.2897  0.0851
ULT-II_{Q2}    s = 0.1338                   423   1.7649  0.0565
MRULT-I_{Q1}   s = 0.99                     366   1.6182  0.0568
MRULT-I_{Q2}   s = 0.001                    6     2.0308  0.0544
MRULT-II_{Q1}  s = 0.98                     476   0.0427  0.0569
MRULT-II_{Q2}  s = 0.001                    6     0.0429  0.0543

Example 4.2: (Image restoration) In this example, we let the exact image be the 'brain' image from MATLAB. It is represented by an array of 128 × 128 pixels and shown on the left-hand side of Figure 4. Choose PSF = psfDefocus([7, 7], 3) in [15] to blur the example, and a 'noisy' available vector g is generated by using the MATLAB code g = g + 0.001 × rand(size(g)). The PSF and blurred image in the example are shown in the middle and on the right-hand side of Figure 4, respectively. The purpose of the experiment is to illustrate that the MRULT-type iterative methods perform better than the SHSS, NSHSS, RGHSS, SGHSS and ULT-type ones for image restoration. Furthermore, the relative error of the blurred image is 0.2772.

In this test, we apply periodic boundary conditions to construct the blurring matrix $A$. Here the matrix $A$ is block circulant with circulant blocks (BCCB), so matrix-vector multiplications can be performed and the spectral factorization of the blurring matrix $A$ obtained via two-dimensional fast Fourier transforms (FFTs). Therefore the regularization parameter $\mu$ computed by the GCV method and the optimal values of the unknown parameters for all iterative methods involving the singular values of $A$ can be easily obtained via FFTs; see [15] for more details. Moreover, we can also use FFTs to solve the linear systems with symmetric positive definite coefficient matrices in Algorithms 2.1 and 3.1. All computations for the tested iteration methods are terminated once the current residual satisfies $\|r^{(k)}\|_2/\|r^{(0)}\|_2 < 10^{-4}$ or the number of iterations exceeds the largest prescribed iteration step $M = 600$, where $r^{(k)} = b - Kx^{(k)}$ is the residual at the $k$th iteration.
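The key point can be illustrated in a few lines (our sketch; the 3 × 3 averaging PSF is an assumption, not the psfDefocus kernel of the example): with periodic boundary conditions the eigenvalues of the BCCB matrix are the 2-D FFT of the circularly centered PSF, so $Ax$ costs only two FFTs.

```python
import numpy as np

# BCCB matvec via FFT: eigenvalues = fft2 of the PSF shifted so its center
# sits at pixel (0, 0); then A*x = ifft2(lam * fft2(x)).
m = 32
psf = np.ones((3, 3)) / 9.0
pad = np.zeros((m, m))
pad[:3, :3] = psf
pad = np.roll(pad, (-1, -1), axis=(0, 1))    # center the PSF at pixel (0, 0)
lam = np.fft.fft2(pad)                       # eigenvalues of the BCCB matrix
x = np.random.default_rng(7).random((m, m))
Ax = np.real(np.fft.ifft2(lam * np.fft.fft2(x)))
# cross-check one pixel against the direct periodic 3x3 average
i, j = 10, 17
win = x[np.ix_([(i-1) % m, i, (i+1) % m], [(j-1) % m, j, (j+1) % m])]
gap = abs(Ax[i, j] - win.mean())
```

The same diagonalization lets one apply $(A^TA + \mu^2 I)^{-1}$ or the algorithm's SPD sub-solves entrywise in the Fourier domain.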

IT, CPU times and RES for the tested methods, and the restored images produced by the twelve iterative methods, are exhibited in Table II and in Figure 5, respectively. From Table II, we can see that the MRULT-I_{Q2} and MRULT-II_{Q2} iteration methods need just 6 iterations and little CPU time before they terminate. Moreover, the restorations with the MRULT-type methods are of higher quality, because the relative errors of the MRULT-type methods are smaller than those of most other methods. Figure 5 also demonstrates the higher visual quality of the restored images obtained with the MRULT-type methods. As a whole, the proposed methods perform better than some tested ones and are effective for solving image restoration problems.


Fig. 1: Example 4.1-gravity(500, 1) test case: the exact solution and its numerical solutions (legend: Exact, ULT-I_{Q1}, ULT-I_{Q2}, ULT-II_{Q1}, ULT-II_{Q2}, MRULT-I_{Q1}, MRULT-I_{Q2}, MRULT-II_{Q1}, MRULT-II_{Q2}; with a zoomed-in detail).

Fig. 2: Example 4.1-gravity(500, 1) test case: the relative error (RES) versus iteration k for the tested iterative methods; panels (a) ULT-I_{Q1} vs. MRULT-I_{Q1}, (b) ULT-I_{Q2} vs. MRULT-I_{Q2}, (c) ULT-II_{Q1} vs. MRULT-II_{Q1}, (d) ULT-II_{Q2} vs. MRULT-II_{Q2}.


Fig. 3: Example 4.1-gravity(500, 1) test case: the relative error (RES) versus iteration k for the tested iterative methods; panels (a) MRULT-I_{Q1} vs. MRULT-II_{Q1}, (b) MRULT-I_{Q2} vs. MRULT-II_{Q2}.

(a) True Image (b) PSF (c) blurred noisy image

Fig. 4: True image, PSF and blurred image of Example 4.2.

V. CONCLUSION

Inspired by [27], this paper puts forward two types of nonstationary upper and lower triangular splitting iteration methods, called the minimize residual ULT (MRULT) methods, to compute approximate solutions of ill-posed inverse problems. By introducing control parameters into the ULT-I and ULT-II splitting iteration methods, we establish the iterative sequences (11) and (29) to accelerate the convergence rates of the ULT-I and ULT-II methods, respectively. The two parameters involved in the MRULT methods are determined by minimizing the residual norm at each step. When the parameters take specific values, the MRULT-I and MRULT-II methods reduce to the ULT-I and ULT-II ones, respectively; thus, the convergence rates of the MRULT methods are at least not slower than those of the ULT methods. Moreover, we investigate the convergence properties of the MRULT methods analytically, and the presented numerical examples verify that our methods are superior to some existing ones and effective for solving ill-posed problems.

ACKNOWLEDGMENT

This research is supported by the National Natural Science Foundation of China (No. 10802068), the National Science Foundation for Young Scientists of China (No. 11901123) and the Guangxi Natural Science Foundation (No. 2018JJB110062).

REFERENCES

[1] N. Aghazadeh, M. Bastani and D. K. Salkuyeh, "Generalized Hermitian and skew-Hermitian splitting iterative method for image restoration," Appl. Math. Model., vol. 39, pp. 6126-6138, 2015.

[2] H. Aminikhah and M. Yousefi, "A special generalized HSS method for discrete ill-posed problems," Comput. Appl. Math., vol. 37, pp. 1507-1523, 2018.

[3] Z. Z. Bai, G. H. Golub and M. K. Ng, "Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems," SIAM J. Matrix Anal. Appl., vol. 24, pp. 603-626, 2003.

[4] Z. Z. Bai, G. H. Golub and M. K. Ng, "On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations," Numer. Linear Algebra Appl., vol. 14, pp. 319-335, 2007.

[5] M. Benzi, "A generalization of the Hermitian and skew-Hermitian splitting iteration," SIAM J. Matrix Anal. Appl., vol. 31, pp. 360-374, 2009.

[6] G. H. Cheng, X. Rao and X. G. Lv, "The comparisons of two special Hermitian and skew-Hermitian splitting methods for image restoration," Appl. Math. Model., vol. 39, pp. 1275-1280, 2015.

[7] M. J. Fadili, J. L. Starck and J. Bobin, et al., "Image decomposition and separation using sparse representations: An overview," Proceedings of the IEEE, vol. 98, pp. 983-994, 2010.

[8] H. T. Fan, M. Bastani, B. Zheng and X. Y. Zhu, "A class of upper and lower triangular splitting iteration methods for image restoration," Appl. Math. Model., vol. 54, pp. 82-111, 2018.

[9] G. H. Golub, M. Heath and G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter," Technometrics, vol. 21, pp. 215-223, 1979.

[10] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Integral Equations of the First Kind, Boston: Pitman, 1984.

[11] X. X. Guo and S. Wang, "Modified HSS iteration methods for a class of non-Hermitian positive-definite linear systems," Appl. Math. Comput., vol. 218, pp. 10122-10128, 2012.

[12] P. C. Hansen, "Analysis of discrete ill-posed problems by means of the L-curve," SIAM Rev., vol. 34, pp. 561-580, 1992.


Fig. 5: Restored images with the tested methods of Example 4.2 (panels: SHSS, NSHSS, RGHSS, SGHSS, ULT-I_{Q1}, ULT-I_{Q2}, ULT-II_{Q1}, ULT-II_{Q2}, MRULT-I_{Q1}, MRULT-I_{Q2}, MRULT-II_{Q1}, MRULT-II_{Q2}).

[13] P. C. Hansen, "Regularization tools: a MATLAB package for analysis and solution of discrete ill-posed problems," Numer. Algor., vol. 6, pp. 1-35, 1994.

[14] P. C. Hansen, M. Hanke and A. Neubauer, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, Philadelphia: SIAM, 1997.

[15] P. C. Hansen, J. G. Nagy and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, Philadelphia: SIAM, 2006.

[16] G. J. Hou, G. D. Wang, Z. K. Pan, B. X. Huang, H. Yang and T. Yu, "Image Enhancement and Restoration: State of the Art of Variational Retinex Models," IAENG International Journal of Computer Science, vol. 44, no. 4, pp. 445-455, 2017.

[17] Z. G. Huang, Z. Xu and J. J. Cui, "Preconditioned triangular splitting iteration method for a class of complex symmetric linear systems," Calcolo, vol. 56, pp. 1-39, 2019.

[18] L. Li, T. Z. Huang and X. P. Liu, "Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems," Numer. Linear Algebra Appl., vol. 14, pp. 217-235, 2007.

[19] X. G. Lv, T. Z. Huang, Z. B. Xu and X. L. Zhao, "A special Hermitian and skew-Hermitian splitting method for image restoration," Appl. Math. Model., vol. 37, pp. 1069-1082, 2013.

[20] V. Morozov, "On the solution of functional equations by the method of regularization," Soviet Math. Dokl., vol. 7, pp. 414-417, 1966.

[21] D. L. Phillips, "A technique for the numerical solution of certain integral equations of the first kind," J. Assoc. Comput. Mach., vol. 9, pp. 84-97, 1962.

[22] Z. Sufyanu, F. S. Mohamad, A. A. Yusuf and M. B. Mamat, "Enhanced Face Recognition Using Discrete Cosine Transform," Engineering Letters, vol. 24, no. 1, pp. 52-61, 2016.

[23] M. H. Talukder, M. Ogiya and M. Takanoku, "Hybrid Technique for Despeckling Medical Ultrasound Images," Lecture Notes in Engineering and Computer Science: Proceedings of The International MultiConference of Engineers and Computer Scientists 2018, IMECS 2018, 14-16 March, 2018, Hong Kong, pp. 358-363.

[24] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Winston, Washington, DC, 1977.

[25] S. Xiao, F. Z. Ge and Y. Zhou, "Blind Image Deblurring via Regularization and Split Bregman," IAENG International Journal of Computer Science, vol. 45, no. 4, pp. 584-591, 2018.

[26] A. L. Yang, "On the convergence of the minimum residual HSS iteration method," Appl. Math. Lett., vol. 94, pp. 210-216, 2019.

[27] A. L. Yang, Y. Cao and Y. J. Wu, "Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems," BIT Numer. Math., vol. 59, pp. 299-319, 2019.

[28] H. T. Yao, F. Dai and S. L. Zhang, et al., "DR2-Net: Deep Residual Reconstruction Network for image compressive sensing," Neurocomputing, vol. 359, pp. 483-493, 2019.
